Sample records for large scale virtual

  1. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    ERIC Educational Resources Information Center

    Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2017-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…

  2. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  3. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large-scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes, or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.

  4. Sex differences in virtual navigation influenced by scale and navigation experience.

    PubMed

    Padilla, Lace M; Creem-Regehr, Sarah H; Stefanucci, Jeanine K; Cashdan, Elizabeth A

    2017-04-01

    The Morris water maze is a spatial abilities test adapted from the animal spatial cognition literature and has been studied in the context of sex differences in humans. This is because its standard design, which manipulates proximal (close) and distal (far) cues, applies to human navigation. However, virtual Morris water mazes test navigation skills on a scale that is vastly smaller than natural human navigation. Many researchers have argued that navigating in large and small scales is fundamentally different, and small-scale navigation might not simulate natural human navigation. Other work has suggested that navigation experience could influence spatial skills. To address the question of how individual differences influence navigational abilities in differently scaled environments, we employed both a large- (146.4 m in diameter) and a traditional- (36.6 m in diameter) scaled virtual Morris water maze along with a novel measure of navigation experience (lifetime mobility). We found sex differences on the small maze in the distal cue condition only, but in both cue-conditions on the large maze. Also, individual differences in navigation experience modulated navigation performance on the virtual water maze, showing that higher mobility was related to better performance with proximal cues for only females on the small maze, but for both males and females on the large maze.

  5. SmallTool - a toolkit for realizing shared virtual environments on the Internet

    NASA Astrophysics Data System (ADS)

    Broll, Wolfgang

    1998-09-01

    With increasing graphics capabilities of computers and higher network communication speed, networked virtual environments have become available to a large number of people. While the virtual reality modelling language (VRML) provides users with the ability to exchange 3D data, there is still a lack of appropriate support to realize large-scale multi-user applications on the Internet. In this paper we will present SmallTool, a toolkit to support shared virtual environments on the Internet. The toolkit consists of a VRML-based parsing and rendering library, a device library, and a network library. This paper will focus on the networking architecture, provided by the network library - the distributed worlds transfer and communication protocol (DWTP). DWTP provides an application-independent network architecture to support large-scale multi-user environments on the Internet.

  6. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
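The deployment described above hinges on defining KVM guests programmatically through libvirt. As a hedged illustration (the node name, paths, and sizes below are invented, and the paper's actual setup adds the Ethernet-over-Aries network device and Cray-specific tuning not shown here), a minimal libvirt domain definition for one virtual-cluster node might be generated like this:

```python
import xml.etree.ElementTree as ET

def build_domain_xml(name, memory_mib, vcpus, disk_path):
    """Build a minimal libvirt/KVM domain definition as an XML string.

    A sketch only: a real deployment would also declare network
    interfaces, console devices, and host-specific tuning.
    """
    dom = ET.Element("domain", type="kvm")
    ET.SubElement(dom, "name").text = name
    ET.SubElement(dom, "memory", unit="MiB").text = str(memory_mib)
    ET.SubElement(dom, "vcpu").text = str(vcpus)
    os_el = ET.SubElement(dom, "os")
    ET.SubElement(os_el, "type", arch="x86_64").text = "hvm"
    devices = ET.SubElement(dom, "devices")
    disk = ET.SubElement(devices, "disk", type="file", device="disk")
    ET.SubElement(disk, "source", file=disk_path)
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    return ET.tostring(dom, encoding="unicode")

# Hypothetical node name and image path:
xml_str = build_domain_xml("vc-node-00", 4096, 4, "/scratch/images/vc-node-00.qcow2")
```

With the libvirt-python bindings, the resulting string could be passed to `conn.defineXML(xml_str)` on a `qemu:///system` connection; this sketch stops at generating the XML so it stays self-contained.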

  7. The Immersive Virtual Reality Lab: Possibilities for Remote Experimental Manipulations of Autonomic Activity on a Large Scale.

    PubMed

    Juvrud, Joshua; Gredebäck, Gustaf; Åhs, Fredrik; Lerin, Nils; Nyström, Pär; Kastrati, Granit; Rosén, Jörgen

    2018-01-01

    There is a need for large-scale remote data collection in a controlled environment, and the in-home availability of virtual reality (VR) and the commercial availability of eye tracking for VR present unique and exciting opportunities for researchers. We propose and provide a proof-of-concept assessment of a robust system for large-scale in-home testing using consumer products that combines psychophysiological measures and VR, here referred to as a Virtual Lab. For the first time, this method is validated by correlating autonomic responses, skin conductance response (SCR) and pupillary dilation, to a spider, a beetle, and a ball using commercially available VR. Participants demonstrated greater SCR and pupillary responses to the spider, and the effect was dependent on the proximity of the stimuli to the participant, with a stronger response when the spider was close to the virtual self. We replicated these effects across two experiments and in separate physical room contexts to mimic variability in home environments. Together, these findings demonstrate the utility of pupil dilation as a marker of autonomic arousal and the feasibility of assessing it with commercially available VR hardware, and they support a robust Virtual Lab tool for massive remote testing.

  8. The Immersive Virtual Reality Lab: Possibilities for Remote Experimental Manipulations of Autonomic Activity on a Large Scale

    PubMed Central

    Juvrud, Joshua; Gredebäck, Gustaf; Åhs, Fredrik; Lerin, Nils; Nyström, Pär; Kastrati, Granit; Rosén, Jörgen

    2018-01-01

    There is a need for large-scale remote data collection in a controlled environment, and the in-home availability of virtual reality (VR) and the commercial availability of eye tracking for VR present unique and exciting opportunities for researchers. We propose and provide a proof-of-concept assessment of a robust system for large-scale in-home testing using consumer products that combines psychophysiological measures and VR, here referred to as a Virtual Lab. For the first time, this method is validated by correlating autonomic responses, skin conductance response (SCR) and pupillary dilation, to a spider, a beetle, and a ball using commercially available VR. Participants demonstrated greater SCR and pupillary responses to the spider, and the effect was dependent on the proximity of the stimuli to the participant, with a stronger response when the spider was close to the virtual self. We replicated these effects across two experiments and in separate physical room contexts to mimic variability in home environments. Together, these findings demonstrate the utility of pupil dilation as a marker of autonomic arousal and the feasibility of assessing it with commercially available VR hardware, and they support a robust Virtual Lab tool for massive remote testing. PMID:29867318

  9. Virtual Computing Laboratories: A Case Study with Comparisons to Physical Computing Laboratories

    ERIC Educational Resources Information Center

    Burd, Stephen D.; Seazzu, Alessandro F.; Conway, Christopher

    2009-01-01

    Current technology enables schools to provide remote or virtual computing labs that can be implemented in multiple ways ranging from remote access to banks of dedicated workstations to sophisticated access to large-scale servers hosting virtualized workstations. This paper reports on the implementation of a specific lab using remote access to…

  10. Virtual Environments Supporting Learning and Communication in Special Needs Education

    ERIC Educational Resources Information Center

    Cobb, Sue V. G.

    2007-01-01

    Virtual reality (VR) describes a set of technologies that allow users to explore and experience 3-dimensional computer-generated "worlds" or "environments." These virtual environments can contain representations of real or imaginary objects on a small or large scale (from modeling of molecular structures to buildings, streets, and scenery of a…

  11. Implementing the Liquid Curriculum: The Impact of Virtual World Learning on Higher Education

    ERIC Educational Resources Information Center

    Steils, Nicole; Tombs, Gemma; Mawer, Matt; Savin-Baden, Maggi; Wimpenny, Katherine

    2015-01-01

    This paper presents findings from a large-scale study which explored the socio-political impact of teaching and learning in virtual worlds on UK higher education. Three key themes emerged with regard to constructing curricula for virtual world teaching and learning, namely designing courses, framing practice and locating specific student needs.…

  12. [A new age of mass casualty education? The InSitu project: realistic training in virtual reality environments].

    PubMed

    Lorenz, D; Armbruster, W; Vogelgesang, C; Hoffmann, H; Pattar, A; Schmidt, D; Volk, T; Kubulus, D

    2016-09-01

    Chief emergency physicians are regarded as an important element in the care of the injured and sick following mass casualty incidents. Their education is very theoretical; practical content, in contrast, often falls short. The usual limitations are the very high costs of realistic (large-scale) exercises, poor reproducibility of the scenarios, and correspondingly poor results. Given the complexity of mass casualty incidents, substantially improving the educational level requires modified training concepts that teach not only the theoretical but above all the practical skills considerably more intensively than at present. Modern training concepts should make it possible for the learner to simulate decision processes realistically. This article examines how interactive virtual environments are applicable to the education of emergency personnel and how they could be designed. Virtual simulation and training environments offer the possibility of simulating complex situations in an adequately realistic manner. The so-called virtual reality (VR) used in this context is an interface technology that enables free interaction in addition to a stereoscopic and spatial representation of virtual large-scale emergencies in a virtual environment. Scenario variables, such as the weather, the number of wounded, and the availability of resources, can be changed at any time. The trainees are able to practice the procedures in many virtual accident scenes and act them out repeatedly, thereby testing different variants. With the aid of the "InSitu" project, it is possible to train in a virtual reality with realistically reproduced accident situations. These integrated, interactive training environments can depict very complex situations on a scale of 1:1. Because of the highly developed interactivity, the trainees can feel as if they are a direct part of the accident scene and therefore identify much more with the virtual world than is possible with desktop systems. Interactive, identifiable, and realistic training environments based on projector systems could in the future enable repeated exercises with variations within a decision tree, with reproducibility, and across different occupational groups. With one hardware and software environment, numerous accident situations can be depicted and practiced. The main expense is the creation of the virtual accident scenes. As the appropriate city models and other three-dimensional geographical data are already available, this expenditure is very low compared with the planning costs of a large-scale exercise.

  13. WeaVR: a self-contained and wearable immersive virtual environment simulation system.

    PubMed

    Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James

    2015-03-01

    We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus.
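The redirected-walking result above (a 2 km virtually straight walk inside a bounded field) rests on injecting a small rotation gain at each step, bending the physical path into a circle while the virtual path stays straight. A rough sketch of the geometry, with illustrative step and gain values rather than WeaVR's actual controller parameters:

```python
import math

def redirect_path(num_steps, step_len, curvature_gain_deg):
    """Physical positions traced when a virtually straight walk is bent
    by a per-step curvature gain (degrees of injected heading rotation
    per step). Illustrative only; not the WeaVR controller itself."""
    x = y = 0.0
    heading = 0.0
    pts = [(x, y)]
    for _ in range(num_steps):
        heading += math.radians(curvature_gain_deg)  # injected rotation
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        pts.append((x, y))
    return pts

# 2 km of virtual straight-line walking, 0.7 m steps, 1 degree per step:
path = redirect_path(num_steps=int(2000 / 0.7), step_len=0.7, curvature_gain_deg=1.0)
```

With a 1-degree heading injection per 0.7 m step, the physical path curls into a circle of radius roughly 40 m, so the walker never strays more than about 80 m from the start despite kilometres of virtual travel.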

  14. The GalICS Project: Virtual Galaxies from Cosmological N-body Simulations

    NASA Astrophysics Data System (ADS)

    Guiderdoni, B.

    The GalICS project develops extensive semi-analytic post-processing of large cosmological simulations to describe hierarchical galaxy formation. The multiwavelength statistical properties of high-redshift and local galaxies are predicted within the large-scale structures. The fake catalogs and mock images that are generated from the outputs are used for the analysis and preparation of deep surveys. The whole set of results is now available in an on-line database that can be easily queried. The GalICS project represents a first step towards a 'Virtual Observatory of virtual galaxies'.

  15. A Survey on Virtualization of Wireless Sensor Networks

    PubMed Central

    Islam, Md. Motaharul; Hassan, Mohammad Mehedi; Lee, Ga-Won; Huh, Eui-Nam

    2012-01-01

    Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications, such as smart home automation, health care, and industrial automation. In these applications, multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over specific WSN domains, communication barriers, conflicting goals, and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large-scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility and cost-effective solutions, promote diversity, ensure security, and increase manageability. This paper surveys the novel approach of using large-scale federated WSN resources in a sensor virtualization environment. Our focus in this paper is to introduce a few design goals, the challenges, and opportunities of research in the field of sensor network virtualization, as well as to illustrate the current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization. PMID:22438759

  16. A survey on virtualization of Wireless Sensor Networks.

    PubMed

    Islam, Md Motaharul; Hassan, Mohammad Mehedi; Lee, Ga-Won; Huh, Eui-Nam

    2012-01-01

    Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications, such as smart home automation, health care, and industrial automation. In these applications, multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over specific WSN domains, communication barriers, conflicting goals, and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large-scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility and cost-effective solutions, promote diversity, ensure security, and increase manageability. This paper surveys the novel approach of using large-scale federated WSN resources in a sensor virtualization environment. Our focus in this paper is to introduce a few design goals, the challenges, and opportunities of research in the field of sensor network virtualization, as well as to illustrate the current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization.

  17. The Intersection of Online and Face-to-Face Teaching: Implications for Virtual School Teacher Practice and Professional Development

    ERIC Educational Resources Information Center

    Garrett Dikkers, Amy

    2015-01-01

    This mixed-method study reports perspectives of virtual school teachers on the impact of online teaching on their face-to-face practice. Data from a large-scale survey of teachers in the North Carolina Virtual Public School (n = 214), focus groups (n = 7), and interviews (n = 5) demonstrate multiple intersections between online and face-to-face…

  18. Large-scale virtual screening on public cloud resources with Apache Spark.

    PubMed

    Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola

    2017-01-01

    Structure-based virtual screening is an in silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against approximately 2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
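The map/reduce decomposition this kind of screening uses can be paraphrased in plain Python: partitions of the compound library are docked independently and reduced to local top-k lists, which are then merged into a global top-k. The sketch below substitutes Python built-ins for Spark's RDD API and a deterministic toy score for the real docking engine, so it shows only the shape of the pipeline, not Spark-VS itself:

```python
import heapq
from functools import reduce

def dock_score(smiles):
    # Placeholder for a real docking-engine invocation; a deterministic
    # toy score keeps the pipeline runnable.
    return -sum(ord(c) for c in smiles) % 100

def top_k_per_partition(partition, k):
    # "map" phase: each partition is scored independently and reduced
    # to its local top-k, as a Spark mapPartitions step would do.
    scored = [(dock_score(s), s) for s in partition]
    return heapq.nlargest(k, scored)

def merge_top_k(a, b, k=5):
    # "reduce" phase: merge per-partition top-k lists into a global top-k.
    return heapq.nlargest(k, a + b)

library = [f"C{'C' * (i % 7)}O{i}" for i in range(1000)]   # toy compound library
partitions = [library[i::4] for i in range(4)]             # 4 hypothetical workers
local = [top_k_per_partition(p, 5) for p in partitions]
best = reduce(merge_top_k, local)
```

The merge is correct because every member of the global top-k is necessarily in its own partition's top-k, which is what makes the task trivially parallelizable in the first place.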

  19. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    To meet the demands of operational monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the CNN parameter relationship is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that the new approach is amenable to parallelism and to analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631
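The optimization step mentioned above, fitting CNN parameters with particle swarm optimization, can be illustrated with a textbook PSO on a toy objective. The paper's improved, bubble-sort-based variant and its CNN objective are not reproduced here; this is only the generic algorithm:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, bounds=(-10.0, 10.0), seed=0):
    """Textbook particle swarm optimization (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the CNN parameter-fitting objective (minimum at (3, 3)):
best, best_val = pso(lambda p: sum((x - 3.0) ** 2 for x in p), dim=2)
```

In the paper's setting the objective would score a candidate CNN template against the detection task; here a sphere function stands in so the swarm's convergence is easy to verify.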

  20. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    To meet the demands of operational monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the CNN parameter relationship is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that the new approach is amenable to parallelism and to analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better.

  1. Handheld Micromanipulation with Vision-Based Virtual Fixtures

    PubMed Central

    Becker, Brian C.; MacLachlan, Robert A.; Hager, Gregory D.; Riviere, Cameron N.

    2011-01-01

    Precise movement during micromanipulation becomes difficult in submillimeter workspaces, largely due to the destabilizing influence of tremor. Robotic aid combined with filtering techniques that suppress tremor frequency bands increases performance; however, if knowledge of the operator's goals is available, virtual fixtures have been shown to greatly improve micromanipulator precision. In this paper, we derive a control law for position-based virtual fixtures within the framework of an active handheld micromanipulator, where the fixtures are generated in real-time from microscope video. Additionally, we develop motion scaling behavior centered on virtual fixtures as a simple and direct extension to our formulation. We demonstrate that hard and soft (motion-scaled) virtual fixtures outperform state-of-the-art tremor cancellation performance on a set of artificial but medically relevant tasks: holding, move-and-hold, curve tracing, and volume restriction. PMID:23275860
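The core idea of a position-based virtual fixture, independent of the paper's specific handheld-micromanipulator control law, is to decompose each commanded motion into on-fixture and off-fixture components, then suppress (hard fixture) or attenuate (soft, motion-scaled fixture) the off-fixture part. A hedged 2-D sketch with a line fixture, using invented geometry:

```python
def apply_line_fixture(tip, target_dir, motion, scale=0.0):
    """Constrain a commanded 2-D motion to a line fixture through the
    origin along `target_dir`. scale=0 gives a hard fixture (off-axis
    motion removed); 0 < scale < 1 gives a soft, motion-scaled fixture.
    Generic illustration, not the paper's control law."""
    # Normalize the fixture direction.
    norm = (target_dir[0] ** 2 + target_dir[1] ** 2) ** 0.5
    ux, uy = target_dir[0] / norm, target_dir[1] / norm
    # Decompose the motion into on-fixture and off-fixture components.
    along = motion[0] * ux + motion[1] * uy
    off = (motion[0] - along * ux, motion[1] - along * uy)
    # Pass the on-axis component; attenuate the off-axis component.
    dx = along * ux + scale * off[0]
    dy = along * uy + scale * off[1]
    return (tip[0] + dx, tip[1] + dy)

hard = apply_line_fixture((0.0, 0.0), (1.0, 0.0), (1.0, 0.5), scale=0.0)
soft = apply_line_fixture((0.0, 0.0), (1.0, 0.0), (1.0, 0.5), scale=0.2)
```

With the off-axis gain at 0 the tip can only slide along the fixture line; with a small positive gain the operator retains scaled-down freedom off the line, mirroring the hard/soft distinction tested in the paper.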

  2. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is growing interest among cloud providers in demonstrating the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  3. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is growing interest among cloud providers in demonstrating the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  4. Performance/price estimates for cortex-scale hardware: a design space exploration.

    PubMed

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired, and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology-based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate for implementing very large-scale spiking neural systems, providing more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems and help guide research trends in intelligent computing and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. An incremental anomaly detection model for virtual machines.

    PubMed

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, training a detection model takes a long time. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate its effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
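The Weighted Euclidean Distance idea can be sketched independently of the full IISOM model: the best-matching unit (BMU) is chosen under a per-feature weighted distance, so features deemed more diagnostic dominate the match. The feature weights and toy VM metrics below are invented; this shows only the WED/BMU step plus a neighborhood-free update, not IISOM's heuristic initialization or incremental search:

```python
def wed(x, w, feat_weights):
    """Weighted Euclidean distance between input x and neuron weights w."""
    return sum(fw * (a - b) ** 2 for fw, a, b in zip(feat_weights, x, w)) ** 0.5

def best_matching_unit(x, neurons, feat_weights):
    """BMU search: index of the neuron whose weight vector is WED-closest to x."""
    return min(range(len(neurons)), key=lambda i: wed(x, neurons[i], feat_weights))

def train_step(x, neurons, feat_weights, lr=0.5):
    """One neighborhood-free SOM update: pull the BMU toward the input.
    A full SOM would also update the BMU's map neighbors."""
    b = best_matching_unit(x, neurons, feat_weights)
    neurons[b] = [w + lr * (a - w) for a, w in zip(x, neurons[b])]
    return b

# Toy VM metrics (cpu, mem, io), with cpu weighted most heavily:
feat_weights = [3.0, 1.0, 1.0]
neurons = [[0.1, 0.1, 0.1], [0.9, 0.9, 0.9]]   # "normal" vs "anomalous" prototypes
bmu = train_step([0.95, 0.8, 0.85], neurons, feat_weights)
```

A high-CPU sample matches the "anomalous" prototype, which is then nudged toward the observation; iterating this over a metric stream is what lets the map track the highly dynamic behavior the abstract describes.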

  6. An incremental anomaly detection model for virtual machines

    PubMed Central

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, training a detection model takes a long time. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate its effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245

  7. Inter-City Virtual Water Transfers Within a Large Metropolitan Area: A Case Study of the Phoenix Metropolitan Area in the United States

    NASA Astrophysics Data System (ADS)

    Rushforth, R.; Ruddell, B. L.

    2014-12-01

    Water footprints have been proposed as potential sustainability indicators, but these analyses have thus far focused on the country or regional scale. However, for many countries, especially the United States, the most relevant level of water decision-making is the city. For water footprinting to inform urban sustainability, the boundaries for analysis must match the relevant boundaries for decision-making and economic development. Initial studies into city-level water footprints have provided insight into how large cities across the globe—Delhi, Lagos, Berlin, Beijing, York—create virtual water trade linkages with distant hinterlands. This study hypothesizes that for large cities the most direct and manageable virtual water flows exist at the metropolitan area scale, and that analysis at this scale should therefore provide the most policy-relevant information. This study represents an initial attempt at quantifying intra-metropolitan area virtual water flows. A modified commodity-by-industry input-output model was used to determine virtual water flows destined to, occurring within, and emanating from the Phoenix metropolitan area (PMA). Virtual water flows to and from the PMA were calculated for each PMA city using water consumption data as well as economic and industry statistics. Intra-PMA virtual water trade was determined using county-level traffic flow data, water consumption data, and economic and industry statistics. The findings show that there are archetypal cities within metropolitan areas and that each type of city has a distinct water footprint profile related to the value-added economic processes occurring within its boundaries. These findings can be used to inform local water managers about the resilience of outsourced water supplies.
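
    The basic accounting step, attaching sector water intensities to traded dollar values, can be sketched as follows (all numbers and city pairs are hypothetical; the study's commodity-by-industry input-output model is considerably more detailed):

```python
# Virtual water embodied in inter-city trade: flow (m^3) = water intensity of
# the exporting sector (m^3 per $) times the traded dollar value.
water_intensity = {"agriculture": 2.5, "manufacturing": 0.25}  # m^3 per $
trade = [  # (exporter city, importer city, sector, dollar value) -- invented
    ("Buckeye", "Phoenix", "agriculture", 1_000_000),
    ("Phoenix", "Tempe", "manufacturing", 5_000_000),
]

def virtual_water_flows(trade, water_intensity):
    # Aggregate embodied water over all transactions between each city pair.
    flows = {}
    for exporter, importer, sector, dollars in trade:
        key = (exporter, importer)
        flows[key] = flows.get(key, 0.0) + water_intensity[sector] * dollars
    return flows

flows = virtual_water_flows(trade, water_intensity)
```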

  8. Virtual screening of integrase inhibitors by large scale binding free energy calculations: the SAMPL4 challenge

    PubMed Central

    Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.

    2014-01-01

    As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and large-scale Binding Energy Distribution Analysis Method (BEDAM) binding free energy calculations have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high-level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents, to our knowledge, the first example of the application of an all-atom physics-based binding free energy model to large-scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, and the calculations were distributed across multiple computing resources and completed in a 6-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704
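
    The headline metric of the study, the enrichment factor, is simple to compute from a ranked screening list; a toy sketch (invented labels, not the SAMPL4 results):

```python
def enrichment_factor(ranked_labels, fraction):
    # ranked_labels: 1 for an active, 0 for a decoy, ordered by predicted score.
    # EF = (hit rate among the top fraction) / (hit rate over the whole library).
    n_top = max(1, int(len(ranked_labels) * fraction))
    hit_rate_top = sum(ranked_labels[:n_top]) / n_top
    hit_rate_all = sum(ranked_labels) / len(ranked_labels)
    return hit_rate_top / hit_rate_all

# 10 compounds, 2 actives, both ranked in the top 20% by the model:
ef = enrichment_factor([1, 1, 0, 0, 0, 0, 0, 0, 0, 0], 0.2)
```

An EF of 1 means the screen does no better than random selection; here both actives land in the top 20%, the best possible enrichment for this list.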

  9. Interpreting Observations of Large-Scale Traveling Ionospheric Disturbances by Ionospheric Sounders

    NASA Astrophysics Data System (ADS)

    Pederick, L. H.; Cervera, M. A.; Harris, T. J.

    2017-12-01

    From July to October 2015, the Australian Defence Science and Technology Group conducted an experiment during which a vertical incidence sounder (VIS) was set up at Alice Springs Airport. During September 2015 this VIS observed the passage of many large-scale traveling ionospheric disturbances (TIDs). By plotting the measured virtual heights across multiple frequencies as a function of time, the passage of the TID can be clearly displayed. Using this plotting method, we show that all the TIDs observed during the campaign by the VIS at Alice Springs show an apparent downward phase progression of the crests and troughs. The passage of the TID can be more clearly interpreted by plotting the true height of iso-ionic contours across multiple plasma frequencies; the true heights can be obtained by inverting each ionogram to obtain an electron density profile. These plots can be used to measure the vertical phase speed of a TID and also reveal a time lag between events seen in true height compared to virtual height. To the best of our knowledge, this style of analysis has not previously been applied to other swept-frequency sounder observations. We develop a simple model to investigate the effect of the passage of a large-scale TID on a VIS. The model confirms that for a TID with a downward vertical phase progression, the crests and troughs will appear earlier in virtual height than in true height and will have a smaller apparent speed in true height than in virtual height.
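
    The phase-speed measurement described above reduces to timing when a given crest appears at each height and fitting a line; a toy sketch with synthetic numbers (not the Alice Springs data):

```python
def vertical_phase_speed(height_km, crest_time_s):
    # Least-squares slope of crest arrival time vs. height; its inverse is the
    # vertical phase speed (negative = downward phase progression).
    n = len(height_km)
    mh = sum(height_km) / n
    mt = sum(crest_time_s) / n
    slope = sum((h - mh) * (t - mt) for h, t in zip(height_km, crest_time_s)) \
        / sum((h - mh) ** 2 for h in height_km)
    return 1.0 / slope  # km/s

# Synthetic crest arrivals: the crest is seen at 300 km first, lower down later,
# i.e. the downward phase progression reported for the observed TIDs.
v = vertical_phase_speed([300.0, 275.0, 250.0], [0.0, 250.0, 500.0])
```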

  10. The Use of Constructive Modeling and Virtual Simulation in Large-Scale Team Training: A Military Case Study.

    ERIC Educational Resources Information Center

    Andrews, Dee H.; Dineen, Toni; Bell, Herbert H.

    1999-01-01

    Discusses the use of constructive modeling and virtual simulation in team training; describes a military application of constructive modeling, including technology issues and communication protocols; considers possible improvements; and discusses applications in team-learning environments other than military, including industry and education. (LRW)

  11. Visualizing vascular structures in virtual environments

    NASA Astrophysics Data System (ADS)

    Wischgoll, Thomas

    2013-01-01

    In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.

  12. Managing a Statewide Virtual Reference Service: How Q and A NJ Works.

    ERIC Educational Resources Information Center

    Bromberg, Peter

    2003-01-01

    Describes the live virtual reference service, Q and A NJ (Question and Answer New Jersey), strategies used to meet the challenges of day-to-day management, scaled growth and quality control. Describes how it began; how long it took; how to manage a large project (constant communication; training and practice; transcript analysis and privacy;…

  13. Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis.

    PubMed

    Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun

    2015-01-01

    In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it.

  14. Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis

    PubMed Central

    Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun

    2015-01-01

    In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it. PMID:26241496
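
    The core idea, using aggregate community sentiment as a leading indicator of virtual currency moves, can be reduced to a deliberately minimal sketch (hypothetical scores; the paper's actual pipeline collects and analyzes forum opinion data at scale):

```python
def predict_direction(sentiment_scores, threshold=0.0):
    # Aggregate per-post sentiment for a period; positive mood => predict the
    # virtual currency's value rises, otherwise that it falls.
    mean = sum(sentiment_scores) / len(sentiment_scores)
    return "up" if mean > threshold else "down"

# Hypothetical per-post sentiment in [-1, 1] extracted from a game community:
prediction = predict_direction([0.8, 0.2, -0.1])
```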

  15. Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.

    2015-02-01

    In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.

  16. OpenWebGlobe 2: Visualization of Complex 3D-Geodata in the (Mobile) Web Browser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D geodata on nearly every device.
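
    Serving a virtual globe at many zoom levels usually rests on a quadtree tiling of the map; a standard Web-Mercator tiling function is sketched below (OpenWebGlobe's own addressing scheme may differ):

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    # Standard Web-Mercator (slippy map) tile indices at the given zoom level:
    # the world is split into 2^zoom x 2^zoom tiles.
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi)
            / 2.0 * n)
    return x, y

tile = lonlat_to_tile(7.6, 47.5, 10)  # arbitrary example coordinates
```

A client only requests the tiles whose indices intersect its view frustum, which is what makes out-of-core rendering of a global dataset feasible.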

  17. Unpacking Frames of Reference to Inform the Design of Virtual World Learning in Higher Education

    ERIC Educational Resources Information Center

    Wimpenny, Katherine; Savin-Baden, Maggi; Mawer, Matt; Steils, Nicole; Tombs, Gemma

    2012-01-01

    In the changing context of globalised higher education, a series of pedagogical shifts have occurred, and with them, a number of interactive learning approaches have emerged. This article reports on findings taken from a large-scale study that explored the socio-political impact of virtual world learning on higher education in the UK, specifically…

  18. Egocentric spatial learning in schizophrenia investigated with functional magnetic resonance imaging

    PubMed Central

    Siemerkus, Jakob; Irle, Eva; Schmidt-Samoa, Carsten; Dechent, Peter; Weniger, Godehard

    2012-01-01

    Psychotic symptoms in schizophrenia are related to disturbed self-recognition and to disturbed experience of agency. Possibly, these impairments contribute to first-person large-scale egocentric learning deficits. Sixteen inpatients with schizophrenia and 16 matched healthy comparison subjects underwent functional magnetic resonance imaging (fMRI) while finding their way in a virtual maze. The virtual maze presented a first-person view, lacked any topographical landmarks and afforded egocentric navigation strategies. The participants with schizophrenia showed impaired performance in the virtual maze when compared with controls, and showed a similar but weaker pattern of activity changes during egocentric learning when compared with controls. Especially the activity of task-relevant brain regions (precuneus and posterior cingulate and retrosplenial cortex) differed from that of controls across all trials of the task. Activity increase within the right-sided precuneus was related to worse virtual maze performance and to stronger positive symptoms in participants with schizophrenia. We suggest that psychotic symptoms in schizophrenia are related to aberrant neural activity within the precuneus. Possibly, first-person large-scale egocentric navigation and learning designs may be a feasible tool for the assessment and treatment of cognitive deficits related to self-recognition in patients with schizophrenia. PMID:24179748

  19. GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.

    PubMed

    Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A

    2016-01-01

    In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet-lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computation, making it a highly time-consuming task. This process can be sped up by implementing parallelized algorithms on a Graphics Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems has been proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay datasets. The quality of results produced by our tool on a GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.
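
    As a rough illustration of the random-forest idea behind GPURFSCREEN (not its GPU implementation), the sketch below builds a tiny ensemble of decision stumps over toy descriptor vectors; leave-one-out resampling stands in for random bootstraps to keep the example deterministic:

```python
def train_stump(data):
    # Exhaustively pick the (feature, threshold, sign) split with fewest errors.
    best = None
    n_features = len(data[0][0])
    for f in range(n_features):
        for x, _ in data:
            t = x[f]
            for sign in (1, -1):
                errs = sum(1 for xi, yi in data
                           if (1 if sign * (xi[f] - t) > 0 else 0) != yi)
                if best is None or errs < best[0]:
                    best = (errs, f, t, sign)
    return best[1:]

def predict(stumps, x):
    # Majority vote over the ensemble.
    votes = sum(1 if s * (x[f] - t) > 0 else 0 for f, t, s in stumps)
    return 1 if votes * 2 > len(stumps) else 0

def train_forest(data):
    # Bagging stand-in: deterministic leave-one-out resamples; a real random
    # forest uses random bootstraps and random feature subsets per tree.
    return [train_stump(data[:i] + data[i + 1:]) for i in range(len(data))]

# Toy screening set: (descriptor vector, 1 = active, 0 = decoy), all invented.
data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.7, 0.3), 1),
        ((0.2, 0.8), 0), ((0.1, 0.9), 0), ((0.3, 0.7), 0)]
forest = train_forest(data)
hits = [x for x, y in data if predict(forest, x) == 1]
```

In practice one would use a GPU-accelerated or library random forest over real molecular fingerprints; the stump ensemble only illustrates bagging plus majority voting, the part that parallelizes naturally.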

  20. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drouhard, Margaret MEG G; Steed, Chad A; Hahn, Steven E

    In this paper, we propose strategies and objectives for immersive data visualization with applications in materials science using the Oculus Rift virtual reality headset. We provide background on currently available analysis tools for neutron scattering data and other large-scale materials science projects. In the context of the current challenges facing scientists, we discuss immersive virtual reality visualization as a potentially powerful solution. We introduce a prototype immersive visualization system, developed in conjunction with materials scientists at the Spallation Neutron Source, which we have used to explore large crystal structures and neutron scattering data. Finally, we offer our perspective on the greatest challenges that must be addressed to build effective and intuitive virtual reality analysis tools that will be useful for scientists in a wide range of fields.

  2. Data Integration: Charting a Path Forward to 2035

    DTIC Science & Technology

    2011-02-14

    New York, NY: Gotham Books, 2004. Seligman, Len. Mitre Corporation, e-mail interview, 6 Dec 2010. Singer, P.W. Wired for War: The Robotics...articles.aspx (accessed 4 Dec 2010). Ultra-Large-Scale Systems: The Software Challenge of the Future. Study lead Linda Northrup. Pittsburgh, PA: Carnegie Mellon Software Engineering Institute...Virtualization?

  3. Virtual Network Configuration Management System for Data Center Operations and Management

    NASA Astrophysics Data System (ADS)

    Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken

    Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload for operators, who have to manage the structure of virtual networks in data centers. A virtual-network management system that automates the integration of virtual-network configurations is presented. The proposed system collects the configurations from server virtualization platforms and VLAN-supported switches, and integrates these configurations according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system helps operators by reducing the time to acquire configurations from devices and to correct inconsistencies in operators' configuration management databases by about 40 percent. Further, they also show that the proposed system has excellent scalability: the system takes less than 20 minutes to acquire the virtual-network configurations from a large-scale network that includes 300 virtual machines. These results imply that the proposed system is effective in improving the configuration management process for virtual networks in data centers.
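
    The abstract does not reproduce the XML-based information model itself; a purely hypothetical flavor of such a model, built and parsed with the Python standard library, might look like this (element and attribute names are invented):

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: one <virtualNetwork> carrying a VLAN id plus the VMs
# attached to it. (The paper's actual information model is not shown here.)
net = ET.Element("virtualNetwork", {"name": "tenant-a", "vlan": "100"})
for vm, port in [("vm-01", "vnet0"), ("vm-02", "vnet1")]:
    ET.SubElement(net, "vm", {"id": vm, "port": port})

xml_text = ET.tostring(net, encoding="unicode")

# Round-tripping mimics collecting configurations from devices and
# re-integrating them into a single configuration database.
parsed = ET.fromstring(xml_text)
vm_ids = [e.get("id") for e in parsed.findall("vm")]
```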

  4. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  5. Virtual interface environment workstations

    NASA Technical Reports Server (NTRS)

    Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.

    1988-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.

  6. Parameter studies on the energy balance closure problem using large-eddy simulation

    NASA Astrophysics Data System (ADS)

    De Roo, Frederik; Banerjee, Tirtha; Mauder, Matthias

    2017-04-01

    The imbalance of the surface energy budget in eddy-covariance measurements is still an unsolved problem. A possible cause is the presence of land-surface heterogeneity. Heterogeneities at the boundary-layer scale or larger are most effective in influencing boundary-layer turbulence, and large-eddy simulations have shown that secondary circulations within the boundary layer can affect the surface energy budget. However, the precise influence of surface characteristics on the energy imbalance and its partitioning is still unknown. To investigate the influence of surface variables on all components of the flux budget under convective conditions, we set up a systematic parameter study by means of large-eddy simulation. For the study we use a virtual control-volume approach, and we focus on idealized heterogeneity by considering spatially variable surface fluxes. The surface fluxes vary locally in intensity, and these patches have different length scales. The main focus lies on heterogeneities at the kilometer scale and one decade smaller. For each simulation, virtual measurement towers are positioned at functionally different positions. We discriminate between locally homogeneous towers, located within land-use patches, and the more heterogeneous towers, and find, among other things, that the flux divergence and the advection are strongly linearly related within each class. Furthermore, we seek correlators for the energy balance ratio and the energy residual in the simulations. Besides the expected correlation with measurable atmospheric quantities such as the friction velocity, boundary-layer depth, and temperature and moisture gradients, we have also found an unexpected correlation with the difference between sonic temperature and surface temperature. In additional simulations with a large number of virtual towers, we investigate higher-order correlations, which can be linked to secondary circulations.
    In a companion presentation (EGU2017-2130) these correlations are investigated and confirmed with the help of micrometeorological measurements from the TERENO sites, where the effects of landscape-scale surface heterogeneities are deemed to be important.
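
    The closure metrics discussed above are simple ratios of measured flux components; a minimal sketch (hypothetical flux values in W/m^2; symbols: net radiation Rn, ground heat flux G, sensible heat H, latent heat LE):

```python
def energy_balance_ratio(H, LE, Rn, G):
    # EBR = (H + LE) / (Rn - G); 1.0 means perfect closure of the budget.
    return (H + LE) / (Rn - G)

def residual(H, LE, Rn, G):
    # Available energy not accounted for by the turbulent fluxes (W/m^2).
    return Rn - G - H - LE

# Invented midday values for illustration only:
ebr = energy_balance_ratio(H=150.0, LE=250.0, Rn=520.0, G=20.0)
res = residual(H=150.0, LE=250.0, Rn=520.0, G=20.0)
```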

  7. How many flux towers are enough? How tall is a tower tall enough? How elaborate a scaling is scaling enough?

    NASA Astrophysics Data System (ADS)

    Xu, K.; Sühring, M.; Metzger, S.; Desai, A. R.

    2017-12-01

    Most eddy covariance (EC) flux towers suffer from footprint bias. This footprint not only varies rapidly in time, but is smaller than the resolution of most earth system models, leading to a systemic scale mismatch in model-data comparison. Previous studies have suggested this problem can be mitigated (1) with multiple towers, (2) by building a taller tower with a large flux footprint, and (3) by applying advanced scaling methods. Here we ask: (1) How many flux towers are needed to sufficiently sample the flux mean and variation across an Earth system model domain? (2) How tall is tall enough for a single tower to represent the Earth system model domain? (3) Can we reduce the requirements derived from the first two questions with advanced scaling methods? We test these questions with output from large eddy simulations (LES) and application of the environmental response function (ERF) upscaling method. PALM LES (Maronga et al. 2015) was set up over a domain of 12 km x 16 km x 1.8 km at 7 m spatial resolution and produced 5 hours of output at a time step of 0.3 s. The surface Bowen ratio alternated between 0.2 and 1 among a series of 3 km wide stripe-like surface patches, with horizontal wind perpendicular to the surface heterogeneity. A total of 384 virtual towers were arranged on a regular grid across the LES domain, recording EC observations at 18 vertical levels. We use increasing height of a virtual flux tower and increasing numbers of virtual flux towers in the domain to compute energy fluxes. Initial results show a large (>25) number of towers is needed to sufficiently sample the mean domain energy flux. When the ERF upscaling method was applied to the virtual towers in the LES environment, we were able to map fluxes over the domain to within 20% precision with a significantly smaller number of towers. This was achieved by relating sub-hourly turbulent fluxes to meteorological forcings and surface properties.
    These results demonstrate how advanced scaling techniques can decrease the number of towers, and thus the experimental expense, required for domain scaling over heterogeneous surfaces.
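
    The "how many towers" question is at heart a sampling problem: under an independence assumption (which footprint overlap and surface structure violate in practice), the standard error of the domain mean scales as sigma/sqrt(n), giving a crude lower bound on the tower count:

```python
import math

def towers_needed(sigma, tolerance):
    # Smallest n with sigma / sqrt(n) <= tolerance, i.e. n >= (sigma/tolerance)^2.
    return math.ceil((sigma / tolerance) ** 2)

# E.g. a hypothetical flux standard deviation of 50 W/m^2 across the domain
# and a target standard error of 10 W/m^2 on the domain mean:
n = towers_needed(50.0, 10.0)
```

Spatially correlated footprints reduce the effective sample size, so a real domain typically needs more towers than this independent-sampling bound suggests.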

  8. Large-scale P2P network based distributed virtual geographic environment (DVGE)

    NASA Astrophysics Data System (ADS)

    Tan, Xicheng; Yu, Liang; Bian, Fuling

    2007-06-01

    Virtual Geographic Environment (VGE) systems have attracted broad attention as software information systems that help us understand and analyze the real geographic environment, and they have expanded into application service systems in distributed environments: distributed virtual geographic environment (DVGE) systems. However, constrained by the massive data volumes of VGE, limited network bandwidth, large numbers of concurrent requests, and economic factors, DVGE still faces challenges and problems that prevent it from providing the public with high-quality service under the current network model. The rapid development of peer-to-peer (P2P) network technology offers new solutions to these challenges: P2P networks can effectively publish and search network resources and thereby share information efficiently. Accordingly, this paper brings forward a research subject on large-scale P2P extension of DVGE, with an in-depth study of the network framework, routing mechanism, and DVGE data management on a P2P network.
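
    A typical routing primitive in structured P2P overlays of the kind such a DVGE could build on is mapping a data key to the responsible node on a hashed identifier ring; a Chord-style sketch (hypothetical, the paper's actual routing mechanism may differ):

```python
import bisect
import hashlib

def node_id(name, bits=16):
    # Hash a peer or data name onto a 2^bits identifier ring.
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

def successor(node_ids, key):
    # The node responsible for key: the first node id >= key, wrapping around.
    i = bisect.bisect_left(node_ids, key)
    return node_ids[i % len(node_ids)]

# Hypothetical peers and a geodata tile key:
nodes = sorted(node_id(f"peer-{i}") for i in range(8))
owner = successor(nodes, node_id("tile/12/2137/1423"))
```

In a full Chord-like overlay each peer keeps a finger table so the successor is found in O(log n) hops instead of scanning the whole ring.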

  9. The Design and Evaluation of a Large-Scale Real-Walking Locomotion Interface

    PubMed Central

    Peck, Tabitha C.; Fuchs, Henry; Whitton, Mary C.

    2014-01-01

    Redirected Free Exploration with Distractors (RFED) is a large-scale real-walking locomotion interface developed to enable people to walk freely in virtual environments that are larger than the tracked space in their facility. This paper describes the RFED system in detail and reports on a user study that evaluated RFED by comparing it to walking-in-place and joystick interfaces. The RFED system is composed of two major components, redirection and distractors. This paper discusses design challenges, implementation details, and lessons learned during the development of two working RFED systems. The evaluation study examined the effect of the locomotion interface on users’ cognitive performance on navigation and wayfinding measures. The results suggest that participants using RFED were significantly better at navigating and wayfinding through virtual mazes than participants using walking-in-place and joystick interfaces. Participants traveled shorter distances, made fewer wrong turns, pointed to hidden targets more accurately and more quickly, and were able to place and label targets on maps more accurately, and more accurately estimate the virtual environment size. PMID:22184262

  10. Social Gaming and Learning Applications: A Driving Force for the Future of Virtual and Augmented Reality?

    NASA Astrophysics Data System (ADS)

    Dörner, Ralf; Lok, Benjamin; Broll, Wolfgang

    Backed by a large consumer market, entertainment and education applications have spurred developments in the fields of real-time rendering and interactive computer graphics. Relying on Computer Graphics methodologies, Virtual Reality and Augmented Reality have benefited indirectly from this; however, there is no large-scale demand for VR and AR in gaming and learning. What are the shortcomings of current VR/AR technology that prevent a widespread use in these application areas? What advances in VR/AR will be necessary? And what might future “VR-enhanced” gaming and learning look like? What role can and will Virtual Humans play? Concerning these questions, this article analyzes the current situation and provides an outlook on future developments. The focus is on social gaming and learning.

  11. Building to Scale: An Analysis of Web-Based Services in CIC (Big Ten) Libraries.

    ERIC Educational Resources Information Center

    Dewey, Barbara I.

    Advancing library services in large universities requires creative approaches for "building to scale." This is the case for CIC, Committee on Institutional Cooperation (Big Ten), libraries whose home institutions serve thousands of students, faculty, staff, and others. Developing virtual Web-based services is an increasingly viable…

  12. Semihard processes with BLM renormalization scale setting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caporale, Francesco; Ivanov, Dmitry Yu.; Murdaca, Beatrice

    We apply the BLM scale setting procedure directly to the amplitudes (cross sections) of several semihard processes. It is shown that, due to the presence of β₀ terms in the NLA results for the impact factors, the optimal renormalization scale obtained is not universal, but depends both on the energy and on the process in question. We illustrate this general conclusion by considering the following semihard processes: (i) inclusive production of two forward high-p_T jets separated by a large interval in rapidity (Mueller-Navelet jets); (ii) the high-energy behavior of the total cross section for highly virtual photons; (iii) the forward amplitude of the production of two light vector mesons in the collision of two virtual photons.

  13. Modulation of Small-scale Turbulence Structure by Large-scale Motions in the Absence of Direct Energy Transfer.

    NASA Astrophysics Data System (ADS)

    Brasseur, James G.; Juneja, Anurag

    1996-11-01

    Previous DNS studies indicate that small-scale structure can be directly altered through ``distant'' dynamical interactions by energetic forcing of the large scales. To remove the possibility of stimulating energy transfer between the large- and small-scale motions in these long-range interactions, we here perturb the large-scale structure without altering its energy content, by suddenly altering only the phases of large-scale Fourier modes. Scale-dependent changes in turbulence structure appear as a nonzero difference field between two simulations evolved from identical initial conditions of isotropic decaying turbulence, one perturbed and one unperturbed. We find that the large-scale phase perturbations leave the evolution of the energy spectrum virtually unchanged relative to the unperturbed turbulence. The difference field, on the other hand, is strongly affected by the perturbation. Most importantly, the time scale τ characterizing the change in turbulence structure at spatial scale r, shortly after initiating a change in large-scale structure, decreases with decreasing turbulence scale r. Thus, structural information is transferred directly from the large- to the smallest-scale motions in the absence of direct energy transfer, a long-range effect which cannot be explained by a linear mechanism such as rapid distortion theory. * Supported by ARO grant DAAL03-92-G-0117
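
    The phase-only perturbation described above can be illustrated with a minimal one-dimensional sketch (an illustrative analogue, not the DNS itself): rotating the phases of the large-scale Fourier modes changes the field, yet leaves every modal amplitude, and hence the energy spectrum, untouched.

```python
import cmath
import math
import random

def perturb_phases(modes, k_max, rng):
    """Rotate the phase of each Fourier mode with wavenumber <= k_max,
    leaving modal amplitudes (and thus the energy spectrum) unchanged."""
    out = []
    for k, c in enumerate(modes):
        if 0 < k <= k_max:  # perturb only the large scales (small k)
            theta = rng.uniform(0, 2 * math.pi)
            out.append(c * cmath.exp(1j * theta))
        else:
            out.append(c)
    return out

def spectrum(modes):
    """Energy in each mode: E_k = |c_k|^2."""
    return [abs(c) ** 2 for c in modes]

rng = random.Random(0)
modes = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(16)]
perturbed = perturb_phases(modes, k_max=3, rng=rng)

# The fields differ, but the spectra agree mode by mode.
assert any(abs(a - b) > 1e-12 for a, b in zip(modes, perturbed))
assert all(abs(ea - eb) < 1e-12
           for ea, eb in zip(spectrum(modes), spectrum(perturbed)))
```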

  14. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
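
    A minimal sketch of point (1), the extra level of parallelization, is given below. This is a toy model, not AutoDock Vina itself: dock_one stands in for a real per-ligand docking invocation, the ligand names are made up, and the deterministic per-ligand seed illustrates point (2), recording seeds so that a run can be reproduced on another system.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def dock_one(ligand_id, exhaustiveness=8):
    """Stand-in for one per-ligand docking run (hypothetical, not Vina).
    The seed is derived deterministically from the ligand name and recorded
    in the result so the run can be reproduced elsewhere."""
    seed = int(hashlib.sha256(ligand_id.encode()).hexdigest()[:8], 16)
    score = -(seed % 1000) / 100.0  # fake binding score, kcal/mol
    return {"ligand": ligand_id, "seed": seed,
            "exhaustiveness": exhaustiveness, "score": score}

ligands = [f"LIG{i:08d}" for i in range(32)]  # made-up library

# Extra level of parallelization: many independent one-ligand jobs at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(dock_one, ligands))

best = min(results, key=lambda r: r["score"])
# Recorded seeds make the screen reproducible: re-docking is identical.
assert results[0] == dock_one(ligands[0])
```

In a real screen, each dock_one call would launch the docking binary on one ligand file, and the thread pool would be replaced by processes or an HPC job queue, since docking is CPU-bound.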

  15. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  16. Assessing patient acceptance of virtual clinics for diabetic retinopathy: a large scale postal survey.

    PubMed

    Ahnood, Dana; Souriti, Ahmad; Williams, Gwyn Samuel

    2018-06-01

    To explore the views of patients with diabetic retinopathy and maculopathy on their acceptance of virtual clinic review in place of face-to-face clinic appointments, a postal survey was mailed to all 813 patients under the care of the diabetic eye clinic at Singleton Hospital, with 7 questions, explanatory information, and a stamped, addressed envelope for returning completed questionnaires. Four hundred and ninety-eight questionnaires were returned, indicating that 86.1% of respondents were supportive of the idea of virtual clinics, although only 56.9% were prepared for every visit to be virtual; 6.6% were not happy to attend any virtual clinic. This is by far the largest survey of patients' attitudes regarding attending virtual clinics, and it confirms that the vast majority are supportive of this mode of health care delivery. Copyright © 2018 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  17. A Phenomenology of Learning Large: The Tutorial Sphere of xMOOC Video Lectures

    ERIC Educational Resources Information Center

    Adams, Catherine; Yin, Yin; Vargas Madriz, Luis Francisco; Mullen, C. Scott

    2014-01-01

    The current discourse surrounding Massive Open Online Courses (MOOCs) is powerful. Despite their rapid and widespread deployment, research has yet to confirm or refute some of the bold claims rationalizing the popularity and efficacy of these large-scale virtual learning environments. Also, MOOCs' reputed disruptive, game-changing potential…

  18. Impact of spatial variability and sampling design on model performance

    NASA Astrophysics Data System (ADS)

    Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are costly in labour time or money, a choice has to be made between a high sampling resolution at small scales with low spatial coverage of the study area, and a lower sampling resolution at small scales, which results in local data uncertainties but gives better spatial coverage of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence in a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. To this end, we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design, to estimate the nugget, range, and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: the models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. For samplings with more than one point per field, we averaged the measured abundances within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73); with 50 sampling points per field, the explained deviance and correlation coefficient reached 0.91 and 0.97, respectively. The relationship between the number of samplings and the performance criteria can be described by a saturation curve, and beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance, and the implications for sampling design, the assessment of model results, and ecological inferences.
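
    The saturation behaviour reported above, large gains from the first few samples per field and little improvement beyond about five, is what the sigma/sqrt(n) law for the standard error of a field mean predicts. A minimal sketch with illustrative numbers (not the study's data):

```python
import math

def standard_error(sigma, n):
    """Standard error of a field mean estimated from n independent samples."""
    return sigma / math.sqrt(n)

sigma = 1.0  # within-field standard deviation (illustrative value)
errors = {n: standard_error(sigma, n) for n in (1, 2, 5, 10, 50)}

# Doubling from 1 to 2 samples removes ~0.29 of the error; going from
# 5 to 10 samples removes only ~0.13: the saturation curve in the text.
gain_1_to_2 = errors[1] - errors[2]
gain_5_to_10 = errors[5] - errors[10]
assert gain_1_to_2 > 2 * gain_5_to_10
```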

  19. Light domain walls, massive neutrinos and the large scale structure of the Universe

    NASA Technical Reports Server (NTRS)

    Massarotti, Alessandro

    1991-01-01

    Domain walls generated through a cosmological phase transition are considered, which interact nongravitationally with light neutrinos. At a redshift z greater than or equal to 10^4, the network grows rapidly and is virtually decoupled from the matter. As the friction with the matter becomes dominant, a comoving network scale close to the comoving horizon scale at z of approximately 10^4 gets frozen. During the later phases, the walls produce matter wakes of a thickness d of approximately 10 h^-1 Mpc that may become seeds for the formation of the large-scale structure observed in the Universe.

  20. Virtual interface environment

    NASA Technical Reports Server (NTRS)

    Fisher, Scott S.

    1986-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.

  1. A large scale virtual screen of DprE1.

    PubMed

    Wilsey, Claire; Gurka, Jessica; Toth, David; Franco, Jimmy

    2013-12-01

    Tuberculosis continues to plague the world, with the World Health Organization estimating that about one third of the world's population is infected. Due to the emergence of multidrug-resistant (MDR) and extensively drug-resistant (XDR) strains of TB, the need for novel therapeutics has become increasingly urgent. Herein we report the results of a virtual screen of 4.1 million compounds against a promising drug target, DprE1. The virtual compounds were obtained from the ZINC docking database and screened using the molecular docking program AutoDock Vina. The computational hits have led to the identification of several promising lead compounds. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Interactive exploration of coastal restoration modeling in virtual environments

    NASA Astrophysics Data System (ADS)

    Gerndt, Andreas; Miller, Robert; Su, Simon; Meselhe, Ehab; Cruz-Neira, Carolina

    2009-02-01

    Over the last decades, Louisiana has lost a substantial part of its coastal region to the Gulf of Mexico. The goal of the project described in this paper is to investigate the complex ecological and geophysical system, not only to find solutions to reverse this development, but also to protect the southern landscape of Louisiana from the disastrous impacts of natural hazards such as hurricanes. This paper focuses on the interactive data handling of the Chenier Plain, which is only one scenario of the overall project. The challenge addressed is the interactive exploration of large-scale, time-dependent 2D simulation results and of the high-resolution terrain data available for this region. Besides data preparation, efficient visualization approaches optimized for use in virtual environments are presented. These are embedded in a complex framework for scientific visualization of time-dependent large-scale datasets. To provide a straightforward interface for rapid application development, a software layer called VRFlowVis has been developed. Several architectural aspects of encapsulating complex virtual reality concerns, such as multi-pipe versus cluster-based rendering, are discussed. Moreover, the distributed post-processing architecture is investigated to prove its efficiency for the geophysical domain. Runtime measurements conclude this paper.

  3. THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS

    PubMed Central

    Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel

    2010-01-01

    Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618

  4. The Effect of Realistic Appearance of Virtual Characters in Immersive Environments - Does the Character's Personality Play a Role?

    PubMed

    Zibrek, Katja; Kokkinara, Elena; Mcdonnell, Rachel

    2018-04-01

    Virtual characters that appear almost photo-realistic have been shown to induce negative responses from viewers in traditional media, such as film and video games. This effect, described as the uncanny valley, is the reason why realism is often avoided when the aim is to create an appealing virtual character. In Virtual Reality, there have been few attempts to investigate this phenomenon and the implications of rendering virtual characters with high levels of realism on user enjoyment. In this paper, we conducted a large-scale experiment on over one thousand members of the public in order to gather information on how virtual characters are perceived in interactive virtual reality games. We were particularly interested in whether different render styles (realistic, cartoon, etc.) would directly influence appeal, or if a character's personality was the most important indicator of appeal. We used a number of perceptual metrics such as subjective ratings, proximity, and attribution bias in order to test our hypothesis. Our main result shows that affinity towards virtual characters is a complex interaction between the character's appearance and personality, and that realism is in fact a positive choice for virtual characters in virtual reality.

  5. Virtual Exploration of Earth's Evolution

    NASA Astrophysics Data System (ADS)

    Anbar, A. D.; Bruce, G.; Semken, S. C.; Summons, R. E.; Buxner, S.; Horodyskyj, L.; Kotrc, B.; Swann, J.; Klug Boonstra, S. L.; Oliver, C.

    2014-12-01

    Traditional introductory STEM courses often reinforce misconceptions because the large scale of many classes forces a structured, lecture-centric model of teaching that emphasizes delivery of facts rather than exploration, inquiry, and scientific reasoning. This problem is especially acute in teaching about the co-evolution of Earth and life, where classroom learning and textbook teaching are far removed from the immersive and affective aspects of field-based science, and where the challenges of taking large numbers of students into the field make it difficult to expose them to the complex context of the geologic record. We are exploring the potential of digital technologies and online delivery to address this challenge, using immersive and engaging virtual environments that are more like games than lectures, grounded in active learning, and deliverable at scale via the internet. The goal is to invert the traditional lecture-centric paradigm by placing lectures at the periphery and inquiry-driven, integrative virtual investigations at the center, and to do so at scale. To this end, we are applying a technology platform we devised, supported by NASA and the NSF, that integrates a variety of digital media in a format that we call an immersive virtual field trip (iVFT). In iVFTs, students engage directly with virtual representations of real field sites, with which they interact non-linearly at a variety of scales via game-like exploration while guided by an adaptive tutoring system. This platform has already been used to develop pilot iVFTs useful in teaching anthropology, archeology, ecology, and geoscience. With support from the Howard Hughes Medical Institute, we are now developing and evaluating a coherent suite of ~12 iVFTs that span the sweep of life's history on Earth, from the 3.8 Ga metasediments of West Greenland to ancient hominid sites in East Africa. These iVFTs will teach fundamental principles of geology and practices of scientific inquiry, and will expose students to the evidence from which evolutionary and paleoenvironmental inferences are derived. In addition to making these iVFTs available to the geoscience community for education and public outreach, we will evaluate the comparative effectiveness of iVFTs and traditional lecture and lab approaches in achieving geoscience learning objectives.

  6. Virtual workstation - A multimodal, stereoscopic display environment

    NASA Astrophysics Data System (ADS)

    Fisher, S. S.; McGreevy, M.; Humphries, J.; Robinett, W.

    1987-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use in a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.

  7. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles

    PubMed Central

    2016-01-01

    Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intense method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
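
    The approximate selection strategies can be sketched with a toy greedy variant (illustrative scores and a simple rank-based AUC helper, not the paper's implementation): each conformation assigns a docking score to every molecule, an ensemble scores a molecule by its best (minimum) member score, and conformations are added while the ensemble AUC improves.

```python
def auc(scores, labels):
    """Rank-based AUC; a lower docking score means predicted more active."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p < n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ensemble_scores(members, table, n_mol):
    """Ensemble score per molecule: best (minimum) over member conformations."""
    return [min(table[c][i] for c in members) for i in range(n_mol)]

# Toy docking scores: rows are conformations, columns are molecules.
table = {
    "confA": [-9, -8, -2, -3, -2, -1],   # recognises actives 0 and 1
    "confB": [-2, -1, -9, -2, -3, -1],   # recognises active 2
    "confC": [-1, -1, -1, -8, -9, -7],   # rewards only the decoys
}
labels = [1, 1, 1, 0, 0, 0]              # 1 = active, 0 = decoy

ensemble, best_auc = [], 0.0
while True:
    candidates = [c for c in table if c not in ensemble]
    trial = {c: auc(ensemble_scores(ensemble + [c], table, len(labels)), labels)
             for c in candidates}
    if not trial or max(trial.values()) <= best_auc:
        break  # no remaining conformation improves the objective
    best_c = max(trial, key=trial.get)
    ensemble.append(best_c)
    best_auc = trial[best_c]

# Greedy selection keeps the two complementary conformations and rejects
# the decoy-rewarding one, reaching a perfect AUC on this toy set.
assert ensemble == ["confA", "confB"] and best_auc == 1.0
```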

  8. Estimation of detection thresholds for redirected walking techniques.

    PubMed

    Steinicke, Frank; Bruder, Gerd; Jerald, Jason; Frenz, Harald; Lappe, Markus

    2010-01-01

    In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimated the path curvature while walking a curved path in the real world as the visual display showed a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe they are walking straight.
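
    Numerically, the reported thresholds bound how far real and virtual motion may diverge. The helper below is an assumption for illustration, not the authors' controller; it treats the threshold ranges as ratios of physical to virtual motion derived from the abstract's percentages.

```python
THRESHOLDS = {
    # physical/virtual ratios reported as undetectable (from the abstract):
    # rotations: up to 49% more or 20% less physical turning than virtual;
    # distances: physical walks 14% shorter to 26% longer than virtual travel.
    "rotation": (0.80, 1.49),
    "translation": (0.86, 1.26),
}

def redirect(kind, virtual_amount, ratio):
    """Physical motion to apply for a given virtual motion (sketch only).
    ratio is physical/virtual; values inside the threshold range are,
    per the study, below users' detection thresholds."""
    lo, hi = THRESHOLDS[kind]
    if not lo <= ratio <= hi:
        raise ValueError("ratio would exceed the reported detection thresholds")
    return virtual_amount * ratio

# A 90 degree virtual turn may be compressed into roughly 72 physical
# degrees, letting a user cover more virtual rotation in a small workspace.
physical_turn = redirect("rotation", 90.0, 0.80)
```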

  9. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
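
    The phase-shifting capture step can be illustrated at a single pixel with the standard four-step formula (a generic sketch, not the authors' synthetic-aperture pipeline): with a real reference amplitude R and phase shifts of 0, π/2, π, and 3π/2, the object field O follows from the four intensities as O = [(I0 − I2) + i(I1 − I3)] / 4R.

```python
import cmath
import math

def intensities(obj, ref):
    """Four interferogram intensities for reference phase shifts k*pi/2."""
    return [abs(obj + ref * cmath.exp(1j * k * math.pi / 2)) ** 2
            for k in range(4)]

def recover(I, ref):
    """Four-step phase-shifting reconstruction of the complex object field."""
    I0, I1, I2, I3 = I
    return complex(I0 - I2, I1 - I3) / (4 * ref)

obj = 0.3 - 0.7j          # unknown complex object field at one point
ref = 2.0                 # real reference wave amplitude
recovered = recover(intensities(obj, ref), ref)
assert abs(recovered - obj) < 1e-9  # intensity-only data yields the phase
```

A full-colour hologram repeats this recovery independently at each of the three capture wavelengths.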

  10. Mapping the Heavens: Probing Cosmology with Large Surveys

    ScienceCinema

    Frieman, Joshua [Fermilab]

    2017-12-09

    This talk will provide an overview of recent and on-going sky surveys, focusing on their implications for cosmology. I will place particular emphasis on the Sloan Digital Sky Survey, the most ambitious mapping of the Universe yet undertaken, showing a virtual fly-through of the survey that reveals the large-scale structure of the galaxy distribution. Recent measurements of this large-scale structure, in combination with observations of the cosmic microwave background, have provided independent evidence for a Universe dominated by dark matter and dark energy as well as insights into how galaxies and larger-scale structures formed. Future planned surveys will build on these foundations to probe the history of the cosmic expansion, and thereby the dark energy, with greater precision.

  11. Exploring Learning through Audience Interaction in Virtual Reality Dome Theaters

    NASA Astrophysics Data System (ADS)

    Apostolellis, Panagiotis; Daradoumis, Thanasis

    Informal learning in public spaces such as museums, science centers, and planetariums has become increasingly popular in recent years. Recent advancements in large-scale displays have allowed contemporary technology-enhanced museums to be equipped with digital domes, some with real-time capabilities such as Virtual Reality systems. Through an extensive literature review, we have come to the conclusion that little to no research has been carried out on the learning outcomes that the combination of VR and audience interaction can provide in the immersive environments of dome theaters. Thus, we propose that audience collaboration in immersive virtual reality environments is a promising approach to supporting effective learning in groups of school-aged children.

  12. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    PubMed Central

    Kraemer, David J.M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2016-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In two experiments, participants watched videos of routes through four virtual cities and were subsequently tested on their memory for observed landmarks and on their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation-relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations. PMID:27668486

  13. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
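
    The Monte Carlo core mentioned above follows the standard Metropolis rule: a trial pose perturbation is always accepted when it lowers the energy, and otherwise accepted with probability exp(−ΔE/T). A generic one-dimensional sketch (the quadratic energy is a stand-in, not GeauxDock's scoring function):

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng):
    """Metropolis criterion: always accept downhill moves; accept uphill
    moves with probability exp(-delta_e / temperature)."""
    if delta_e <= 0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)

def mc_search(energy, start, steps, temperature, rng):
    """Toy 1-D Monte Carlo search for a low-energy 'pose' coordinate."""
    x, e = start, energy(start)
    best_x, best_e = x, e
    for _ in range(steps):
        trial = x + rng.uniform(-0.5, 0.5)   # random pose perturbation
        e_trial = energy(trial)
        if metropolis_accept(e_trial - e, temperature, rng):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

rng = random.Random(42)
energy = lambda x: (x - 2.0) ** 2   # stand-in energy, minimum at x = 2
best_x, best_e = mc_search(energy, start=0.0, steps=2000,
                           temperature=0.1, rng=rng)
assert best_e < energy(0.0)  # the sampler found a lower-energy pose
```

In a docking code, the perturbation would move rigid-body and torsional degrees of freedom, and each step's energy evaluation is the hot loop that GeauxDock offloads to CPUs, Xeon Phi, or GPUs.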

  14. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    PubMed

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.
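    GeauxDock's search is built upon the Monte Carlo algorithm. As a minimal illustration of the idea (not GeauxDock's actual kernel; the energy function, step size, and temperature below are toy placeholders), a Metropolis-style sampler perturbs a pose, rescores it, and keeps the candidate probabilistically:

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng):
    """Metropolis criterion: always accept downhill moves; accept uphill
    moves with probability exp(-dE/T) (T in the same units as the energy)."""
    return delta_e <= 0.0 or rng.random() < math.exp(-delta_e / temperature)

def mc_dock(energy, perturb, pose, steps=2000, temperature=1.0, seed=7):
    """Generic Monte Carlo search: perturb the pose, rescore it, keep the
    candidate if the Metropolis test passes; track the best pose seen."""
    rng = random.Random(seed)
    e = energy(pose)
    best_pose, best_e = pose, e
    for _ in range(steps):
        cand = perturb(pose, rng)
        e_cand = energy(cand)
        if metropolis_accept(e_cand - e, temperature, rng):
            pose, e = cand, e_cand
            if e < best_e:
                best_pose, best_e = cand, e_cand
    return best_pose, best_e

# Toy 1-D stand-in for a docking "energy" with a minimum at x = 3.
pose, e = mc_dock(lambda x: (x - 3.0) ** 2,
                  lambda x, rng: x + rng.uniform(-0.5, 0.5),
                  pose=0.0)
```

    A real docking engine scores rigid-body and torsional degrees of freedom with the physics- and knowledge-based terms described in the abstract; the control flow, however, is the same accept/reject loop.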

  15. Demonstration of Three Gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper mainly focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, virtual studio, and virtual panoramic roaming, etc., is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling, and integration. Firstly, abundant archaeological information is classified according to its historical and geographical information. Secondly, a 3D-model library is built up with digital image processing and 3D modeling technology. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.

  16. Virtual reality adaptive stimulation of limbic networks in the mental readiness training.

    PubMed

    Cosić, Kresimir; Popović, Sinisa; Kostović, Ivica; Judas, Milos

    2010-01-01

    A significant proportion of severe psychological problems in recent large-scale peacekeeping operations underscores the importance of effective methods for strengthening the stress resilience. Virtual reality (VR) adaptive stimulation, based on the estimation of the participant's emotional state from physiological signals, may enhance the mental readiness training (MRT). Understanding neurobiological mechanisms by which the MRT based on VR adaptive stimulation can affect the resilience to stress is important for practical application in the stress resilience management. After the delivery of a traumatic audio-visual stimulus in the VR, the cascade of events occurs in the brain, which evokes various physiological manifestations. In addition to the "limbic" emotional and visceral brain circuitry, other large-scale sensory, cognitive, and memory brain networks participate with less known impact in this physiological response. The MRT based on VR adaptive stimulation may strengthen the stress resilience through targeted brain-body interactions. Integrated interdisciplinary efforts, which would integrate the brain imaging and the proposed approach, may contribute to clarifying the neurobiological foundation of the resilience to stress.

  17. Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

    PubMed Central

    Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil

    2011-01-01

    Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934

  18. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst cloud providers in demonstrating the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues and compare cost and operational efficiency to our dedicated resources. Finally, we will consider the changes in the working model of HEP computing in a domain where large-scale resources can be scheduled at peak times.

  19. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst cloud providers in demonstrating the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues and compare cost and operational efficiency to our dedicated resources. Finally, we will consider the changes in the working model of HEP computing in a domain where large-scale resources can be scheduled at peak times.

  20. Circadian Rhythms in Socializing Propensity.

    PubMed

    Zhang, Cheng; Phang, Chee Wei; Zeng, Xiaohua; Wang, Ximeng; Xu, Yunjie; Huang, Yun; Contractor, Noshir

    2015-01-01

    Using large-scale interaction data from a virtual world, we show that people's propensity to socialize (forming new social connections) varies by hour of the day. We arrive at our results by longitudinally tracking people's friend-adding activities in a virtual world. Specifically, we find that people are most likely to socialize during the evening, at approximately 8 p.m. and 12 a.m., and are least likely to do so in the morning, at approximately 8 a.m. Such patterns prevail on weekdays and weekends and are robust to variations in individual characteristics and geographical conditions.
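    The hour-of-day analysis described above can be sketched as a simple binning of timestamped friend-adding events into 24 hourly buckets (the sample data here is invented for illustration; the study used large-scale virtual-world logs):

```python
from collections import Counter
from datetime import datetime

def hourly_propensity(timestamps):
    """Bucket event timestamps into 24 hourly bins and normalize to a
    propensity profile (fraction of events in each local hour)."""
    counts = Counter(ts.hour for ts in timestamps)
    total = sum(counts.values())
    return {h: counts.get(h, 0) / total for h in range(24)}

# Toy sample: more friend-adding events in the evening than the morning.
events = [datetime(2015, 6, 1, h) for h in [8, 12, 20, 20, 20, 23, 23]]
profile = hourly_propensity(events)
peak_hour = max(profile, key=profile.get)
print(peak_hour)  # → 20
```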

  1. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    NASA Astrophysics Data System (ADS)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits defining the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined so that they are simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
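    The virtual-component idea can be illustrated by composing a chain of data-flow components into one unit and running built-in test cases against the assembled whole. This is a hedged sketch of the concept only, not the authors' implementation; the components and test cases are hypothetical:

```python
def make_virtual_component(*components):
    """Compose a chain of data-flow components (each a callable consuming
    the previous component's output) into a single testable unit."""
    def virtual(data):
        for comp in components:
            data = comp(data)
        return data
    return virtual

def built_in_test(virtual, cases):
    """Run built-in test cases (input, expected-output pairs) against the
    assembled virtual component; return the failing cases."""
    return [(x, want, got) for x, want in cases
            if (got := virtual(x)) != want]

# Hypothetical flow: parse -> filter -> aggregate.
parse = lambda s: [int(t) for t in s.split(",")]
keep_positive = lambda xs: [x for x in xs if x > 0]
total = sum

flow = make_virtual_component(parse, keep_positive, total)
failures = built_in_test(flow, [("1,2,3", 6), ("-1,4", 4)])
print(failures)  # → []
```

    Testing the composed flow rather than each component in isolation is what lets data-flow-specific integration issues surface, which is the combination the abstract argues for.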

  2. Why build a virtual brain? Large-scale neural simulations as jump start for cognitive computing

    NASA Astrophysics Data System (ADS)

    Colombo, Matteo

    2017-03-01

    Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era is facing is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.

  3. A morphologically preserved multi-resolution TIN surface modeling and visualization method for virtual globes

    NASA Astrophysics Data System (ADS)

    Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei

    2017-07-01

    Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data in the processing and representation should be of special concern for the scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formulize its level of detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in the real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.
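    The point-additive refinement can be illustrated in one dimension: starting from the coarsest approximation, repeatedly insert the sample with the largest vertical error until every segment meets the level-of-detail tolerance. This is a simplified sketch of the idea only (a real TIN pyramid refines triangulated surfaces with feature constraints, not 1-D profiles):

```python
def max_error_point(profile, i, j):
    """Largest vertical deviation of samples between anchors i and j from
    the straight segment joining them (1-D stand-in for a TIN facet)."""
    (x0, y0), (x1, y1) = profile[i], profile[j]
    best_k, best_err = None, 0.0
    for k in range(i + 1, j):
        x, y = profile[k]
        y_interp = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        err = abs(y - y_interp)
        if err > best_err:
            best_k, best_err = k, err
    return best_k, best_err

def point_additive(profile, tolerance):
    """Iteratively add the worst-fitting sample until every segment meets
    the level-of-detail tolerance; returns indices of the kept points."""
    kept = [0, len(profile) - 1]
    changed = True
    while changed:
        changed = False
        for i, j in zip(kept, kept[1:]):
            k, err = max_error_point(profile, i, j)
            if k is not None and err > tolerance:
                kept = sorted(kept + [k])
                changed = True
                break
    return kept

# Toy terrain profile: a ridge at x = 2 that a coarse level would miss.
profile = [(0, 0.0), (1, 1.0), (2, 2.0), (3, 1.0), (4, 0.0)]
print(point_additive(profile, tolerance=0.5))  # → [0, 2, 4]
```

    Running the same loop with successively tighter tolerances yields the nested levels of a multi-resolution pyramid: each layer reuses the points of the coarser one and adds only what its quality standard requires.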

  4. "Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation

    ERIC Educational Resources Information Center

    Sangpetch, Akkarit

    2013-01-01

    Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…

  5. System-Level Virtualization for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian

    2008-01-01

    System-level virtualization has been a research topic since the 70's but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, the majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.

  6. National randomized controlled trial of virtual house calls for Parkinson disease.

    PubMed

    Beck, Christopher A; Beran, Denise B; Biglan, Kevin M; Boyd, Cynthia M; Dorsey, E Ray; Schmidt, Peter N; Simone, Richard; Willis, Allison W; Galifianakis, Nicholas B; Katz, Maya; Tanner, Caroline M; Dodenhoff, Kristen; Aldred, Jason; Carter, Julie; Fraser, Andrew; Jimenez-Shahed, Joohi; Hunter, Christine; Spindler, Meredith; Reichwein, Suzanne; Mari, Zoltan; Dunlop, Becky; Morgan, John C; McLane, Dedi; Hickey, Patrick; Gauger, Lisa; Richard, Irene Hegeman; Mejia, Nicte I; Bwala, Grace; Nance, Martha; Shih, Ludy C; Singer, Carlos; Vargas-Parra, Silvia; Zadikoff, Cindy; Okon, Natalia; Feigin, Andrew; Ayan, Jean; Vaughan, Christina; Pahwa, Rajesh; Dhall, Rohit; Hassan, Anhar; DeMello, Steven; Riggare, Sara S; Wicks, Paul; Achey, Meredith A; Elson, Molly J; Goldenthal, Steven; Keenan, H Tait; Korn, Ryan; Schwarz, Heidi; Sharma, Saloni; Stevenson, E Anna; Zhu, William

    2017-09-12

    To determine whether providing remote neurologic care into the homes of people with Parkinson disease (PD) is feasible, beneficial, and valuable. In a 1-year randomized controlled trial, we compared usual care to usual care supplemented by 4 virtual visits via video conferencing from a remote specialist into patients' homes. Primary outcome measures were feasibility, as measured by the proportion who completed at least one virtual visit and the proportion of virtual visits completed on time; and efficacy, as measured by the change in the Parkinson's Disease Questionnaire-39, a quality of life scale. Secondary outcomes included quality of care, caregiver burden, and time and travel savings. A total of 927 individuals indicated interest, 210 were enrolled, and 195 were randomized. Participants had recently seen a specialist (73%) and were largely college-educated (73%) and white (96%). Ninety-five (98% of the intervention group) completed at least one virtual visit, and 91% of 388 virtual visits were completed. Quality of life did not improve in those receiving virtual house calls (0.3 points worse on a 100-point scale; 95% confidence interval [CI] -2.0 to 2.7 points; p = 0.78) nor did quality of care or caregiver burden. Each virtual house call saved patients a median of 88 minutes (95% CI 70-120; p < 0.0001) and 38 miles per visit (95% CI 36-56; p < 0.0001). Providing remote neurologic care directly into the homes of people with PD was feasible and was neither more nor less efficacious than usual in-person care. Virtual house calls generated great interest and provided substantial convenience. NCT02038959. This study provides Class III evidence that for patients with PD, virtual house calls from a neurologist are feasible and do not significantly change quality of life compared to in-person visits. The study is rated Class III because it was not possible to mask patients to visit type. © 2017 American Academy of Neurology.

  7. Scalable metadata environments (MDE): artistically impelled immersive environments for large-scale data exploration

    NASA Astrophysics Data System (ADS)

    West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram

    2014-02-01

    Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.

  8. iRODS: A Distributed Data Management Cyberinfrastructure for Observatories

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; Vernon, F.

    2007-12-01

    Large-scale and long-term preservation of both observational and synthesized data requires a system that virtualizes data management concepts. A methodology is needed that can work across long distances in space (distribution) and long-periods in time (preservation). The system needs to manage data stored on multiple types of storage systems including new systems that become available in the future. This concept is called infrastructure independence, and is typically implemented through virtualization mechanisms. Data grids are built upon concepts of data and trust virtualization. These concepts enable the management of collections of data that are distributed across multiple institutions, stored on multiple types of storage systems, and accessed by multiple types of clients. Data virtualization ensures that the name spaces used to identify files, users, and storage systems are persistent, even when files are migrated onto future technology. This is required to preserve authenticity, the link between the record and descriptive and provenance metadata. Trust virtualization ensures that access controls remain invariant as files are moved within the data grid. This is required to track the chain of custody of records over time. The Storage Resource Broker (http://www.sdsc.edu/srb) is one such data grid used in a wide variety of applications in earth and space sciences such as ROADNet (roadnet.ucsd.edu), SEEK (seek.ecoinformatics.org), GEON (www.geongrid.org) and NOAO (www.noao.edu). Recent extensions to data grids provide one more level of virtualization - policy or management virtualization. Management virtualization ensures that execution of management policies can be automated, and that rules can be created that verify assertions about the shared collections of data. When dealing with distributed large-scale data over long periods of time, the policies used to manage the data and provide assurances about the authenticity of the data become paramount. 
The integrated Rule-Oriented Data System (iRODS) (http://irods.sdsc.edu) provides the mechanisms needed not only to describe management policies, but also to track how the policies are applied and their execution results. The iRODS data grid maps management policies to rules that control the execution of the remote micro-services. As an example, a rule can be created that automatically creates a replica whenever a file is added to a specific collection, or extracts its metadata automatically and registers it in a searchable catalog. For the replication operation, the persistent state information consists of the replica location, the creation date, the owner, the replica size, etc. The mechanism used by iRODS for providing policy virtualization is based on well-defined functions, called micro-services, which are chained into alternative workflows using rules. A rule engine, based on the event-condition-action paradigm, executes the rule-based workflows after an event. Rules can be deferred to a pre-determined time or executed on a periodic basis. As the data management policies evolve, the iRODS system can implement new rules, new micro-services, and new state information (metadata content) needed to manage the new policies. Each sub-collection can be managed using a different set of policies. The discussion of the concepts in rule-based policy virtualization and its application to long-term and large-scale data management for observatories such as ORION and NEON will be the basis of the paper.
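    The event-condition-action pattern behind the iRODS rule engine can be sketched generically (this is illustrative Python, not iRODS's actual rule language; the events, policies, and micro-services below are stubs invented for the example):

```python
class RuleEngine:
    """Minimal event-condition-action engine in the spirit of the iRODS
    rule engine described above: rules bind an event to a condition and a
    chain of micro-services (plain callables here)."""
    def __init__(self):
        self.rules = []  # list of (event, condition, actions) triples

    def on(self, event, condition, *actions):
        self.rules.append((event, condition, actions))

    def fire(self, event, ctx):
        for ev, cond, actions in self.rules:
            if ev == event and cond(ctx):
                for action in actions:  # chained micro-services
                    action(ctx)

# Policy: replicate every file put into the "archive" collection and
# register its metadata in a searchable catalog (both are stub actions).
engine = RuleEngine()
replicas, catalog = [], {}
engine.on(
    "put",
    lambda ctx: ctx["collection"] == "archive",
    lambda ctx: replicas.append(ctx["path"]),
    lambda ctx: catalog.update({ctx["path"]: {"owner": ctx["owner"]}}),
)
engine.fire("put", {"collection": "archive", "path": "/a/f1", "owner": "alice"})
engine.fire("put", {"collection": "scratch", "path": "/s/f2", "owner": "bob"})
print(replicas)  # → ['/a/f1']
```

    Changing a management policy then means registering a different rule, not modifying the storage clients, which is the point of policy virtualization.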

  9. Performance Studies on Distributed Virtual Screening

    PubMed Central

    Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.

    2014-01-01

    Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data, because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while considering overhead and the available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
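    The trade-off behind the optimal splitting, per-chunk overhead versus parallelism across the available cores, can be sketched with a toy makespan model (all timing parameters below are invented for illustration; a real study would measure them per infrastructure, as the authors did):

```python
import math

def makespan(n_ligands, t_dock, n_chunks, t_overhead, n_cores):
    """Model the wall-clock time of a chunked vHTS run: each chunk pays a
    fixed submission/staging overhead, and chunks run on n_cores in waves."""
    chunk_time = math.ceil(n_ligands / n_chunks) * t_dock + t_overhead
    waves = math.ceil(n_chunks / n_cores)
    return waves * chunk_time

def best_split(n_ligands, t_dock, t_overhead, n_cores, max_chunks=2000):
    """Pick the chunk count minimizing the modeled makespan."""
    return min(range(1, max_chunks + 1),
               key=lambda c: makespan(n_ligands, t_dock, c,
                                      t_overhead, n_cores))

# 100k ligands, 10 s per docking, 60 s per-chunk overhead, 500 cores.
c = best_split(100_000, 10.0, 60.0, 500)
serial = 100_000 * 10.0
speedup = serial / makespan(100_000, 10.0, c, 60.0, 500)
print(c, round(speedup))  # → 500 485
```

    Under these assumed numbers the model reproduces the qualitative finding above: one chunk per core is optimal, and the speedup stays close to linear in the core count because the per-chunk overhead is small relative to the chunk runtime.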

  10. Virtually Naked: Virtual Environment Reveals Sex-Dependent Nature of Skin Disclosure

    PubMed Central

    Lomanowska, Anna M.; Guitton, Matthieu J.

    2012-01-01

    The human tendency to reveal or cover naked skin reflects a competition between the individual propensity for social interactions related to sexual appeal and interpersonal touch versus climatic, environmental, physical, and cultural constraints. However, due to the ubiquitous nature of these constraints, isolating on a large scale the spontaneous human tendency to reveal naked skin has remained impossible. Using the online 3-dimensional virtual world of Second Life, we examined spontaneous human skin-covering behavior unhindered by real-world climatic, environmental, and physical variables. Analysis of hundreds of avatars revealed that virtual females disclose substantially more naked skin than virtual males. This phenomenon was not related to avatar hypersexualization as evaluated by measurement of sexually dimorphic body proportions. Furthermore, analysis of skin-covering behavior of a population of culturally homogeneous avatars indicated that the propensity of female avatars to reveal naked skin persisted despite explicit cultural norms promoting less revealing attire. These findings have implications for further understanding how sex-specific aspects of skin disclosure influence human social interactions in both virtual and real settings. PMID:23300580

  11. Virtually naked: virtual environment reveals sex-dependent nature of skin disclosure.

    PubMed

    Lomanowska, Anna M; Guitton, Matthieu J

    2012-01-01

    The human tendency to reveal or cover naked skin reflects a competition between the individual propensity for social interactions related to sexual appeal and interpersonal touch versus climatic, environmental, physical, and cultural constraints. However, due to the ubiquitous nature of these constraints, isolating on a large scale the spontaneous human tendency to reveal naked skin has remained impossible. Using the online 3-dimensional virtual world of Second Life, we examined spontaneous human skin-covering behavior unhindered by real-world climatic, environmental, and physical variables. Analysis of hundreds of avatars revealed that virtual females disclose substantially more naked skin than virtual males. This phenomenon was not related to avatar hypersexualization as evaluated by measurement of sexually dimorphic body proportions. Furthermore, analysis of skin-covering behavior of a population of culturally homogeneous avatars indicated that the propensity of female avatars to reveal naked skin persisted despite explicit cultural norms promoting less revealing attire. These findings have implications for further understanding how sex-specific aspects of skin disclosure influence human social interactions in both virtual and real settings.

  12. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  13. Advances in Multi-Sensor Scanning and Visualization of Complex Plants: the Utmost Case of a Reactor Building

    NASA Astrophysics Data System (ADS)

    Hullo, J.-F.; Thibault, G.; Boucheny, C.

    2015-02-01

    In a context of increased maintenance operations and workers' generational renewal, a nuclear owner and operator like Electricité de France (EDF) is interested in the scaling up of tools and methods of "as-built virtual reality" for larger buildings and wider audiences. However, acquisition and sharing of as-built data on a large scale (large and complex multi-floored buildings) challenge current scientific and technical capacities. In this paper, we first present a state of the art of scanning tools and methods for industrial plants with very complex architecture. Then, we introduce the inner characteristics of the multi-sensor scanning and visualization of the interior of the most complex building of a power plant: a nuclear reactor building. We introduce several developments that made possible a first complete survey of such a large building, from acquisition, processing and fusion of multiple data sources (3D laser scans, total-station survey, RGB panoramic, 2D floor plans, 3D CAD as-built models). In addition, we present the concepts of a smart application developed for the painless exploration of the whole dataset. The goal of this application is to help professionals, unfamiliar with the manipulation of such datasets, to take into account spatial constraints induced by the building complexity while preparing maintenance operations. Finally, we discuss the main lessons learned from this large experiment, the remaining issues for the generalization of such large-scale surveys, and the future technical and scientific challenges in the field of industrial "virtual reality".

  14. DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphrey, Marty A.

    The goal of this 3-year project was to facilitate a more productive dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. Broadly, two problems were addressed by this project. First, there was no Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing and retrieving user policies and Virtual Organization (VO) policies. Second, there were no tools to resolve and enforce policies in the Open Grid Services Architecture. To address these problems, our overall approach was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, enabling a Grid user to centralize (where appropriate) and manage his/her policies. The corresponding organizational service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of topics: general security for Grids, credential management, Web services/OGSA/OGSI, policy-based grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.

  15. Modeling and visualizing borehole information on virtual globes using KML

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing

    2014-01-01

    Advances in virtual globes and Keyhole Markup Language (KML) are providing Earth scientists with universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts, and tube models representing strata. Subsequently, a level-of-detail-based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable for visualizing, integrating and disseminating borehole information on the Internet. The method we have developed has potential use in delivering geological information as a public service.
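
    The first of those LOD models (point placemarks at drilling locations) is easy to picture with a minimal sketch. The record fields and function name below are illustrative, not Borehole2KML's actual schema:

```python
# Minimal sketch: convert borehole location records to KML point placemarks.
# The input record format (id, lon, lat) is hypothetical, for illustration only.

def boreholes_to_kml(boreholes):
    """Build a KML document with one point placemark per borehole."""
    placemarks = []
    for bh in boreholes:
        placemarks.append(
            "    <Placemark>\n"
            f"      <name>{bh['id']}</name>\n"
            "      <Point>\n"
            f"        <coordinates>{bh['lon']},{bh['lat']},0</coordinates>\n"
            "      </Point>\n"
            "    </Placemark>"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        "  <Document>\n"
        + "\n".join(placemarks) + "\n"
        "  </Document>\n"
        "</kml>"
    )

# One Shanghai-area borehole (coordinates are illustrative).
doc = boreholes_to_kml([{"id": "SH-001", "lon": 121.47, "lat": 31.23}])
```

    The resulting document can be opened directly in a virtual globe client such as Google Earth; the tube and scatter-dot LODs would extend the same pattern with KML geometry elements.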

  16. Welcome to Wonderland: The Influence of the Size and Shape of a Virtual Hand On the Perceived Size and Shape of Virtual Objects

    PubMed Central

    Linkenauger, Sally A.; Leyrer, Markus; Bülthoff, Heinrich H.; Mohler, Betty J.

    2013-01-01

    The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric which individuals use to scale the apparent sizes of objects in the environment. However, to test this, one must be able to manipulate the size and/or dimensions of the perceiver's hand, which is difficult in the real world because hand dimensions cannot readily be altered. To overcome this limitation, we used virtual reality to manipulate the dimensions of participants' fully tracked, virtual hands and investigate their influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals' estimations of the sizes of virtual objects differ depending on the size of their virtual hand, in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants' virtual hands rather than another avatar's hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of the spatial layout, they also demonstrate the influence of virtual bodies on perception of virtual environments. PMID:23874681

  17. Worse than imagined: Unidentified virtual water flows in China.

    PubMed

    Cai, Beiming; Wang, Chencheng; Zhang, Bing

    2017-07-01

    The impact of virtual water flows on regional water scarcity in China has been discussed in depth in previous research. However, these studies focused only on water quantity; the impact of virtual water flows on water quality has been largely neglected. In this study, we incorporate the blue water footprint, related to water quantity, and the grey water footprint, related to water quality, into virtual water flow analysis based on a multiregional input-output model for 2007. The results show that interprovincial virtual flows account for 23.4% of China's water footprint. The virtual grey water flows are 8.65 times greater than the virtual blue water flows; the virtual blue water and grey water flows are 91.8 and 794.6 Gm³/yr, respectively. Using indicators related only to water quantity to represent virtual water flows, as in previous studies, underestimates their impact on water resources. In addition, the virtual water flows derive mainly from agriculture, the chemical industry, and petroleum processing and coking, which account for 66.8%, 7.1% and 6.2% of the total virtual water flows, respectively. Virtual water flows have intensified both quantity- and quality-induced water scarcity in exporting regions, where low-value-added but water-intensive and high-pollution goods are produced. Our study of virtual water flows can inform effective water use policy for both water resources and water pollution in China. Our methodology can also be applied at the global scale or to other countries where data are available. Copyright © 2017 Elsevier Ltd. All rights reserved.
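
    As a quick consistency check, the grey-to-blue ratio reported in the abstract follows directly from the two flow totals (the small gap from the stated 8.65 presumably reflects rounding of the published figures):

```python
blue_gm3_per_yr = 91.8    # virtual blue water flows (Gm³/yr), from the abstract
grey_gm3_per_yr = 794.6   # virtual grey water flows (Gm³/yr), from the abstract

# Ratio of grey to blue virtual water flows; the abstract reports ~8.65x.
ratio = grey_gm3_per_yr / blue_gm3_per_yr
```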

  18. Research on distributed virtual reality system in electronic commerce

    NASA Astrophysics Data System (ADS)

    Xue, Qiang; Wang, Jiening; Sun, Jizhou

    2004-03-01

    In this paper, Distributed Virtual Reality (DVR) technology applied in Electronic Commerce (EC) is discussed. DVR provides a new means for human beings to recognize, analyze and resolve large-scale, complex problems, which has made it develop quickly in EC fields. The technologies of CSCW (Computer Supported Cooperative Work) and middleware are introduced into the development of the EC-DVR system to meet the need for a platform that provides the necessary cooperation and communication services without repeatedly developing basic modules. Finally, the paper gives a platform structure for the EC-DVR system.

  19. Design and Implement of Astronomical Cloud Computing Environment In China-VO

    NASA Astrophysics Data System (ADS)

    Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu

    2017-06-01

    The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, avoiding large-scale dataset transportation.

  20. Circadian Rhythms in Socializing Propensity

    PubMed Central

    Zhang, Cheng; Phang, Chee Wei; Zeng, Xiaohua; Wang, Ximeng; Xu, Yunjie; Huang, Yun; Contractor, Noshir

    2015-01-01

    Using large-scale interaction data from a virtual world, we show that people’s propensity to socialize (forming new social connections) varies by hour of the day. We arrive at our results by longitudinally tracking people’s friend-adding activities in a virtual world. Specifically, we find that people are most likely to socialize during the evening, at approximately 8 p.m. and 12 a.m., and are least likely to do so in the morning, at approximately 8 a.m. Such patterns prevail on weekdays and weekends and are robust to variations in individual characteristics and geographical conditions. PMID:26353080

  1. The Virtual Mouse Brain: A Computational Neuroinformatics Platform to Study Whole Mouse Brain Dynamics.

    PubMed

    Melozzi, Francesca; Woodman, Marmaduke M; Jirsa, Viktor K; Bernard, Christophe

    2017-01-01

    Connectome-based modeling of large-scale brain network dynamics enables causal in silico interrogation of the brain's structure-function relationship, necessitating the close integration of diverse neuroinformatics fields. Here we extend the open-source simulation software The Virtual Brain (TVB) to whole mouse brain network modeling based on individual diffusion magnetic resonance imaging (dMRI)-based or tracer-based detailed mouse connectomes. We provide practical examples on how to use The Virtual Mouse Brain (TVMB) to simulate brain activity, such as seizure propagation and the switching behavior of the resting state dynamics in health and disease. TVMB enables theoretically driven experimental planning and ways to test predictions in the numerous strains of mice available to study brain function in normal and pathological conditions.
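
    The style of connectome-based simulation that TVB and TVMB perform can be caricatured in a few lines: each region's activity decays toward rest while being driven by weighted input from connected regions. The 3-node toy connectome and linear dynamics below are purely illustrative; TVB's actual neural mass models are far richer.

```python
def simulate(W, steps=3000, dt=0.01, coupling=0.5):
    """Euler-integrate dx_i/dt = -x_i + coupling * sum_j W[i][j] * x_j."""
    n = len(W)
    x = [0.1 * (i + 1) for i in range(n)]   # arbitrary initial activity
    for _ in range(steps):
        drive = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + dt * (-x[i] + coupling * drive[i]) for i in range(n)]
    return x

# Toy 3-region chain connectome; with weak coupling the network is stable
# and activity relaxes to the resting state (all regions near zero).
W_chain = [[0, 1, 0],
           [1, 0, 1],
           [0, 1, 0]]
resting = simulate(W_chain)
```

    Replacing the connectome with a dMRI- or tracer-derived weight matrix, and the linear node dynamics with a neural mass model, is essentially the step TVMB automates.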

  2. Modeling virtual organizations with Latent Dirichlet Allocation: a case for natural language processing.

    PubMed

    Gross, Alexander; Murthy, Dhiraj

    2014-10-01

    This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and, using them, successfully model nested discussion topics from forums and blog posts. Ultimately, we argue that natural language processing is a critical interdisciplinary methodology for making better sense of social 'Big Data'. Importantly, we found that LDA can move us beyond the state of the art in conventional Social Network Analysis techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
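
    For readers who want to experiment with the technique, LDA's core machinery can be sketched as a tiny collapsed Gibbs sampler over toy forum-style posts. The corpus, hyperparameters, and function name here are illustrative only; studies like this one would use a mature library (e.g., gensim or MALLET).

```python
import random

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for LDA over tokenized documents.
    Returns per-document topic mixtures and the vocabulary."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    w2i = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)

    # Count tables: doc-topic, topic-word, and topic totals.
    ndk = [[0] * n_topics for _ in docs]
    nkw = [[0] * V for _ in range(n_topics)]
    nk = [0] * n_topics
    z = []  # z[d][n]: topic assigned to token n of document d
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k)
            ndk[d][k] += 1; nkw[k][w2i[w]] += 1; nk[k] += 1
        z.append(zs)

    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[d][n]; wi = w2i[w]
                ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                # Full conditional p(z = k | everything else)
                weights = [(ndk[d][j] + alpha) * (nkw[j][wi] + beta)
                           / (nk[j] + V * beta) for j in range(n_topics)]
                r = rng.random() * sum(weights)
                for j, wgt in enumerate(weights):
                    r -= wgt
                    if r <= 0:
                        k = j
                        break
                z[d][n] = k
                ndk[d][k] += 1; nkw[k][wi] += 1; nk[k] += 1

    theta = [[(c + alpha) / (sum(row) + n_topics * alpha) for c in row]
             for row in ndk]
    return theta, vocab

# Four toy "forum posts": two about data pipelines, two about grant review.
docs = [
    "telescope archive query pipeline data".split(),
    "archive query telescope survey data".split(),
    "funding proposal review panel deadline".split(),
    "proposal funding review cycle panel".split(),
]
theta, vocab = lda_gibbs(docs, n_topics=2)
```

    Each row of `theta` is a post's topic mixture; scaling this idea to real corpora (with proper tokenization and convergence diagnostics) is where library implementations come in.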

  3. The influence of idealized surface heterogeneity on virtual turbulent flux measurements

    NASA Astrophysics Data System (ADS)

    De Roo, Frederik; Mauder, Matthias

    2018-04-01

    The imbalance of the surface energy budget in eddy-covariance measurements is still an unsolved problem. A possible cause is the presence of land surface heterogeneity, which affects the boundary-layer turbulence. To investigate the impact of surface variables on the partitioning of the energy budget of flux measurements in the surface layer under convective conditions, we set up a systematic parameter study by means of large-eddy simulation. For the study we use a virtual control volume approach, which allows the determination of advection by the mean flow, flux-divergence and storage terms of the energy budget at the virtual measurement site, in addition to the standard turbulent flux. We focus on the heterogeneity of the surface fluxes and keep the topography flat. The surface fluxes vary locally in intensity and these patches have different length scales. Intensity and length scales can vary for the two horizontal dimensions but follow an idealized chessboard pattern. Our main focus lies on surface heterogeneity of the kilometer scale, and one order of magnitude smaller. For these two length scales, we investigate the average response of the fluxes at a number of virtual towers, when varying the heterogeneity length within the length scale and when varying the contrast between the different patches. For each simulation, virtual measurement towers were positioned at functionally different positions (e.g., downdraft region, updraft region, at border between domains, etc.). As the storage term is always small, the non-closure is given by the sum of the advection by the mean flow and the flux-divergence. Remarkably, the missing flux can be described by either the advection by the mean flow or the flux-divergence separately, because the latter two have a high correlation with each other. 
For kilometer scale heterogeneity, we notice a clear dependence of the updrafts and downdrafts on the surface heterogeneity and likewise we also see a dependence of the energy partitioning on the tower location. For the hectometer scale, we do not notice such a clear dependence. Finally, we seek correlators for the energy balance ratio in the simulations. The correlation with the friction velocity is less pronounced than previously found, but this is likely due to our concentration on effectively strongly to freely convective conditions.

  4. Evaluative Appraisals of Environmental Mystery and Surprise

    ERIC Educational Resources Information Center

    Nasar, Jack L.; Cubukcu, Ebru

    2011-01-01

    This study used a desktop virtual environment (VE) of 15 large-scale residential streets to test the effects of environmental mystery and surprise on response. In theory, mystery and surprise should increase interest and visual appeal. For each VE, participants walked through an approach street and turned right onto a post-turn street. We designed…

  5. Ask Here PA: Large-Scale Synchronous Virtual Reference for Pennsylvania

    ERIC Educational Resources Information Center

    Mariner, Vince

    2008-01-01

    Ask Here PA is Pennsylvania's new statewide live chat reference and information service. This article discusses the key strategies utilized by Ask Here PA administrators to recruit participating libraries to contribute staff time to the service, the importance of centralized staff training, the main aspects of staff training, and activating the…

  6. Educational Games and Virtual Reality as Disruptive Technologies

    ERIC Educational Resources Information Center

    Psotka, Joseph

    2013-01-01

    New technologies often have the potential for disrupting existing established practices, but nowhere is this so pertinent as in education and training today. And yet, education has been glacially slow to adopt these changes in a large-scale way, and innovations seem to be driven more by students and their changing social lifestyles than…

  7. Mission Impossible? Leadership Responsibility without Authority for Initiatives To Reorganise Schools.

    ERIC Educational Resources Information Center

    Wallace, Mike

    This paper explores how characteristics of complex educational change may virtually dictate the leadership strategies adopted by those charged with bringing about change. The change in question here is the large-scale reorganization of local education authorities (LEAs) across England. The article focuses on how across-the-board initiatives to…

  8. One Spatial Map or Many? Spatial Coding of Connected Environments

    ERIC Educational Resources Information Center

    Han, Xue; Becker, Suzanna

    2014-01-01

    We investigated how humans encode large-scale spatial environments using a virtual taxi game. We hypothesized that if 2 connected neighborhoods are explored jointly, people will form a single integrated spatial representation of the town. However, if the neighborhoods are first learned separately and later observed to be connected, people will…

  9. Estimating planktonic diversity through spatial dominance patterns in a model ocean.

    PubMed

    Soccodato, Alice; d'Ovidio, Francesco; Lévy, Marina; Jahn, Oliver; Follows, Michael J; De Monte, Silvia

    2016-10-01

    In the open ocean, the observation and quantification of biodiversity patterns is challenging. Marine ecosystems are largely composed of microbial planktonic communities whose niches are shaped by highly dynamic physico-chemical conditions, and whose observation requires advanced methods for morphological and molecular classification. Optical remote sensing offers an appealing complement to these in-situ techniques. Global-scale coverage at high spatiotemporal resolution is, however, achieved at the cost of reduced information on the local assemblage. Here, we use a coupled physical and ecological model ocean simulation to explore one possible metric for comparing measures performed on such different scales. We show that a large part of the local diversity of the virtual plankton ecosystem - corresponding to what is accessible by genomic methods - can be inferred from crude, but spatially extended, information - as conveyed by remote sensing. Shannon diversity of the local community is indeed highly correlated with a 'seascape' index, which quantifies the surrounding spatial heterogeneity of the most abundant functional group. The error implied in drastically reducing the resolution of the plankton community is shown to be smaller in frontal regions as well as in regions of intermediate turbulent energy. On spatial scales of hundreds of kilometers, patterns of virtual plankton diversity are thus largely sustained by the mixing of communities that occupy adjacent niches. We provide a proof of principle that in the open ocean information on the spatial variability of communities can compensate for limited local knowledge, suggesting the possibility of integrating in-situ and satellite observations to monitor biodiversity distribution at the global scale. Copyright © 2016 Elsevier B.V. All rights reserved.
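
    The Shannon diversity used above is straightforward to compute from relative abundances; a minimal version (illustrative, not the authors' code):

```python
import math

def shannon_diversity(abundances):
    """Shannon index H' = -sum_i p_i * ln(p_i) over nonzero relative abundances."""
    total = sum(abundances)
    ps = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in ps)

# An even community is more diverse than one dominated by a single group.
even = shannon_diversity([25, 25, 25, 25])   # maximal for 4 groups: ln(4)
skewed = shannon_diversity([97, 1, 1, 1])    # dominance drives H' toward 0
```

    In the study's setting, the abundances would be those of plankton functional groups in a grid cell, and the seascape index would summarize spatial heterogeneity of the dominant group around that cell.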

  10. A Typology of Ethnographic Scales for Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Boellstorff, Tom

    This chapter outlines a typology of genres of ethnographic research with regard to virtual worlds, informed by extensive research the author has completed both in Second Life and in Indonesia. It begins by identifying four confusions about virtual worlds: they are not games, they need not be graphical or even visual, they are not mass media, and they need not be defined in terms of escapist role-playing. A three-part typology of methods for ethnographic research in virtual worlds focuses on the relationship between research design and ethnographic scale. One class of methods for researching virtual worlds with regard to ethnographic scale explores interfaces between virtual worlds and the actual world, whereas a second examines interfaces between two or more virtual worlds. The third class involves studying a single virtual world in its own terms. Recognizing that all three approaches have merit for particular research purposes, ethnography of virtual worlds can be a vibrant field of research, contributing to central debates about human selfhood and sociality.

  11. Particle physics and polyhedra proximity calculation for hazard simulations in large-scale industrial plants

    NASA Astrophysics Data System (ADS)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside an open-source 3D computer graphics package, Blender, with the aim of analyzing, in virtual reality, hazard scenarios in large-scale industrial plants. The advantages of Blender are its ability to render the very complex structure of large industrial plants at high resolution and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results on a real oil and gas refinery are presented.

  12. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move from randomly set initial positions to the original positions, i.e., the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optima, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
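
    The spring relaxation described in the abstract can be sketched in a toy 2-D setting: anchor nodes with known positions stay fixed, while virtual springs, whose rest lengths are the measured inter-node distances, pull a blind node into a consistent position. Names and parameters below are illustrative, not the paper's implementation.

```python
import math

def relax(pos, anchors, edges, step=0.05, iters=5000):
    """Iteratively move non-anchor nodes along virtual-spring forces.
    edges: (node_a, node_b, rest) where rest is the measured distance."""
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in pos}
        for a, b, rest in edges:
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            dist = math.hypot(dx, dy) or 1e-9
            f = (dist - rest) / dist      # Hooke-like pull/push along the edge
            force[a][0] += f * dx; force[a][1] += f * dy
            force[b][0] -= f * dx; force[b][1] -= f * dy
        for n in pos:
            if n not in anchors:          # anchors have known positions
                pos[n] = (pos[n][0] + step * force[n][0],
                          pos[n][1] + step * force[n][1])
    return pos

# Blind node "n" is truly at (0.5, 0.5); the springs carry the measured
# distances to three anchors, and relaxation recovers the position.
d = math.sqrt(0.5)
pos = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (0.0, 1.0), "n": (0.2, 0.1)}
edges = [("A", "n", d), ("B", "n", d), ("C", "n", d)]
pos = relax(pos, anchors={"A", "B", "C"}, edges=edges)
```

    Because each node only ever sums forces over its own neighbors, the per-node work stays constant as the network grows, which is the intuition behind the O(1) per-node complexity claim.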

  13. National randomized controlled trial of virtual house calls for Parkinson disease

    PubMed Central

    Beck, Christopher A.; Beran, Denise B.; Biglan, Kevin M.; Boyd, Cynthia M.; Schmidt, Peter N.; Simone, Richard; Willis, Allison W.; Galifianakis, Nicholas B.; Katz, Maya; Tanner, Caroline M.; Dodenhoff, Kristen; Aldred, Jason; Carter, Julie; Fraser, Andrew; Jimenez-Shahed, Joohi; Hunter, Christine; Spindler, Meredith; Reichwein, Suzanne; Mari, Zoltan; Dunlop, Becky; Morgan, John C.; McLane, Dedi; Hickey, Patrick; Gauger, Lisa; Richard, Irene Hegeman; Mejia, Nicte I.; Bwala, Grace; Nance, Martha; Shih, Ludy C.; Singer, Carlos; Vargas-Parra, Silvia; Zadikoff, Cindy; Okon, Natalia; Feigin, Andrew; Ayan, Jean; Vaughan, Christina; Pahwa, Rajesh; Dhall, Rohit; Hassan, Anhar; DeMello, Steven; Riggare, Sara S.; Wicks, Paul; Achey, Meredith A.; Elson, Molly J.; Goldenthal, Steven; Keenan, H. Tait; Korn, Ryan; Schwarz, Heidi; Sharma, Saloni; Stevenson, E. Anna; Zhu, William

    2017-01-01

    Objective: To determine whether providing remote neurologic care into the homes of people with Parkinson disease (PD) is feasible, beneficial, and valuable. Methods: In a 1-year randomized controlled trial, we compared usual care to usual care supplemented by 4 virtual visits via video conferencing from a remote specialist into patients' homes. Primary outcome measures were feasibility, as measured by the proportion who completed at least one virtual visit and the proportion of virtual visits completed on time; and efficacy, as measured by the change in the Parkinson's Disease Questionnaire–39, a quality of life scale. Secondary outcomes included quality of care, caregiver burden, and time and travel savings. Results: A total of 927 individuals indicated interest, 210 were enrolled, and 195 were randomized. Participants had recently seen a specialist (73%) and were largely college-educated (73%) and white (96%). Ninety-five (98% of the intervention group) completed at least one virtual visit, and 91% of 388 virtual visits were completed. Quality of life did not improve in those receiving virtual house calls (0.3 points worse on a 100-point scale; 95% confidence interval [CI] −2.0 to 2.7 points; p = 0.78) nor did quality of care or caregiver burden. Each virtual house call saved patients a median of 88 minutes (95% CI 70–120; p < 0.0001) and 38 miles per visit (95% CI 36–56; p < 0.0001). Conclusions: Providing remote neurologic care directly into the homes of people with PD was feasible and was neither more nor less efficacious than usual in-person care. Virtual house calls generated great interest and provided substantial convenience. ClinicalTrials.gov identifier: NCT02038959. Classification of evidence: This study provides Class III evidence that for patients with PD, virtual house calls from a neurologist are feasible and do not significantly change quality of life compared to in-person visits. 
The study is rated Class III because it was not possible to mask patients to visit type. PMID:28814455

  14. Virtual gastrointestinal colonoscopy in combination with large bowel endoscopy: Clinical application

    PubMed Central

    He, Qing; Rao, Ting; Guan, Yong-Song

    2014-01-01

    Although colorectal cancer (CRC) has for years no longer been the leading cancer killer worldwide, thanks to exponential developments in computed tomography (CT), magnetic resonance imaging, and positron emission tomography/CT, as well as virtual colonoscopy for early detection, CRC-related mortality is still high. The objective of CRC screening is to reduce the burden of CRC and thereby the morbidity and mortality rates of the disease. It is believed that this goal can be achieved by regularly screening the average-risk population, enabling the detection of cancer at early, curable stages, and of polyps before they become cancerous. Large-scale screening with multimodality imaging approaches plays an important role in reaching that goal by detecting polyps, Crohn's disease, ulcerative colitis and CRC at an early stage. This article reviews representative imaging procedures for various screening options and updates the detection, staging and re-staging of CRC patients for determining the optimal therapeutic method and forecasting the risk of CRC recurrence and the overall prognosis. The combined use of virtual colonoscopy and conventional endoscopy, and the advantages and limitations of these modalities, are also discussed. PMID:25320519

  15. A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume

    NASA Astrophysics Data System (ADS)

    Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration

    2017-11-01

    An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.
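
    The distinction between statistics *within* and *on* interrogation windows obeys the standard variance decomposition: total variance = variance of window means (resolved scale) + mean within-window variance (sub-grid scale). A minimal 1-D illustration (not the authors' processing code):

```python
def window_stats(signal, w):
    """Coarse-grain a 1-D series into blocks of size w and split its variance
    into a resolved part (variance of block means, statistics *on* windows)
    and a sub-grid part (mean within-block variance, *within* windows)."""
    usable = len(signal) - len(signal) % w
    blocks = [signal[i:i + w] for i in range(0, usable, w)]
    means = [sum(b) / w for b in blocks]
    grand = sum(means) / len(means)
    resolved = sum((m - grand) ** 2 for m in means) / len(means)
    subgrid = sum(sum((x - m) ** 2 for x in b) / w
                  for b, m in zip(blocks, means)) / len(blocks)
    return resolved, subgrid

# Deterministic toy "turbulence" signal; w plays the role of L in L/D*.
sig = [float((3 * i) % 11) for i in range(88)]
resolved, subgrid = window_stats(sig, w=8)
```

    As the window size w grows relative to the signal's dominant length scale, variance shifts from the resolved to the sub-grid part, which is the trade-off the L/D* resolution criterion quantifies.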

  16. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system.

    PubMed

    Aronov, Dmitriy; Tank, David W

    2014-10-22

    Virtual reality (VR) enables precise control of an animal's environment and otherwise impossible experimental manipulations. Neural activity in rodents has been studied on virtual 1D tracks. However, 2D navigation imposes additional requirements, such as the processing of head direction and environment boundaries, and it is unknown whether the neural circuits underlying 2D representations can be sufficiently engaged in VR. We implemented a VR setup for rats, including software and large-scale electrophysiology, that supports 2D navigation by allowing rotation and walking in any direction. The entorhinal-hippocampal circuit, including place, head direction, and grid cells, showed 2D activity patterns similar to those in the real world. Furthermore, border cells were observed, and hippocampal remapping was driven by environment shape, suggesting functional processing of virtual boundaries. These results illustrate that 2D spatial representations can be engaged by visual and rotational vestibular stimuli alone and suggest a novel VR tool for studying rat navigation.

  17. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system

    PubMed Central

    Aronov, Dmitriy; Tank, David W.

    2015-01-01

    SUMMARY Virtual reality (VR) enables precise control of an animal’s environment and otherwise impossible experimental manipulations. Neural activity in navigating rodents has been studied on virtual linear tracks. However, the spatial navigation system’s engagement in complete two-dimensional environments has not been shown. We describe a VR setup for rats, including control software and a large-scale electrophysiology system, which supports 2D navigation by allowing animals to rotate and walk in any direction. The entorhinal-hippocampal circuit, including place cells, grid cells, head direction cells and border cells, showed 2D activity patterns in VR similar to those in the real world. Hippocampal neurons exhibited various remapping responses to changes in the appearance or the shape of the virtual environment, including a novel form in which a VR-induced cue conflict caused remapping to lock to geometry rather than salient cues. These results suggest a general-purpose tool for novel types of experimental manipulations in navigating rats. PMID:25374363

  18. Virtual reality and robotics for stroke rehabilitation: where do we go from here?

    PubMed

    Wade, Eric; Winstein, Carolee J

    2011-01-01

    Promoting functional recovery after stroke requires collaborative and innovative approaches to neurorehabilitation research. Task-oriented training (TOT) approaches that include challenging, adaptable, and meaningful activities have led to successful outcomes in several large-scale multisite definitive trials. This, along with recent technological advances of virtual reality and robotics, provides a fertile environment for furthering clinical research in neurorehabilitation. Both virtual reality and robotics make use of multimodal sensory interfaces to affect human behavior. In the therapeutic setting, these systems can be used to quantitatively monitor, manipulate, and augment the users' interaction with their environment, with the goal of promoting functional recovery. This article describes recent advances in virtual reality and robotics and the synergy with best clinical practice. Additionally, we describe the promise shown for automated assessments and in-home activity-based interventions. Finally, we propose a broader approach to ensuring that technology-based assessment and intervention complement evidence-based practice and maintain a patient-centered perspective.

  19. Virtual reality for health care: a survey.

    PubMed

    Moline, J

    1997-01-01

    This report surveys the state of the art in applications of virtual environments and related technologies for health care. Applications of these technologies are being developed for health care in the following areas: surgical procedures (remote surgery or telepresence, augmented or enhanced surgery, and planning and simulation of procedures before surgery); medical therapy; preventive medicine and patient education; medical education and training; visualization of massive medical databases; skill enhancement and rehabilitation; and architectural design for health-care facilities. To date, such applications have improved the quality of health care, and in the future they will result in substantial cost savings. Tools that respond to the needs of present virtual environment systems are being refined or developed. However, additional large-scale research is necessary in the following areas: user studies, use of robots for telepresence procedures, enhanced system reality, and improved system functionality.

  20. Sleep Enhances a Spatially Mediated Generalization of Learned Values

    ERIC Educational Resources Information Center

    Javadi, Amir-Homayoun; Tolat, Anisha; Spiers, Hugo J.

    2015-01-01

    Sleep is thought to play an important role in memory consolidation. Here we tested whether sleep alters the subjective value associated with objects located in spatial clusters that were navigated to in a large-scale virtual town. We found that sleep enhances a generalization of the value of high-value objects to the value of locally clustered…

  1. YaQ: an architecture for real-time navigation and rendering of varied crowds.

    PubMed

    Maïm, Jonathan; Yersin, Barbara; Thalmann, Daniel

    2009-01-01

    The YaQ software platform is a complete system dedicated to real-time crowd simulation and rendering. Fitting multiple application domains, such as video games and VR, YaQ aims to provide efficient algorithms to generate crowds comprising up to thousands of varied virtual humans navigating in large-scale, global environments.

  2. Women in Engineering in Turkey--A Large Scale Quantitative and Qualitative Examination

    ERIC Educational Resources Information Center

    Smith, Alice E.; Dengiz, Berna

    2010-01-01

The underrepresentation of women in engineering is well known and unresolved. However, within the 76 years since the founding of the Turkish Republic, Turkey has shifted from virtually no female participation in engineering to across-the-board proportions that exceed those of other industrialised countries. This paper describes the largest…

  3. Virtual reality in urban water management: communicating urban flooding with particle-based CFD simulations.

    PubMed

    Winkler, Daniel; Zischg, Jonatan; Rauch, Wolfgang

    2018-01-01

For communicating urban flood risk to authorities and the public, a realistic three-dimensional visual display is frequently more suitable than detailed flood maps. Virtual reality could also serve to plan short-term flooding interventions. Here we introduce an alternative approach, drawn from computer graphics, for simulating three-dimensional flooding dynamics in large- and small-scale urban scenes. The approach, a particle-based CFD method denoted 'particle in cell', is used to produce physically plausible results rather than accurate flow dynamics. We demonstrate the approach on the real flooding event of July 2016 in Innsbruck.

  4. Bending the Curve: Sensitivity to Bending of Curved Paths and Application in Room-Scale VR.

    PubMed

    Langbehn, Eike; Lubos, Paul; Bruder, Gerd; Steinicke, Frank

    2017-04-01

    Redirected walking (RDW) promises to allow near-natural walking in an infinitely large virtual environment (VE) by subtle manipulations of the virtual camera. Previous experiments analyzed the human sensitivity to RDW manipulations by focusing on the worst-case scenario, in which users walk perfectly straight ahead in the VE, whereas they are redirected on a circular path in the real world. The results showed that a physical radius of at least 22 meters is required for undetectable RDW. However, users do not always walk exactly straight in a VE. So far, it has not been investigated how much a physical path can be bent in situations in which users walk a virtual curved path instead of a straight one. Such curved walking paths can be often observed, for example, when users walk on virtual trails, through bent corridors, or when circling around obstacles. In such situations the question is not, whether or not the physical path can be bent, but how much the bending of the physical path may vary from the bending of the virtual path. In this article, we analyze this question and present redirection by means of bending gains that describe the discrepancy between the bending of curved paths in the real and virtual environment. Furthermore, we report the psychophysical experiments in which we analyzed the human sensitivity to these gains. The results reveal encouragingly wider detection thresholds than for straightforward walking. Based on our findings, we discuss the potential of curved walking and present a first approach to leverage bent paths in a way that can provide undetectable RDW manipulations even in room-scale VR.
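The bending gain described in this record can be formalized as a ratio between the curvature of the physical path and the curvature of the virtual path. A minimal sketch, where the function names and the gain convention are illustrative assumptions rather than definitions taken from the paper:

```python
def bending_gain(r_virtual: float, r_real: float) -> float:
    """Bending gain as the ratio of real to virtual path curvature
    (curvature = 1/radius); gain > 1 means the physical path is bent
    more tightly than the virtual one. Illustrative convention only."""
    return r_virtual / r_real

def required_real_radius(r_virtual: float, gain: float) -> float:
    """Physical radius needed to render a virtual curve of radius
    r_virtual under a given bending gain."""
    return r_virtual / gain

# Rendering a 4 m virtual curve on a 2 m physical circle uses gain 2.
g = bending_gain(4.0, 2.0)
```

Under this framing, the psychophysical detection thresholds bound the admissible gain range: the wider the undetectable range, the smaller the physical radius a room-scale setup can get away with.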

  5. Generation IV Nuclear Energy Systems Construction Cost Reductions through the Use of Virtual Environments - Task 4 Report: Virtual Mockup Maintenance Task Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy Shaw; Anthony Baratta; Vaughn Whisker

    2005-02-28

This is the Task 4 report of a 3-year DOE NERI-sponsored effort evaluating immersive virtual reality (CAVE) technology for design review, construction planning, and maintenance planning and training for next-generation nuclear power plants. The program covers the development of full-scale virtual mockups generated from 3D CAD data and presented in a CAVE visualization facility. This report focuses on using full-scale virtual mockups for nuclear power plant training applications.

  6. Segmentation and Quantitative Analysis of Epithelial Tissues.

    PubMed

    Aigouy, Benoit; Umetsu, Daiki; Eaton, Suzanne

    2016-01-01

Epithelia are tissues that regulate exchanges with the environment. They are very dynamic and can acquire virtually any shape; at the cellular level, they are composed of cells tightly connected by junctions. Epithelia are most often amenable to live imaging; however, the large number of cells composing an epithelium and the absence of informatics tools dedicated to epithelial analysis have largely prevented tissue-scale studies. Here we present Tissue Analyzer, a free tool that can be used to segment and analyze epithelial cells and monitor tissue dynamics.
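The junction-based view of an epithelium lends itself to a toy illustration: cells are the connected regions enclosed by a binary membrane mask. This is a minimal sketch of the general idea, not Tissue Analyzer's actual algorithm (which works on watershed lines extracted from real images):

```python
import numpy as np
from scipy import ndimage

def segment_cells(membrane_mask):
    """Label each connected non-membrane region as one cell."""
    labels, n_cells = ndimage.label(~membrane_mask)
    return labels, n_cells

# Toy 'epithelium': a 2x2 grid of cells outlined by membrane pixels
img = np.zeros((9, 9), dtype=bool)
img[[0, 4, 8], :] = True   # horizontal junctions
img[:, [0, 4, 8]] = True   # vertical junctions
labels, n_cells = segment_cells(img)
```

Per-cell statistics such as apical area or neighbour counts then follow directly from the label image.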

  7. Fish scale terrace GaInN/GaN light-emitting diodes with enhanced light extraction

    NASA Astrophysics Data System (ADS)

    Stark, Christoph J. M.; Detchprohm, Theeradetch; Zhao, Liang; Paskova, Tanya; Preble, Edward A.; Wetzel, Christian

    2012-12-01

    Non-planar GaInN/GaN light-emitting diodes were epitaxially grown to exhibit steps for enhanced light emission. By means of a large off-cut of the epitaxial growth plane from the c-plane (0.06° to 2.24°), surface morphologies of steps and inclined terraces that resemble fish scale patterns could controllably be achieved. These patterns penetrate the active region without deteriorating the electrical device performance. We find conditions leading to a large increase in light-output power over the virtually on-axis device and over planar sapphire references. The process is found suitable to enhance light extraction even without post-growth processing.

  8. Large-scale systematic analysis of 2D fingerprint methods and parameters to improve virtual screening enrichments.

    PubMed

    Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody

    2010-05-24

    A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
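The bit-collision effect reported for small bit spaces is easy to reproduce: folding hashed substructure features into a fixed-size space forces distinct features onto the same bit. A hedged sketch, where the hashing scheme is illustrative and not the one used by Canvas:

```python
from hashlib import md5

def fold_features(features, n_bits):
    """Map hashed substructure features into a fixed bit space and
    count how many features land on an already-set bit (collisions)."""
    bits, collisions = set(), 0
    for f in features:
        b = int(md5(f.encode()).hexdigest(), 16) % n_bits
        collisions += b in bits
        bits.add(b)
    return bits, collisions

def tanimoto(a, b):
    """Tanimoto similarity between two fingerprints (bit sets)."""
    return len(a & b) / len(a | b)

features = [f"path-{i}" for i in range(2000)]
_, c_1024 = fold_features(features, 1024)     # heavy collisions
_, c_large = fold_features(features, 2**20)   # collisions mostly avoided
```

With 2000 features and 1024 bits, at least 976 collisions are forced by the pigeonhole principle; collided bits blur the similarity signal that metrics like Tanimoto rely on, which is consistent with the degraded enrichments the study reports.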

  9. A Framework for Analyzing the Whole Body Surface Area from a Single View

    PubMed Central

    Doretto, Gianfranco; Adjeroh, Donald

    2017-01-01

    We present a virtual reality (VR) framework for the analysis of whole human body surface area. Usual methods for determining the whole body surface area (WBSA) are based on well known formulae, characterized by large errors when the subject is obese, or belongs to certain subgroups. For these situations, we believe that a computer vision approach can overcome these problems and provide a better estimate of this important body indicator. Unfortunately, using machine learning techniques to design a computer vision system able to provide a new body indicator that goes beyond the use of only body weight and height, entails a long and expensive data acquisition process. A more viable solution is to use a dataset composed of virtual subjects. Generating a virtual dataset allowed us to build a population with different characteristics (obese, underweight, age, gender). However, synthetic data might differ from a real scenario, typical of the physician’s clinic. For this reason we develop a new virtual environment to facilitate the analysis of human subjects in 3D. This framework can simulate the acquisition process of a real camera, making it easy to analyze and to create training data for machine learning algorithms. With this virtual environment, we can easily simulate the real setup of a clinic, where a subject is standing in front of a camera, or may assume a different pose with respect to the camera. We use this newly designated environment to analyze the whole body surface area (WBSA). In particular, we show that we can obtain accurate WBSA estimations with just one view, virtually enabling the possibility to use inexpensive depth sensors (e.g., the Kinect) for large scale quantification of the WBSA from a single view 3D map. PMID:28045895
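A single-view surface-area estimate of the kind described here can be illustrated by triangulating a depth map and summing triangle areas. This is a minimal sketch assuming a calibrated orthographic depth grid; the paper's actual pipeline, which trains machine learning models on virtual subjects, is more elaborate:

```python
import numpy as np

def surface_area_from_depth(depth, dx=1.0, dy=1.0):
    """Approximate the visible surface area of a depth map by splitting
    each grid cell into two triangles and summing their areas
    (a single-view estimate; occluded surface is not counted)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs * dx, ys * dy, depth], axis=-1)
    p00, p01 = pts[:-1, :-1], pts[:-1, 1:]
    p10, p11 = pts[1:, :-1], pts[1:, 1:]
    def tri_area(a, b, c):
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=-1)
    return float(tri_area(p00, p01, p10).sum() + tri_area(p11, p01, p10).sum())

# A flat 10x10 depth map with unit spacing covers a 9x9 = 81 area.
area = surface_area_from_depth(np.zeros((10, 10)))
```

Estimating the *whole* body surface area from such a partial view is then the regression problem the framework's virtual dataset is built to solve.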

  10. Scaling of the Urban Water Footprint: An Analysis of 65 Mid- to Large-Sized U.S. Metropolitan Areas

    NASA Astrophysics Data System (ADS)

    Mahjabin, T.; Garcia, S.; Grady, C.; Mejia, A.

    2017-12-01

    Scaling laws have been shown to be relevant to a range of disciplines including biology, ecology, hydrology, and physics, among others. Recently, scaling was shown to be important for understanding and characterizing cities. For instance, it was found that urban infrastructure (water supply pipes and electrical wires) tends to scale sublinearly with city population, implying that large cities are more efficient. In this study, we explore the scaling of the water footprint of cities. The water footprint is a measure of water appropriation that considers both the direct and indirect (virtual) water use of a consumer or producer. Here we compute the water footprint of 65 mid- to large-sized U.S. metropolitan areas, accounting for direct and indirect water uses associated with agricultural and industrial commodities, and residential and commercial water uses. We find that the urban water footprint, computed as the sum of the water footprint of consumption and production, exhibits sublinear scaling with an exponent of 0.89. This suggests the possibility of large cities being more water-efficient than small ones. To further assess this result, we conduct additional analysis by accounting for international flows, and the effects of green water and city boundary definition on the scaling. The analysis confirms the scaling and provides additional insight about its interpretation.
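The sublinear scaling claim corresponds to fitting Y = a·P^b in log-log space and finding b < 1. A minimal sketch with synthetic data, where the exponent 0.89 is planted to mirror the reported value rather than re-derived from the study's data:

```python
import numpy as np

def scaling_exponent(population, footprint):
    """Estimate b in Y = a * P**b by ordinary least squares in
    log-log space; b < 1 indicates sublinear (efficient) scaling."""
    b, log_a = np.polyfit(np.log(population), np.log(footprint), 1)
    return b

# Synthetic 'cities' generated with a known exponent of 0.89
rng = np.random.default_rng(0)
pop = rng.uniform(1e5, 1e7, size=65)
wf = 2.0 * pop**0.89
b = scaling_exponent(pop, wf)
```

On real data the fit would of course carry scatter, and the study's robustness checks (international flows, green water, city boundary definition) probe exactly whether the exponent survives such perturbations.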

  11. Workload-Driven Design and Evaluation of Large-Scale Data-Centric Systems

    DTIC Science & Technology

    2012-05-09

in the batch zone in and out of a low-power state, e.g., sending a "hibernate" command via ssh and using Wake-on-LAN or related technologies [85]. If...parameter values for experiments with stand-alone jobs. The mapred.child.java.opts parameter sets the maximum virtual memory of the Java child processes

  12. Teaching the blind to find their way by playing video games.

    PubMed

    Merabet, Lotfi B; Connors, Erin C; Halko, Mark A; Sánchez, Jaime

    2012-01-01

Computer-based video games are receiving great interest as a means to learn and acquire new skills. As a novel approach to teaching navigation skills to the blind, we have developed the Audio-based Environment Simulator (AbES), a virtual reality environment set within the context of a video game metaphor. Despite the fact that participants were naïve to the overall purpose of the software, we found that early-blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building using audio-based cues alone. This was confirmed by a series of behavioral performance tests designed to assess the transfer of acquired spatial information to a large-scale, real-world indoor navigation task. Furthermore, learning the spatial layout through a goal-directed gaming strategy allowed for the mental manipulation of spatial information, as evidenced by enhanced navigation performance when compared to an explicit route-learning strategy. We conclude that the immersive and highly interactive nature of the software greatly engages the blind user to actively explore the virtual environment. This in turn generates an accurate sense of a large-scale three-dimensional space and facilitates the learning and transfer of navigation skills to the physical world.

  13. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    PubMed Central

    Xia, Yong; Zhang, Henggui

    2015-01-01

Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge to traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to expensive costs. GPUs as a parallel computing environment therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations. PMID:26581957

  14. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.

    PubMed

    Xia, Yong; Wang, Kuanquan; Zhang, Henggui

    2015-01-01

Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge to traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to expensive costs. GPUs as a parallel computing environment therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
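The ODE/PDE decoupling described in these two records is an operator-splitting scheme: each step advances the per-cell reaction (trivially parallel, the GPU-friendly part) and then the diffusive coupling. A minimal 1D CPU sketch with a toy cubic reaction term standing in for the detailed atrial cell model; grid size, constants, and the reaction are illustrative:

```python
import numpy as np

def step(v, dt, dx, D=0.1):
    """One operator-splitting step of a toy 1D monodomain model:
    local single-cell reaction first, then explicit finite-difference
    diffusion with no-flux-style fixed endpoints."""
    # Reaction sub-step (ODE, independent per cell -> maps to GPU threads)
    v = v + dt * (v * (1.0 - v) * (v - 0.1))
    # Diffusion sub-step (PDE, nearest-neighbour coupling)
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    return v + dt * D * lap

v = np.zeros(200)
v[:10] = 1.0                      # stimulate the left edge
for _ in range(2000):
    v = step(v, dt=0.05, dx=0.5)
# an excitation wave has propagated rightward from the stimulus
```

In the GPU version each reaction update becomes one thread, while the diffusion sub-step needs only nearest-neighbour memory exchange, which is what makes the decoupling pay off.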

  15. Using blackmail, bribery, and guilt to address the tragedy of the virtual intellectual commons

    NASA Astrophysics Data System (ADS)

    Griffith, P. C.; Cook, R. B.; Wilson, B. E.; Gentry, M. J.; Horta, L. M.; McGroddy, M.; Morrell, A. L.; Wilcox, L. E.

    2008-12-01

    One goal of the NSF's vision for 21st Century Cyberinfrastructure is to create a virtual intellectual commons for the scientific community where advanced technologies perpetuate transformation of this community's productivity and capabilities. The metadata describing scientific observations, like the first paragraph of a news story, should answer the questions who? what? why? where? when? and how?, making them discoverable, comprehensible, contextualized, exchangeable, and machine-readable. Investigators who create good scientific metadata increase the scientific value of their observations within such a virtual intellectual commons. But the tragedy of this commons arises when investigators wish to receive without giving in return. The authors of this talk will describe how they have used combinations of blackmail, bribery, and guilt to motivate good behavior by investigators participating in two major scientific programs (NASA's component of the Large-scale Biosphere-Atmosphere Experiment in Amazonia; and the US Climate Change Science Program's North American Carbon Program).

  16. A Computational Chemistry Database for Semiconductor Processing

    NASA Technical Reports Server (NTRS)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts, The virtual prototyping effort would go nowhere if codes do not come with a reliable database of chemical and physical properties of gases involved in semiconductor processing. Commercial code vendors have no capabilities to generate such a database, rather leave the task to the user of finding whatever is needed. While individual investigations of interesting chemical systems continue at Universities, there has not been any large scale effort to create a database. In this presentation, we outline our efforts in this area. Our effort focuses on the following five areas: 1. Thermal CVD reaction mechanism and rate constants. 2. Thermochemical properties. 3. Transport properties.4. Electron-molecule collision cross sections. and 5. Gas-surface interactions.

  17. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep a large number of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared-memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  18. The Accuracy and Precision of Position and Orientation Tracking in the HTC Vive Virtual Reality System for Scientific Research

    PubMed Central

    Niehorster, Diederick C.; Li, Li; Lappe, Markus

    2017-01-01

    The advent of inexpensive consumer virtual reality equipment enables many more researchers to study perception with naturally moving observers. One such system, the HTC Vive, offers a large field-of-view, high-resolution head mounted display together with a room-scale tracking system for less than a thousand U.S. dollars. If the position and orientation tracking of this system is of sufficient accuracy and precision, it could be suitable for much research that is currently done with far more expensive systems. Here we present a quantitative test of the HTC Vive’s position and orientation tracking as well as its end-to-end system latency. We report that while the precision of the Vive’s tracking measurements is high and its system latency (22 ms) is low, its position and orientation measurements are provided in a coordinate system that is tilted with respect to the physical ground plane. Because large changes in offset were found whenever tracking was briefly lost, it cannot be corrected for with a one-time calibration procedure. We conclude that the varying offset between the virtual and the physical tracking space makes the HTC Vive at present unsuitable for scientific experiments that require accurate visual stimulation of self-motion through a virtual world. It may however be suited for other experiments that do not have this requirement. PMID:28567271

  19. Enrichment assessment of multiple virtual screening strategies for Toll-like receptor 8 agonists based on a maximal unbiased benchmarking data set.

    PubMed

    Pei, Fen; Jin, Hongwei; Zhou, Xin; Xia, Jie; Sun, Lidan; Liu, Zhenming; Zhang, Liangren

    2015-11-01

    Toll-like receptor 8 agonists, which activate adaptive immune responses by inducing robust production of T-helper 1-polarizing cytokines, are promising candidates for vaccine adjuvants. As the binding site of toll-like receptor 8 is large and highly flexible, virtual screening by individual method has inevitable limitations; thus, a comprehensive comparison of different methods may provide insights into seeking effective strategy for the discovery of novel toll-like receptor 8 agonists. In this study, the performance of knowledge-based pharmacophore, shape-based 3D screening, and combined strategies was assessed against a maximum unbiased benchmarking data set containing 13 actives and 1302 decoys specialized for toll-like receptor 8 agonists. Prior structure-activity relationship knowledge was involved in knowledge-based pharmacophore generation, and a set of antagonists was innovatively used to verify the selectivity of the selected knowledge-based pharmacophore. The benchmarking data set was generated from our recently developed 'mubd-decoymaker' protocol. The enrichment assessment demonstrated a considerable performance through our selected three-layer virtual screening strategy: knowledge-based pharmacophore (Phar1) screening, shape-based 3D similarity search (Q4_combo), and then a Gold docking screening. This virtual screening strategy could be further employed to perform large-scale database screening and to discover novel toll-like receptor 8 agonists. © 2015 John Wiley & Sons A/S.
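Enrichment assessment of the kind performed here reduces to a simple calculation over a ranked screen. A sketch of the standard enrichment-factor metric; the toy ranking below is invented for illustration, merely sized like the paper's 13-active/1302-decoy benchmarking set:

```python
def enrichment_factor(ranked_labels, fraction=0.01):
    """Enrichment factor at a given fraction of a ranked screen:
    (hit rate within the top fraction) / (hit rate over the whole
    database); 1.0 means no better than random selection."""
    n = len(ranked_labels)
    n_top = max(1, int(n * fraction))
    actives_top = sum(ranked_labels[:n_top])
    total_actives = sum(ranked_labels)
    return (actives_top / n_top) / (total_actives / n)

# Hypothetical screen: 13 actives among 1315 compounds, 5 of them
# ranked inside the top 1% by the virtual screening strategy.
labels = [1] * 5 + [0] * 8 + [1] * 8 + [0] * 1294
ef1 = enrichment_factor(labels, 0.01)
```

Comparing such factors across pharmacophore, shape-based, and docking stages is what ranks the strategies against the benchmarking set.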

  20. The Accuracy and Precision of Position and Orientation Tracking in the HTC Vive Virtual Reality System for Scientific Research.

    PubMed

    Niehorster, Diederick C; Li, Li; Lappe, Markus

    2017-01-01

    The advent of inexpensive consumer virtual reality equipment enables many more researchers to study perception with naturally moving observers. One such system, the HTC Vive, offers a large field-of-view, high-resolution head mounted display together with a room-scale tracking system for less than a thousand U.S. dollars. If the position and orientation tracking of this system is of sufficient accuracy and precision, it could be suitable for much research that is currently done with far more expensive systems. Here we present a quantitative test of the HTC Vive's position and orientation tracking as well as its end-to-end system latency. We report that while the precision of the Vive's tracking measurements is high and its system latency (22 ms) is low, its position and orientation measurements are provided in a coordinate system that is tilted with respect to the physical ground plane. Because large changes in offset were found whenever tracking was briefly lost, it cannot be corrected for with a one-time calibration procedure. We conclude that the varying offset between the virtual and the physical tracking space makes the HTC Vive at present unsuitable for scientific experiments that require accurate visual stimulation of self-motion through a virtual world. It may however be suited for other experiments that do not have this requirement.
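The tilted tracking coordinate system reported in these two records can be quantified by sampling positions on the physical floor and fitting a plane to them. A hedged sketch using a generic SVD plane fit, not the authors' measurement protocol:

```python
import numpy as np

def ground_plane_tilt_deg(points):
    """Fit a plane to sampled floor positions via SVD and return the
    angle (degrees) between the plane normal and the y-up axis."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]                      # direction of least variance
    cos_t = abs(normal[1]) / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(np.clip(cos_t, 0.0, 1.0))))

# Synthetic floor samples tilted 1 degree about the x-axis
xs, zs = np.meshgrid(np.linspace(0, 3, 7), np.linspace(0, 3, 7))
ys = np.tan(np.radians(1.0)) * zs
floor = np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)
tilt = ground_plane_tilt_deg(floor)
```

Because the paper found the offset to change whenever tracking was lost, such a fit would have to be repeated per session rather than applied as a one-time calibration.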

  1. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.

    PubMed

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A

    2014-01-01

Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.

  2. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models

    PubMed Central

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A.

    2014-01-01

Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients. PMID:25374542
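The "all model parameters as input parameters for the executable" workflow described in these two records can be sketched as a thin batch driver. Everything below — the `--key=value` command-line convention and JSON-on-stdout output — is a hypothetical interface invented for illustration, not ViSP's actual one:

```python
import json
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_simulation(cmd, params):
    """Run one self-contained model executable, passing every model
    parameter on the command line and parsing JSON from its stdout."""
    args = list(cmd) + [f"--{k}={v}" for k, v in params.items()]
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

def run_batch(cmd, param_sets, workers=4):
    """Independent simulations (e.g. one per virtual patient) fan out
    trivially; a thread pool stands in for a cluster or cloud here."""
    with ThreadPoolExecutor(workers) as pool:
        return list(pool.map(lambda p: run_simulation(cmd, p), param_sets))
```

Because each executable is self-contained, the driver never needs to know which modeling tool produced the model — which is the portability point the platform is built around.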

  3. Tools for Virtual Collaboration Designed for High Resolution Hydrologic Research with Continental-Scale Data Support

    NASA Astrophysics Data System (ADS)

    Duffy, Christopher; Leonard, Lorne; Shi, Yuning; Bhatt, Gopal; Hanson, Paul; Gil, Yolanda; Yu, Xuan

    2015-04-01

    Using a series of recent examples and papers, we explore progress and potential for virtual (cyber-) collaboration inspired by access to high-resolution, harmonized public-sector data at continental scales [1]. The first example describes 7 meso-scale catchments in Pennsylvania, USA, where the watershed is forced by climate reanalysis and IPCC (Intergovernmental Panel on Climate Change) future climate scenarios. We show how existing public-sector data and community models are currently able to resolve fine-scale eco-hydrologic processes regarding wetland response to climate change [2]. The results reveal that regional climate change is only part of the story, with large variations in flood and drought response associated with differences in terrain, physiography, land use and/or hydrogeology. The importance of community-driven virtual testbeds is demonstrated in the context of Critical Zone Observatories, where earth scientists from around the world are organizing hydro-geophysical data and model results to explore new processes that couple hydrologic models with land-atmosphere interaction, biogeochemical weathering, the carbon-nitrogen cycle, landscape evolution and ecosystem services [3][4]. Critical Zone cyber-research demonstrates how data-driven model development requires a flexible computational structure in which process modules are relatively easy to incorporate and new data structures can be implemented [5]. From the perspective of "Big Data", the paper points out that extrapolating results from virtual observatories to catchments at continental scales will require centralized or cloud-based cyberinfrastructure as a necessary condition for effectively sharing petabytes of data and model results [6]. 
Finally, we outline how innovative cyber-science is supporting earth-science learning, sharing and exploration through on-line tools with which hydrologists and limnologists share data and models for simulating the coupled impacts of catchment hydrology on lake eco-hydrology (NSF-INSPIRE, IIS1344272). The research attempts to use a virtual environment (www.organicdatascience.org) to break down disciplinary barriers and support emergent communities of science. [1] Leonard and Duffy, 2013, Environmental Modelling & Software; [2] Yu et al., 2014, Computers in Geoscience; [3] Duffy et al., 2014, Procedia Earth and Planetary Science; [4] Shi et al., 2014, Journal of Hydrometeorology; [5] Bhatt et al., 2014, Environmental Modelling & Software; [6] Leonard and Duffy, 2014, Environmental Modelling & Software.

  4. Exploration–exploitation trade-off features a saltatory search behaviour

    PubMed Central

    Volchenkov, Dimitri; Helbach, Jonathan; Tscherepanow, Marko; Kühnel, Sina

    2013-01-01

    Searching experiments conducted in different virtual environments with a gender-balanced group of people revealed a gender-independent, scale-free spread of searching activity on large spatio-temporal scales. We have suggested and solved analytically a simple statistical model of the coherent-noise type describing the exploration–exploitation trade-off in humans (‘should I stay’ or ‘should I go’). The model exhibits a variety of saltatory behaviours, ranging from Lévy flights occurring under uncertainty to Brownian walks performed by a treasure hunter confident of eventual success. PMID:23782535
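
The two regimes the abstract contrasts can be illustrated with a toy simulation. This is not the authors' coherent-noise model, only a hedged sketch of how heavy-tailed (Lévy) step lengths differ from Gaussian (Brownian) ones; the exponent and scale parameters are illustrative:

```python
# Toy contrast between Levy-flight and Brownian step statistics,
# the two limiting search behaviours discussed in the record.
import math
import random

def levy_step(alpha=1.5, x_min=1.0):
    """Sample a Pareto (power-law) step length; heavy-tailed for alpha < 2."""
    u = random.random()
    return x_min * (1.0 - u) ** (-1.0 / alpha)

def brownian_step(sigma=1.0):
    """Sample the magnitude of a Gaussian step."""
    return abs(random.gauss(0.0, sigma))

def walk(step_fn, n=1000, seed=0):
    """1D projection of an isotropic random walk built from step_fn lengths."""
    random.seed(seed)
    x = 0.0
    for _ in range(n):
        angle = random.uniform(0.0, 2.0 * math.pi)
        x += step_fn() * math.cos(angle)
    return x
```

In a Lévy regime a few rare, very long relocations dominate the displacement, whereas Brownian steps stay within a few standard deviations, which is the qualitative signature of saltatory search under uncertainty versus confident local exploitation.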

  5. Formation of Virtual Organizations in Grids: A Game-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Carroll, Thomas E.; Grosu, Daniel

    The execution of large scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.
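
As a hedged illustration of the coalition-formation idea (the value function and equal profit sharing below are invented for the example, not taken from the paper), a self-interested GSP prefers the coalition that maximizes its own share of the VO's profit:

```python
# Toy coalition-formation sketch: GSPs pool resources into a Virtual
# Organization (VO); a VO earns revenue only if its pooled resources can
# serve the job. Value function and equal split are illustrative assumptions.
from itertools import combinations

def coalition_value(members, resources):
    """Value of a VO: the composite job pays a 20% premium, but only if the
    pooled resources reach the job's requirement (10 units here)."""
    total = sum(resources[m] for m in members)
    return total * 1.2 if total >= 10 else 0.0

def best_coalition(resources):
    """Exhaustively find the coalition maximizing the equal per-member share."""
    gsps = list(resources)
    best, best_share = None, 0.0
    for r in range(1, len(gsps) + 1):
        for combo in combinations(gsps, r):
            share = coalition_value(combo, resources) / len(combo)
            if share > best_share:
                best, best_share = combo, share
    return best, best_share
```

With resources A=6, B=5, C=9, no single GSP can serve the job, but the pair (A, C) yields the highest per-member profit, so profit-maximizing GSPs would form that VO rather than the grand coalition.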

  6. Mesoscopic Rigid Body Modelling of the Extracellular Matrix Self-Assembly.

    PubMed

    Wong, Hua; Prévoteau-Jonquet, Jessica; Baud, Stéphanie; Dauchez, Manuel; Belloy, Nicolas

    2018-06-11

    The extracellular matrix (ECM) plays an important role in supporting tissues and organs. It even has a functional role in morphogenesis and differentiation by acting as a source of active molecules (matrikines). Many diseases are linked to dysfunction of ECM components and fragments or to changes in their structures. As such, the ECM is a prime target for drugs. Because of technological limitations on observation at mesoscopic scales, the precise structural organisation of the ECM is not well known, with sparse or fuzzy experimental observables. Based on the Unity3D game and physics engines, along with rigid-body dynamics, we propose a virtual sandbox that models large biological molecules as dynamic chains of rigid bodies interacting together, to gain insight into the behaviour of ECM components in the mesoscopic range. We have preliminary results showing how parameters such as fibre flexibility or the nature and number of interactions between molecules can induce different structures in the basement membrane. Using the Unity3D game engine and a virtual reality headset coupled with haptic controllers, we immerse the user inside the corresponding simulation. Untrained users are able to navigate a complex virtual sandbox crowded with large biomolecule models in a matter of seconds.

  7. Novel Web-based Education Platforms for Information Communication utilizing Gamification, Virtual and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2015-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data sets on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and to interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications that support teaching and learning of concepts in the atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow users to access and visualize large-scale data on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts. It provides a visually striking platform with realistic terrain, weather information, and water simulation, and an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of its users.

  8. Study of ion-ion plasma formation in negative ion sources by a three-dimensional in real space and three-dimensional in velocity space particle in cell model

    NASA Astrophysics Data System (ADS)

    Nishioka, S.; Goto, I.; Miyamoto, K.; Hatayama, A.; Fukano, A.

    2016-01-01

    Recently, in large-scale hydrogen negative ion sources, experimental results have shown that ion-ion plasma is formed in the vicinity of the extraction hole in the case of surface negative ion production. The purpose of this paper is to clarify the mechanism of ion-ion plasma formation with our three-dimensional particle-in-cell simulation. In the present model, the electron loss along the magnetic filter field is taken into account by the "√(τ∥/τ⊥) model." The simulation results show that the ion-ion plasma formation is due to the electron loss along the magnetic filter field. Moreover, the potential profile for the ion-ion plasma case has been examined carefully in order to discuss the ion-ion plasma formation. Our present results show that the potential drop of the virtual cathode in front of the plasma grid is large when the ion-ion plasma is formed. This tendency is explained by a relationship between the virtual cathode depth and the net particle flux density at the virtual cathode.

  9. A new neuroinformatics approach to personalized medicine in neurology: The Virtual Brain

    PubMed Central

    Falcon, Maria I.; Jirsa, Viktor; Solodkin, Ana

    2017-01-01

    Purpose of review An exciting advance in the field of neuroimaging is the acquisition and processing of very large data sets (so called ‘big data’), permitting large-scale inferences that foster a greater understanding of brain function in health and disease. Yet what we are clearly lacking are quantitative integrative tools to translate this understanding to the individual level to lay the basis for personalized medicine. Recent findings Here we address this challenge through a review on how the relatively new field of neuroinformatics modeling has the capacity to track brain network function at different levels of inquiry, from microscopic to macroscopic and from the localized to the distributed. In this context, we introduce a new and unique multiscale approach, The Virtual Brain (TVB), that effectively models individualized brain activity, linking large-scale (macroscopic) brain dynamics with biophysical parameters at the microscopic level. We also show how TVB modeling provides unique biological interpretable data in epilepsy and stroke. Summary These results establish the basis for a deliberate integration of computational biology and neuroscience into clinical approaches for elucidating cellular mechanisms of disease. In the future, this can provide the means to create a collection of disease-specific models that can be applied on the individual level to personalize therapeutic interventions. Video abstract http://links.lww.com/CONR/A41 PMID:27224088

  10. Examination of the Relation between the Values of Adolescents and Virtual Sensitiveness

    ERIC Educational Resources Information Center

    Yilmaz, Hasan

    2013-01-01

    The aim of this study is to examine the relation between the values adolescents hold and virtual sensitiveness. The study was carried out on 447 adolescents, 160 of whom were female and 287 male. The Humanistic Values Scale and the Virtual Sensitiveness Scale were used. Pearson product-moment correlation coefficient and multiple regression analysis techniques were…

  11. Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Samtaney, R.

    2014-01-01

    The debate over whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and an inflow rescaling method) with a virtual-wall model derived under different scaling-law assumptions (the log law, or the power law of George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Reθ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient, and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.
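
For reference, the two competing scalings can be written down directly. The constants below are textbook-typical values (von Kármán constant, classic one-seventh power law), not those calibrated in the paper:

```python
# The two candidate forms for the inner-scaled mean streamwise velocity u+
# as a function of wall distance y+. Constants are textbook-typical, not fitted.
import math

KAPPA, B = 0.41, 5.2          # von Karman constant and log-law intercept
C_PL, GAMMA = 8.7, 1.0 / 7.0  # classic one-seventh power-law constants

def u_plus_log(y_plus):
    """Log law: u+ = (1/kappa) ln(y+) + B."""
    return math.log(y_plus) / KAPPA + B

def u_plus_power(y_plus):
    """Power law: u+ = C (y+)^gamma."""
    return C_PL * y_plus ** GAMMA
```

The two forms are numerically close in the overlap region at laboratory Reynolds numbers, which is why the debate persists; they diverge strongly only at the extreme Reθ the LES in this record can reach.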

  12. Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations

    NASA Technical Reports Server (NTRS)

    Sorensen, Danny C.

    1996-01-01

    Eigenvalues and eigenfunctions of linear operators are important to many areas of applied mathematics. The ability to approximate these quantities numerically is becoming increasingly important in a wide variety of applications. This increasing demand has fueled interest in the development of new methods and software for the numerical solution of large-scale algebraic eigenvalue problems. In turn, the existence of these new methods and software, along with the dramatically increased computational capabilities now available, has enabled the solution of problems that would not even have been posed five or ten years ago. Until very recently, software for large-scale nonsymmetric problems was virtually non-existent. Fortunately, the situation is improving rapidly. The purpose of this article is to provide an overview of the numerical solution of large-scale algebraic eigenvalue problems. The focus will be on a class of methods called Krylov subspace projection methods. The well-known Lanczos method is the premier member of this class. The Arnoldi method generalizes the Lanczos method to the nonsymmetric case. A recently developed variant of the Arnoldi/Lanczos scheme called the Implicitly Restarted Arnoldi Method is presented here in some depth. This method is highlighted because of its suitability as a basis for software development.
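
The Arnoldi factorization at the core of these methods is compact enough to sketch. Below is a minimal, unrestarted Arnoldi iteration in plain Python; the implicitly restarted variant the record highlights adds shifted QR steps to compress the factorization, which is omitted here:

```python
# Minimal Arnoldi iteration (a sketch, not ARPACK's implicitly restarted
# variant): builds an orthonormal Krylov basis V and a small upper-Hessenberg
# matrix H whose eigenvalues (Ritz values) approximate those of A.
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def arnoldi(A, b, m):
    """m steps of Arnoldi on matrix A (list of rows) with start vector b."""
    norm = dot(b, b) ** 0.5
    V = [[x / norm for x in b]]           # orthonormal Krylov basis
    H = [[0.0] * m for _ in range(m + 1)]  # (m+1) x m Hessenberg matrix
    for j in range(m):
        w = matvec(A, V[j])
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i][j] = dot(V[i], w)
            w = [wk - H[i][j] * vk for wk, vk in zip(w, V[i])]
        H[j + 1][j] = dot(w, w) ** 0.5
        if H[j + 1][j] < 1e-12:            # invariant subspace found
            break
        V.append([x / H[j + 1][j] for x in w])
    return V, H
```

For a symmetric A the Hessenberg matrix H reduces to the tridiagonal form produced by the Lanczos method, as the record notes, and the eigenvalues of its leading m-by-m block approximate those of A.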

  13. Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2015-01-01

    Advances in chemoinformatics research, in parallel with the availability of high-performance computing platforms, have made it easier to handle large-scale, multi-dimensional scientific data for high-throughput drug discovery. In this study we have explored publicly available molecular databases with the help of open-source-based, integrated in-house molecular informatics tools for virtual screening. The virtual screening literature of the past decade has been extensively investigated and thoroughly analyzed to reveal interesting patterns with respect to the drug, target, scaffold and disease space. The review also focuses on integrated chemoinformatics tools capable of harvesting chemical data from textual literature and transforming it into truly computable chemical structures, identifying unique fragments and scaffolds from a class of compounds, automatically generating focused virtual libraries, computing molecular descriptors for structure-activity relationship studies, applying the conventional filters used in lead discovery along with in-house-developed exhaustive PTC (Pharmacophore, Toxicophores and Chemophores) filters, and using machine learning tools for the design of potential disease-specific inhibitors. A case study on kinase inhibitors is provided as an example.

  14. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    NASA Astrophysics Data System (ADS)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components are widespread in the advanced manufacturing industry, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and construction of the integrated system are introduced, and a mathematical model is established for global data fusion. Subsequently, a flexible and robust method is introduced for establishing the end coordinate system. Based on this method, a virtual robot noumenon is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment was implemented to verify the proposed algorithms: first, the hand-eye transformation matrix was solved; then a car body rear was measured 16 times to verify the global data fusion algorithm, and the 3D shape of the rear was reconstructed successfully.

  15. Robotic gait training in multiple sclerosis rehabilitation: Can virtual reality make the difference? Findings from a randomized controlled trial.

    PubMed

    Calabrò, Rocco Salvatore; Russo, Margherita; Naro, Antonino; De Luca, Rosaria; Leo, Antonino; Tomasello, Provvidenza; Molonia, Francesco; Dattola, Vincenzo; Bramanti, Alessia; Bramanti, Placido

    2017-06-15

    Gait, coordination, and balance may be severely compromised in patients with multiple sclerosis (MS), with considerable consequences for the patient's daily living activities, psychological status and quality of life. For this reason, MS patients may benefit from robotic rehabilitation and virtual reality training sessions. The aim of the present study was to assess the efficacy of robot-assisted gait training (RAGT) equipped with a virtual reality (VR) system in MS patients with walking disabilities (EDSS 4.0 to 5.5), as compared to RAGT without VR. We enrolled 40 patients (randomized into two groups) undergoing forty RAGT±VR sessions over eight weeks. All the patients were assessed at baseline and at the end of the treatment using specific scales. Effect sizes were very small and non-significant between the groups for the Berg Balance Scale (-0.019, 95%CI -2.403 to 2.365) and TUG (-0.064, 95%CI -0.408 to 0.536), favoring RAGT+VR. Effects were moderate-to-large and significant for the positive attitude (-0.505, 95%CI -3.615 to 2.604) and problem-solving (-0.905, 95%CI -2.113 to 0.302) sub-items of the Coping Orientation to Problems Experienced inventory, thus largely favoring RAGT+VR. Our findings show that RAGT combined with VR is an effective therapeutic option in MS patients with walking disability as compared to RAGT without VR. We may hypothesize that VR strengthens RAGT thanks to the entrainment of different brain areas involved in motor planning and learning. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Virtual Special Issue Preface: Forest Response to Environmental Stress: Impacts and Adaptation

    Treesearch

    Steven McNulty; Enzai Du; Elena Paoletti

    2017-01-01

    The current distribution of forest types was largely established at the beginning of the Holocene epoch (approximately 12,000 BCE), but forests are constantly in flux. Many regional-scale stresses (e.g., drought, heat, fire, and insects) and even a few multi-regional or global stresses (e.g., the 8200 BCE cooling, or the medieval warming period) have occurred over the past 12...

  17. Predictive Anomaly Management for Resilient Virtualized Computing Infrastructures

    DTIC Science & Technology

    2015-05-27

    PREC: Practical Root Exploit Containment for Android Devices, ACM Conference on Data and Application Security and Privacy (CODASPY), 03-MAR-14. Hiep Nguyen, Yongmin Tan, Xiaohui Gu, Propagation-aware Anomaly Localization for Cloud Hosted Distributed Applications, ACM Workshop on Managing Large-Scale Systems via the Analysis of System Logs and the Application of Machine Learning Techniques (SLAML) in conjunction with SOSP, 05-OCT-11.

  18. Establishing User Needs--A Large-Scale Study into the Requirements of Those Involved in the Research Process

    ERIC Educational Resources Information Center

    Grimshaw, Shirley; Wilson, Ian

    2009-01-01

    The aim of the project was to develop a set of online tools, systems and processes that would facilitate research at the University of Nottingham. The tools would be delivered via a portal, a one-stop place providing a Virtual Research Environment for all those involved in the research process. A predominantly bottom-up approach was used with…

  19. Russian Political, Economic, and Security Issues and U.S. Interests

    DTIC Science & Technology

    2007-01-18

    polonium 210 from Moscow, through Germany, to London, apparently carried by one of the Russians Litvinenko met November 1. Russian authorities deny...radio under tight state control and virtually eliminated effective political opposition. Federal forces have suppressed large-scale military resistance...Russia’s needs — food and food processing, oil and gas extraction technology, computers, communications, transportation, and investment capital — are

  20. Software Engineering Infrastructure in a Large Virtual Campus

    ERIC Educational Resources Information Center

    Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria

    2011-01-01

    Purpose: The design, construction and deployment of a large virtual campus are a complex issue. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…

  1. Effect of virtual reality in Parkinson's disease: a prospective observational study.

    PubMed

    Severiano, Maria Izabel Rodrigues; Zeigelboim, Bianca Simone; Teive, Hélio Afonso Ghizoni; Santos, Geslaine Janaína Barbosa; Fonseca, Vinícius Ribas

    2018-02-01

    To assess the effectiveness of balance exercises by means of virtual reality games in Parkinson's disease. Sixteen patients were submitted to anamnesis, otorhinolaryngological and vestibular examinations, as well as the Dizziness Handicap Inventory, Berg Balance Scale, SF-36 questionnaire, and the SRT, applied before and after rehabilitation with virtual reality games. Final scoring for the Dizziness Handicap Inventory and Berg Balance Scale was better after rehabilitation. The SRT showed a significant result after rehabilitation. The SF-36 showed a significant change in the functional capacity for the Tightrope Walk and Ski Slalom virtual reality games (p < 0.05), as well as in the mental health aspect of the Ski Slalom game (p < 0.05). The Dizziness Handicap Inventory and Berg Balance Scale showed significant changes in the Ski Slalom game (p < 0.05). There was evidence of clinical improvement in patients in the final assessment after virtual rehabilitation. The Tightrope Walk and Ski Slalom virtual games were shown to be the most effective for this population.

  2. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  3. Preserving third year medical students' empathy and enhancing self-reflection using small group "virtual hangout" technology.

    PubMed

    Duke, Pamela; Grosseman, Suely; Novack, Dennis H; Rosenzweig, Steven

    2015-01-01

    Medical student professionalism education is challenging in scope, purpose, and delivery, particularly in the clinical years when students in large universities are dispersed across multiple clinical sites. We initiated a faculty-facilitated, peer small group course for our third year students, creating virtual classrooms using social networking and online learning management system technologies. The course emphasized narrative self-reflection, group inquiry, and peer support. We conducted this study to analyze the effects of a professionalism course on third year medical students' empathy and self-reflection (two elements of professionalism) and their perceptions about the course. Students completed the Groningen Reflection Ability Scale (GRAS) and the Jefferson Scale of Empathy (JSE) before and after the course and provided anonymous online feedback. The results of the JSE before and after the course demonstrated preservation of empathy rather than its decline. In addition, there was a statistically significant increase in GRAS scores (p < 0.001), suggesting that the sharing of personal narratives may foster reflective ability and reflective practice among third year students. This study supports previous findings showing that students benefit from peer groups and discussion in a safe environment, which may include the use of a virtual group video platform.

  4. Collaborative visual analytics of radio surveys in the Big Data era

    NASA Astrophysics Data System (ADS)

    Vohl, Dany; Fluke, Christopher J.; Hassan, Amr H.; Barnes, David G.; Kilborn, Virginia A.

    2017-06-01

    Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered with a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction, parallel data query, along with data management. Its modularity will make it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform - allowing the research process to continue wherever you are.

  5. The contribution of virtual reality to the diagnosis of spatial navigation disorders and to the study of the role of navigational aids: A systematic literature review.

    PubMed

    Cogné, M; Taillade, M; N'Kaoua, B; Tarruella, A; Klinger, E; Larrue, F; Sauzéon, H; Joseph, P-A; Sorita, E

    2017-06-01

    Spatial navigation, which involves higher cognitive functions, is frequently implemented in daily activities and is critical to the participation of human beings in mainstream environments. Virtual reality is an expanding tool that enables, on the one hand, the assessment of the cognitive functions involved in spatial navigation, and on the other, the rehabilitation of patients with spatial navigation difficulties. Topographical disorientation is a frequent deficit among patients suffering from neurological diseases. The use of virtual environments enables the information incorporated into the virtual environment to be manipulated empirically, but the impact of manipulations seems to differ according to their nature (quantity, occurrence, and characteristics of the stimuli) and the target population. We performed a systematic review of research on virtual spatial navigation covering the period from 2005 to 2015. We focused first on the contribution of virtual spatial navigation for patients with brain injury or schizophrenia, or in the context of ageing and dementia, and then on the impact of visual or auditory stimuli on virtual spatial navigation. On the basis of 6521 abstracts identified in 2 databases (PubMed and Scopus) with the keywords "navigation" and "virtual", 1103 abstracts were selected by adding the keywords "ageing", "dementia", "brain injury", "stroke", "schizophrenia", "aid", "help", "stimulus" and "cue"; among these, 63 articles were included in the present qualitative analysis. Unlike pencil-and-paper tests, virtual reality is useful for assessing large-scale navigation strategies in patients with brain injury or schizophrenia, or in the context of ageing and dementia. Better knowledge about both the impact of the different aids and the cognitive processes involved is essential for the use of aids in neurorehabilitation. Copyright © 2016. Published by Elsevier Masson SAS.

  6. The Importance of Postural Cues for Determining Eye Height in Immersive Virtual Reality

    PubMed Central

    Leyrer, Markus; Linkenauger, Sally A.; Bülthoff, Heinrich H.; Mohler, Betty J.

    2015-01-01

    In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is provided is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and consequently, does not depend on an internalized value for eye height. PMID:25993274

  7. Virtual Inertia: Current Trends and Future Directions

    DOE PAGES

    Tamrakar, Ujjwol; Shrestha, Dipesh; Maharjan, Manisha; ...

    2017-06-26

    The modern power system is progressing from a synchronous machine-based system towards an inverter-dominated system, with large-scale penetration of renewable energy sources (RESs) like wind and photovoltaics. RES units today represent a major share of generation, and the traditional approach of integrating them as grid-following units can lead to frequency instability. Many researchers have pointed towards using inverters with virtual inertia control algorithms so that they appear as synchronous generators to the grid, maintaining and enhancing system stability. Our paper presents a literature review of the current state of the art of virtual inertia implementation techniques and explores potential research directions and challenges. The major virtual inertia topologies are compared and classified. Through literature review and simulations of selected topologies, it has been shown that similar inertial response can be achieved by relating the parameters of these topologies through time constants and inertia constants, although the exact frequency dynamics may vary slightly. The suitability of a topology depends on the system control architecture and the desired level of detail in replicating the dynamics of synchronous generators. We present a discussion of the challenges and research directions, pointing out several research needs, especially for system-level integration of virtual inertia systems.
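
The common core of the surveyed topologies can be sketched as a swing-equation emulator; the inertia and damping constants below are illustrative placeholders, not values from the paper:

```python
# Sketch of the basic virtual-inertia idea: an inverter mimics the swing
# equation of a synchronous machine, commanding extra power proportional to
# the frequency deviation and its rate of change. Gains are illustrative.
def virtual_inertia_power(delta_f, d_delta_f_dt, H=5.0, D=20.0, f0=50.0, s_base=1.0):
    """Per-unit power command: P = -(2H/f0) * d(delta_f)/dt - D * delta_f."""
    return s_base * (-(2.0 * H / f0) * d_delta_f_dt - D * delta_f)
```

A falling frequency (negative df/dt, negative delta_f) yields a positive power command, which is how the inverter emulates the kinetic-energy release of a synchronous machine's rotor; relating H and D to the time constants of the other topologies is what makes their inertial responses comparable.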

  8. Virtual Inertia: Current Trends and Future Directions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamrakar, Ujjwol; Shrestha, Dipesh; Maharjan, Manisha

The modern power system is progressing from a synchronous machine-based system towards an inverter-dominated system, with a large-scale penetration of renewable energy sources (RESs) like wind and photovoltaics. RES units today represent a major share of the generation, and the traditional approach of integrating them as grid-following units can lead to frequency instability. Many researchers have pointed towards using inverters with virtual inertia control algorithms so that they appear as synchronous generators to the grid, maintaining and enhancing system stability. Our paper presents a literature review of the current state-of-the-art of virtual inertia implementation techniques, and explores potential research directions and challenges. The major virtual inertia topologies are compared and classified. Through literature review and simulations of some selected topologies, it has been shown that similar inertial response can be achieved by relating the parameters of these topologies through time constants and inertia constants, although the exact frequency dynamics may vary slightly. The suitability of a topology depends on system control architecture and the desired level of detail in replication of the dynamics of synchronous generators. We present a discussion on the challenges and research directions which points out several research needs, especially for systems-level integration of virtual inertia systems.

  9. The importance of postural cues for determining eye height in immersive virtual reality.

    PubMed

    Leyrer, Markus; Linkenauger, Sally A; Bülthoff, Heinrich H; Mohler, Betty J

    2015-01-01

    In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is provided is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and consequently, does not depend on an internalized value for eye height.

  10. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to the audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity to conduct large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this is occurring would make a great paper or study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that since hybrid combustion is boundary layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  11. Data Intensive Scientific Workflows on a Federated Cloud: CRADA Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    The Fermilab Scientific Computing Division and the KISTI Global Science Experimental Data Hub Center have built a prototypical large-scale infrastructure to handle scientific workflows of stakeholders to run on multiple cloud resources. The demonstrations have been in the areas of (a) Data-Intensive Scientific Workflows on Federated Clouds, (b) Interoperability and Federation of Cloud Resources, and (c) Virtual Infrastructure Automation to enable On-Demand Services.

  12. Research on computer-aided design of modern marine power systems

    NASA Astrophysics Data System (ADS)

    Ding, Dongdong; Zeng, Fanming; Chen, Guojun

    2004-03-01

To make the MPS (Marine Power System) design process more economical and easier, a new CAD scheme is proposed which takes advantage of VR (Virtual Reality) and AI (Artificial Intelligence) technologies. This CAD system can shorten the design period and greatly reduce the demands on designers' experience. Some key issues, such as the selection of hardware and software for such a system, are also discussed.

  13. Designing for Data with Ask Dr. Discovery: Design Approaches for Facilitating Museum Evaluation with Real-Time Data Mining

    ERIC Educational Resources Information Center

    Nelson, Brian C.; Bowman, Cassie; Bowman, Judd

    2017-01-01

    Ask Dr. Discovery is an NSF-funded study addressing the need for ongoing, large-scale museum evaluation while investigating new ways to encourage museum visitors to engage deeply with museum content. To realize these aims, we are developing and implementing a mobile app with two parts: (1) a front-end virtual scientist called Dr. Discovery (Dr. D)…

  14. Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce

    DTIC Science & Technology

    2014-03-27

Results for … million images running on 20 virtual machines are shown. Subject terms: Image Retrieval, MapReduce, Hierarchical K-Means, Big Data, Hadoop.

  15. Teaching the Blind to Find Their Way by Playing Video Games

    PubMed Central

    Merabet, Lotfi B.; Connors, Erin C.; Halko, Mark A.; Sánchez, Jaime

    2012-01-01

    Computer based video games are receiving great interest as a means to learn and acquire new skills. As a novel approach to teaching navigation skills in the blind, we have developed Audio-based Environment Simulator (AbES); a virtual reality environment set within the context of a video game metaphor. Despite the fact that participants were naïve to the overall purpose of the software, we found that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building using audio based cues alone. This was confirmed by a series of behavioral performance tests designed to assess the transfer of acquired spatial information to a large-scale, real-world indoor navigation task. Furthermore, learning the spatial layout through a goal directed gaming strategy allowed for the mental manipulation of spatial information as evidenced by enhanced navigation performance when compared to an explicit route learning strategy. We conclude that the immersive and highly interactive nature of the software greatly engages the blind user to actively explore the virtual environment. This in turn generates an accurate sense of a large-scale three-dimensional space and facilitates the learning and transfer of navigation skills to the physical world. PMID:23028703

  16. On the (a)symmetry between the perception of time and space in large-scale environments.

    PubMed

    Riemer, Martin; Shine, Jonathan P; Wolbers, Thomas

    2018-04-23

    Cross-dimensional interference between spatial and temporal processing is well documented in humans, but the direction of these interactions remains unclear. The theory of metaphoric structuring states that space is the dominant concept influencing time perception, whereas time has little effect upon the perception of space. In contrast, theories proposing a common neuronal mechanism representing magnitudes argue for a symmetric interaction between space and time perception. Here, we investigated space-time interactions in realistic, large-scale virtual environments. Our results demonstrate a symmetric relationship between the perception of temporal intervals in the supra-second range and room size (experiment 1), but an asymmetric relationship between the perception of travel time and traveled distance (experiment 2). While the perception of time was influenced by the size of virtual rooms and by the distance traveled within these rooms, time itself affected only the perception of room size, but had no influence on the perception of traveled distance. These results are discussed in the context of recent evidence from rodent studies suggesting that subsets of hippocampal place and entorhinal grid cells can simultaneously code for space and time, providing a potential neuronal basis for the interactions between these domains. © 2018 Wiley Periodicals, Inc.

  17. Large-Scale Land Acquisition and Its Effects on the Water Balance in Investor and Host Countries

    PubMed Central

    Breu, Thomas; Bader, Christoph; Messerli, Peter; Heinimann, Andreas; Rist, Stephan; Eckert, Sandra

    2016-01-01

    This study examines the validity of the assumption that international large-scale land acquisition (LSLA) is motivated by the desire to secure control over water resources, which is commonly referred to as ‘water grabbing’. This assumption was repeatedly expressed in recent years, ascribing the said motivation to the Gulf States in particular. However, it must be considered of hypothetical nature, as the few global studies conducted so far focused primarily on the effects of LSLA on host countries or on trade in virtual water. In this study, we analyse the effects of 475 intended or concluded land deals recorded in the Land Matrix database on the water balance in both host and investor countries. We also examine how these effects relate to water stress and how they contribute to global trade in virtual water. The analysis shows that implementation of the LSLAs in our sample would result in global water savings based on virtual water trade. At the level of individual LSLA host countries, however, water use intensity would increase, particularly in 15 sub-Saharan states. From an investor country perspective, the analysis reveals that countries often suspected of using LSLA to relieve pressure on their domestic water resources—such as China, India, and all Gulf States except Saudi Arabia—invest in agricultural activities abroad that are less water-intensive compared to their average domestic crop production. Conversely, large investor countries such as the United States, Saudi Arabia, Singapore, and Japan are disproportionately externalizing crop water consumption through their international land investments. Statistical analyses also show that host countries with abundant water resources are not per se favoured targets of LSLA. Indeed, further analysis reveals that land investments originating in water-stressed countries have only a weak tendency to target areas with a smaller water risk. PMID:26943794

  18. Large-Scale Land Acquisition and Its Effects on the Water Balance in Investor and Host Countries.

    PubMed

    Breu, Thomas; Bader, Christoph; Messerli, Peter; Heinimann, Andreas; Rist, Stephan; Eckert, Sandra

    2016-01-01

    This study examines the validity of the assumption that international large-scale land acquisition (LSLA) is motivated by the desire to secure control over water resources, which is commonly referred to as 'water grabbing'. This assumption was repeatedly expressed in recent years, ascribing the said motivation to the Gulf States in particular. However, it must be considered of hypothetical nature, as the few global studies conducted so far focused primarily on the effects of LSLA on host countries or on trade in virtual water. In this study, we analyse the effects of 475 intended or concluded land deals recorded in the Land Matrix database on the water balance in both host and investor countries. We also examine how these effects relate to water stress and how they contribute to global trade in virtual water. The analysis shows that implementation of the LSLAs in our sample would result in global water savings based on virtual water trade. At the level of individual LSLA host countries, however, water use intensity would increase, particularly in 15 sub-Saharan states. From an investor country perspective, the analysis reveals that countries often suspected of using LSLA to relieve pressure on their domestic water resources--such as China, India, and all Gulf States except Saudi Arabia--invest in agricultural activities abroad that are less water-intensive compared to their average domestic crop production. Conversely, large investor countries such as the United States, Saudi Arabia, Singapore, and Japan are disproportionately externalizing crop water consumption through their international land investments. Statistical analyses also show that host countries with abundant water resources are not per se favoured targets of LSLA. Indeed, further analysis reveals that land investments originating in water-stressed countries have only a weak tendency to target areas with a smaller water risk.

  19. The German VR Simulation Realism Scale--psychometric construction for virtual reality applications with virtual humans.

    PubMed

    Poeschl, Sandra; Doering, Nicola

    2013-01-01

    Virtual training applications with high levels of immersion or fidelity (for example for social phobia treatment) produce high levels of presence and therefore belong to the most successful Virtual Reality developments. Whereas display and interaction fidelity (as sub-dimensions of immersion) and their influence on presence are well researched, realism of the displayed simulation depends on the specific application and is therefore difficult to measure. We propose to measure simulation realism by using a self-report questionnaire. The German VR Simulation Realism Scale for VR training applications was developed based on a translation of scene realism items from the Witmer-Singer-Presence Questionnaire. Items for realism of virtual humans (for example for social phobia training applications) were supplemented. A sample of N = 151 students rated simulation realism of a Fear of Public Speaking application. Four factors were derived by item- and principle component analysis (Varimax rotation), representing Scene Realism, Audience Behavior, Audience Appearance and Sound Realism. The scale developed can be used as a starting point for future research and measurement of simulation realism for applications including virtual humans.

  20. The Fold Analysis Challenge: A virtual globe-based educational resource

    NASA Astrophysics Data System (ADS)

    De Paor, Declan G.; Dordevic, Mladen M.; Karabinos, Paul; Tewksbury, Barbara J.; Whitmeyer, Steven J.

    2016-04-01

    We present an undergraduate structural geology laboratory exercise using the Google Earth virtual globe with COLLADA models, optionally including an interactive stereographic projection and JavaScript controls. The learning resource challenges students to identify bedding traces and estimate bedding orientation at several locations on a fold, to fit the fold axis and axial plane to stereographic projection data, and to fit a doubly-plunging fold model to the large-scale structure. The chosen fold is the Sheep Mountain Anticline, a Laramide uplift in the Big Horn Basin of Wyoming. We take an education research-based approach, guiding students through three levels of difficulty. The exercise aims to counter common student misconceptions and stumbling blocks regarding penetrative structures. It can be used in preparation for an in-person field trip, for post-trip reinforcement, or as a virtual field experience in an online-only course. Our KML scripts can be easily transferred to other fold structures around the globe.

  1. Towards Transparent Throughput Elasticity for IaaS Cloud Storage: Exploring the Benefits of Adaptive Block-Level Caching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicolae, Bogdan; Riteau, Pierre; Keahey, Kate

Storage elasticity on IaaS clouds is a crucial feature in the age of data-intensive computing, especially when considering fluctuations of I/O throughput. This paper provides a transparent solution that automatically boosts I/O bandwidth during peaks for underlying virtual disks, effectively avoiding over-provisioning without performance loss. The authors' proposal relies on the idea of leveraging short-lived virtual disks of better performance characteristics (and thus more expensive) to act during peaks as a caching layer for the persistent virtual disks where the application data is stored. Furthermore, they introduce a performance and cost prediction methodology that can be used both independently to estimate in advance what trade-off between performance and cost is possible, as well as an optimization technique that enables better cache size selection to meet the desired performance level with minimal cost. The authors demonstrate the benefits of their proposal both for microbenchmarks and for two real-life applications using large-scale experiments.
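The cache-size selection idea above can be sketched as a simple search: given a predicted throughput for each candidate cache size and a per-GB price, pick the cheapest size that still meets the target. The function name, the saturating prediction curve, and all the numbers below are hypothetical stand-ins, not the authors' methodology.

```python
# Hypothetical sketch of cost-aware cache sizing: choose the cheapest
# candidate cache size whose predicted throughput meets the target.

def pick_cache_size(candidates_gb, predict_mbps, target_mbps, cost_per_gb):
    feasible = [c for c in candidates_gb if predict_mbps(c) >= target_mbps]
    # Cost grows linearly with size, so the cheapest feasible size wins.
    return min(feasible, key=lambda c: c * cost_per_gb) if feasible else None

# Toy prediction curve: throughput saturates as the cache grows.
predict = lambda gb: 100.0 + 40.0 * gb / (gb + 8.0)

size = pick_cache_size([2, 4, 8, 16, 32], predict, target_mbps=120.0,
                       cost_per_gb=0.10)
```

Returning `None` when no candidate meets the target mirrors the trade-off estimation the authors describe: sometimes the desired performance level is simply not reachable at any cache size, and the user must revise the target.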

  2. A Virtual Reality Visualization Tool for Neuron Tracing

    PubMed Central

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Angelucci, Alessandra; Pascucci, Valerio

    2017-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists. PMID:28866520

  3. Low Cost, Scalable Proteomics Data Analysis Using Amazon's Cloud Computing Services and Open Source Search Algorithms

    PubMed Central

    Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.

    2009-01-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step by step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578

  4. Low cost, scalable proteomics data analysis using Amazon's cloud computing services and open source search algorithms.

    PubMed

    Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N

    2009-06-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).

  5. A Virtual Reality Visualization Tool for Neuron Tracing.

    PubMed

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Yarch, Jeff; Angelucci, Alessandra; Pascucci, Valerio

    2018-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.

  6. DOVIS: an implementation for high-throughput virtual screening using AutoDock.

    PubMed

    Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques

    2008-02-27

    Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.
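The quoted throughput (500 to 1,000 compounds per processor per day) translates directly into cluster-sizing arithmetic. The library and cluster sizes below are hypothetical examples, not figures from the paper.

```python
# Back-of-the-envelope sizing for a DOVIS-style screening campaign,
# using the per-processor throughput quoted in the abstract.
import math

def days_to_screen(n_compounds, n_processors, per_proc_per_day):
    # Round up: a partial day on the cluster is still a day of wall time.
    return math.ceil(n_compounds / (n_processors * per_proc_per_day))

# Hypothetical campaign: 2 million compounds on a 256-core Linux cluster
# at a mid-range rate of 750 compounds per processor per day.
days = days_to_screen(2_000_000, 256, 750)
```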

  7. ChemHTPS - A virtual high-throughput screening program suite for the chemical and materials sciences

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Evangelista, William; Hachmann, Johannes

    The discovery of new compounds, materials, and chemical reactions with exceptional properties is the key for the grand challenges in innovation, energy and sustainability. This process can be dramatically accelerated by means of the virtual high-throughput screening (HTPS) of large-scale candidate libraries. The resulting data can further be used to study the underlying structure-property relationships and thus facilitate rational design capability. This approach has been extensively used for many years in the drug discovery community. However, the lack of openly available virtual HTPS tools is limiting the use of these techniques in various other applications such as photovoltaics, optoelectronics, and catalysis. Thus, we developed ChemHTPS, a general-purpose, comprehensive and user-friendly suite, that will allow users to efficiently perform large in silico modeling studies and high-throughput analyses in these applications. ChemHTPS also includes a massively parallel molecular library generator which offers a multitude of options to customize and restrict the scope of the enumerated chemical space and thus tailor it for the demands of specific applications. To streamline the non-combinatorial exploration of chemical space, we incorporate genetic algorithms into the framework. In addition to implementing smarter algorithms, we also focus on the ease of use, workflow, and code integration to make this technology more accessible to the community.
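A combinatorial library generator with user-defined restrictions, of the kind described above, can be sketched in a few lines. The fragment names and the constraint are placeholders, not ChemHTPS code or real chemistry.

```python
# Minimal sketch of a combinatorial library enumerator: attach side groups
# to each core scaffold at a fixed number of sites, filtering with a
# caller-supplied constraint to restrict the enumerated chemical space.
from itertools import product

def enumerate_library(cores, side_groups, n_sites=2, allow=lambda cand: True):
    for core in cores:
        for subs in product(side_groups, repeat=n_sites):
            candidate = (core,) + subs
            if allow(candidate):
                yield candidate

cores = ["core_A", "core_B"]          # placeholder scaffolds
side_groups = ["R1", "R2", "R3"]      # placeholder substituents
library = list(enumerate_library(cores, side_groups))
```

Passing a different `allow` predicate (for example, forbidding identical substituents at both sites) is the hook for the kind of customization and restriction of the enumerated space that the abstract mentions.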

  8. WE-AB-207B-07: Dose Cloud: Generating “Big Data” for Radiation Therapy Treatment Plan Optimization Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Folkerts, MM; University of California San Diego, La Jolla, California; Long, T

Purpose: To provide a tool to generate large sets of realistic virtual patient geometries and beamlet doses for treatment optimization research. This tool enables countless studies exploring the fundamental interplay between patient geometry, objective functions, weight selections, and achievable dose distributions for various algorithms and modalities. Methods: Generating realistic virtual patient geometries requires a small set of real patient data. We developed a normalized patient shape model (PSM) which captures organ and target contours in a correspondence-preserving manner. Using PSM-processed data, we perform principal component analysis (PCA) to extract major modes of variation from the population. These PCA modes can be shared without exposing patient information. The modes are re-combined with different weights to produce sets of realistic virtual patient contours. Because virtual patients lack imaging information, we developed a shape-based dose calculation (SBD) relying on the assumption that the region inside the body contour is water. SBD utilizes a 2D fluence-convolved scatter kernel, derived from Monte Carlo simulations, and can compute both full dose for a given set of fluence maps, or produce a dose matrix (dose per fluence pixel) for many modalities. Combining the shape model with SBD provides the data needed for treatment plan optimization research. Results: We used PSM to capture organ and target contours for 96 prostate cases, extracted the first 20 PCA modes, and generated 2048 virtual patient shapes by randomly sampling mode scores. Nearly half of the shapes were thrown out for failing anatomical checks; the remaining 1124 were used in computing dose matrices via SBD and a standard 7-beam protocol. As a proof of concept, and to generate data for later study, we performed fluence map optimization emphasizing PTV coverage. Conclusions: We successfully developed and tested a tool for creating customizable sets of virtual patients suitable for large-scale radiation therapy optimization research.
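The mode-sampling step in the abstract (PCA on contour data, then random mode scores recombined into virtual shapes) can be sketched with NumPy. The synthetic random "contours" below stand in for real patient data, and the dimensions are chosen only to echo the abstract's figures.

```python
# Illustrative sketch (not the authors' code): synthesize virtual patient
# contours by sampling principal-component mode scores. The input "shapes"
# here are synthetic random vectors standing in for real contour data.
import numpy as np

rng = np.random.default_rng(0)

# 96 "patients", each a flattened contour of 50 (x, y) points.
n_patients, n_coords = 96, 100
data = rng.normal(size=(n_patients, n_coords))

mean = data.mean(axis=0)
centered = data - mean

# PCA via SVD; keep the first 20 modes, as in the abstract.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
n_modes = 20
modes = vt[:n_modes]                             # mode shape vectors
stddev = s[:n_modes] / np.sqrt(n_patients - 1)   # per-mode score spread

# Sample random mode scores and recombine them into a new virtual contour.
scores = rng.normal(size=n_modes) * stddev
virtual_shape = mean + scores @ modes
```

Only `mean`, `modes`, and `stddev` are needed to generate new shapes, which is why the abstract notes that PCA modes can be shared without exposing the underlying patient data.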

  9. DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.

    PubMed

    Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques

    2008-09-08

    Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.

  10. What are the low- Q and large- x boundaries of collinear QCD factorization theorems?

    DOE PAGES

    Moffat, E.; Melnitchouk, W.; Rogers, T. C.; ...

    2017-05-26

    Familiar factorized descriptions of classic QCD processes such as deeply-inelastic scattering (DIS) apply in the limit of very large hard scales, much larger than nonperturbative mass scales and other nonperturbative physical properties like intrinsic transverse momentum. Since many interesting DIS studies occur at kinematic regions where the hard scale, $Q \sim$ 1-2 GeV, is not very much greater than the hadron masses involved, and the Bjorken scaling variable $x_{bj}$ is large, $x_{bj} \gtrsim 0.5$, it is important to examine the boundaries of the most basic factorization assumptions and assess whether improved starting points are needed. Using an idealized field-theoretic model that contains most of the essential elements that a factorization derivation must confront, we retrace in this paper the steps of factorization approximations and compare with calculations that keep all kinematics exact. We examine the relative importance of such quantities as the target mass, light quark masses, and intrinsic parton transverse momentum, and argue that a careful accounting of parton virtuality is essential for treating power corrections to collinear factorization. Finally, we use our observations to motivate searches for new or enhanced factorization theorems specifically designed to deal with moderately low-$Q$ and large-$x_{bj}$ physics.

  11. Robotic/virtual reality intervention program individualized to meet the specific sensorimotor impairments of an individual patient: a case study.

    PubMed

    Fluet, Gerard G; Merians, Alma S; Qiu, Qinyin; Saleh, Soha; Ruano, Viviana; Delmonico, Andrea R; Adamovich, Sergei V

    2014-09-01

    A majority of studies examining repetitive task practice facilitated by robots for the treatment of upper extremity paresis utilize standardized protocols applied to large groups. This study describes a virtually simulated, robot-based intervention customized to match the goals and clinical presentation of a man with upper extremity hemiparesis secondary to stroke. MP, the subject of this case, is an 85-year-old man with left hemiparesis secondary to an intracerebral hemorrhage 5 years prior to examination. Outcomes were measured before and after a 1-month period of home therapy and after a 1-month virtually simulated, robotic intervention. The intervention was designed to address specific impairments identified during his PT examination. When necessary, activities were modified based on MP's response to his first week of treatment. MP's home training program produced a 3-s decline in Wolf Motor Function Test (WMFT) time and a 5-s improvement in Jebsen Test of Hand Function (JTHF) time. He demonstrated an additional 35-s improvement in JTHF and an additional 44-s improvement in WMFT subsequent to the robotic training intervention. A 24-h activity measurement and the Hand and Activities of Daily Living scales of the Stroke Impact Scale improved following the robotic intervention. Based on his response to training, we conclude that a customized program of virtually simulated, robotically facilitated rehabilitation was feasible and produced larger improvements than an intensive home training program on several measures of upper extremity function in our patient with chronic hemiparesis.

  12. Development of an audio-based virtual gaming environment to assist with navigation skills in the blind.

    PubMed

    Connors, Erin C; Yazzolino, Lindsay A; Sánchez, Jaime; Merabet, Lotfi B

    2013-03-27

    Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.

  13. A practical approach to virtualization in HEP

    NASA Astrophysics Data System (ADS)

    Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.

    2011-01-01

    In an attempt to solve the problem of processing data coming from the LHC experiments at CERN at a rate of 15 PB per year, the High Energy Physics (HEP) community has for almost a decade focused its efforts on the development of the Worldwide LHC Computing Grid. This generated great interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected of the idealized Grid approach. A key enabling technology behind Cloud Computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we first briefly compare the Grid and Cloud computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology can be used effectively in our field to maximize practical benefits while avoiding potential pitfalls.

  14. Feasibility Pilot Study: Training Soft Skills in Virtual Worlds.

    PubMed

    Abshier, Patricia

    2012-04-01

    In a world where funding is limited, training for healthcare professionals is turning more and more to distance learning in an effort to maintain a knowledgeable and skilled work force. In 2010, Cicatelli Associates, Inc. began exploring the feasibility of using games and virtual worlds as an alternative means to teach skills-training in a distance-learning environment. The pilot study was conducted with six individuals familiar with general counseling and communication skills used by the healthcare industry to promote behavior change. Participants reported that the venue, although challenging at first, showed great potential for use with healthcare providers, as it allowed for more interaction and activities than traditional Webinars. However, there are significant limitations that must be overcome in order for this healthcare training modality to be utilized on a large scale. These limitations included a lack of microgestures and issues regarding the technology being used. In spite of the limitations, however, the potential use of virtual worlds for the training of healthcare providers exists and should be researched further. This article discusses the need and intended benefits of virtual world training as well as the results and conclusions of the pilot study.

  15. Scalable Visual Analytics of Massive Textual Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.

    2007-04-01

    This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.

  16. Convective organization in the Pacific ITCZ: Merging OLR, TOVS, and SSM/I information

    NASA Technical Reports Server (NTRS)

    Hayes, Patrick M.; Mcguirk, James P.

    1993-01-01

    One of the most striking features of the planet's long-time average cloudiness is the zonal band of concentrated convection lying near the equator. Large-scale variability of the Intertropical Convergence Zone (ITCZ) has been well documented in studies of the planetary spatial scales and seasonal/annual/interannual temporal cycles of convection. Smaller-scale variability is difficult to study over the tropical oceans for several reasons. Conventional surface and upper-air data are virtually non-existent in some regions; diurnal and annual signals overwhelm fluctuations on other time scales; and analyses of variables such as geopotential and moisture are generally less reliable in the tropics. These problems make the use of satellite data an attractive alternative and the preferred means to study variability of tropical weather systems.

  17. The Investigation of Teacher Communication Practices in Virtual High School

    ERIC Educational Resources Information Center

    Belair, Marley

    2011-01-01

    Virtual schooling is an increasing trend for secondary education. Research of the communication practices in virtual schools has provided a myriad of suggestions for virtual school policies. Although transactional distance has been investigated in relation to certain aspects of the communication process, a small-scale qualitative study has not…

  18. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping over large areas abroad or with large volumes of images. In this paper, aiming at the geometric characteristics of optical satellite imagery, and building on a widely used optimization method for constrained problems, the Alternating Direction Method of Multipliers (ADMM), together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery, GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without control. Test results show that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem for adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments with GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
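The virtual "average" control point idea can be illustrated with a minimal sketch. This is a simplification for illustration, not the GISIBA implementation: it assumes each overlapping image already yields an (X, Y, Z) ground estimate for every tie point via its rational function model, and the mean of those redundant estimates is then used as a pseudo-observation to constrain the otherwise rank-deficient free-network adjustment.

```python
def virtual_control_points(tie_obs):
    """tie_obs maps tie-point id -> list of (X, Y, Z) ground estimates,
    one per overlapping image. The per-coordinate mean serves as a
    virtual 'average' control point anchoring the adjustment datum
    without real GCPs."""
    vcps = {}
    for tid, ests in tie_obs.items():
        n = len(ests)
        vcps[tid] = tuple(sum(e[k] for e in ests) / n for k in range(3))
    return vcps
```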

  19. WISDOM-II: screening against multiple targets implicated in malaria using computational grid infrastructures.

    PubMed

    Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent

    2009-05-01

    Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well suited to embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and resulted in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially vHTS, is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, different strategies for result analysis, on-the-fly storage of the results in MySQL databases, molecular dynamics refinement, and MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising, and in vitro tests are underway for all the targets against which screening was performed. The present paper describes this rational drug discovery activity at large scale, in particular molecular docking using the FlexX software on computational grids to find hits against three different targets (PfGST, PfDHFR, and PvDHFR, wild type and mutant forms) implicated in malaria. The grid-enabled virtual screening approach is proposed to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.

  20. Skill Transfer and Virtual Training for IND Response Decision-Making: Analysis of Decision-Making Skills for Large Scale Incidents

    DTIC Science & Technology

    2016-01-01

    issues comes from the Fukushima Daiichi nuclear disaster (2011). The local medical health professional on staff at the U.S. embassy in Tokyo was not... EXECUTIVE SUMMARY: An improvised nuclear device (IND)... from the phase one analysis are as follows: • There is strong consistency in both the key decisions and underlying skills emphasized by emergency

  1. VLSI Design of Trusted Virtual Sensors.

    PubMed

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm AEGIS to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely temperature and power supply voltage (final value as well as ramp-up time).
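Evaluating a piecewise-affine hyper-rectangular (PWAR) model amounts to locating the rectangular cell containing the input and applying that cell's affine map. The sketch below illustrates the general technique in software; it is an assumption-laden simplification (uniform grid, dictionary-stored coefficients), not the paper's hardware design or its parameter-fitting algorithm.

```python
def pwar_eval(x, lo, hi, divisions, params):
    # Locate the rectangular cell containing x in a uniform grid that
    # splits [lo, hi] into `divisions` intervals per dimension.
    cell = tuple(
        min(divisions - 1, int((x[i] - lo[i]) / (hi[i] - lo[i]) * divisions))
        for i in range(len(x))
    )
    # Each cell stores affine coefficients (a_1, ..., a_d, b); the
    # virtual measurement is the affine map a.x + b for that cell.
    *a, b = params[cell]
    return sum(ai * xi for ai, xi in zip(a, x)) + b
```

For example, a one-dimensional model on [0, 1] with two cells uses `params = {(0,): (2.0, 1.0), (1,): (0.0, 5.0)}`, i.e. 2x + 1 on the lower half and the constant 5 on the upper half.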

  2. VLSI Design of Trusted Virtual Sensors

    PubMed Central

    2018-01-01

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm AEGIS to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely temperature and power supply voltage (final value as well as ramp-up time). PMID:29370141

  3. A pseudoenergy wave-activity relation for ageostrophic and non-hydrostatic moist atmosphere

    NASA Astrophysics Data System (ADS)

    Ran, Ling-Kun; Ping, Fan

    2015-05-01

    By employing the energy-Casimir method, a three-dimensional virtual pseudoenergy wave-activity relation for a moist atmosphere is derived from a complete system of nonhydrostatic equations in Cartesian coordinates. Since this system of equations includes the effects of water substance, mass forcing, diabatic heating, and dissipations, the derived wave-activity relation generalizes the previous result for a dry atmosphere. The Casimir function used in the derivation is a monotonous function of virtual potential vorticity and virtual potential temperature. A virtual energy equation is employed (in place of the previous zonal momentum equation) in the derivation, and the basic state is stationary but can be three-dimensional or, at least, not necessarily zonally symmetric. The derived wave-activity relation is further used for the diagnosis of the evolution and propagation of meso-scale weather systems leading to heavy rainfall. Our diagnosis of two real cases of heavy precipitation shows that positive anomalies of the virtual pseudoenergy wave-activity density correspond well with the strong precipitation and are capable of indicating the movement of the precipitation region. This is largely due to the cyclonic vorticity perturbation and the vertically increasing virtual potential temperature over the precipitation region. Project supported by the National Basic Research Program of China (Grant No. 2013CB430105), the Key Program of the Chinese Academy of Sciences (Grant No. KZZD-EW-05), the National Natural Science Foundation of China (Grant No. 41175060), and the Project of CAMS, China (Grant No. 2011LASW-B15).

  4. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
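The benchmarking goal described above (picking the VM instance type with the best balance of performance and cost for a fixed simulation workload) reduces to a small selection problem. The sketch below is an illustration with made-up instance names and figures, not Maestro's actual logic: total cost is runtime times hourly price, optionally subject to a wall-clock deadline.

```python
def best_instance(benchmarks, price_per_hour, deadline=None):
    """benchmarks: instance type -> measured runtime (hours) for a
    reference simulation; price_per_hour: instance type -> price.
    Discard instances that miss the optional deadline, then pick
    the cheapest total-cost option among the rest."""
    feasible = {t: r for t, r in benchmarks.items()
                if deadline is None or r <= deadline}
    costs = {t: feasible[t] * price_per_hour[t] for t in feasible}
    return min(costs, key=costs.get)
```

Note that a slower, cheaper instance can win on total cost until a deadline forces the faster one, which is exactly the performance/cost trade-off the benchmarking is meant to expose.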

  5. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  6. Human Factors Virtual Analysis Techniques for NASA's Space Launch System Ground Support using MSFC's Virtual Environments Lab (VEL)

    NASA Technical Reports Server (NTRS)

    Searcy, Brittani

    2017-01-01

    Using virtual environments to assess complex large scale human tasks provides timely and cost effective results to evaluate designs and to reduce operational risks during assembly and integration of the Space Launch System (SLS). NASA's Marshall Space Flight Center (MSFC) uses a suite of tools to conduct integrated virtual analysis during the design phase of the SLS Program. Siemens Jack is a simulation tool that allows engineers to analyze human interaction with CAD designs by placing a digital human model into the environment to test different scenarios and assess the design's compliance with human factors requirements. Engineers at MSFC are using Jack in conjunction with motion capture and virtual reality systems in MSFC's Virtual Environments Lab (VEL). The VEL provides additional capability beyond standalone Jack to record and analyze a person performing a planned task to assemble the SLS at Kennedy Space Center (KSC). The VEL integrates the Vicon Blade motion capture system, Siemens Jack, Oculus Rift, and other virtual tools to perform human factors assessments. By using motion capture and virtual reality, a more accurate breakdown and understanding of how an operator will perform a task can be gained. By virtual analysis, engineers are able to determine whether a specific task can be safely performed by both a 5th-percentile (approx. 5 ft) female and a 95th-percentile (approx. 6 ft 1 in) male. In addition, the analysis will help identify any tools or other accommodations that may help complete the task. These assessments are critical for the safety of ground support engineers and for keeping launch operations on schedule. Motion capture allows engineers to save and examine human movements on a frame by frame basis, while virtual reality gives the actor (person performing a task in the VEL) an immersive view of the task environment. This presentation will discuss the need for human factors analysis for SLS and the benefits of analyzing tasks in NASA MSFC's VEL.

  7. Simulating Virtual Terminal Area Weather Data Bases for Use in the Wake Vortex Avoidance System (Wake VAS) Prediction Algorithm

    NASA Technical Reports Server (NTRS)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2004-01-01

    During the research project, sounding datasets were generated for the region surrounding 9 major airports: Dallas, TX; Boston, MA; New York, NY; Chicago, IL; St. Louis, MO; Atlanta, GA; Miami, FL; San Francisco, CA; and Los Angeles, CA. The numerical simulation of winter and summer environments, during which no instrument flight rule impact was occurring at these 9 terminals, was performed using the most contemporary version of the Terminal Area PBL Prediction System (TAPPS) model, nested from 36 km to 6 km to 1 km horizontal resolution with very detailed vertical resolution in the planetary boundary layer. The soundings from the 1 km model were archived at 30 minute intervals over a 24 hour period, and the vertically dependent variables as well as derived quantities, i.e., 3-dimensional wind components, temperatures, pressures, mixing ratios, turbulence kinetic energy, and eddy dissipation rates, were then interpolated to 5 m vertical resolution up to 1000 m above ground level. After partial validation against field experiment datasets for Dallas, as well as larger-scale and much coarser resolution observations at the other 8 airports, these sounding datasets were sent to NASA for use in the Virtual Air Space and Modeling program. These datasets are used to determine representative airport weather environments and to diagnose the response of simulated wake vortices to realistic atmospheric environments. The virtual datasets are based on large-scale observed atmospheric initial conditions that are dynamically interpolated in space and time, with the 1 km nested-grid simulations providing a coarse and highly smoothed representation of airport-environment meteorological conditions.
    Details concerning the airport surface forcing are virtually absent from these simulated datasets, although the observed background atmospheric processes have been compared to the simulated fields, and the fields were found to accurately replicate the flows surrounding the airport both where coarse verification data were available and where airport-scale datasets were available.

  8. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  9. Telescopic multi-resolution augmented reality

    NASA Astrophysics Data System (ADS)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information but also to conduct thought experiments through AR. As a result of the consistency, this framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make-believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations.

  10. Statistical scaling of geometric characteristics in stochastically generated pore microstructures

    DOE PAGES

    Hyman, Jeffrey D.; Guadagnini, Alberto; Winter, C. Larrabee

    2015-05-21

    In this study, we analyze the statistical scaling of structural attributes of virtual porous microstructures that are stochastically generated by thresholding Gaussian random fields. Characterization of the extent to which randomly generated pore spaces can be considered representative of a particular rock sample depends on the metrics employed to compare the virtual sample against its physical counterpart. Typically, comparisons against features and/or patterns of geometric observables, e.g., porosity and specific surface area, flow-related macroscopic parameters, e.g., permeability, or autocorrelation functions are used to assess the representativeness of a virtual sample, and thereby the quality of the generation method. Here, we rely on manifestations of statistical scaling of geometric observables which were recently observed in real millimeter scale rock samples [13] as additional relevant metrics by which to characterize a virtual sample. We explore the statistical scaling of two geometric observables, namely porosity (Φ) and specific surface area (SSA), of porous microstructures generated using the method of Smolarkiewicz and Winter [42] and Hyman and Winter [22]. Our results suggest that the method can produce virtual pore space samples displaying the symptoms of statistical scaling observed in real rock samples. Order q sample structure functions (statistical moments of absolute increments) of Φ and SSA scale as a power of the separation distance (lag) over a range of lags, and extended self-similarity (linear relationship between log structure functions of successive orders) appears to be an intrinsic property of the generated media. The width of the range of lags where power-law scaling is observed and the Hurst coefficient associated with the variables we consider can be controlled by the generation parameters of the method.
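The order-q sample structure functions mentioned above are straightforward to compute for a 1-D series of an observable: S_q(l) is the mean of |increment|^q over all pairs separated by lag l, and power-law scaling S_q(l) ~ l^ζ(q) shows up as a straight line on a log-log plot. The sketch below illustrates the standard calculation (a simplification of what the study applies to porosity and specific surface area profiles), including a least-squares estimate of the scaling exponent.

```python
import math

def structure_functions(series, lags, orders):
    # S_q(l): mean of |increment|^q over all pairs separated by lag l.
    S = {}
    for q in orders:
        for l in lags:
            inc = [abs(series[i + l] - series[i])
                   for i in range(len(series) - l)]
            S[(q, l)] = sum(v ** q for v in inc) / len(inc)
    return S

def scaling_exponent(S, q, lags):
    # Least-squares slope of log S_q(l) versus log l, i.e. zeta(q).
    xs = [math.log(l) for l in lags]
    ys = [math.log(S[(q, l)]) for l in lags]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

As a sanity check, a linear ramp has increments exactly equal to the lag, so S_q(l) = l^q and ζ(q) = q; extended self-similarity would be probed by regressing log S_q against log S_{q'} for successive orders.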

  11. Addressing data privacy in matched studies via virtual pooling.

    PubMed

    Saha-Chaudhuri, P; Weinberg, C R

    2017-09-07

Data confidentiality and shared use of research data are two desirable but sometimes conflicting goals in research with multi-center studies and distributed data. While ideal for straightforward analysis, confidentiality restrictions forbid creation of a single dataset that includes covariate information for all participants. Current approaches such as aggregate data sharing, distributed regression, meta-analysis and score-based methods can have important limitations. We propose a novel application of an existing epidemiologic tool, specimen pooling, to enable confidentiality-preserving analysis of data arising from a matched case-control, multi-center design. Instead of pooling specimens prior to assay, we apply the methodology to virtually pool (aggregate) covariates within nodes. Such virtual pooling retains most of the information used in an analysis with individual data, and since individual participant data are not shared externally, within-node virtual pooling preserves data confidentiality. We show that aggregated covariate levels can be used in a conditional logistic regression model to estimate individual-level odds ratios of interest. The parameter estimates from the standard conditional logistic regression are compared to the estimates based on a conditional logistic regression model with aggregated data. The parameter estimates are shown to be similar to those without pooling and to have comparable standard errors and confidence interval coverage. Virtual data pooling can be used to maintain confidentiality of data from a multi-center study and can be particularly useful in research with large-scale distributed data.
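
    The within-node aggregation step can be illustrated with a toy sketch: only per-matched-set covariate sums ("virtual pools") leave the node, never individual participant rows. The 1:2 matching ratio, covariate count, and dictionary layout below are hypothetical, not the paper's protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_sets, n_controls, n_cov = 5, 2, 3

    # Individual-level data held privately inside one node: each matched set
    # has one case and two matched controls, each with n_cov covariates.
    matched_sets = [
        {
            "case": rng.normal(size=n_cov),
            "controls": rng.normal(size=(n_controls, n_cov)),
        }
        for _ in range(n_sets)
    ]

    # Only these per-set aggregates are exported for the pooled conditional
    # logistic regression; individual rows stay within the node.
    shared = [
        {
            "case_sum": s["case"],                     # single case: sum equals the row
            "control_sum": s["controls"].sum(axis=0),  # pooled control covariates
        }
        for s in matched_sets
    ]
    ```

    A coordinating center would then fit the conditional logistic model to these aggregated covariate levels, as the abstract describes.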

  12. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    NASA Astrophysics Data System (ADS)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for next-generation DCs, resulting from the growing demands for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high-performance flat DCN employing bufferless, distributed, fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching capability and efficiently sharing data plane resources by exploiting statistical multiplexing. Benefiting from Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data center interconnections. Experimental results assessing the DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640 ns end-to-end latency at a load of 0.4 with a 16-packet buffer. Numerical investigation of system performance when the port count of the optical switch is scaled to 32x32 indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing a Petabit/s capacity. The roadmap to photonic integration of large-port optical switches is also presented.

  13. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  14. An experimental study of combustion: The turbulent structure of a reacting shear layer formed at a rearward-facing step. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Pitz, R. W.

    1981-01-01

A premixed propane-air flame is stabilized in a turbulent free shear layer formed at a rearward-facing step. The mean and rms averages of the turbulent velocity flow field were determined by LDV for both reacting and non-reacting flows. The reacting flow was visualized by high-speed schlieren photography. Large-scale structures dominate the reacting shear layer, and their growth is tied to the propagation of the flame. The linear growth rate of the reacting shear layer defined by the mean velocity profiles is unchanged by combustion, but the virtual origin is shifted downstream. The reacting shear layer based on the mean velocity profiles is shifted toward the recirculation zone, and the reattachment lengths are shortened by 30%.

  15. WWC Review of the Report "Conceptualizing Astronomical Scale: Virtual Simulations on Handheld Tablet Computers Reverse Misconceptions." What Works Clearinghouse Single Study Review

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2014

    2014-01-01

    The 2014 study, "Conceptualizing Astronomical Scale: Virtual Simulations on Handheld Tablet Computers Reverse Misconceptions," examined the effects of using the true-to-scale (TTS) display mode versus the orrery display mode in the iPad's Solar Walk software application on students' knowledge of the Earth's place in the solar system. The…

  16. Immersive Virtual Reality Therapy with Myoelectric Control for Treatment-resistant Phantom Limb Pain: Case Report.

    PubMed

    Chau, Brian; Phelan, Ivan; Ta, Phillip; Humbert, Sarah; Hata, Justin; Tran, Duc

    2017-01-01

Objective: Phantom limb pain is a condition frequently experienced after amputation. One treatment for phantom limb pain is traditional mirror therapy, yet some patients do not respond to this intervention, and immersive virtual reality mirror therapy offers some potential advantages. We report the case of a patient with severe phantom limb pain following an upper limb amputation and successful treatment with therapy in a custom virtual reality environment. Methods: An interactive 3-D kitchen environment was developed based on the principles of mirror therapy to allow for control of virtual hands while wearing a motion-tracked, head-mounted virtual reality display. The patient used myoelectric control of a virtual hand as well as motion-tracking control in this setting for five therapy sessions. Pain scale measurements and subjective feedback were elicited at each session. Results: Analysis of the measured pain scales showed statistically significant decreases per session [Visual Analog Scale, Short Form McGill Pain Questionnaire, and Wong-Baker FACES pain scores decreased by 55 percent (p=0.0143), 60 percent (p=0.023), and 90 percent (p=0.0024), respectively]. Significant subjective pain relief persisting between sessions was also reported, as well as marked immersion within the virtual environments. On follow-up at six weeks, the patient noted a continued decrease in phantom limb pain symptoms. Conclusions: Currently available immersive virtual reality technology with myoelectric and motion-tracking control may represent a possible therapy option for treatment-resistant phantom limb pain.

  17. Immersive Virtual Reality Therapy with Myoelectric Control for Treatment-resistant Phantom Limb Pain: Case Report

    PubMed Central

    Phelan, Ivan; Ta, Phillip; Humbert, Sarah; Hata, Justin; Tran, Duc

    2017-01-01

Objective: Phantom limb pain is a condition frequently experienced after amputation. One treatment for phantom limb pain is traditional mirror therapy, yet some patients do not respond to this intervention, and immersive virtual reality mirror therapy offers some potential advantages. We report the case of a patient with severe phantom limb pain following an upper limb amputation and successful treatment with therapy in a custom virtual reality environment. Methods: An interactive 3-D kitchen environment was developed based on the principles of mirror therapy to allow for control of virtual hands while wearing a motion-tracked, head-mounted virtual reality display. The patient used myoelectric control of a virtual hand as well as motion-tracking control in this setting for five therapy sessions. Pain scale measurements and subjective feedback were elicited at each session. Results: Analysis of the measured pain scales showed statistically significant decreases per session [Visual Analog Scale, Short Form McGill Pain Questionnaire, and Wong-Baker FACES pain scores decreased by 55 percent (p=0.0143), 60 percent (p=0.023), and 90 percent (p=0.0024), respectively]. Significant subjective pain relief persisting between sessions was also reported, as well as marked immersion within the virtual environments. On follow-up at six weeks, the patient noted a continued decrease in phantom limb pain symptoms. Conclusions: Currently available immersive virtual reality technology with myoelectric and motion-tracking control may represent a possible therapy option for treatment-resistant phantom limb pain. PMID:29616149

  18. Measuring Co-Presence and Social Presence in Virtual Environments - Psychometric Construction of a German Scale for a Fear of Public Speaking Scenario.

    PubMed

    Poeschl, Sandra; Doering, Nicola

    2015-01-01

Virtual reality exposure therapy (VRET) applications use high levels of fidelity in order to produce high levels of presence and thereby elicit an emotional response from the user (like fear for phobia treatment). The state of research shows mixed results for the correlation between anxiety and presence in virtual reality exposure, with differing results depending on the specific anxiety disorder. A positive correlation between anxiety and presence has not yet been demonstrated for social anxiety disorder. One reason might be that the plausibility of the simulation, namely the inclusion of key triggers for social anxiety (for example, verbal and non-verbal behavior of virtual agents that reflects potentially negative human evaluation), might not be acknowledged in current presence questionnaires. A German scale for measuring co-presence and social presence in virtual reality (VR) fear of public speaking scenarios was developed based on a translation and adaptation of existing co-presence and social presence questionnaires. A sample of N = 151 students rated co-presence and social presence after using a fear of public speaking application. Four correlated factors were derived by item analysis and principal axis factor analysis (Promax rotation), representing the presenter's reaction to virtual agents, the reactions of the virtual agents as perceived by the presenter, the impression of interaction possibilities, and (co-)presence of other people in the virtual environment. The scale developed can be used as a starting point for future research and test construction for VR applications with a social context.

  19. Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform.

    PubMed

    Marshall-Colon, Amy; Long, Stephen P; Allen, Douglas K; Allen, Gabrielle; Beard, Daniel A; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A J; Cox, Donna J; Hart, John C; Hirst, Peter M; Kannan, Kavya; Katz, Daniel S; Lynch, Jonathan P; Millar, Andrew J; Panneerselvam, Balaji; Price, Nathan D; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J; Voit, Eberhard O; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang

    2017-01-01

Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes could accelerate improvements in crop yield and sustainability and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improving prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops, and which is open and accessible to the entire plant biology community. The major challenges involved in both the development and deployment of a shared, multi-scale modeling platform, which are summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop.

  20. Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform

    PubMed Central

    Marshall-Colon, Amy; Long, Stephen P.; Allen, Douglas K.; Allen, Gabrielle; Beard, Daniel A.; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A. J.; Cox, Donna J.; Hart, John C.; Hirst, Peter M.; Kannan, Kavya; Katz, Daniel S.; Lynch, Jonathan P.; Millar, Andrew J.; Panneerselvam, Balaji; Price, Nathan D.; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G.; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J.; Voit, Eberhard O.; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang

    2017-01-01

Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes could accelerate improvements in crop yield and sustainability and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improving prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops, and which is open and accessible to the entire plant biology community. The major challenges involved in both the development and deployment of a shared, multi-scale modeling platform, which are summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop. PMID:28555150

  1. Life Enhancement of Naval Systems through Advanced Materials.

    DTIC Science & Technology

    1982-05-12

sulfate (eutectic at 575°C) and nickel sulfate-sodium sulfate (eutectic at 670°C) systems. Cobalt and nickel sulfate are thermally unstable and undergo a...large scale commercial usage. Table IV-l - Ion implantation parameters Implanted Elements - Virtually any element from hydrogen to uranium can be...readily attainable by oxidation of the up to 1% sulfur allowed in Navy fuel. Therefore, cobalt and nickel sulfate are formed by reaction of the

  2. New ultracool subdwarfs identified in large-scale surveys using Virtual Observatory tools (Corrigendum). I. UKIDSS LAS DR5 vs. SDSS DR7

    NASA Astrophysics Data System (ADS)

    Lodieu, N.; Espinoza Contreras, M.; Zapatero Osorio, M. R.; Solano, E.; Aberasturi, M.; Martín, E. L.

    2017-01-01

Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 084.C-0928A. Based on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.

  3. Skill Transfer and Virtual Training for IND Response Decision-Making: Analysis of Decision-Making Skills for Large-Scale Incidents

    DTIC Science & Technology

    2016-04-12

    One example of communication issues comes from the Fukushima Daiichi nuclear disaster (2011). The local medical health professional on staff at the...field of radiological and nuclear disaster management to help disaster management professionals develop and demonstrate relevant expertise [3]. The next...improvised nuclear device (IND) detonation in an urban area would be one of the most catastrophic incidents that could occur in the United States, resulting

  4. Modeling Large-Scale Networks Using Virtual Machines and Physical Appliances

    DTIC Science & Technology

    2014-01-27

downloaded and run locally. The lab solution couldn’t be based on ActiveX because the military disallowed ActiveX support on its systems, which made running an RDP client over ActiveX not possible. The challenges the SEI encountered in delivering the instruction were

  5. Study on the Mechanisms of Active Compounds in Traditional Chinese Medicine for the Treatment of Influenza Virus by Virtual Screening.

    PubMed

    Ai, Haixin; Wu, Xuewei; Qi, Mengyuan; Zhang, Li; Hu, Huan; Zhao, Qi; Zhao, Jian; Liu, Hongsheng

    2018-06-01

In recent years, new strains of influenza virus such as H7N9, H10N8, H5N6 and H5N8 have continued to emerge. There is an urgent need for the discovery of new anti-influenza drugs as well as accurate and efficient large-scale inhibitor screening methods. In this study, we focused on six influenza virus proteins that could serve as anti-influenza drug targets, including neuraminidase (NA), hemagglutinin (HA), matrix protein 1 (M1), M2 proton channel (M2), nucleoprotein (NP) and non-structural protein 1 (NS1). Structure-based molecular docking was utilized to identify potential inhibitors for these drug targets from 13144 compounds in the Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform. The results showed that 56 compounds could inhibit more than two drug targets simultaneously. Further, we utilized reverse docking to study the interaction of these compounds with host targets. Finally, 22 compound inhibitors were found to bind stably to host targets with high binding free energy. The results showed that the Chinese herbal medicines had a multi-target effect, directly inhibiting influenza virus via the targeted viral proteins and indirectly via the human target proteins. This method is of great value for large-scale virtual screening of new anti-influenza compounds.

  6. Virtual Surgery for the Nasal Airway: A Preliminary Report on Decision Support and Technology Acceptance.

    PubMed

    Vanhille, Derek L; Garcia, Guilherme J M; Asan, Onur; Borojeni, Azadeh A T; Frank-Ito, Dennis O; Kimbell, Julia S; Pawar, Sachin S; Rhee, John S

    2018-01-01

Nasal airway obstruction (NAO) is a common problem that affects patient quality of life. Surgical success for NAO correction is variable. Virtual surgery planning via computational fluid dynamics (CFD) has the potential to improve the success rates of NAO surgery. Our aims were to elicit surgeon feedback on a virtual surgery planning tool for NAO and to determine whether this tool affects surgeon decision making. For this cross-sectional study, 60-minute face-to-face interviews with board-certified otolaryngologists were conducted at a single academic otolaryngology department from September 16, 2016, through October 7, 2016. Virtual surgery methods were introduced, and surgeons were able to interact with the virtual surgery planning tool interface. Surgeons were provided with a patient case of NAO, and open feedback on the platform was obtained, with emphasis on surgical decision making. Likert scale responses and qualitative feedback were collected for the virtual surgery planning tool and its influence on surgeon decision making. Our 9 study participants were all male, board-certified otolaryngologists with a mean (range) of 15 (4-28) years in practice and a mean (range) of 2.2 (0.0-6.0) nasal surgeries per month. When examined on a scale of 1 (not at all) to 5 (completely), the mean (SD) surgeon score was 3.4 (0.5) for how realistic the virtual models were compared with actual surgery. On the same scale, when asked how much the virtual surgery planning tool changed their decision making, the mean (SD) score was 2.6 (1.6). On a scale of 1 (strongly disagree) to 7 (strongly agree), surgeon scores for perceived usefulness of the technology and attitude toward using it were 5.1 (1.1) and 5.7 (0.9), respectively. Our study shows a positive surgeon experience with a virtual surgery planning tool for NAO based on CFD simulations. Surgeons felt that future applications and areas of study for the virtual surgery planning tool include its potential role in patient counseling, selecting appropriate surgical candidates, and identifying which anatomical structures should be targeted for surgical correction.

  7. Virtual water trade and time scales for loss of water sustainability: a comparative regional analysis.

    PubMed

    Goswami, Prashant; Nishad, Shiv Narayan

    2015-03-20

Assessment and policy design for sustainability in primary resources like arable land and water need to adopt a long-term perspective; even small but persistent effects like net export of water may influence sustainability through irreversible losses. With growing consumption, this virtual water trade has become an important element in the water sustainability of a nation. We estimate and contrast the virtual (embedded) water trades of two populous nations, India and China, to present certain quantitative measures and time scales. Estimates show that export of embedded water alone can lead to loss of water sustainability. At the current rate of net export of water embedded in end products, India is poised to lose its entire available water in less than 1000 years; much shorter time scales are implied in terms of water for production. The two cases contrast and exemplify sustainable and non-sustainable virtual water trade from a long-term perspective.
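
    The headline time scale is simple arithmetic: the available water stock divided by the net annual export of embedded water. A sketch with purely hypothetical numbers (not the paper's estimates):

    ```python
    # Illustrative numbers only, both in km^3; neither figure is from the paper.
    available_water_km3 = 1900.0   # assumed national water stock
    net_export_km3_per_yr = 2.4    # assumed net embedded-water export per year

    # Years until the stock is exhausted at a constant net export rate.
    depletion_time_yr = available_water_km3 / net_export_km3_per_yr
    ```

    With these assumed inputs the depletion time is under 1000 years, the order of magnitude the abstract reports for India; water used in production would imply a shorter horizon.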

  8. Virtual water trade and time scales for loss of water sustainability: A comparative regional analysis

    NASA Astrophysics Data System (ADS)

    Goswami, Prashant; Nishad, Shiv Narayan

    2015-03-01

Assessment and policy design for sustainability in primary resources like arable land and water need to adopt a long-term perspective; even small but persistent effects like net export of water may influence sustainability through irreversible losses. With growing consumption, this virtual water trade has become an important element in the water sustainability of a nation. We estimate and contrast the virtual (embedded) water trades of two populous nations, India and China, to present certain quantitative measures and time scales. Estimates show that export of embedded water alone can lead to loss of water sustainability. At the current rate of net export of water embedded in end products, India is poised to lose its entire available water in less than 1000 years; much shorter time scales are implied in terms of water for production. The two cases contrast and exemplify sustainable and non-sustainable virtual water trade from a long-term perspective.

  9. A statistical approach to the life cycle analysis of cumulus clouds selected in a virtual reality environment

    NASA Astrophysics Data System (ADS)

    Heus, Thijs; Jonker, Harm J. J.; van den Akker, Harry E. A.; Griffith, Eric J.; Koutek, Michal; Post, Frits H.

    2009-03-01

In this study, a new method is developed to investigate the entire life cycle of shallow cumuli in large eddy simulations. Although trained observers have no problem distinguishing the different life stages of a cloud, this process proves difficult to automate, because cloud-splitting and cloud-merging events complicate the distinction between a single system divided into several cloudy parts and two independent systems that collided. Because human perception is well equipped to capture and make sense of these time-dependent three-dimensional features, a combination of automated constraints and human inspection in a three-dimensional virtual reality environment is used to select clouds that are exemplary in their behavior throughout their entire life span. Three specific cases (ARM, BOMEX, and BOMEX without large-scale forcings) are analyzed in this way, and the considerable number of selected clouds warrants reliable statistics of cloud properties conditioned on the phase in their life cycle. The most dominant feature in this statistical life cycle analysis is the pulsating growth that is present throughout the entire lifetime of the cloud, independent of the case and of the large-scale forcings. The pulses are a self-sustained phenomenon, driven by a balance between buoyancy and horizontal convergence of dry air. The convective inhibition just above the cloud base plays a crucial role as a barrier for the cloud to overcome in its infancy stage, and as a buffer region later on, ensuring a steady supply of buoyancy into the cloud.

  10. A Novel Approach for Efficient Pharmacophore-based Virtual Screening: Method and Applications

    PubMed Central

    Dror, Oranit; Schneidman-Duhovny, Dina; Inbar, Yuval; Nussinov, Ruth; Wolfson, Haim J.

    2009-01-01

Virtual screening is emerging as a productive and cost-effective technology in rational drug design for the identification of novel lead compounds. An important model for virtual screening is the pharmacophore: the spatial configuration of essential features that enables a ligand molecule to interact with a specific target receptor. In the absence of a known receptor structure, a pharmacophore can be identified from a set of ligands that have been observed to interact with the target receptor. Here, we present a novel computational method for pharmacophore detection and virtual screening. The pharmacophore detection module is able to: (i) align multiple flexible ligands in a deterministic manner without exhaustive enumeration of the conformational space, (ii) detect subsets of input ligands that may bind to different binding sites or have different binding modes, (iii) address cases where the input ligands have different affinities by defining weighted pharmacophores based on the number of ligands that share them, and (iv) automatically select the most appropriate pharmacophore candidates for virtual screening. The algorithm is highly efficient, allowing a fast exploration of the chemical space by virtual screening of huge compound databases. The performance of PharmaGist was successfully evaluated on a commonly used dataset for the G-Protein Coupled Receptor alpha1A. Additionally, a large-scale evaluation using the DUD (directory of useful decoys) dataset was performed. DUD contains 2950 active ligands for 40 different receptors, with 36 decoy compounds for each active ligand. PharmaGist enrichment rates are comparable with other state-of-the-art tools for virtual screening. Availability: The software is available for download. A user-friendly web interface for pharmacophore detection is available at http://bioinfo3d.cs.tau.ac.il/PharmaGist. PMID:19803502
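
    The enrichment rate used to benchmark screening tools on DUD-style sets can be computed from a ranked hit list: the fraction of actives found in the top x% of the list relative to their fraction overall. A minimal sketch; the toy ranked list below is invented for illustration and is not PharmaGist output.

    ```python
    def enrichment_factor(labels_ranked, fraction):
        """Enrichment factor at the top `fraction` of a ranked screen.

        labels_ranked: 1 for active, 0 for decoy, best-scored compound first.
        EF = (active hit rate in the top fraction) / (active hit rate overall).
        """
        n = len(labels_ranked)
        top = labels_ranked[: max(1, int(n * fraction))]
        hit_rate_top = sum(top) / len(top)
        hit_rate_all = sum(labels_ranked) / n
        return hit_rate_top / hit_rate_all

    # Toy ranked list: 3 actives among 12 compounds, two ranked near the top.
    ranked = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
    ef25 = enrichment_factor(ranked, 0.25)
    ```

    An EF above 1 means the method ranks actives ahead of decoys more often than chance; random ranking gives EF close to 1.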

  11. Identifying the Priorities and Practices of Virtual School Educators Using Action Research

    ERIC Educational Resources Information Center

    Dawson, Kara; Dana, Nancy Fichtman; Wolkenhauer, Rachel; Krell, Desi

    2013-01-01

    This study examined the nature of thirty virtual educators' action research questions during a yearlong action research professional development experience within a large, state-funded virtual school. Virtual educators included instructional personnel (i.e., individuals responsible for teaching virtual courses) and noninstructional personnel…

  12. Multiscale virtual particle based elastic network model (MVP-ENM) for normal mode analysis of large-sized biomolecules.

    PubMed

    Xia, Kelin

    2017-12-20

In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence from both the mass distributions and distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in Debye-Waller factor (B-factor) prediction, the results from the MVP-GNM at high resolution are as good as those from the GNM. Even at low resolutions, the MVP-GNM can still capture the global behavior of the B-factor very well, with mismatches predominantly in regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from the MVP-ANM are highly consistent with those from the ANM, even at very low resolutions and on a coarse grid. Finally, the great advantage of the MVP-ANM for large-sized biomolecules is demonstrated using two poliovirus structures.
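
    The B-factor prediction step of a standard Gaussian network model (the baseline the MVP-GNM is compared against) can be sketched in NumPy: build the Kirchhoff connectivity matrix from a distance cutoff, then read relative B-factors off the diagonal of its pseudo-inverse. This is the plain GNM recipe, not the MVP coarse-graining itself; the coordinates and cutoff are toy values, not a real biomolecule.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    coords = rng.uniform(0, 10, size=(20, 3))  # toy node positions
    cutoff = 5.0                                # toy contact cutoff

    # Kirchhoff matrix: -1 for each pair within the cutoff, with the diagonal
    # set to the node degree so that every row sums to zero.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))

    # B_i is proportional to the i-th diagonal entry of the pseudo-inverse;
    # pinv discards the zero mode(s) of the singular Kirchhoff matrix.
    gamma_pinv = np.linalg.pinv(kirchhoff)
    b_factors = np.diag(gamma_pinv)
    ```

    Comparing such predicted profiles against experimental B-factors is the benchmark the abstract describes for the MVP-GNM at varying resolutions.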

  13. Controlled interaction: strategies for using virtual reality to study perception.

    PubMed

    Durgin, Frank H; Li, Zhi

    2010-05-01

    Immersive virtual reality systems employing head-mounted displays offer great promise for the investigation of perception and action, but there are well-documented limitations to most virtual reality systems. In the present article, we suggest strategies for studying perception/action interactions that try to depend on both scale-invariant metrics (such as power function exponents) and careful consideration of the requirements of the interactions under investigation. New data concerning the effect of pincushion distortion on the perception of surface orientation are presented, as well as data documenting the perception of dynamic distortions associated with head movements with uncorrected optics. A review of several successful uses of virtual reality to study the interaction of perception and action emphasizes scale-free analysis strategies that can achieve theoretical goals while minimizing assumptions about the accuracy of virtual simulations.

  14. Dynamic phenomena and human activity in an artificial society

    NASA Astrophysics Data System (ADS)

    Grabowski, A.; Kruszewska, N.; Kosiński, R. A.

    2008-12-01

    We study dynamic phenomena in a large social network of nearly 3×10^4 individuals who interact in the large virtual world of a massively multiplayer online role-playing game. On the basis of a database received from the online game server, we examine the structure of the friendship network and human dynamics. To investigate the relation between networks of acquaintances in virtual and real worlds, we carried out a survey among the players. We show that, even though the virtual network did not develop as a growing graph of an underlying network of social acquaintances in the real world, it influences it. Furthermore, we find very interesting scaling laws concerning human dynamics. Our research shows how long people are interested in a single task and how much time they devote to it. Surprisingly, the exponent values in both cases are close to -1. We calculate the activity of individuals, i.e., the relative time devoted daily to interactions with others in the artificial society. Our research shows that the distribution of activity is not uniform and is highly correlated with node degree, and that such human activity has a significant influence on dynamic phenomena in complex networks, e.g., epidemic spreading and rumor propagation. We find that spreading is accelerated (an epidemic) or decelerated (a rumor) as a result of the varied behavior of superspreaders.
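
    Exponents close to -1, as reported above, are typically estimated from duration or interval data by ordinary least squares in log-log space. A minimal, self-contained sketch on exact p(t) = 1/t samples (illustrative data, not the game logs):

```python
import math

# Estimate a power-law exponent from (t, p(t)) samples via least squares
# on log-transformed data. The samples here follow p(t) = 1/t exactly,
# so the fitted slope recovers the exponent -1.

def loglog_slope(ts, ps):
    xs = [math.log(t) for t in ts]
    ys = [math.log(p) for p in ps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

ts = list(range(1, 51))
ps = [1.0 / t for t in ts]
print(round(loglog_slope(ts, ps), 6))  # -1.0
```

    On real, noisy data a maximum-likelihood estimator is usually preferred over this simple regression, but the sketch conveys the idea.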

  15. GPU acceleration of Dock6's Amber scoring computation.

    PubMed

    Yang, Hailong; Zhou, Qiongqiong; Li, Bo; Wang, Yongjian; Luan, Zhongzhi; Qian, Depei; Li, Hanlu

    2010-01-01

    Addressing the problem of virtual screening is a long-term goal in the drug discovery field which, if properly solved, can significantly shorten new drugs' R&D cycle. The scoring functionality that evaluates the fitness of the docking result is one of the major challenges in virtual screening. In general, scoring functionality in docking requires a large amount of floating-point calculation, which usually takes several weeks or even months to finish. This time-consuming procedure is unacceptable, especially when a highly fatal and infectious virus such as SARS or H1N1 arises and forces the scoring task to be done in a limited time. This paper presents how to leverage the computational power of the GPU to accelerate Dock6's (http://dock.compbio.ucsf.edu/DOCK_6/) Amber (J. Comput. Chem. 25: 1157-1174, 2004) scoring with the NVIDIA CUDA (NVIDIA Corporation Technical Staff, Compute Unified Device Architecture - Programming Guide, NVIDIA Corporation, 2008) (Compute Unified Device Architecture) platform. We also discuss several factors that greatly influence performance after porting the Amber scoring to the GPU, including thread management, data transfer, and divergence hiding. Our experiments show that the GPU-accelerated Amber scoring achieves a 6.5× speedup with respect to the original version running on an AMD dual-core CPU for the same problem size. This acceleration makes Amber scoring more competitive and efficient for large-scale virtual screening problems.

  16. Assessment of the State-of-the-Art in the Design and Manufacturing of Large Composite Structure

    NASA Technical Reports Server (NTRS)

    Harris, C. E.

    2001-01-01

    This viewgraph presentation gives an assessment of the state-of-the-art in the design and manufacturing of large composite structures, including details on the use of continuous fiber reinforced polymer matrix composites (CFRP) in commercial and military aircraft and in space launch vehicles. Project risk mitigation plans must include a building-block test approach to structural design development, manufacturing process scale-up development tests, and pre-flight ground tests to verify structural integrity. The potential benefits of composite structures justify NASA's investment in developing the technology. Advanced composite structures technology is an enabler for virtually every Aero-Space Technology Enterprise Goal.

  17. A web-based platform for virtual screening.

    PubMed

    Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J

    2003-09-01

    A fully integrated, web-based, virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes ATLAS, a large database of historical compounds from high-throughput screening (HTS) and chemical suppliers, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. To carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.
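
    The property-based subset selection described above can be illustrated with a toy filter. Field names and cutoffs below are hypothetical, not the actual ATLAS schema:

```python
# Minimal sketch of producing a compound subset by physicochemical property
# filters, in the spirit of the web interface described above. The records,
# field names, and Lipinski-like cutoffs are all illustrative.

compounds = [
    {"id": "C1", "mw": 320.4, "clogp": 2.1},
    {"id": "C2", "mw": 612.9, "clogp": 5.8},
    {"id": "C3", "mw": 455.1, "clogp": 4.2},
]

def subset(cmpds, max_mw=500.0, max_clogp=5.0):
    """Return the ids of compounds passing all property cutoffs."""
    return [c["id"] for c in cmpds
            if c["mw"] <= max_mw and c["clogp"] <= max_clogp]

print(subset(compounds))  # ['C1', 'C3']
```

    In the real platform this filtering is done as SQL against ORACLE; the Python version only shows the shape of the operation.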

  18. Design and analysis of a global sub-mesoscale and tidal dynamics admitting virtual ocean.

    NASA Astrophysics Data System (ADS)

    Menemenlis, D.; Hill, C. N.

    2016-02-01

    We will describe the techniques used to realize a global kilometer-scale ocean model configuration that includes representation of sea-ice and tidal excitation, and spans scales from planetary gyres to internal tides. A simulation using this model configuration provides a virtual ocean that admits some sub-mesoscale dynamics and tidal energetics not normally represented in global calculations. This extends simulated ocean behavior beyond broadly quasi-geostrophic flows and provides a preliminary example of a next-generation computational approach to explicitly probing the interactions between instabilities that are usually parameterized and dominant energetic scales in the ocean. From previous process studies we have ascertained that this can lead to a qualitative improvement in the realism of many significant processes including geostrophic eddy dynamics, shelf-break exchange, and topographic mixing. Computationally, we exploit high degrees of parallelism in both numerical evaluation and in recording model state to persistent disk storage. Together, this allows us to compute and record a full three-dimensional model trajectory at hourly frequency for a time period of 5 months with less than 9 million core hours of parallel computer time, using the present generation NASA Ames Research Center facilities. We have used this capability to create a 5 month trajectory archive, sampled at high spatial and temporal frequency, for an ocean configuration that is initialized from a realistic data-assimilated state and driven with reanalysis surface forcing from ECMWF. The resulting database of model state provides a novel virtual laboratory for exploring coupling across scales in the ocean, and for testing ideas on the relationship between small-scale fluxes and large-scale state. The computation is complemented by counterpart computations that are coarsened two and four times respectively.
In this presentation we will review the computational and numerical technologies employed and show how the high spatio-temporal frequency archive of model state can provide a new and promising tool for researching richer ocean dynamics at scale. We will also outline how computations of this nature could be combined with next-generation computer hardware plans to help inform important climate process questions.

  19. Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.

    2014-12-01

    The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and even run models to better understand the impacts of a rapidly changing climate for areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data intensive, science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. 
In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.

  20. Virtual reality-based prospective memory training program for people with acquired brain injury.

    PubMed

    Yip, Ben C B; Man, David W K

    2013-01-01

    People with acquired brain injuries (ABI) may display cognitive impairments and long-term disabilities, including prospective memory (PM) failure. Prospective memory is remembering to execute an intended action in the future. PM problems are a challenge to an ABI patient's successful community reintegration. While retrospective memory (RM) has been extensively studied, treatment programs for prospective memory are rarely reported. The development of a treatment program for PM is therefore considered timely, and such a program can be cost-effective and appropriate to the patient's environment. A 12-session virtual reality (VR)-based cognitive rehabilitation program was developed using everyday PM activities as training content. Thirty-seven subjects were recruited to participate in a pretest-posttest control experimental study to evaluate its treatment effectiveness. Results suggest that significantly better changes were seen in both VR-based and real-life PM outcome measures, as well as in related cognitive attributes such as frontal lobe functions and semantic fluency. VR-based training may be well accepted by ABI patients, as encouraging improvement has been shown. Large-scale studies of the virtual reality-based prospective memory (VRPM) training program are indicated.

  1. A genetic algorithm for a bi-objective mathematical model for dynamic virtual cell formation problem

    NASA Astrophysics Data System (ADS)

    Moradgholi, Mostafa; Paydar, Mohammad Mahdi; Mahdavi, Iraj; Jouzdani, Javid

    2016-09-01

    Nowadays, with the increasing pressure of the competitive business environment and the demand for diverse products, manufacturers are forced to seek solutions that reduce production costs and raise product quality. The cellular manufacturing system (CMS), as a means to this end, has been a point of attraction to both researchers and practitioners. Limitations of the cell formation problem (CFP), one of the important topics in CMS, have led to the introduction of the virtual CMS (VCMS). This research addresses a bi-objective dynamic virtual cell formation problem (DVCFP) with the objective of finding the optimal formation of cells, considering the material handling costs, fixed machine installation costs, and variable production costs of machines and workforce. Furthermore, we consider different skills on different machines in workforce assignment over a multi-period planning horizon. The bi-objective model is transformed into a single-objective fuzzy goal programming model and, to show its performance, numerical examples are solved using the LINGO software. In addition, a genetic algorithm (GA) is customized to tackle large-scale instances of the problem, to show the performance of the solution method.
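
    A generic GA skeleton of the kind customized for large instances can be sketched as follows. The chromosome assigns parts to virtual cells and the toy fitness counts cell changes along a fixed routing; the parameters and cost model are illustrative stand-ins, not the paper's bi-objective formulation:

```python
import random

# Generic genetic-algorithm skeleton: selection by sorting, one-point
# crossover, and point mutation. The fitness below is a toy stand-in for
# material handling cost (number of inter-cell moves along a fixed route).

random.seed(0)
N_PARTS, K_CELLS, POP, GENS = 12, 3, 30, 60
ROUTE = list(range(N_PARTS))  # parts visited in sequence

def cost(chrom):
    return sum(1 for a, b in zip(ROUTE, ROUTE[1:]) if chrom[a] != chrom[b])

def evolve():
    pop = [[random.randrange(K_CELLS) for _ in range(N_PARTS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=cost)
        survivors = pop[: POP // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < POP:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N_PARTS)
            child = p1[:cut] + p2[cut:]      # one-point crossover
            if random.random() < 0.2:        # point mutation
                child[random.randrange(N_PARTS)] = random.randrange(K_CELLS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print(cost(best))  # best handling cost found (0 means all parts in one cell)
```

    Because the best chromosome always survives selection, the best cost is non-increasing across generations.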

  2. Free Energy-Based Virtual Screening and Optimization of RNase H Inhibitors of HIV-1 Reverse Transcriptase.

    PubMed

    Zhang, Baofeng; D'Erasmo, Michael P; Murelli, Ryan P; Gallicchio, Emilio

    2016-09-30

    We report the results of a binding free energy-based virtual screening campaign of a library of 77 α-hydroxytropolone derivatives against the challenging RNase H active site of the reverse transcriptase (RT) enzyme of human immunodeficiency virus-1. Multiple protonation states, rotamer states, and binding modalities of each compound were individually evaluated. The work involved more than 300 individual absolute alchemical binding free energy parallel molecular dynamics calculations and over 1 million CPU hours on national computing clusters and a local campus computational grid. The thermodynamic and structural measures obtained in this work rationalize a series of characteristics of this system useful for guiding future synthetic and biochemical efforts. The free energy model identified key ligand-dependent entropic and conformational reorganization processes difficult to capture using standard docking and scoring approaches. Binding free energy-based optimization of the lead compounds emerging from the virtual screen has yielded four compounds with very favorable binding properties, which will be the subject of further experimental investigations. This work is one of the few reported applications of advanced binding free energy models to large-scale virtual screening and optimization projects. It further demonstrates that, with suitable algorithms and automation, advanced binding free energy models can have a useful role in early-stage drug-discovery programs.

  3. Framework and Implications of Virtual Neurorobotics

    PubMed Central

    Goodman, Philip H.; Zou, Quan; Dascalu, Sergiu-Mihai

    2008-01-01

    Despite decades of societal investment in artificial learning systems, truly “intelligent” systems have yet to be realized. These traditional models are based on input-output pattern optimization and/or cognitive production rule modeling. One response has been social robotics, using the interaction of human and robot to capture important cognitive dynamics such as cooperation and emotion; to date, these systems still incorporate traditional learning algorithms. More recently, investigators are focusing on the core assumptions of the brain “algorithm” itself—trying to replicate uniquely “neuromorphic” dynamics such as action potential spiking and synaptic learning. Only now are large-scale neuromorphic models becoming feasible, due to the availability of powerful supercomputers and an expanding supply of parameters derived from research into the brain's interdependent electrophysiological, metabolomic and genomic networks. Personal computer technology has also led to the acceptance of computer-generated humanoid images, or “avatars”, to represent intelligent actors in virtual realities. In a recent paper, we proposed a method of virtual neurorobotics (VNR) in which the approaches above (social-emotional robotics, neuromorphic brain architectures, and virtual reality projection) are hybridized to rapidly forward-engineer and develop increasingly complex, intrinsically intelligent systems. In this paper, we synthesize our research and related work in the field and provide a framework for VNR, with wider implications for research and practical applications. PMID:18982115

  4. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome, and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources, and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.

  5. : A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source-code is available. The set of source-code interfaces supported by is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.

  6. Virtual Observatory Science Applications

    NASA Technical Reports Server (NTRS)

    McGlynn, Tom

    2005-01-01

    Many Virtual-Observatory-based applications are now available to astronomers for use in their research. These span data discovery, access, visualization and analysis. Tools can quickly gather and organize information from sites around the world to help in planning a response to a gamma-ray burst, help users pick filters to isolate a desired feature, make an average template for z=2 AGN, select sources based upon information in many catalogs, or correlate massive distributed databases. Using VO protocols, the reach of existing software tools and packages can be greatly extended, allowing users to find and access remote information almost as conveniently as local data. The talk highlights just a few of the tools available to scientists, describes how both large and small scale projects can use existing tools, and previews some of the new capabilities that will be available in the next few years.

  7. Spatial considerations for instructional development in a virtual environment

    NASA Technical Reports Server (NTRS)

    Mccarthy, Laurie; Pontecorvo, Michael; Grant, Frances; Stiles, Randy

    1993-01-01

    In this paper we discuss spatial considerations for instructional development in a virtual environment. For both the instructional developer and the student, the important spatial criteria are perspective, orientation, scale, level of visual detail, and granularity of simulation. Developing a representation that allows an instructional developer to specify spatial criteria and enables intelligent agents to reason about a given instructional problem is of paramount importance to the success of instruction delivered in a virtual environment, especially one that supports dynamic exploration or spans more than one scale of operation.

  8. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  9. Lagrangian ocean analysis: Fundamentals and practices

    DOE PAGES

    van Sebille, Erik; Griffies, Stephen M.; Abernathey, Ryan; ...

    2017-11-24

    Lagrangian analysis is a powerful way to analyse the output of ocean circulation models and other ocean velocity data such as from altimetry. In the Lagrangian approach, large sets of virtual particles are integrated within the three-dimensional, time-evolving velocity fields. A variety of tools and methods for this purpose have emerged, over several decades. Here, we review the state of the art in the field of Lagrangian analysis of ocean velocity data, starting from a fundamental kinematic framework and with a focus on large-scale open ocean applications. Beyond the use of explicit velocity fields, we consider the influence of unresolved physics and dynamics on particle trajectories. We comprehensively list and discuss the tools currently available for tracking virtual particles. We then showcase some of the innovative applications of trajectory data, and conclude with some open questions and an outlook. Our overall goal of this review paper is to reconcile some of the different techniques and methods in Lagrangian ocean analysis, while recognising the rich diversity of codes that have and continue to emerge, and the challenges of the coming age of petascale computing.

  10. Lagrangian ocean analysis: Fundamentals and practices

    NASA Astrophysics Data System (ADS)

    van Sebille, Erik; Griffies, Stephen M.; Abernathey, Ryan; Adams, Thomas P.; Berloff, Pavel; Biastoch, Arne; Blanke, Bruno; Chassignet, Eric P.; Cheng, Yu; Cotter, Colin J.; Deleersnijder, Eric; Döös, Kristofer; Drake, Henri F.; Drijfhout, Sybren; Gary, Stefan F.; Heemink, Arnold W.; Kjellsson, Joakim; Koszalka, Inga Monika; Lange, Michael; Lique, Camille; MacGilchrist, Graeme A.; Marsh, Robert; Mayorga Adame, C. Gabriela; McAdam, Ronan; Nencioli, Francesco; Paris, Claire B.; Piggott, Matthew D.; Polton, Jeff A.; Rühs, Siren; Shah, Syed H. A. M.; Thomas, Matthew D.; Wang, Jinbo; Wolfram, Phillip J.; Zanna, Laure; Zika, Jan D.

    2018-01-01

    Lagrangian analysis is a powerful way to analyse the output of ocean circulation models and other ocean velocity data such as from altimetry. In the Lagrangian approach, large sets of virtual particles are integrated within the three-dimensional, time-evolving velocity fields. Over several decades, a variety of tools and methods for this purpose have emerged. Here, we review the state of the art in the field of Lagrangian analysis of ocean velocity data, starting from a fundamental kinematic framework and with a focus on large-scale open ocean applications. Beyond the use of explicit velocity fields, we consider the influence of unresolved physics and dynamics on particle trajectories. We comprehensively list and discuss the tools currently available for tracking virtual particles. We then showcase some of the innovative applications of trajectory data, and conclude with some open questions and an outlook. The overall goal of this review paper is to reconcile some of the different techniques and methods in Lagrangian ocean analysis, while recognising the rich diversity of codes that have and continue to emerge, and the challenges of the coming age of petascale computing.
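
    The core operation described above, integrating virtual particles through a velocity field, can be sketched with fourth-order Runge-Kutta in a steady, analytic 2D field. Real tools interpolate gridded, time-evolving model velocities; the field here is a solid-body rotation chosen so the correct answer is known:

```python
import math

# Minimal Lagrangian particle-tracking sketch: advect a virtual particle
# through a steady 2D velocity field with classical RK4. Under solid-body
# rotation the particle should trace a circle of constant radius.

def velocity(x, y):
    return -y, x  # solid-body rotation about the origin

def rk4_step(x, y, dt):
    k1x, k1y = velocity(x, y)
    k2x, k2y = velocity(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
    k3x, k3y = velocity(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
    k4x, k4y = velocity(x + dt * k3x, y + dt * k3y)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)

x, y = 1.0, 0.0
dt = 0.01
for _ in range(int(2 * math.pi / dt)):   # roughly one full revolution
    x, y = rk4_step(x, y, dt)
print(round(math.hypot(x, y), 4))  # 1.0: radius is conserved to high accuracy
```

    The choice of integrator matters in practice: lower-order schemes visibly spiral in or out in exactly this test.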

  11. Lagrangian ocean analysis: Fundamentals and practices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Sebille, Erik; Griffies, Stephen M.; Abernathey, Ryan

    Lagrangian analysis is a powerful way to analyse the output of ocean circulation models and other ocean velocity data such as from altimetry. In the Lagrangian approach, large sets of virtual particles are integrated within the three-dimensional, time-evolving velocity fields. A variety of tools and methods for this purpose have emerged, over several decades. Here, we review the state of the art in the field of Lagrangian analysis of ocean velocity data, starting from a fundamental kinematic framework and with a focus on large-scale open ocean applications. Beyond the use of explicit velocity fields, we consider the influence of unresolved physics and dynamics on particle trajectories. We comprehensively list and discuss the tools currently available for tracking virtual particles. We then showcase some of the innovative applications of trajectory data, and conclude with some open questions and an outlook. Our overall goal of this review paper is to reconcile some of the different techniques and methods in Lagrangian ocean analysis, while recognising the rich diversity of codes that have and continue to emerge, and the challenges of the coming age of petascale computing.

  12. Host Immunity via Mutable Virtualized Large-Scale Network Containers

    DTIC Science & Technology

    2016-07-25

    and constrain the distributed persistent inside crawlers that have valid credentials to access the web services. The main idea is to add a marker...to each web page URL and use the URL path and user information contained in the marker to help accurately detect crawlers at the earliest stage...more than half of all website traffic, and malicious bots contribute almost one third of the traffic. As one type of bot, web crawlers have been
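
    The URL-marker idea in this fragment can be sketched with a signed marker. The HMAC construction, parameter names, and secret below are assumptions for illustration, not details from the report:

```python
import hmac
import hashlib
from urllib.parse import urlsplit, parse_qs, urlencode

# Hypothetical sketch of the URL-marker idea: append a per-user token bound
# to the page path, then flag requests whose marker does not match the path
# and the user actually requesting it (as a crawler replaying URLs would).

SECRET = b"server-side-secret"  # illustrative only

def sign(path, user):
    msg = f"{path}|{user}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]

def mark_url(path, user):
    return f"{path}?{urlencode({'u': user, 'm': sign(path, user)})}"

def is_suspicious(url, requesting_user):
    parts = urlsplit(url)
    marker = parse_qs(parts.query).get("m", [""])[0]
    return not hmac.compare_digest(marker, sign(parts.path, requesting_user))

url = mark_url("/articles/42", "alice")
print(is_suspicious(url, "alice"))  # False: marker matches path and user
print(is_suspicious(url, "bot-7"))  # True: replayed URL, different requester
```

    A real deployment would also rotate the secret and rate-limit on repeated marker mismatches.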

  13. Composite annotations: requirements for mapping multiscale data and models to biomedical ontologies

    PubMed Central

    Cook, Daniel L.; Mejino, Jose L. V.; Neal, Maxwell L.; Gennari, John H.

    2009-01-01

    Current methods for annotating biomedical data resources rely on simple mappings between data elements and the contents of a variety of biomedical ontologies and controlled vocabularies. Here we point out that such simple mappings are inadequate for large-scale, multiscale, multidomain integrative “virtual human” projects. For such integrative challenges, we describe a “composite annotation” schema that is simple yet sufficiently extensible for mapping the biomedical content of a variety of data sources and biosimulation models to available biomedical ontologies. PMID:19964601

  14. Towards AI-powered personalization in MOOC learning

    NASA Astrophysics Data System (ADS)

    Yu, Han; Miao, Chunyan; Leung, Cyril; White, Timothy John

    2017-12-01

    Massive Open Online Courses (MOOCs) represent a form of large-scale learning that is changing the landscape of higher education. In this paper, we offer a perspective on how advances in artificial intelligence (AI) may enhance learning and research on MOOCs. We focus on emerging AI techniques including how knowledge representation tools can enable students to adjust the sequence of learning to fit their own needs; how optimization techniques can efficiently match community teaching assistants to MOOC mediation tasks to offer personal attention to learners; and how virtual learning companions with human traits such as curiosity and emotions can enhance learning experience on a large scale. These new capabilities will also bring opportunities for educational researchers to analyse students' learning skills and uncover points along learning paths where students with different backgrounds may require different help. Ethical considerations related to the application of AI in MOOC education research are also discussed.
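
    The matching of community teaching assistants to mediation tasks mentioned above is a minimum-cost assignment problem. A brute-force sketch with invented costs (fine at toy size; at MOOC scale one would use the Hungarian algorithm or an LP solver):

```python
from itertools import permutations

# Toy sketch of matching teaching assistants (rows) to mediation tasks
# (columns) at minimum total cost. Costs are invented, e.g. the inverse of
# a predicted answer-quality score; brute force only works for tiny n.

cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def best_assignment(cost):
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda tasks: sum(cost[ta][t]
                                     for ta, t in enumerate(tasks)))
    return best, sum(cost[ta][t] for ta, t in enumerate(best))

assignment, total = best_assignment(cost)
print(assignment, total)  # (1, 0, 2) 5
```

    Greedy matching (each TA takes their cheapest remaining task) would pick TA 1 for task 1 first and end up paying more, which is why an optimization formulation is used.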

  15. Virtual screening methods as tools for drug lead discovery from large chemical libraries.

    PubMed

    Ma, X H; Zhu, F; Liu, X; Shi, Z; Zhang, J X; Yang, S Y; Wei, Y Q; Chen, Y Z

    2012-01-01

    Virtual screening methods have been developed and explored as useful tools for searching for drug lead compounds in chemical libraries, including large libraries that have become publicly available. In this review, we discuss new developments in virtual screening methods for enhanced performance in searching large chemical libraries, their applications in screening libraries of ~1 million or more compounds in the last five years, the difficulties in their application, and strategies for further improving these methods.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Junghyun; Jo, Gangwon; Jung, Jaehoon

    Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with the illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by the previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.

  17. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become a focus of the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) in place of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployments, has not yet been evaluated. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can serve as a reference for future network deployment.

  18. Asymmetric noise-induced large fluctuations in coupled systems

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira B.; Szwaykowska, Klimka; Carr, Thomas W.

    2017-10-01

    Networks of interacting, communicating subsystems are common in many fields, from ecology, biology, and epidemiology to engineering and robotics. In the presence of noise and uncertainty, interactions between the individual components can lead to unexpected complex system-wide behaviors. In this paper, we consider a generic model of two weakly coupled dynamical systems, and we show how noise in one part of the system is transmitted through the coupling interface. Working synergistically with the coupling, the noise on one system drives a large fluctuation in the other, even when there is no noise in the second system. Moreover, the large fluctuation happens while the first system exhibits only small random oscillations. Uncertainty effects are quantified by showing how the characteristic time scales of noise-induced switching scale as a function of the coupling between the two parts of the system. In addition, our results show that the probability of switching in the noise-free subsystem scales inversely with the square of the reduced noise intensity amplitude, making such switching an extremely rare event. Our results on the interplay between transmitted noise and coupling are also confirmed through simulations, which agree well with the analytic theory.
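
    The mechanism described, noise entering one subsystem and driving fluctuations in its noise-free partner through the coupling, can be illustrated with a minimal Euler-Maruyama integration of two weakly coupled double-well units. This is a generic stand-in; the paper's actual model and parameters are not reproduced here.

```python
import random

def simulate(eps, sigma=0.3, dt=0.01, steps=5000, seed=42):
    """Euler-Maruyama integration of two weakly coupled bistable
    (double-well) units; only x is driven by noise, y is noise-free.
    Illustrative sketch, not the paper's actual model."""
    rng = random.Random(seed)
    x = y = 1.0  # both start in the x = +1 / y = +1 well
    ys = []
    for _ in range(steps):
        dW = rng.gauss(0.0, dt ** 0.5)
        x_new = x + (x - x**3 + eps * (y - x)) * dt + sigma * dW
        y_new = y + (y - y**3 + eps * (x - y)) * dt
        x, y = x_new, y_new
        ys.append(y)
    return ys

ys_uncoupled = simulate(eps=0.0)  # y sits exactly at its fixed point
ys_coupled = simulate(eps=0.2)    # transmitted noise moves y
```

    With eps = 0 the noise-free unit never leaves its equilibrium; any nonzero coupling lets the stochastic unit's fluctuations leak into it, the precursor of the rare switching events analyzed in the paper.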

  19. Virtual Facility at Fermilab: Infrastructure and Services Expand to Public Clouds

    DOE PAGES

    Timm, Steve; Garzoglio, Gabriele; Cooper, Glenn; ...

    2016-02-18

    In preparation for its new Virtual Facility Project, Fermilab has launched a program of work to determine the requirements for running a computation facility on-site, in public clouds, or a combination of both. This program builds on the work we have done to successfully run experimental workflows of 1000-VM scale both on an on-site private cloud and on Amazon AWS. To do this at scale we deployed dynamically launched and discovered caching services on the cloud. We are now testing the deployment of more complicated services on Amazon AWS using native load balancing and auto scaling features they provide. The Virtual Facility Project will design and develop a facility including infrastructure and services that can live on the site of Fermilab, off-site, or a combination of both. We expect to need this capacity to meet the peak computing requirements in the future. The Virtual Facility is intended to provision resources on the public cloud on behalf of the facility as a whole instead of having each experiment or Virtual Organization do it on their own. We will describe the policy aspects of a distributed Virtual Facility, the requirements, and plans to make a detailed comparison of the relative cost of the public and private clouds. Furthermore, this talk will present the details of the technical mechanisms we have developed to date, and the plans currently taking shape for a Virtual Facility at Fermilab.

  1. A Multi-Stage Method for Connecting Participatory Sensing and Noise Simulations

    PubMed Central

    Hu, Mingyuan; Che, Weitao; Zhang, Qiuju; Luo, Qingli; Lin, Hui

    2015-01-01

    Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, which is limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, improvements enabled by sensing technologies offer a way to resolve this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can help researchers understand the role participatory sensing can play in smart cities. This study first discusses the potential limitations of current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In a case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic input sources for noise simulations on multiple spatio-temporal scales. PMID:25621604
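
    The dynamic aggregation step (3) can be sketched as a greedy merge of adjacent virtual partitions whose noise levels are similar; the tolerance and the per-partition decibel values below are hypothetical.

```python
def aggregate_partitions(levels, tol=2.0):
    """Greedily merge adjacent virtual partitions of a road segment
    whose mean noise levels (dB) differ by less than `tol`, yielding
    a dynamic segmentation. Sketch only; thresholds are hypothetical."""
    segments = []  # list of (start_index, end_index, mean_level)
    for i, level in enumerate(levels):
        if segments and abs(segments[-1][2] - level) < tol:
            s, e, m = segments[-1]
            n = e - s + 1
            segments[-1] = (s, i, (m * n + level) / (n + 1))  # running mean
        else:
            segments.append((i, i, level))
    return segments

# Hypothetical per-partition noise means (dB) along one road segment.
parts = [55.0, 55.5, 56.0, 63.0, 63.4, 70.0]
segs = aggregate_partitions(parts)
```

    Partitions with similar levels collapse into one segment, so quiet and loud portions of the same road end up with their own boundaries, which is the "dynamic segmentation" the abstract refers to.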

  2. A multi-stage method for connecting participatory sensing and noise simulations.

    PubMed

    Hu, Mingyuan; Che, Weitao; Zhang, Qiuju; Luo, Qingli; Lin, Hui

    2015-01-22

    Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations only provide information about expected noise levels, which is limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, improvements enabled by sensing technologies offer a way to resolve this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins to professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can help researchers understand the role participatory sensing can play in smart cities. This study first discusses the potential limitations of current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to virtual partitions at a more microscopic level of road networks; (2) multi-temporal scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions by comparing the noise values at the relevant temporal scale to form a dynamic segmentation of each road segment to support multiple spatio-temporal noise simulations. In a case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic input sources for noise simulations on multiple spatio-temporal scales.

  3. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces calibration precision, while a large target is difficult to fabricate, transport and deploy. To solve this problem, a calibration method based on a virtual large planar target (VLPT), constructed virtually from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that of methods employing a large target, but also offers good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with large FOV can be effectively overcome by the proposed method, which also offers good operability.
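
    The key step, expressing the feature points of several small targets in one common virtual-target frame, can be sketched in 2-D as follows. The rigid-transform poses here are hypothetical placeholders for the poses recovered from the calibration images.

```python
import math

def to_virtual_frame(points, theta, tx, ty):
    """Map a small target's planar feature points into the virtual
    large-target frame via a 2-D rigid transform (theta, tx, ty).
    Sketch only; the actual method recovers these poses from the
    calibration images themselves."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Two small 2x2 corner grids placed at different spots of the virtual plane.
st = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vlpt = to_virtual_frame(st, 0.0, 0.0, 0.0) + to_virtual_frame(st, 0.0, 5.0, 0.0)
```

    The combined point set plays the role of one large planar target covering more of the FOV, after which standard intrinsic/extrinsic estimation proceeds as usual.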

  4. Drawing Inspiration from Human Brain Networks: Construction of Interconnected Virtual Networks

    PubMed Central

    Kominami, Daichi; Leibnitz, Kenji; Murata, Masayuki

    2018-01-01

    Virtualization of wireless sensor networks (WSN) is widely considered as a foundational block of edge/fog computing, which is a key technology that can help realize next-generation Internet of things (IoT) networks. In such scenarios, multiple IoT devices and service modules will be virtually deployed and interconnected over the Internet. Moreover, application services are expected to be more sophisticated and complex, thereby increasing the number of modifications required for the construction of network topologies. Therefore, it is imperative to establish a method for constructing a virtualized WSN (VWSN) topology that achieves low latency on information transmission and high resilience against network failures, while keeping the topological construction cost low. In this study, we draw inspiration from inter-modular connectivity in human brain networks, which achieves high performance when dealing with large-scale networks composed of a large number of modules (i.e., regions) and nodes (i.e., neurons). We propose a method for assigning inter-modular links based on a connectivity model observed in the cerebral cortex of the brain, known as the exponential distance rule (EDR) model. We then choose endpoint nodes of these links by controlling inter-modular assortativity, which characterizes the topological connectivity of brain networks. We test our proposed methods using simulation experiments. The results show that the proposed method based on the EDR model can construct a VWSN topology with an optimal combination of communication efficiency, robustness, and construction cost. Regarding the selection of endpoint nodes for the inter-modular links, the results also show that high assortativity enhances the robustness and communication efficiency because of the existence of inter-modular links of two high-degree nodes. PMID:29642483
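
    The EDR-based link assignment can be sketched as weighted sampling of module pairs with exponentially distance-decaying probability; the module coordinates and the decay constant lam are hypothetical.

```python
import math
import random

def edr_links(module_pos, n_links, lam=1.0, seed=7):
    """Assign inter-modular links with probability decaying
    exponentially with distance, following the exponential distance
    rule (EDR): p(i, j) proportional to exp(-lam * d(i, j)).
    Positions and lam are illustrative; the paper tunes such a model
    against connectivity observed in the cerebral cortex."""
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(len(module_pos))
             for j in range(i + 1, len(module_pos))]

    def dist(i, j):
        (x1, y1), (x2, y2) = module_pos[i], module_pos[j]
        return math.hypot(x1 - x2, y1 - y2)

    weights = [math.exp(-lam * dist(i, j)) for i, j in pairs]
    return rng.choices(pairs, weights=weights, k=n_links)

# Three nearby modules and one distant module (hypothetical layout).
modules = [(0, 0), (1, 0), (0, 1), (5, 5)]
links = edr_links(modules, n_links=10)
```

    Nearby module pairs dominate the sampled links, which is what keeps construction cost low; the paper then selects the endpoint nodes within each module by controlling inter-modular assortativity.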

  5. Drawing Inspiration from Human Brain Networks: Construction of Interconnected Virtual Networks.

    PubMed

    Murakami, Masaya; Kominami, Daichi; Leibnitz, Kenji; Murata, Masayuki

    2018-04-08

    Virtualization of wireless sensor networks (WSN) is widely considered as a foundational block of edge/fog computing, which is a key technology that can help realize next-generation Internet of things (IoT) networks. In such scenarios, multiple IoT devices and service modules will be virtually deployed and interconnected over the Internet. Moreover, application services are expected to be more sophisticated and complex, thereby increasing the number of modifications required for the construction of network topologies. Therefore, it is imperative to establish a method for constructing a virtualized WSN (VWSN) topology that achieves low latency on information transmission and high resilience against network failures, while keeping the topological construction cost low. In this study, we draw inspiration from inter-modular connectivity in human brain networks, which achieves high performance when dealing with large-scale networks composed of a large number of modules (i.e., regions) and nodes (i.e., neurons). We propose a method for assigning inter-modular links based on a connectivity model observed in the cerebral cortex of the brain, known as the exponential distance rule (EDR) model. We then choose endpoint nodes of these links by controlling inter-modular assortativity, which characterizes the topological connectivity of brain networks. We test our proposed methods using simulation experiments. The results show that the proposed method based on the EDR model can construct a VWSN topology with an optimal combination of communication efficiency, robustness, and construction cost. Regarding the selection of endpoint nodes for the inter-modular links, the results also show that high assortativity enhances the robustness and communication efficiency because of the existence of inter-modular links of two high-degree nodes.

  6. Virtual experiments: a new approach for improving process conceptualization in hillslope hydrology

    NASA Astrophysics Data System (ADS)

    Weiler, Markus; McDonnell, Jeff

    2004-01-01

    We present an approach for process conceptualization in hillslope hydrology. We develop and implement a series of virtual experiments, whereby the interaction between water flow pathways, source and mixing at the hillslope scale is examined within a virtual experiment framework. We define these virtual experiments as 'numerical experiments with a model driven by collective field intelligence'. The virtual experiments explore the first-order controls in hillslope hydrology, where the experimentalist and modeler work together to cooperatively develop and analyze the results. Our hillslope model for the virtual experiments (HillVi) in this paper is based on conceptualizing the water balance within the saturated and unsaturated zone in relation to soil physical properties in a spatially explicit manner at the hillslope scale. We argue that a virtual experiment model needs to be able to capture all major controls on subsurface flow processes that the experimentalist might deem important, while at the same time being simple with few 'tunable parameters'. This combination makes the approach, and the dialog between experimentalist and modeler, a useful hypothesis testing tool. HillVi simulates mass flux for different initial conditions under the same flow conditions. We analyze our results in terms of an artificial line source and isotopic hydrograph separation of water and subsurface flow. Our results for this first set of virtual experiments showed how drainable porosity and soil depth variability exert a first order control on flow and transport at the hillslope scale. We found that high drainable porosity soils resulted in a restricted water table rise, resulting in more pronounced channeling of lateral subsurface flow along the soil-bedrock interface. This in turn resulted in a more anastomosing network of tracer movement across the slope. The virtual isotope hydrograph separation showed higher proportions of event water with increasing drainable porosity. When combined with previous experimental findings and conceptualizations, virtual experiments can be an effective way to isolate certain controls and examine their influence over a range of rainfall and antecedent wetness conditions.
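
    The first-order control of drainable porosity on water table rise noted above follows from the simple bucket relation dh = R / phi_d, sketched below. This is a generic relation, not HillVi's actual implementation, and the values are illustrative.

```python
def water_table_rise(recharge_m, drainable_porosity):
    """Rise of the water table for a given recharge depth, using the
    standard bucket relation dh = R / phi_d. Captures the first-order
    control noted in the abstract: the higher the drainable porosity,
    the smaller the rise for the same recharge."""
    return recharge_m / drainable_porosity

rise_low = water_table_rise(0.05, 0.05)   # low-porosity soil: 1.0 m rise
rise_high = water_table_rise(0.05, 0.25)  # high-porosity soil: 0.2 m rise
```

    The restricted rise in high-porosity soils is what confines lateral flow to the soil-bedrock interface in the virtual experiments.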

  7. Exploiting PubChem for Virtual Screening

    PubMed Central

    Xie, Xiang-Qun

    2011-01-01

    Importance of the field: PubChem is a public molecular information repository and a scientific showcase of the NIH Roadmap Initiative. The PubChem database holds over 27 million records of unique chemical structures of compounds (CID) derived from nearly 70 million substance depositions (SID), and contains more than 449,000 bioassay records, with thousands of in vitro biochemical and cell-based screening bioassays targeting more than 7,000 proteins and genes and linking to over 1.8 million substances. Areas covered in this review: This review builds on recent PubChem-related computational chemistry research reported by other authors while providing readers with an overview of the PubChem database, focusing on its increasing role in cheminformatics, virtual screening and toxicity prediction modeling. What the reader will gain: These publicly available datasets in PubChem provide great opportunities for scientists to perform cheminformatics and virtual screening research for computer-aided drug design. However, the high volume and complexity of the datasets, in particular the bioassay-associated false positives/negatives and highly imbalanced datasets in PubChem, also create major challenges. Several approaches to modeling PubChem datasets and developing virtual screening models for bioactivity and toxicity predictions are also reviewed. Take home message: Novel data-mining cheminformatics tools and virtual screening algorithms are being developed and used to retrieve, annotate and analyze the large-scale and highly complex PubChem biological screening data for drug design. PMID:21691435

  8. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    NASA Astrophysics Data System (ADS)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ^6Li in model spaces up to N_max = 22 and to reveal the ^4He + d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
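
    The IR extrapolation described above fits an exponential correction of the form E(L) = E_inf + a * exp(-2 * k_inf * L) to energies computed at several basis length scales L. A minimal least-squares sketch on synthetic data (all numbers are hypothetical, not the paper's results):

```python
import math

def ir_extrapolate(L_values, energies, k_inf):
    """Linear least-squares fit of the IR correction model
    E(L) = E_inf + a * exp(-2 * k_inf * L), solving for E_inf and a.
    The momentum scale k_inf and the sample data are hypothetical;
    the paper ties k_inf to the first open decay channel threshold."""
    xs = [math.exp(-2.0 * k_inf * L) for L in L_values]
    n = len(xs)
    sx, sy = sum(xs), sum(energies)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, energies))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    e_inf = (sy - a * sx) / n
    return e_inf, a

# Synthetic energies generated from E_inf = -32.0, a = 5.0, k_inf = 0.5.
Ls = [8.0, 10.0, 12.0, 14.0]
Es = [-32.0 + 5.0 * math.exp(-2.0 * 0.5 * L) for L in Ls]
e_inf, a = ir_extrapolate(Ls, Es, 0.5)
```

    Because the model is linear in (E_inf, a) once k_inf is fixed, an ordinary least-squares fit recovers the infinite-basis energy from finite-basis data.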

  9. Characterization of the phantom material virtual water in high-energy photon and electron beams.

    PubMed

    McEwen, M R; Niven, D

    2006-04-01

    The material Virtual Water has been characterized in photon and electron beams. Range-scaling factors and fluence correction factors were obtained, the latter with an uncertainty of around 0.2%. This level of uncertainty means that it may be possible to perform dosimetry in a solid phantom with an accuracy approaching that of measurements in water. Two formulations of Virtual Water were investigated, with nominally the same elemental composition but differing densities. For photon beams neither formulation showed exact water equivalence: the water/Virtual Water dose ratio varied with the depth of measurement, with a difference of over 1% at 10 cm depth. However, by using a density (range) scaling factor, very good agreement (<0.2%) between water and Virtual Water was obtained at all depths. In the case of electron beams a range-scaling factor was also required to match the shapes of the depth-dose curves in water and Virtual Water. However, there remained a difference in the measured fluence in the two phantoms after this scaling factor had been applied. For measurements around the peak of the depth-dose curve and the reference depth this difference showed some small energy dependence but was in the range 0.1%-0.4%. Perturbation measurements indicated that small slabs of material upstream of a detector have a small effect (<0.1%) on the chamber reading, but material behind the detector can have a larger effect. This has consequences for the design of experiments and for the comparison of measurements with Monte Carlo-derived values.
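
    The density (range) scaling mentioned above amounts to mapping a measurement depth in the solid phantom to its water-equivalent depth. A trivial sketch; the scaling factor used here is a hypothetical placeholder, not the paper's measured value.

```python
def water_equivalent_depth(depth_cm, range_scale):
    """Convert a measurement depth in the solid phantom to its
    water-equivalent depth via a density (range) scaling factor,
    the step that brings water/Virtual Water dose ratios into
    <0.2% agreement. The factor below is hypothetical."""
    return depth_cm * range_scale

d_w = water_equivalent_depth(10.0, 1.03)  # 10 cm in phantom -> 10.3 cm water
```

    In practice the factor is derived from the measured densities of the two formulations rather than assumed.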

  10. Collaborative Research: Bringing Problem Solving in the Field into the Classroom: Developing and Assessing Virtual Field Trips for Teaching Sedimentary and Introductory Geology

    NASA Astrophysics Data System (ADS)

    Wang, P.; Caldwell, M.

    2012-12-01

    Coastal Florida offers a unique setting for facilitating learning about a variety of modern sedimentary environments. Despite the tension between "virtual" and "actual" field trips, and the uncertainties associated with their implementation and effectiveness, virtual trips are likely the only way to reach a large, diversified student population while eliminating travel time and expenses. In addition, with rapidly improving web and visualization technology, field trips can be simulated virtually. It is therefore essential to systematically develop and assess the educational effectiveness of virtual field trips. This project is developing, implementing, and assessing a series of virtual field trips for teaching undergraduate sedimentary geology at a large four-year research university and introductory geology at a large two-year community college. The virtual field trip is based on a four-day actual field trip for a senior-level sedimentary geology class. Two versions of the virtual field trip, one for the advanced class and one for the introductory class, are being produced. The educational outcome of the virtual field trip will be compared to that of the actual field trip. This presentation summarizes Year 1 achievements of the three-year project. The filming, editing, and initial production of the virtual field trip have been completed. Formative assessments were conducted by the Coalition for Science Literacy at the University of South Florida. Once tested and refined, the virtual field trips will be disseminated through broadly used web portals and workshops at regional and national meetings.

  11. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters.

    PubMed

    Abreu, Rui M. V.; Froufe, Hugo J. C.; Queiroz, Maria João R. P.; Ferreira, Isabel C. F. R.

    2010-10-28

    Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina on bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina job distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors and a potential maximum speed-up of 10×, the parallel algorithm of MOLA achieved a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
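
    The reported speed-ups translate directly into parallel efficiency, i.e., speed-up divided by processor count:

```python
def parallel_efficiency(speedup, processors):
    """Parallel efficiency = speedup / processor count.
    Applied to the figures reported for the 10-processor cluster:
    8.64x with AutoDock4 and 8.60x with Vina."""
    return speedup / processors

eff_autodock = parallel_efficiency(8.64, 10)  # about 86% efficiency
eff_vina = parallel_efficiency(8.60, 10)      # about 86% efficiency
```

    Roughly 86% efficiency on heterogeneous, non-dedicated hardware is the headline result of the parallelization.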

  12. The basis for cosmic ray feedback: Written on the wind

    PubMed Central

    Zweibel, Ellen G.

    2017-01-01

    Star formation and supermassive black hole growth in galaxies appear to be self-limiting. The mechanisms for self-regulation are known as feedback. Cosmic rays, the relativistic particle component of interstellar and intergalactic plasma, are among the agents of feedback. Because cosmic rays are virtually collisionless in the plasma environments of interest, their interaction with the ambient medium is primarily mediated by large scale magnetic fields and kinetic scale plasma waves. Because kinetic scales are much smaller than global scales, this interaction is most conveniently described by fluid models. In this paper, I discuss the kinetic theory and the classical theory of cosmic ray hydrodynamics (CCRH) which follows from assuming cosmic rays interact only with self-excited waves. I generalize CCRH to generalized cosmic ray hydrodynamics, which accommodates interactions with extrinsic turbulence, present examples of cosmic ray feedback, and assess where progress is needed. PMID:28579734

  13. The basis for cosmic ray feedback: Written on the wind

    NASA Astrophysics Data System (ADS)

    Zweibel, Ellen G.

    2017-05-01

    Star formation and supermassive black hole growth in galaxies appear to be self-limiting. The mechanisms for self-regulation are known as feedback. Cosmic rays, the relativistic particle component of interstellar and intergalactic plasma, are among the agents of feedback. Because cosmic rays are virtually collisionless in the plasma environments of interest, their interaction with the ambient medium is primarily mediated by large scale magnetic fields and kinetic scale plasma waves. Because kinetic scales are much smaller than global scales, this interaction is most conveniently described by fluid models. In this paper, I discuss the kinetic theory and the classical theory of cosmic ray hydrodynamics (CCRH) which follows from assuming cosmic rays interact only with self-excited waves. I generalize CCRH to generalized cosmic ray hydrodynamics, which accommodates interactions with extrinsic turbulence, present examples of cosmic ray feedback, and assess where progress is needed.

  14. The basis for cosmic ray feedback: Written on the wind.

    PubMed

    Zweibel, Ellen G

    2017-05-01

    Star formation and supermassive black hole growth in galaxies appear to be self-limiting. The mechanisms for self-regulation are known as feedback. Cosmic rays, the relativistic particle component of interstellar and intergalactic plasma, are among the agents of feedback. Because cosmic rays are virtually collisionless in the plasma environments of interest, their interaction with the ambient medium is primarily mediated by large scale magnetic fields and kinetic scale plasma waves. Because kinetic scales are much smaller than global scales, this interaction is most conveniently described by fluid models. In this paper, I discuss the kinetic theory and the classical theory of cosmic ray hydrodynamics (CCRH) which follows from assuming cosmic rays interact only with self-excited waves. I generalize CCRH to generalized cosmic ray hydrodynamics, which accommodates interactions with extrinsic turbulence, present examples of cosmic ray feedback, and assess where progress is needed.

  15. Virtual water trade and time scales for loss of water sustainability: A comparative regional analysis

    PubMed Central

    Goswami, Prashant; Nishad, Shiv Narayan

    2015-01-01

    Assessment and policy design for sustainability in primary resources like arable land and water need to adopt a long-term perspective; even small but persistent effects like net export of water may influence sustainability through irreversible losses. With growing consumption, this virtual water trade has become an important element in the water sustainability of a nation. We estimate and contrast the virtual (embedded) water trades of two populous nations, India and China, to present certain quantitative measures and time scales. Estimates show that export of embedded water alone can lead to loss of water sustainability. At the current rate of net export of water embedded in end products, India is poised to lose its entire available water in less than 1000 years; much shorter time scales are implied in terms of water for production. The two cases contrast and exemplify sustainable and non-sustainable virtual water trade from a long-term perspective. PMID:25790964
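
    The time-scale reasoning in this abstract amounts to dividing a water stock by a net annual export rate. A minimal sketch with hypothetical figures (the paper's actual estimates are not reproduced here):

```python
def depletion_time_years(available_water_km3, net_virtual_export_km3_per_year):
    """Years until a water stock is exhausted at a constant net export rate.

    A first-order estimate T = W / E, ignoring recharge and demand growth.
    All figures passed in are illustrative, not the paper's data.
    """
    if net_virtual_export_km3_per_year <= 0:
        return float("inf")  # net importer: no depletion by trade alone
    return available_water_km3 / net_virtual_export_km3_per_year

# Hypothetical numbers: a 1900 km^3 stock exported at 2 km^3/yr
print(depletion_time_years(1900.0, 2.0))  # 950.0, i.e. "less than 1000 years"
```

    The same one-line ratio with a larger stock or smaller net export immediately yields the much longer time scales the abstract contrasts against.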

  16. Scaled Jump in Gravity-Reduced Virtual Environments.

    PubMed

    Kim, MyoungGon; Cho, Sunglk; Tran, Tanh Quang; Kim, Seong-Pil; Kwon, Ohung; Han, JungHyun

    2017-04-01

    The reduced gravity experienced on lunar or Martian surfaces can be simulated on Earth using a cable-driven system, in which a cable lifts a person to reduce his or her weight. This paper presents a novel cable-driven system designed for this purpose, integrated with a head-mounted display and a motion capture system. Focusing on jump motion within the system, this paper proposes to scale the jump and reports experiments quantifying the extent to which a jump can be scaled without the discrepancy between physical and virtual jumps being noticed by the user. With the tolerable range of scaling computed from these experiments, an application named retargeted jump is developed, in which a user can jump up onto virtual objects while physically jumping on a flat floor in the real world. The core techniques presented in this paper can be extended to develop extreme-sport simulators such as parasailing and skydiving.
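
    The retargeting described above amounts to multiplying the tracked physical jump displacement by a gain before applying it to the virtual viewpoint. A minimal sketch; the clamp limits are hypothetical placeholders, not the paper's experimentally measured thresholds:

```python
def retarget_jump_height(physical_height_m, gain, gain_min=1.0, gain_max=2.0):
    """Map a tracked physical jump height to a virtual jump height.

    The gain is clamped to a tolerable range (placeholder values here; the
    paper determines the actual range experimentally) so that the
    discrepancy between physical and virtual motion stays unnoticed.
    """
    g = max(gain_min, min(gain, gain_max))
    return physical_height_m * g

# A 0.3 m physical hop rendered as a 0.6 m virtual jump:
print(retarget_jump_height(0.3, 2.0))  # 0.6
# A requested gain outside the tolerable range is clamped:
print(retarget_jump_height(0.3, 5.0))  # still 0.6
```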

  17. Wettability Investigations and Wet Transfer Enhancement of Large-Area CVD-Graphene on Aluminum Nitride

    PubMed Central

    Knapp, Marius; Hoffmann, René; Cimalla, Volker; Ambacher, Oliver

    2017-01-01

    The two-dimensional and virtually massless character of graphene attracts great interest for radio frequency devices, such as surface and bulk acoustic wave resonators. Due to its good electric conductivity, graphene might be an alternative as a virtually massless electrode by improving resonator performance regarding mass-loading effects. We report on an optimization of the commonly used wet transfer technique for large-area graphene, grown via chemical vapor deposition, onto aluminum nitride (AlN), which is mainly used as an active, piezoelectric material for acoustic devices. Today, graphene wet transfer is well-engineered for silicon dioxide (SiO2). Investigations on AlN substrates reveal highly different surface properties compared to SiO2 regarding wettability, which strongly influences the quality of transferred graphene monolayers. Both physical and chemical effects of a plasma treatment of AlN surfaces change wettability and avoid large-scale cracks in the transferred graphene sheet during desiccation. Spatially-resolved Raman spectroscopy reveals a strong strain and doping dependence on AlN plasma pretreatments correlating with the electrical conductivity of graphene. In our work, we achieved transferred crack-free large-area (40 × 40 mm2) graphene monolayers with sheet resistances down to 350 Ω/sq. These achievements make graphene more powerful as an eco-friendly and cheaper replacement for conventional electrode materials used in radio frequency resonator devices. PMID:28820462

  18. Environmental Social Stress, Paranoia and Psychosis Liability: A Virtual Reality Study

    PubMed Central

    Veling, Wim; Pot-Kolder, Roos; Counotte, Jacqueline; van Os, Jim; van der Gaag, Mark

    2016-01-01

    The impact of social environments on mental states is difficult to assess, limiting the understanding of which aspects of the social environment contribute to the onset of psychotic symptoms and how individual characteristics moderate this outcome. This study aimed to test sensitivity to environmental social stress as a mechanism of psychosis using Virtual Reality (VR) experiments. Fifty-five patients with recent onset psychotic disorder, 20 patients at ultra high risk for psychosis, 42 siblings of patients with psychosis, and 53 controls walked 5 times in a virtual bar with different levels of environmental social stress. Virtual social stressors were population density, ethnic density and hostility. Paranoia about virtual humans and subjective distress in response to virtual social stress exposures were measured with State Social Paranoia Scale (SSPS) and self-rated momentary subjective distress (SUD), respectively. Pre-existing (subclinical) symptoms were assessed with the Community Assessment of Psychic Experiences (CAPE), Green Paranoid Thoughts Scale (GPTS) and the Social Interaction Anxiety Scale (SIAS). Paranoia and subjective distress increased with degree of social stress in the environment. Psychosis liability and pre-existing symptoms, in particular negative affect, positively impacted the level of paranoia and distress in response to social stress. These results provide experimental evidence that heightened sensitivity to environmental social stress may play an important role in the onset and course of psychosis. PMID:27038469

  19. Two-Time Scale Virtual Sensor Design for Vibration Observation of a Translational Flexible-Link Manipulator Based on Singular Perturbation and Differential Games

    PubMed Central

    Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng

    2016-01-01

    Effective feedback control requires all state-variable information of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at every point of the TFM with an unlimited number of sensors. Taking into account the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time scale virtual sensor, comprising a speed observer and a vibration observer, is designed to estimate the vibration signals and their time derivatives for the TFM. The speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time scale virtual sensor are optimized so as to minimize the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time scale virtual sensor. PMID:27801840
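
    As a toy illustration of the singular-perturbation idea, the sketch below runs separate Luenberger observers for a slow and a fast first-order subsystem. The dynamics, gains, and time scales are invented for illustration and are unrelated to the paper's TFM model or its game-theoretic gain optimization:

```python
def simulate_two_time_scale_observer(steps=20000, dt=1e-4, eps=0.01):
    """Separate observers for a slow and a fast subsystem (toy example).

    Slow subsystem:        dx/dt = -x        measured as y1 = x
    Fast subsystem:  eps * dz/dt = -z        measured as y2 = z
    Each observer corrects its own estimate with its own gain, mirroring
    the slow/fast decomposition obtained by singular perturbation.
    Returns the final absolute estimation errors.
    """
    x, z = 1.0, 1.0            # true states
    xh, zh = 0.0, 0.0          # observer estimates, deliberately wrong
    L_slow, L_fast = 5.0, 5.0  # illustrative observer gains
    for _ in range(steps):
        # true dynamics (forward Euler)
        x += dt * (-x)
        z += dt * (-z / eps)
        # observers driven by the measurements y1 = x, y2 = z
        xh += dt * (-xh + L_slow * (x - xh))
        zh += dt * (-zh / eps + (L_fast / eps) * (z - zh))
    return abs(x - xh), abs(z - zh)

slow_err, fast_err = simulate_two_time_scale_observer()
print(slow_err, fast_err)  # both estimation errors decay toward zero
```

    The estimation error of each observer obeys de/dt = -(1 + L) e on its own time scale, so both converge; the fast observer simply does so on the eps-scaled clock.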

  20. Modeling Brain Dynamics in Brain Tumor Patients Using the Virtual Brain.

    PubMed

    Aerts, Hannelore; Schirner, Michael; Jeurissen, Ben; Van Roost, Dirk; Achten, Eric; Ritter, Petra; Marinazzo, Daniele

    2018-01-01

    Presurgical planning for brain tumor resection aims at delineating eloquent tissue in the vicinity of the lesion to spare during surgery. To this end, noninvasive neuroimaging techniques such as functional MRI and diffusion-weighted imaging fiber tracking are currently employed. However, taking into account this information is often still insufficient, as the complex nonlinear dynamics of the brain impede straightforward prediction of functional outcome after surgical intervention. Large-scale brain network modeling carries the potential to bridge this gap by integrating neuroimaging data with biophysically based models to predict collective brain dynamics. As a first step in this direction, an appropriate computational model has to be selected, after which suitable model parameter values have to be determined. To this end, we simulated large-scale brain dynamics in 25 human brain tumor patients and 11 human control participants using The Virtual Brain, an open-source neuroinformatics platform. Local and global model parameters of the Reduced Wong-Wang model were individually optimized and compared between brain tumor patients and control subjects. In addition, the relationship between model parameters and structural network topology and cognitive performance was assessed. Results showed (1) significantly improved prediction accuracy of individual functional connectivity when using individually optimized model parameters; (2) local model parameters that can differentiate between regions directly affected by a tumor, regions distant from a tumor, and regions in a healthy brain; and (3) interesting associations between individually optimized model parameters and structural network topology and cognitive performance.

  1. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong

    2017-12-01

    The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers, as it would provide effective guidance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data, adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, so that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. A BA model based on virtual control points (VCPs) was constructed to address the rank deficiency caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie-point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equations. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracies of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
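
    The final bottleneck the abstract mentions, solving high-order equations, uses the standard conjugate gradient method for symmetric positive-definite systems. A self-contained sketch on a tiny dense system; the real BA normal equations would be far larger and held in a sparse three-array format:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A.

    Dense lists of lists for clarity; a block-adjustment solver would
    store A sparsely and only ever form matrix-vector products.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# 2x2 SPD example; the exact solution is [1, 1]
print(conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [5.0, 4.0]))
```

    For an n-dimensional SPD system, CG converges in at most n iterations in exact arithmetic, which is why it scales to the high-order systems of a large block adjustment.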

  2. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is built on recent virtualized computing infrastructures and a distributed computing architecture. OpenStack and Docker are used to build a multi-user cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools applicable to other domains with spatial properties. We tested the performance of the platform on taxi trajectory analysis. The results suggest that GISpark achieves excellent run-time performance in spatiotemporal big data applications.

  3. Analysis of context dependence in social interaction networks of a massively multiplayer online role-playing game.

    PubMed

    Son, Seokshin; Kang, Ah Reum; Kim, Hyun-chul; Kwon, Taekyoung; Park, Juyong; Kim, Huy Kang

    2012-01-01

    Rapid advances in modern computing and information technology have enabled millions of people to interact online via various social network and gaming services. The widespread adoption of such online services has made possible the analysis of large-scale archival data containing detailed human interactions, presenting a very promising opportunity to understand rich and complex human behavior. In collaboration with a leading global provider of Massively Multiplayer Online Role-Playing Games (MMORPGs), here we present a network science-based analysis of the interplay between distinct types of user interaction networks in the virtual world. We find that their properties depend critically on the nature of the context-interdependence of the interactions, highlighting the complex and multilayered nature of human interactions, a robust understanding of which we believe may prove instrumental in the designing of more realistic future virtual arenas as well as provide novel insights to the science of collective human behavior.

  4. Physically Based Virtual Surgery Planning and Simulation Tools for Personal Health Care Systems

    NASA Astrophysics Data System (ADS)

    Dogan, Firat; Atilgan, Yasemin

    Virtual surgery planning and simulation tools have gained a great deal of importance in the last decade as a consequence of increasing capacities at the information technology level. Modern hardware architectures, large-scale database systems, grid-based computer networks, agile development processes, better 3D visualization and all the other strong aspects of information technology bring the necessary instruments to almost every desk. The last decade's special-purpose software and sophisticated supercomputer environments are now serving individual needs inside “tiny smart boxes” at reasonable prices. However, resistance to learning new computerized environments, insufficient training and other old habits prevent effective utilization of IT resources by the specialists of the health sector. In this paper, all aspects of former and current developments in surgery planning and simulation tools are presented, and future directions and expectations are investigated for better electronic health care systems.

  5. Virtually-Enhanced Fluid Laboratories for Teaching Meteorology

    NASA Astrophysics Data System (ADS)

    Marshall, J.; Illari, L.

    2015-12-01

    The Weather in a Tank (WIAT) project aims to offer instructors a repertoire of rotating tank experiments, and a curriculum in fluid dynamics, to better assist students in learning how to move between phenomena in the real world and basic principles of rotating fluid dynamics, which play a central role in determining the climate of the planet. Despite the increasing use of laboratory experiments in teaching meteorology, we are aware that many teachers and students do not have access to suitable apparatus and so cannot benefit from them. Here we describe a 'virtually-enhanced' laboratory that we hope could be very effective in getting across a flavor of the experiments and in bringing them to a wider audience. In the pedagogical spirit of WIAT we focus on how simple underlying principles, illustrated through laboratory experiments, shape the observed structure of the large-scale atmospheric circulation.

  6. Spatial ability in secondary school students: intra-sex differences based on self-selection for physical education.

    PubMed

    Tlauka, Michael; Williams, Jennifer; Williamson, Paul

    2008-08-01

    Past research has demonstrated consistent sex differences with men typically outperforming women on tests of spatial ability. However, less is known about intra-sex effects. In the present study, two groups of female students (physical education and non-physical education secondary students) and two corresponding groups of male students explored a large-scale virtual shopping centre. In a battery of tasks, spatial knowledge of the shopping centre as well as mental rotation ability were tested. Additional variables considered were circulating testosterone levels, the ratio of 2D:4D digit length, and computer experience. The results revealed both sex and intra-sex differences in spatial ability. Variables related to virtual navigation and computer ability and experience were found to be the most powerful predictors of group membership. Our results suggest that in female and male secondary students, participation in physical education and spatial skill are related.

  7. Production of recombinant antigens and antibodies in Nicotiana benthamiana using 'magnifection' technology: GMP-compliant facilities for small- and large-scale manufacturing.

    PubMed

    Klimyuk, Victor; Pogue, Gregory; Herz, Stefan; Butler, John; Haydon, Hugh

    2014-01-01

    This review describes the adaptation of the plant virus-based transient expression system magnICON(®) for the at-scale manufacturing of pharmaceutical proteins. The system utilizes so-called "deconstructed" viral vectors that rely on Agrobacterium-mediated systemic delivery into plant cells for recombinant protein production. The system is also suitable for the production of hetero-oligomeric proteins such as immunoglobulins. By taking advantage of well-established R&D tools for optimizing the expression of the protein of interest, product concepts can reach the manufacturing stage in highly competitive time periods. At the manufacturing stage, the system offers many remarkable features, including rapid production cycles, high product yield, virtually unlimited scale-up potential, and flexibility for different manufacturing schemes. The magnICON system has been successfully adapted to very different logistical manufacturing formats: (1) speedy production of multiple small batches of individualized pharmaceutical proteins (e.g. antigens comprising individualized vaccines to treat non-Hodgkin's lymphoma patients) and (2) large-scale production of other pharmaceutical proteins such as therapeutic antibodies. General descriptions of the prototype GMP-compliant manufacturing processes and facilities for the product formats that are in preclinical and clinical testing are provided.

  8. Multi-scale virtual view on the precessing jet SS433

    NASA Astrophysics Data System (ADS)

    Monceau-Baroux, R.; Porth, O.; Meliani, Z.; Keppens, R.

    2014-07-01

    Observations of SS433 show how an X-ray binary gives rise to a corkscrew-patterned relativistic jet. The X-ray binary SS433 is well known over a large range of scales, for which we perform 3D simulations and radio mappings. For our study we use relativistic hydrodynamics in special relativity with a relativistic effective polytropic index. We use parameters extracted from observations to impose the thermodynamic conditions of the ISM and jet. We follow the kinetic and thermal energy content of the various ISM and jet regions. Our simulation simultaneously follows the evolution of the population of electrons accelerated by the jet. The evolving spectrum of these electrons, together with an assumed equipartition between dynamic and magnetic pressure, provides input for estimating the radio emission from our simulation. Ray tracing along a direction of sight then produces radio maps of our data. Single snapshots are produced for comparison with VLA observations, as in Roberts et al. 2008. A radio movie is produced for comparison with the 41-day movie made with the VLBA instrument. Finally, a larger-scale simulation explores the discrepancy in opening angle, between 10 and 20 degrees, between large-scale observations of SS433 and close-in observations.

  9. From path models to commands during additive printing of large-scale architectural designs

    NASA Astrophysics Data System (ADS)

    Chepchurov, M. S.; Zhukov, E. M.; Yakovlev, E. A.; Matveykin, V. G.

    2018-05-01

    The article considers the problem of automating the formation of large complex parts, products and structures, especially for unique or small-batch objects produced by additive technology [1]. The results of research into the optimal design of a robotic complex, its modes of operation and the structure of its control helped define the technical requirements for the manufacturing process and for the design and installation of the robotic complex. Research on virtual models of the robotic complexes allowed defining the main directions of design improvement and the main goal of testing the manufactured prototype: checking the positioning accuracy of the working part.

  10. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    PubMed

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
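
    The LIF model named in the abstract is simple enough to sketch directly. A minimal Euler-integrated leaky integrate-and-fire neuron with illustrative parameter values, not SpiNNaker's fixed-point on-chip implementation:

```python
def simulate_lif(i_input, steps=1000, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=1.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + R * I.

    Integrates with forward Euler; on crossing threshold the membrane
    potential is reset and a spike is counted. Parameter values are
    generic textbook choices, not SpiNNaker defaults.
    """
    v = v_rest
    spikes = 0
    for _ in range(steps):
        v += dt / tau * (-(v - v_rest) + r_m * i_input)
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

print(simulate_lif(20.0))  # suprathreshold drive: repeated spiking
print(simulate_lif(5.0))   # subthreshold drive: no spikes
```

    With R*I above the 15 mV gap between rest and threshold the neuron fires periodically; below it, the potential settles subthreshold and stays silent, which is the behavioral contrast a heterogeneous-network simulation exercises.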

  11. Can Virtual Schools Thrive in the Real World?

    ERIC Educational Resources Information Center

    Wang, Yinying; Decker, Janet R.

    2014-01-01

    Despite the relatively large number of students enrolled in Ohio's virtual schools, it is unclear how virtual schools compare to their traditional school counterparts on measures of student achievement. To provide some insight, we compared the school performance from 2007-2011 at Ohio's virtual and traditional schools. The results suggest that…

  12. Grid heterogeneity in in-silico experiments: an exploration of drug screening using DOCK on cloud environments.

    PubMed

    Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason

    2010-01-01

    Large-scale in-silico screening is a necessary part of drug discovery, and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environment characteristic of a Grid. In our study, we found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test in in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means of identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for a more complete solution that provides the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study is the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables across a Grid environment composed of different clusters, with and without virtualization. The uniform computing environment provided by virtual machines eliminated the inconsistent DOCK VS results caused by heterogeneous clusters; however, the execution time for the DOCK VS increased. In our experiments, the overhead averaged 41% and 2% of execution time for two different clusters, while the actual magnitudes of the execution-time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.
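
    The overhead figures quoted above are a straightforward relative comparison of execution times. A minimal sketch with hypothetical timings chosen to reproduce overheads like those reported; the study's raw timings are not given in the abstract:

```python
def virtualization_overhead_pct(native_seconds, virtualized_seconds):
    """Relative execution-time overhead of running under virtualization."""
    return (virtualized_seconds - native_seconds) / native_seconds * 100.0

# Hypothetical timings, not the study's measurements:
print(round(virtualization_overhead_pct(100.0, 141.0)))  # 41
print(round(virtualization_overhead_pct(100.0, 102.0)))  # 2
```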

  13. The virtual supermarket: An innovative research tool to study consumer food purchasing behaviour

    PubMed Central

    2011-01-01

    Background Economic interventions in the food environment are expected to effectively promote healthier food choices. However, before introducing them on a large scale, it is important to gain insight into the effectiveness of economic interventions and people's genuine reactions to price changes. Nonetheless, because of complex implementation issues, studies on price interventions are virtually non-existent. This is especially true for experiments undertaken in a retail setting. We have developed a research tool to study the effects of retail price interventions in a virtual-reality setting: the Virtual Supermarket. This paper aims to inform researchers about the features and utilization of this new software application. Results The Virtual Supermarket is a Dutch-developed three-dimensional software application in which study participants can shop in a manner comparable to a real supermarket. The tool can be used to study several food pricing and labelling strategies. The application base can be used to build future extensions and could be translated into, for example, an English-language version. The Virtual Supermarket contains a front-end which is seen by the participants, and a back-end that enables researchers to easily manipulate research conditions. The application keeps track of time spent shopping, number of products purchased, shopping budget, total expenditures and answers to configurable questionnaires. All data are digitally stored and automatically sent to a web server. A pilot study among Dutch consumers (n = 66) revealed that the application accurately collected and stored all data. Participant feedback revealed that 83% of the respondents considered the Virtual Supermarket easy to understand and 79% found that their virtual grocery purchases resembled their regular groceries. Conclusions The Virtual Supermarket is an innovative research tool with great potential to assist in gaining insight into food purchasing behaviour. The application can be obtained via a URL and is freely available for academic use. The unique features of the tool include the fact that it enables researchers to easily modify research conditions and thereby study different types of interventions in a retail environment without a complex implementation process. Finally, it also maintains researcher independence and avoids conflicts of interest that may arise from industry collaboration. PMID:21787391

  14. The virtual supermarket: an innovative research tool to study consumer food purchasing behaviour.

    PubMed

    Waterlander, Wilma E; Scarpa, Michael; Lentz, Daisy; Steenhuis, Ingrid H M

    2011-07-25

    Economic interventions in the food environment are expected to effectively promote healthier food choices. However, before introducing them on a large scale, it is important to gain insight into the effectiveness of economic interventions and peoples' genuine reactions to price changes. Nonetheless, because of complex implementation issues, studies on price interventions are virtually non-existent. This is especially true for experiments undertaken in a retail setting. We have developed a research tool to study the effects of retail price interventions in a virtual-reality setting: the Virtual Supermarket. This paper aims to inform researchers about the features and utilization of this new software application. The Virtual Supermarket is a Dutch-developed three-dimensional software application in which study participants can shop in a manner comparable to a real supermarket. The tool can be used to study several food pricing and labelling strategies. The application base can be used to build future extensions and could be translated into, for example, an English-language version. The Virtual Supermarket contains a front-end which is seen by the participants, and a back-end that enables researchers to easily manipulate research conditions. The application keeps track of time spent shopping, number of products purchased, shopping budget, total expenditures and answers on configurable questionnaires. All data is digitally stored and automatically sent to a web server. A pilot study among Dutch consumers (n = 66) revealed that the application accurately collected and stored all data. Results from participant feedback revealed that 83% of the respondents considered the Virtual Supermarket easy to understand and 79% found that their virtual grocery purchases resembled their regular groceries. The Virtual Supermarket is an innovative research tool with a great potential to assist in gaining insight into food purchasing behaviour. 
The application can be obtained via a URL and is freely available for academic use. The unique features of the tool include the fact that it enables researchers to easily modify research conditions and in this way study different types of interventions in a retail environment without a complex implementation process. Finally, it also maintains researcher independence and avoids conflicts of interest that may arise from industry collaboration.
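
    The kind of session record the abstract describes (time spent, products purchased, budget, expenditures, questionnaire answers, sent to a web server) can be sketched as follows. This is a hypothetical illustration; the field names and class are mine, not the Virtual Supermarket's actual schema.

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical sketch of a Virtual Supermarket-style session record; the
# field names are illustrative, not the application's actual data model.
@dataclass
class ShoppingSession:
    participant_id: str
    condition: str                  # e.g. which price-intervention arm
    budget_eur: float
    seconds_spent: int = 0
    purchases: list = field(default_factory=list)   # (product, price) pairs
    questionnaire: dict = field(default_factory=dict)

    def total_expenditure(self) -> float:
        return round(sum(price for _, price in self.purchases), 2)

    def to_payload(self) -> str:
        """Serialize the record for upload to a web server."""
        record = asdict(self)
        record["n_products"] = len(self.purchases)
        record["total_eur"] = self.total_expenditure()
        return json.dumps(record)

session = ShoppingSession("P017", "25pct_discount", budget_eur=50.0)
session.purchases += [("wholegrain bread", 1.89), ("apples 1kg", 2.49)]
payload = session.to_payload()
```

    A back-end would store such payloads per participant and condition, which is what makes price-manipulation experiments analysable without a physical store.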

  15. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

    Background: Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines distributed pre-packaged with pre-configured software. Results: We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion: The CloVR VM and associated architecture lowers the barrier to entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105

  16. Water footprint characteristic of less developed water-rich regions: Case of Yunnan, China.

    PubMed

    Qian, Yiying; Dong, Huijuan; Geng, Yong; Zhong, Shaozhuo; Tian, Xu; Yu, Yanhong; Chen, Yihui; Moss, Dana Avery

    2018-03-30

    Rapid industrialization and urbanization pose pressure on water resources in China. Virtual water trade proves to be an increasingly useful tool in water stress alleviation for water-scarce regions, while bringing opportunities and challenges for less developed water-rich regions. In this study, Yunnan, a typical province in southwest China, was selected as the case study area to explore its potential in socio-economic development in the context of water sustainability. Both input-output analysis and structural decomposition analysis on Yunnan's water footprint for the period of 2002-2012 were performed at not only an aggregated level but also a sectoral level. Results show that although the virtual water content of all economic sectors decreased due to technological progress, Yunnan's total water footprint still increased as a result of economic scale expansion. From the sectoral perspective, sectors with large water footprints include construction sector, agriculture sector, food manufacturing & processing sector, and service sector, while metal products sector and food manufacturing & processing sector were the major virtual water exporters, and textile & clothing sector and construction sector were the major importers. Based on local conditions, policy suggestions were proposed, including economic structure and efficiency optimization, technology promotion and appropriate virtual water trade scheme. This study provides valuable insights for regions facing "resource curse" by exploring potential socio-economic progress while ensuring water security. Copyright © 2018 Elsevier Ltd. All rights reserved.
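
    The input-output analysis behind water footprint accounting of this kind is the standard environmentally extended Leontief model: total footprint = w (I - A)^-1 y, where w is direct water use per unit output, A the technical coefficient matrix, and y final demand. A minimal NumPy sketch with made-up three-sector numbers (not Yunnan's data):

```python
import numpy as np

# Illustrative environmentally extended input-output (EEIO) calculation,
# the standard machinery behind water footprint accounting.
# All numbers below are invented for a toy 3-sector economy.
A = np.array([[0.10, 0.05, 0.02],      # technical coefficients: inputs from
              [0.20, 0.15, 0.10],      # each sector per unit of output of
              [0.05, 0.10, 0.08]])     # each sector
w = np.array([4.0, 1.2, 0.3])          # direct water use per unit output (m3/unit)
y = np.array([100.0, 50.0, 80.0])      # final demand by sector

L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse: total output per unit demand
x = L @ y                              # total (direct + indirect) output
footprint = w * x                      # water embodied in each sector's production
virtual_water_intensity = w @ L        # water embodied per unit of final demand
```

    Structural decomposition analysis then attributes changes in `footprint` over time to changes in w (technology), L (economic structure) and y (scale), which is exactly the technology-versus-scale decomposition the abstract reports.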

  17. Virtual gap dielectric wall accelerator

    DOEpatents

    Caporaso, George James; Chen, Yu-Jiuan; Nelson, Scott; Sullivan, Jim; Hawkins, Steven A

    2013-11-05

    A virtual, moving accelerating gap is formed along an insulating tube in a dielectric wall accelerator (DWA) by locally controlling the conductivity of the tube. Localized voltage concentration is thus achieved by sequential activation of a variable resistive tube or stalk down the axis of an inductive voltage adder, producing a "virtual" traveling wave along the tube. The tube conductivity can be controlled at a desired location, which can be moved at a desired rate, by light illumination, or by photoconductive switches, or by other means. As a result, an impressed voltage along the tube appears predominantly over a local region, the virtual gap. By making the length of the tube large in comparison to the virtual gap length, the effective gain of the accelerator can be made very large.
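
    The closing claim about effective gain can be made concrete with a back-of-the-envelope relation (the symbols below are illustrative, not taken from the patent text):

```latex
% If the adder voltage V is impressed over a moving virtual gap of length
% l_g rather than over the whole tube of length L, the local gradient is
E_{\mathrm{gap}} \approx \frac{V}{l_g}
\qquad \text{instead of} \qquad
E_{\mathrm{avg}} = \frac{V}{L},
% and a beam of charge q that rides the traveling gap over the full tube
% gains roughly
\Delta W \approx q\, E_{\mathrm{gap}}\, L = qV \,\frac{L}{l_g} \;\gg\; qV .
```

    Making L large compared with l_g is thus what makes the effective gain large, as the abstract states.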

  18. Psychological benefits of virtual reality for patients in rehabilitation therapy.

    PubMed

    Chen, Chih-Hung; Jeng, Ming-Chang; Fung, Chin-Ping; Doong, Ji-Liang; Chuang, Tien-Yow

    2009-05-01

    Whether virtual rehabilitation is beneficial has not been determined. Objective: To investigate the psychological benefits of virtual reality in rehabilitation. Design: An experimental group underwent therapy with a virtual-reality-based exercise bike, and a control group underwent the therapy without virtual-reality equipment. Setting: Hospital laboratory. Participants: 30 patients suffering from spinal-cord injury. Intervention: A designed rehabilitation therapy. Main outcome measures: Endurance, Borg's rating-of-perceived-exertion scale, the Activation-Deactivation Adjective Check List (AD-ACL), and the Simulator Sickness Questionnaire. Results: The differences between the experimental and control groups were significant for AD-ACL calmness and tension. Conclusion: A virtual-reality-based rehabilitation program can ease patients' tension and induce calm.

  19. Studies of Shock Wave Interactions with Homogeneous and Isotropic Turbulence

    NASA Technical Reports Server (NTRS)

    Briassulis, G.; Agui, J.; Watkins, C. B.; Andreopoulos, Y.

    1998-01-01

    A nearly homogeneous nearly isotropic compressible turbulent flow interacting with a normal shock wave has been studied experimentally in a large shock tube facility. Spatial resolution of the order of 8 Kolmogorov viscous length scales was achieved in the measurements of turbulence. A variety of turbulence generating grids provide a wide range of turbulence scales. Integral length scales were found to substantially decrease through the interaction with the shock wave in all investigated cases with flow Mach numbers ranging from 0.3 to 0.7 and shock Mach numbers from 1.2 to 1.6. The outcome of the interaction depends strongly on the state of compressibility of the incoming turbulence. The length scales in the lateral direction are amplified at small Mach numbers and attenuated at large Mach numbers. Even at large Mach numbers amplification of lateral length scales has been observed in the case of fine grids. In addition to the interaction with the shock the present work has documented substantial compressibility effects in the incoming homogeneous and isotropic turbulent flow. The decay of Mach number fluctuations was found to follow a power law similar to that describing the decay of incompressible isotropic turbulence. It was found that the decay coefficient and the decay exponent decrease with increasing Mach number while the virtual origin increases with increasing Mach number. A mechanism possibly responsible for these effects appears to be the inherently low growth rate of compressible shear layers emanating from the cylindrical rods of the grid.
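
    The power-law decay cited for the Mach number fluctuations has the same form as the classical law for decaying grid turbulence; a sketch, with generic symbols rather than the paper's fitted values:

```latex
% Decay of the fluctuating Mach number m' downstream of a grid of mesh size M_g:
\overline{m'^2} \;=\; C \left( \frac{x - x_0}{M_g} \right)^{-n}
% C   : decay coefficient (reported to decrease with increasing Mach number)
% n   : decay exponent    (reported to decrease with increasing Mach number)
% x_0 : virtual origin    (reported to increase with increasing Mach number)
```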

  20. Flying Cassini with Virtual Operations Teams

    NASA Technical Reports Server (NTRS)

    Dodd, Suzanne; Gustavson, Robert

    1998-01-01

    The Cassini Program's challenge is to fly a large, complex mission with a reduced operations budget. A consequence of the reduced budget is elimination of the large, centrally located group traditionally used for uplink operations. Instead, responsibility for completing parts of the uplink function is distributed throughout the Program. A critical strategy employed to handle this challenge is the use of Virtual Uplink Operations Teams. A Virtual Team is comprised of a group of people with the necessary mix of engineering and science expertise who come together for the purpose of building a specific uplink product. These people are drawn from throughout the Cassini Program and participate across a large geographical area (from Germany to the West coast of the USA), covering ten time zones. The participants will often split their time between participating in the Virtual Team and accomplishing their core responsibilities, requiring significant planning and time management. When the particular uplink product task is complete, the Virtual Team disbands and the members turn back to their home organization element for future work assignments. This time-sharing of employees is used on Cassini to build mission planning products, via the Mission Planning Virtual Team, and sequencing products and monitoring of the sequence execution, via the Sequence Virtual Team. This challenging, multitasking approach allows efficient use of personnel in a resource constrained environment.

  1. Stable isotope probing to study functional components of complex microbial ecosystems.

    PubMed

    Mazard, Sophie; Schäfer, Hendrik

    2014-01-01

    This protocol presents a method of dissecting the DNA or RNA of key organisms involved in a specific biochemical process within a complex ecosystem. Stable isotope probing (SIP) allows the labelling and separation of nucleic acids from community members that are involved in important biochemical transformations, yet are often not the most numerically abundant members of a community. This pure-culture-independent technique circumvents limitations of traditional microbial isolation techniques or data mining from large-scale whole-community metagenomic studies to tease out the identities and genomic repertoires of microorganisms participating in biological nutrient cycles. SIP experiments can be applied to virtually any ecosystem and biochemical pathway under investigation provided a suitable stable isotope substrate is available. This versatile methodology allows a wide range of analyses to be performed, from fatty-acid analyses and community structure and ecology studies to targeted metagenomics involving nucleic acid sequencing. SIP experiments provide an effective alternative to large-scale whole-community metagenomic studies by specifically targeting the organisms or biochemical transformations of interest, thereby reducing the sequencing effort and time-consuming bioinformatics analyses of large datasets.

  2. The Virtual Geophysics Laboratory (VGL): Scientific Workflows Operating Across Organizations and Across Infrastructures

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.

    2012-12-01

    The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that: seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure; loosely couples these data to a variety of geoscience software tools; and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University and the University of Queensland. The VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data are supplied in open standard formats, using international standards such as GeoSciML. A VGL user uses a web mapping interface to discover and filter the data sources, using spatial and attribute filters to define a subset. Once the data are selected, the user is not required to download them. VGL collates the service query information for later in the processing workflow, where it will be staged directly to the computing facilities. The combination of deferring data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions, and more complex models and simulations, than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper. 
This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with access to an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both the research communities and commercial suppliers (e.g. Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also be used for, for example, natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
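
    The OGC web service requests a portal like VGL issues to fetch spatial subsets can be sketched as a WFS GetFeature query. The endpoint below is a placeholder, not a real AuScope service; the key-value parameters themselves follow the OGC WFS 2.0 standard.

```python
from urllib.parse import urlencode

# Sketch of an OGC Web Feature Service (WFS) 2.0 GetFeature request of the
# kind a VGL-style portal would issue. The endpoint and feature type are
# hypothetical; the query parameters follow the WFS 2.0 KVP encoding.
endpoint = "https://services.example.org/wfs"
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "gsml:GeologicUnit",             # a GeoSciML feature type
    "bbox": "130.0,-30.0,135.0,-25.0,EPSG:4326",  # spatial filter (lon/lat)
    "count": "500",                               # cap the result size
}
query_url = f"{endpoint}?{urlencode(params)}"
```

    Collating such query URLs, rather than the data itself, is what lets the workflow defer the download and stage data directly to the cloud compute facilities.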

  3. Teaching with Virtual Worlds: Factors to Consider for Instructional Use of Second Life

    ERIC Educational Resources Information Center

    Mayrath, Michael C.; Traphagan, Tomoko; Jarmon, Leslie; Trivedi, Avani; Resta, Paul

    2010-01-01

    Substantial evidence now supports pedagogical applications of virtual worlds; however, most research supporting virtual worlds for education has been conducted using researcher-developed Multi-User Virtual Environments (MUVE). Second Life (SL) is a MUVE that has been adopted by a large number of academic institutions; however, little research has…

  4. Virtual reality measures in neuropsychological assessment: a meta-analytic review.

    PubMed

    Neguț, Alexandra; Matu, Silviu-Andrei; Sava, Florin Alin; David, Daniel

    2016-02-01

    Virtual reality-based assessment is a new paradigm for neuropsychological evaluation that might provide a more ecological assessment than paper-and-pencil or computerized neuropsychological testing. Previous research has focused on the use of virtual reality in neuropsychological assessment, but no meta-analysis has focused on the sensitivity of virtual reality-based measures in detecting cognitive impairment across various populations. We found eighteen studies that compared cognitive performance between clinical groups and healthy controls on virtual reality measures. Based on a random-effects model, the results indicated a large effect size in favor of healthy controls (g = .95). For executive functions, memory and visuospatial analysis, subgroup analyses revealed moderate to large effect sizes, with superior performance by healthy controls. Participants' mean age, type of clinical condition, type of exploration within virtual reality environments, and the presence of distractors were significant moderators. Our findings support the sensitivity of virtual reality-based measures in detecting cognitive impairment. They highlight the possibility of using virtual reality measures for neuropsychological assessment in research applications, as well as in clinical practice.
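
    The pooled effect reported here (Hedges' g under a random-effects model) comes from standard meta-analytic machinery. A pure-Python sketch with invented study data (not the eighteen studies in the review), using the DerSimonian-Laird estimator:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with the small-sample (Hedges) correction."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def random_effects_pool(effects):
    """DerSimonian-Laird random-effects pooled estimate from (g, var) pairs."""
    w = [1 / v for _, v in effects]
    fixed = sum(wi * g for (g, _), wi in zip(effects, w)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for (g, _), wi in zip(effects, w))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_star = [1 / (v + tau2) for _, v in effects]
    return sum(wi * g for (g, _), wi in zip(effects, w_star)) / sum(w_star)

# Invented example: controls outscore patients on a VR task in three studies.
studies = [hedges_g(52, 10, 30, 43, 11, 28),
           hedges_g(48, 9, 25, 40, 10, 25),
           hedges_g(55, 12, 40, 42, 12, 35)]
pooled = random_effects_pool(studies)
```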

  5. Simulating the decentralized processes of the human immune system in a virtual anatomy model.

    PubMed

    Sarpe, Vladimir; Jacob, Christian

    2013-01-01

    Many physiological processes within the human body can be perceived and modeled as large systems of interacting particles or swarming agents. The complex processes of the human immune system prove to be challenging to capture and illustrate without proper reference to the spatial distribution of immune-related organs and systems. Our work focuses on physical aspects of immune system processes, which we implement through swarms of agents. This is our first prototype for integrating different immune processes into one comprehensive virtual physiology simulation. Using agent-based methodology and a 3-dimensional modeling and visualization environment (LINDSAY Composer), we present an agent-based simulation of the decentralized processes in the human immune system. The agents in our model - such as immune cells, viruses and cytokines - interact through simulated physics in two different, compartmentalized and decentralized 3-dimensional environments, namely (1) within the tissue and (2) inside a lymph node. While the two environments are separated and perform their computations asynchronously, an abstract form of communication is allowed in order to replicate the exchange, transportation and interaction of immune system agents between these sites. The distribution of simulated processes, which can communicate across multiple local CPUs or through a network of machines, provides a starting point for building decentralized systems that replicate larger-scale processes within the human body, thus creating integrated simulations with other physiological systems, such as the circulatory, endocrine, or nervous system. Ultimately, this system integration across scales is our goal for the LINDSAY Virtual Human project. Our current immune system simulations extend our previous work on agent-based simulations by introducing advanced visualizations within the context of a virtual human anatomy model. 
We also demonstrate how to distribute a collection of connected simulations over a network of computers. As a future endeavour, we plan to use parameter tuning techniques on our model to further enhance its biological credibility. We consider these in silico experiments and their associated modeling and optimization techniques as essential components in further enhancing our capabilities of simulating a whole-body, decentralized immune system, to be used both for medical education and research as well as for virtual studies in immunoinformatics.
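
    The compartmentalized architecture described above (two environments stepping independently and exchanging agents through an abstract communication channel) can be sketched as a toy agent-based model. This is an illustration of the idea only, not LINDSAY Composer code; all agent kinds, probabilities and counts are invented.

```python
import random

# Toy sketch of two compartments ("tissue", "lymph node") that update
# independently and exchange agents via shared transit queues.
random.seed(1)

class Agent:
    def __init__(self, kind):
        self.kind = kind                    # "virus" or "immune_cell"

def step(compartment, transit):
    """One update of a compartment: replication, clearance, migration."""
    agents = compartment["agents"]
    for a in list(agents):                  # viruses replicate stochastically
        if a.kind == "virus" and random.random() < 0.3:
            agents.append(Agent("virus"))
    viruses = [a for a in agents if a.kind == "virus"]
    for cell in [a for a in agents if a.kind == "immune_cell"]:
        if viruses:                         # each immune cell clears one virus
            agents.remove(viruses.pop())
    for a in list(agents):                  # immune cells occasionally migrate
        if a.kind == "immune_cell" and random.random() < 0.1:
            agents.remove(a)
            transit.append(a)

tissue = {"agents": [Agent("virus") for _ in range(5)] +
                    [Agent("immune_cell") for _ in range(3)]}
lymph_node = {"agents": [Agent("immune_cell") for _ in range(6)]}
to_node, to_tissue = [], []

for _ in range(20):                         # compartments step independently
    step(tissue, to_node)
    step(lymph_node, to_tissue)
    tissue["agents"] += to_tissue; to_tissue.clear()
    lymph_node["agents"] += to_node; to_node.clear()

n_cells = sum(a.kind == "immune_cell"
              for a in tissue["agents"] + lymph_node["agents"])
```

    Because immune cells are only ever moved between compartments, their total count is conserved, which is a useful invariant to check when distributing such simulations across processes or machines.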

  6. Environmental Social Stress, Paranoia and Psychosis Liability: A Virtual Reality Study.

    PubMed

    Veling, Wim; Pot-Kolder, Roos; Counotte, Jacqueline; van Os, Jim; van der Gaag, Mark

    2016-11-01

    The impact of social environments on mental states is difficult to assess, limiting the understanding of which aspects of the social environment contribute to the onset of psychotic symptoms and how individual characteristics moderate this outcome. This study aimed to test sensitivity to environmental social stress as a mechanism of psychosis using Virtual Reality (VR) experiments. Fifty-five patients with recent onset psychotic disorder, 20 patients at ultra high risk for psychosis, 42 siblings of patients with psychosis, and 53 controls walked 5 times in a virtual bar with different levels of environmental social stress. Virtual social stressors were population density, ethnic density and hostility. Paranoia about virtual humans and subjective distress in response to virtual social stress exposures were measured with State Social Paranoia Scale (SSPS) and self-rated momentary subjective distress (SUD), respectively. Pre-existing (subclinical) symptoms were assessed with the Community Assessment of Psychic Experiences (CAPE), Green Paranoid Thoughts Scale (GPTS) and the Social Interaction Anxiety Scale (SIAS). Paranoia and subjective distress increased with degree of social stress in the environment. Psychosis liability and pre-existing symptoms, in particular negative affect, positively impacted the level of paranoia and distress in response to social stress. These results provide experimental evidence that heightened sensitivity to environmental social stress may play an important role in the onset and course of psychosis. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. Approaches to virtual screening and screening library selection.

    PubMed

    Wildman, Scott A

    2013-01-01

    The ease of access to virtual screening (VS) software in recent years has resulted in a large increase in literature reports. Over 300 publications in the last year report the use of virtual screening techniques to identify new chemical matter or present the development of new virtual screening techniques. The increased use is accompanied by a corresponding increase in misuse and misinterpretation of virtual screening results. This review aims to identify many of the common difficulties associated with virtual screening and allow researchers to better assess the reliability of their virtual screening effort.

  8. Towards control of dexterous hand manipulations using a silicon Pattern Generator.

    PubMed

    Russell, Alexander; Tenore, Francesco; Singhal, Girish; Thakor, Nitish; Etienne-Cummings, Ralph

    2008-01-01

    This work demonstrates how an in silico Pattern Generator (PG) can be used as a low power control system for rhythmic hand movements in an upper-limb prosthesis. Neural spike patterns, which encode rotation of a cylindrical object, were implemented in a custom Very Large Scale Integration chip. PG control was tested by using the decoded control signals to actuate the fingers of a virtual prosthetic arm. This system provides a framework for prototyping and controlling dexterous hand manipulation tasks in a compact and efficient solution.

  9. Large earthquake rates from geologic, geodetic, and seismological perspectives

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2017-12-01

    Earthquake rate and recurrence information comes primarily from geology, geodesy, and seismology. Geology gives the longest temporal perspective, but it reveals only surface deformation, relatable to earthquakes only with many assumptions. Geodesy is also limited to surface observations, but it detects evidence of the processes leading to earthquakes, again subject to important assumptions. Seismology reveals actual earthquakes, but its history is too short to capture important properties of very large ones. Unfortunately, the ranges of these observation types barely overlap, so that integrating them into a consistent picture adequate to infer future prospects requires a great deal of trust. Perhaps the most important boundary is the temporal one at the beginning of the instrumental seismic era, about a century ago. We have virtually no seismological or geodetic information on large earthquakes before then, and little geological information after. Virtually all modern forecasts of large earthquakes assume some form of equivalence between tectonic and seismic moment rates as functions of location, time, and magnitude threshold. That assumption links geology, geodesy, and seismology, but it invokes a host of other assumptions and incurs very significant uncertainties. Questions include temporal behavior of seismic and tectonic moment rates; shape of the earthquake magnitude distribution; upper magnitude limit; scaling between rupture length, width, and displacement; depth dependence of stress coupling; value of crustal rigidity; and relation between faults at depth and their surface fault traces, to name just a few. In this report I estimate the quantitative implications of these assumptions for large earthquake rates. 
Global studies like the GEAR1 project suggest that surface deformation from geology and geodesy best shows the geography of very large, rare earthquakes in the long term, while seismological observations of small earthquakes best forecast moderate earthquakes up to about magnitude 7. Regional forecasts for a few decades, like those in UCERF3, could be improved by calibrating tectonic moment rate to past seismicity rates. Century-long forecasts must be speculative. Estimates of maximum magnitude and rate of giant earthquakes over geologic time scales require more than science.
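
    The moment-rate equivalence the abstract leans on can be illustrated numerically: under a truncated Gutenberg-Richter magnitude distribution, the seismic moment rate implied by a catalog follows from the Hanks-Kanamori scalar moment relation. The parameters below are invented for illustration, not values from GEAR1 or UCERF3 (and the moment constant is sometimes quoted as 9.1 rather than 9.05).

```python
import math

def seismic_moment_rate(a, b, m_min, m_max, dm=0.01):
    """Annual seismic moment rate (N*m/yr) implied by a truncated
    Gutenberg-Richter law log10 N(>=M) = a - b*M, summed in bins of width dm.
    Illustrative parameters only, not a published regional model."""
    total, m = 0.0, m_min
    while m < m_max:
        # incremental annual rate of events with magnitude in [m, m + dm)
        n = 10 ** (a - b * m) - 10 ** (a - b * (m + dm))
        # Hanks-Kanamori scalar moment at the bin centre (N*m)
        m0 = 10 ** (1.5 * (m + dm / 2) + 9.05)
        total += n * m0
        m += dm
    return total

# Raising the assumed upper magnitude limit from 7.5 to 8.0 at fixed a and b
# raises the implied moment rate substantially -- one reason the "upper
# magnitude limit" question in the abstract matters so much.
rate_75 = seismic_moment_rate(a=4.0, b=1.0, m_min=5.0, m_max=7.5)
rate_80 = seismic_moment_rate(a=4.0, b=1.0, m_min=5.0, m_max=8.0)
```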

  10. iRODS-Based Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Astrophysics Data System (ADS)

    Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, D.; Gill, R.; Sinno, S. S.; Shen, Y.; Carriere, L. E.; Brieger, L.; Moore, R.; Rajasekar, A.; Schroeder, W.; Wan, M.

    2011-12-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of specialized virtual climate data servers (vCDSs), repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service. A virtual climate data server is an OAIS-compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have developed prototype vCDSs to manage NetCDF, HDF, and GeoTIFF data products. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into these virtualized resources, multiple vCDSs can use iRODS's federation and realized object capabilities to create an integrated ecosystem of data servers that can scale and adapt to changing requirements. This approach enables platform- or software-as-a-service deployment of the vCDSs and allows the NCCS to offer virtualization-as-a-service, a capacity to respond in an agile way to new customer requests for data services, and a path for migrating existing services into the cloud. We have registered MODIS Atmosphere data products in a vCDS that contains 54 million registered files, 630 TB of data, and over 300 million metadata values. We are now assembling IPCC AR5 data into a production vCDS that will provide the platform upon which NCCS's Earth System Grid (ESG) node publishes to the extended science community. In this talk, we describe our approach, experiences, lessons learned, and plans for the future.

  11. Towards an integrated strategy for monitoring wetland inundation with virtual constellations of optical and radar satellites

    NASA Astrophysics Data System (ADS)

    DeVries, B.; Huang, W.; Huang, C.; Jones, J. W.; Lang, M. W.; Creed, I. F.; Carroll, M.

    2017-12-01

    The function of wetlandscapes in hydrological and biogeochemical cycles is largely governed by surface inundation, with small wetlands that experience periodic inundation playing a disproportionately large role in these processes. However, the spatial distribution and temporal dynamics of inundation in these wetland systems are still poorly understood, resulting in large uncertainties in global water, carbon and greenhouse gas budgets. Satellite imagery provides synoptic and repeat views of the Earth's surface and presents opportunities to fill this knowledge gap. Despite the proliferation of Earth Observation satellite missions in the past decade, no single satellite sensor can simultaneously provide the spatial and temporal detail needed to adequately characterize inundation in small, dynamic wetland systems. Surface water data products must therefore integrate observations from multiple satellite sensors in order to address this objective, requiring the development of improved and coordinated algorithms to generate consistent estimates of surface inundation. We present a suite of algorithms designed to detect surface inundation in wetlands using data from a virtual constellation of optical and radar sensors comprising the Landsat and Sentinel missions (DeVries et al., 2017). Both optical and radar algorithms were able to detect inundation in wetlands without the need for external training data, allowing for high-efficiency monitoring of wetland inundation at large spatial and temporal scales. Applying these algorithms across a gradient of wetlands in North America, preliminary findings suggest that while these fully automated algorithms can detect wetland inundation at higher spatial and temporal resolutions than currently available surface water data products, limitations specific to the satellite sensors and their acquisition strategies are responsible for uncertainties in inundation estimates. 
Further research is needed to investigate strategies for integrating optical and radar data from virtual constellations, with a focus on reducing uncertainties, maximizing spatial and temporal detail, and establishing consistent records of wetland inundation over time. The findings and conclusions in this article do not necessarily represent the views of the U.S. Government.
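
    Optical surface-water detection of the kind discussed above commonly relies on spectral indices such as McFeeters' NDWI = (green - NIR) / (green + NIR). The sketch below illustrates that generic approach only; it is not the algorithm suite of DeVries et al., and the reflectance values are made up.

```python
import numpy as np

# Generic optical inundation detection via the Normalized Difference Water
# Index (NDWI). Water strongly absorbs near-infrared, so water pixels tend
# toward positive NDWI. Toy 3x3 reflectance grids, invented for illustration.
green = np.array([[0.08, 0.10, 0.30],
                  [0.07, 0.28, 0.32],
                  [0.06, 0.09, 0.29]])
nir   = np.array([[0.30, 0.28, 0.05],
                  [0.31, 0.06, 0.04],
                  [0.29, 0.27, 0.06]])

ndwi = (green - nir) / (green + nir)
inundated = ndwi > 0.0                  # simple global threshold
fraction_inundated = inundated.mean()   # inundated fraction of the scene
```

    Radar detection typically thresholds backscatter instead (open water is smooth and dark in SAR imagery), which is why fusing the two sensor types in a virtual constellation improves temporal coverage without sacrificing the ability to see through clouds.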

  12. VizieR Online Data Catalog: Horizon MareNostrum cosmological run (Gay+, 2010)

    NASA Astrophysics Data System (ADS)

    Gay, C.; Pichon, C.; Le Borgne, D.; Teyssier, R.; Sousbie, T.; Devriendt, J.

    2010-11-01

    The correlation between the large-scale distribution of galaxies and their spectroscopic properties at z=1.5 is investigated using the Horizon MareNostrum cosmological run. We have extracted a large sample of ~10^5 galaxies from this large hydrodynamical simulation featuring standard galaxy formation physics. Spectral synthesis is applied to each galaxy's single stellar populations to generate spectra and colours for all galaxies. We use the skeleton as a tracer of the cosmic web and study how our galaxy catalogue depends on the distance to the skeleton. We show that galaxies closer to the skeleton tend to be redder, but that the effect is mostly due to the proximity of large haloes at the nodes of the skeleton, rather than the filaments themselves. The virtual catalogues (spectroscopic properties of the MareNostrum galaxies at various redshifts) are available online at http://www.iap.fr/users/pichon/MareNostrum/catalogues. (7 data files).

  13. The Virtual Maternity Clinic: a teaching and learning innovation for midwifery education.

    PubMed

    Phillips, Diane; Duke, Maxine; Nagle, Cate; Macfarlane, Susie; Karantzas, Gery; Patterson, Denise

    2013-10-01

    There are challenges for midwifery students in developing skill and competency due to limited placements in antenatal clinics. The Virtual Maternity Clinic, an online resource, was developed to support student learning in professional midwifery practice. The study aims were to identify students' perceptions of the Virtual Maternity Clinic, to examine its impact on students' experience of its use and access, and to gauge student satisfaction with the resource. Pre- and post-evaluations of the online learning resource were conducted, with data obtained from questionnaires using open-ended and dichotomous responses and rating scales. The pre-Virtual Maternity Clinic evaluation used a qualitative design and the post-Virtual Maternity Clinic evaluation applied both qualitative and quantitative approaches. The setting was three campuses of Deakin University, located in Victoria, Australia. Midwifery students enrolled in the Bachelor of Nursing/Bachelor of Midwifery and Graduate Diploma of Midwifery were recruited across the three campuses (n=140). Thematic analysis of the pre-Virtual Maternity Clinic questionnaires (return rate n=119) related to students' expectations of the resource. The data from the post-Virtual Maternity Clinic evaluation (return rate n=42), including open-ended responses, were thematically analysed; dichotomous data were examined as frequencies and percentages of agreement and disagreement; and 5-point rating scales were analysed using Pearson's correlations (α=.05, two-tailed). The pre-Virtual Maternity Clinic results showed that students who previously had placements in antenatal clinics were optimistic about the online learning resource. The post-Virtual Maternity Clinic results indicated that students were satisfied with the Virtual Maternity Clinic as a learning resource despite some technological issues. 
The Virtual Maternity Clinic provides benefits for students in repeated observation of the practice of the midwife to support their professional learning and practice development. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    PubMed

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on structural and physicochemical properties derived from their structures. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
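
    A minimal illustration of the ligand-based idea this review surveys: rank unknown library compounds by fingerprint similarity to known actives. The fingerprints, compound names, and the simple max-similarity scoring below are invented stand-ins for the far richer descriptors and trained models (e.g. SVMs, decision trees) that the reviewed methods use.

```python
# Ligand-based screening sketch: rank library compounds by Tanimoto
# similarity of binary fingerprints (sets of "on" bit indices) to known
# actives. All fingerprints and compound names here are invented; real
# workflows derive fingerprints from structures and use trained models.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as bit-index sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def screen(actives, library, top_n=2):
    """Rank library compounds by their best similarity to any known active."""
    scored = sorted(((max(tanimoto(fp, a) for a in actives), name)
                     for name, fp in library.items()), reverse=True)
    return [name for _, name in scored[:top_n]]

actives = [{1, 2, 3, 4}, {2, 3, 5}]
library = {
    "cpd_A": {1, 2, 3},     # overlaps the first active strongly
    "cpd_B": {7, 8, 9},     # unrelated bit pattern
    "cpd_C": {2, 3, 5, 6},  # overlaps the second active strongly
}
print(screen(actives, library))  # the two fingerprint-similar candidates
```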

  15. Virtual Reality Exposure Training for Musicians: Its Effect on Performance Anxiety and Quality.

    PubMed

    Bissonnette, Josiane; Dubé, Francis; Provencher, Martin D; Moreno Sala, Maria T

    2015-09-01

    Music performance anxiety affects numerous musicians, with many of them reporting impairment of performance due to this problem. This exploratory study investigated the effects of virtual reality exposure training on students with music performance anxiety. Seventeen music students were randomly assigned to a control group (n=8) or a virtual training group (n=9). Participants were asked to play a musical piece from memory in two separate recitals within a 3-week interval. Anxiety was measured with the Personal Report of Confidence as a Performer Scale and the S-Anxiety scale from the State-Trait Anxiety Inventory (STAI-Y). Between pre- and post-tests, the virtual training group took part in virtual reality exposure training consisting of six one-hour sessions of virtual exposure. The results indicate a significant decrease in performance anxiety for musicians in the treatment group for those with a high level of state anxiety, for those with a high level of trait anxiety, for women, and for musicians with high immersive tendencies. Finally, between the pre- and post-tests, we observed a significant increase in performance quality for the experimental group, but not for the control group.

  16. Virtually distortion-free imaging system for large field, high resolution lithography

    DOEpatents

    Hawryluk, A.M.; Ceglio, N.M.

    1993-01-05

    Virtually distortion-free, large-field, high-resolution imaging is performed using an imaging system that contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position.

  17. Virtually distortion-free imaging system for large field, high resolution lithography

    DOEpatents

    Hawryluk, Andrew M.; Ceglio, Natale M.

    1993-01-01

    Virtually distortion-free, large-field, high-resolution imaging is performed using an imaging system that contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position.

  18. Utility-Scale Solar 2014. An Empirical Analysis of Project Cost, Performance, and Pricing Trends in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolinger, Mark; Seel, Joachim

    2015-09-01

    Other than the nine Solar Energy Generation Systems (“SEGS”) parabolic trough projects built in the 1980s, virtually no large-scale or “utility-scale” solar projects – defined here to include any ground-mounted photovoltaic (“PV”), concentrating photovoltaic (“CPV”), or concentrating solar thermal power (“CSP”) project larger than 5 MW AC – existed in the United States prior to 2007. By 2012 – just five years later – utility-scale had become the largest sector of the overall PV market in the United States, a distinction that was repeated in both 2013 and 2014 and that is expected to continue for at least the next few years. Over this same short period, CSP also experienced a bit of a renaissance in the United States, with a number of large new parabolic trough and power tower systems – some including thermal storage – achieving commercial operation. With this critical mass of new utility-scale projects now online and in some cases having operated for a number of years (generating not only electricity, but also empirical data that can be mined), the rapidly growing utility-scale sector is ripe for analysis. This report, the third edition in an ongoing annual series, meets this need through in-depth, annually updated, data-driven analysis of not just installed project costs or prices – i.e., the traditional realm of solar economics analyses – but also operating costs, capacity factors, and power purchase agreement (“PPA”) prices from a large sample of utility-scale solar projects in the United States. Given its current dominance in the market, utility-scale PV also dominates much of this report, though data from CPV and CSP projects are presented where appropriate.

  19. Circumnuclear Structures in Megamaser Host Galaxies

    NASA Astrophysics Data System (ADS)

    Pjanka, Patryk; Greene, Jenny E.; Seth, Anil C.; Braatz, James A.; Henkel, Christian; Lo, Fred K. Y.; Läsker, Ronald

    2017-08-01

    Using the Hubble Space Telescope, we identify circumnuclear (100-500 pc scale) structures in nine new H2O megamaser host galaxies to understand the flow of matter from kpc-scale galactic structures down to the supermassive black holes (SMBHs) at galactic centers. We double the sample analyzed in a similar way by Greene et al. and consider the properties of the combined sample of 18 sources. We find that disk-like structure is virtually ubiquitous when we can resolve <200 pc scales, in support of the notion that non-axisymmetries on these scales are a necessary condition for SMBH fueling. We perform an analysis of the orientation of our identified nuclear regions and compare it with the orientation of megamaser disks and the kpc-scale disks of the hosts. We find marginal evidence that the disk-like nuclear structures show increasing misalignment from the kpc-scale host galaxy disk as the scale of the structure decreases. In turn, we find that the orientation of both the ˜100 pc scale nuclear structures and their host galaxy large-scale disks is consistent with random with respect to the orientation of their respective megamaser disks.
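
    The orientation comparison described above reduces, for each pair of structures, to the misalignment between two position angles, which are defined only modulo 180°. A small sketch of that folding (the angles here are made up, not the paper's measurements):

```python
# Misalignment between two disk position angles, in degrees. Position
# angles are defined modulo 180 degrees, so the misalignment is folded
# into the range [0, 90].

def misalignment(pa1, pa2):
    d = abs(pa1 - pa2) % 180.0
    return min(d, 180.0 - d)

print(misalignment(170.0, 10.0))   # 20.0 -- nearly aligned across the wrap
print(misalignment(30.0, 115.0))   # 85.0 -- close to maximal misalignment
```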

  20. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, databases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  1. Distributed rendering for multiview parallax displays

    NASA Astrophysics Data System (ADS)

    Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.

    2006-02-01

    3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
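
    One way to picture the view-distribution problem such a rendering cluster must solve is a simple round-robin assignment of perspective views to render nodes. The node names and the scheme below are illustrative only, not the Chromium-based distribution strategies the paper actually evaluates:

```python
# Round-robin assignment of the N perspective views of a multiview
# display to a cluster of render nodes (node names are hypothetical).

def assign_views(num_views, nodes):
    tasks = {node: [] for node in nodes}
    for view in range(num_views):
        tasks[nodes[view % len(nodes)]].append(view)
    return tasks

print(assign_views(8, ["pc0", "pc1", "pc2"]))
# {'pc0': [0, 3, 6], 'pc1': [1, 4, 7], 'pc2': [2, 5]}
```

    A real system would weight the assignment by per-view rendering cost and projector topology rather than distributing views uniformly.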

  2. Multi-pose system for geometric measurement of large-scale assembled rotational parts

    NASA Astrophysics Data System (ADS)

    Deng, Bowen; Wang, Zhaoba; Jin, Yong; Chen, Youxing

    2017-05-01

    To achieve virtual assembly of large-scale assembled rotational parts based on in-field geometric data, we develop a multi-pose rotative arm measurement system with a gantry and 2D laser sensor (RAMSGL) to measure and provide the geometry of these parts. We mount a 2D laser sensor onto the end of a six-jointed rotative arm to guarantee accuracy and efficiency, and combine the rotative arm with a gantry to measure pairs of assembled rotational parts. By establishing and using the D-H model of the system, the 2D laser data are turned into point clouds, from which the geometry is finally calculated. In addition, we design three experiments to evaluate the performance of the system. Experimental results show that the system's maximum length-measuring deviation using gauge blocks is 35 µm, its maximum length-measuring deviation using ball plates is 50 µm, its maximum single-point repeatability error is 25 µm, and its measurement scope covers radii from 0 to 500 mm.
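
    The D-H step mentioned above chains one homogeneous transform per joint and applies the result to each 2D laser sample. A sketch with made-up link parameters (a real system would use its calibrated D-H table, with one transform per joint of the arm plus the gantry pose):

```python
import math

# Lifting a point from a 2D laser profile into the base frame through a
# chain of Denavit-Hartenberg (D-H) link transforms. The link parameters
# below are illustrative, not the RAMSGL system's calibration.

def dh_matrix(theta, d, a, alpha):
    """Standard D-H link transform as a 4x4 row-major matrix."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def mat_mul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Chain two illustrative joints; a real arm contributes one transform per joint.
chain = mat_mul(dh_matrix(math.pi / 2, 0.1, 0.3, 0.0),
                dh_matrix(0.0, 0.0, 0.2, math.pi / 2))

# A sample (x, z) on the 2D laser profile, embedded as (x, 0, z, 1).
point_3d = mat_vec(chain, [0.05, 0.0, 0.02, 1.0])
print([round(c, 3) for c in point_3d])
```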

  3. Protein-Fragment Complementation Assays for Large-Scale Analysis, Functional Dissection, and Spatiotemporal Dynamic Studies of Protein-Protein Interactions in Living Cells.

    PubMed

    Michnick, Stephen W; Landry, Christian R; Levy, Emmanuel D; Diss, Guillaume; Ear, Po Hien; Kowarzyk, Jacqueline; Malleshaiah, Mohan K; Messier, Vincent; Tchekanda, Emmanuelle

    2016-11-01

    Protein-fragment complementation assays (PCAs) comprise a family of assays that can be used to study protein-protein interactions (PPIs), conformation changes, and protein complex dimensions. We developed PCAs to provide simple and direct methods for the study of PPIs in any living cell, subcellular compartments or membranes, multicellular organisms, or in vitro. Because they are complete assays, requiring no cell-specific components other than reporter fragments, they can be applied in any context. PCAs provide a general strategy for the detection of proteins expressed at endogenous levels within appropriate subcellular compartments and with normal posttranslational modifications, in virtually any cell type or organism under any conditions. Here we introduce a number of applications of PCAs in budding yeast, Saccharomyces cerevisiae. These applications represent the full range of PPI characteristics that might be studied, from simple detection on a large scale to visualization of spatiotemporal dynamics. © 2016 Cold Spring Harbor Laboratory Press.

  4. Enabling Large-Scale Design, Synthesis and Validation of Small Molecule Protein-Protein Antagonists

    PubMed Central

    Koes, David; Khoury, Kareem; Huang, Yijun; Wang, Wei; Bista, Michal; Popowicz, Grzegorz M.; Wolf, Siglinde; Holak, Tad A.; Dömling, Alexander; Camacho, Carlos J.

    2012-01-01

    Although there is no shortage of potential drug targets, there are only a handful of known low-molecular-weight inhibitors of protein-protein interactions (PPIs). One problem is that current efforts are dominated by low-yield high-throughput screening, whose rigid framework is not suitable for the diverse chemotypes present in PPIs. Here, we developed a novel pharmacophore-based interactive screening technology that builds on the role that anchor residues, or deeply buried hot spots, play in PPIs, and redesigns these entry points with anchor-biased virtual multicomponent reactions, delivering tens of millions of readily synthesizable novel compounds. Application of this approach to the MDM2/p53 cancer target led to high hit rates, resulting in a large and diverse set of confirmed inhibitors, and co-crystal structures validate the designed compounds. Our unique open-access technology promises to expand chemical space and the exploration of the human interactome by leveraging in-house small-scale assays and user-friendly chemistry to rationally design ligands for PPIs with known structure. PMID:22427896

  5. Virtual reality exposure therapy using a virtual Iraq: case report.

    PubMed

    Gerardi, Maryrose; Rothbaum, Barbara Olasov; Ressler, Kerry; Heekin, Mary; Rizzo, Albert

    2008-04-01

    Posttraumatic stress disorder (PTSD) has been estimated to affect up to 18% of returning Operation Iraqi Freedom (OIF) veterans. Soldiers need to maintain constant vigilance to deal with unpredictable threats, and an unprecedented number of soldiers are surviving serious wounds. These risk factors are significant for development of PTSD; therefore, early and efficient intervention options must be identified and presented in a form acceptable to military personnel. This case report presents the results of treatment utilizing virtual reality exposure (VRE) therapy (virtual Iraq) to treat an OIF veteran with PTSD. Following brief VRE treatment, the veteran demonstrated improvement in PTSD symptoms as indicated by clinically and statistically significant changes in scores on the Clinician Administered PTSD Scale (CAPS; Blake et al., 1990) and the PTSD Symptom Scale Self-Report (PSS-SR; Foa, Riggs, Dancu, & Rothbaum, 1993). These results indicate preliminary promise for this treatment.

  6. Virtual Reality Exposure Therapy Using a Virtual Iraq: Case Report

    PubMed Central

    Gerardi, Maryrose; Rothbaum, Barbara Olasov; Ressler, Kerry; Heekin, Mary; Rizzo, Albert

    2013-01-01

    Posttraumatic stress disorder (PTSD) has been estimated to affect up to 18% of returning Operation Iraqi Freedom (OIF) veterans. Soldiers need to maintain constant vigilance to deal with unpredictable threats, and an unprecedented number of soldiers are surviving serious wounds. These risk factors are significant for development of PTSD; therefore, early and efficient intervention options must be identified and presented in a form acceptable to military personnel. This case report presents the results of treatment utilizing virtual reality exposure (VRE) therapy (virtual Iraq) to treat an OIF veteran with PTSD. Following brief VRE treatment, the veteran demonstrated improvement in PTSD symptoms as indicated by clinically and statistically significant changes in scores on the Clinician Administered PTSD Scale (CAPS; Blake et al., 1990) and the PTSD Symptom Scale Self-Report (PSS-SR; Foa, Riggs, Dancu, & Rothbaum, 1993). These results indicate preliminary promise for this treatment. PMID:18404648

  7. Large-scale water projects in the developing world: Revisiting the past and looking to the future

    NASA Astrophysics Data System (ADS)

    Sivakumar, Bellie; Chen, Ji

    2014-05-01

    During the past half-century or so, the developing world has been witnessing a significant increase in freshwater demands due to a combination of factors, including population growth, increased food demand, improved living standards, and water quality degradation. Since there exists significant variability in rainfall and river flow in both space and time, large-scale storage and distribution of water has become a key means to meet these increasing demands. In this regard, large dams and water transfer schemes (including river-linking schemes and virtual water trades) have been playing a key role. While the benefits of such large-scale projects in supplying water for domestic, irrigation, industrial, hydropower, recreational, and other uses both in the countries of their development and in other countries are undeniable, concerns about their negative impacts, such as high initial costs and damages to our ecosystems (e.g. river environment and species) and socio-economic fabric (e.g. relocation and socio-economic changes of affected people), have also been increasing in recent years. These have led to serious debates on the role of large-scale water projects in the developing world and on their future, but the often one-sided nature of such debates has inevitably failed to yield fruitful outcomes thus far. The present study aims to offer a far more balanced perspective on this issue. First, it recognizes and emphasizes the need for still more large-scale water structures in the developing world in the future, due to the continuing increase in water demands, inefficiency in water use (especially in the agricultural sector), and the absence of equivalent and reliable alternatives. Next, it reviews a few important success and failure stories of large-scale water projects in the developing world (and in the developed world), in an effort to arrive at a balanced view on the future role of such projects.
Then, it discusses some major challenges in future water planning and management, with proper consideration to potential technological developments and new options. Finally, it highlights the urgent need for a broader framework that integrates the physical science-related aspects ("hard sciences") and the human science-related aspects ("soft sciences").

  8. Virtual water trade of agri-food products: Evidence from italian-chinese relations.

    PubMed

    Lamastra, Lucrezia; Miglietta, Pier Paolo; Toma, Pierluigi; De Leo, Federica; Massari, Stefania

    2017-12-01

    At the global scale, the majority of world water withdrawal is for the agricultural sector, with differences among countries depending on the relevance of the agri-food sector in the economy. Virtual water and water footprint can express the impact of each production process and good on water resources, with the objective of leading to a sustainable use of water at a global level. International trade is connected to virtual water flows: through commodity imports, water-poor countries can save their own water resources. The present paper focuses on the bilateral virtual water flows connected to the top ten agri-food products traded between Italy and China. Comparing these flows, the virtual water flow from Italy to China is larger than the flow in the opposite direction. Moreover, the composition of the flows differs: Italy imports significant amounts of grey water from China, depending on the different environmental strategies adopted by the two countries. This difference may also be related to the fact that the traded commodities are very different: 91% of the virtual water imported by Italy is connected to crop products, while 95% of the virtual water imported by China is related to animal products. Considering national and global water saving, it appears that Italy imports virtual water from China while China exerts pressure on its own water resources to supply exports to Italy. At the global scale this implies a global water loss of 129.29 million m³ because, in general, the agri-food products are traded from the area with lower water productivity to the area with higher water productivity. Copyright © 2017 Elsevier B.V. All rights reserved.
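
    The national-versus-global water-saving bookkeeping behind a figure like the 129.29 million m³ global loss can be sketched as follows. The trade volume and per-tonne water footprints below are invented for illustration; they are not the paper's Italy-China data.

```python
# National vs. global virtual water accounting for one traded commodity.
# Importing saves the importer its own water footprint per tonne; the
# exporter spends its footprint; the global balance is the difference.

def water_saving(trade_tonnes, wf_importer, wf_exporter):
    """Water footprints in m3/tonne; returns (national, global) savings in m3."""
    national = trade_tonnes * wf_importer             # water the importer avoids using
    global_net = trade_tonnes * (wf_importer - wf_exporter)
    return national, global_net

# Hypothetical: 1000 t traded; importer would need 1800 m3/t, exporter 1500 m3/t.
national, global_net = water_saving(1000, 1800, 1500)
print(national, global_net)  # 1800000 300000

# A negative global value means trade from a lower- to a higher-productivity
# region, i.e. a global water loss, as in the Italy-China case.
print(water_saving(1000, 1500, 1800)[1])  # -300000
```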

  9. A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.

    PubMed

    Shankaranarayanan, Avinas; Amaldas, Christine

    2010-11-01

    With the proliferation of quad- and multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, can deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the continual discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with running the same test dataset on a virtual grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team, transparently. This paper is a proof of concept of an experimental mini-grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings.
This paper is an extension of our previous research on improving the BLAST application framework using dynamic grids on virtualization platforms such as VirtualBox.

  10. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes for identifying virtual network functions. Using the data obtained in our research, we have established an algorithm for optimizing the placement of virtual network functions. Our approach uses a hybrid method of virtualization combining virtual machines and containers, which makes it possible to reduce the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.
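
    As a toy stand-in for the flow-identification model, the sketch below trains a single perceptron on two synthetic statistical flow features. All feature names, labels, and data points are invented, and the paper's neural network is substantially larger than one neuron:

```python
# Toy flow classifier: one perceptron over two normalized flow features
# (e.g. mean packet size, packet rate). Synthetic, linearly separable data.

def train(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # perceptron update rule
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Label 1 = hypothetical "VNF control traffic" (small, frequent packets).
flows = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2)]
labels = [1, 1, 0, 0]
w, b = train(flows, labels)
print([classify(w, b, f) for f in flows])  # [1, 1, 0, 0]
```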

  11. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike traditional multitouch systems, this system can not only realize multitouch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing multitouch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system for the exhibition of historical relics and other precious goods.

  12. Automatic Tools for Enhancing the Collaborative Experience in Large Projects

    NASA Astrophysics Data System (ADS)

    Bourilkov, D.; Rodriquez, J. L.

    2014-06-01

    With the explosion of big data in many fields, the efficient management of knowledge about all aspects of data analysis gains in importance. A key feature of collaboration in large-scale projects is keeping a log of what is being done and how - for private use, for reuse, and for sharing selected parts with collaborators and peers, who are often distributed geographically on an increasingly global scale. Better still, the log can be created automatically, on the fly, while the scientist or software developer works in a habitual way, without the need for extra effort. This saves time and enables a team to do more with the same resources. The CODESH - COllaborative DEvelopment SHell - and CAVES - Collaborative Analysis Versioning Environment System - projects address this problem in a novel way. They build on the concepts of virtual states and transitions to enhance the collaborative experience by providing automatic persistent virtual logbooks. CAVES is designed for sessions of distributed data analysis using the popular ROOT framework, while CODESH generalizes the approach to any type of work on the command line in typical UNIX shells like bash or tcsh. Repositories of sessions can be configured dynamically to record and make available the knowledge accumulated in the course of a scientific or software endeavor. Access can be controlled to define logbooks of private sessions or sessions shared within or between collaborating groups. A typical use case is building scalable working systems for the analysis of petascale volumes of data, as encountered in the LHC experiments. Our approach is general enough to find applications in many fields.
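
    The automatic-logbook idea reduces to a tiny sketch: run a command, capture it together with its output as a session entry, and persist the session for reuse or sharing. The class and file names here are hypothetical; the real CODESH/CAVES tools add versioned repositories, virtual state snapshots, and access control on top of this.

```python
import json
import os
import subprocess
import tempfile

# Minimal "virtual logbook": each shell command is recorded with its
# output and exit status, and the whole session can be saved as JSON.

class Logbook:
    def __init__(self):
        self.entries = []

    def run(self, cmd):
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        self.entries.append({"cmd": cmd, "stdout": out.stdout,
                             "returncode": out.returncode})
        return out.stdout

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.entries, f, indent=2)

log = Logbook()
log.run("echo analysis step 1")
path = os.path.join(tempfile.mkdtemp(), "session.json")
log.save(path)
print(len(log.entries))  # 1
```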

  13. An activity index for geomagnetic paleosecular variation, excursions, and reversals

    NASA Astrophysics Data System (ADS)

    Panovska, S.; Constable, C. G.

    2017-04-01

    Magnetic indices provide quantitative measures of space weather phenomena that are widely used by researchers in geomagnetism. We introduce an index focused on the internally generated field that can be used to evaluate long-term variations or climatology of modern and paleomagnetic secular variation, including geomagnetic excursions, polarity reversals, and changes in reversal rate. The paleosecular variation index, Pi, represents instantaneous or average deviation from a geocentric axial dipole field using normalized ratios of virtual geomagnetic pole colatitude and virtual dipole moment. The activity level of the index, σPi, provides a measure of field stability through the temporal standard deviation of Pi. Pi can be calculated on a global grid from geomagnetic field models to reveal large-scale geographic variations in field structure. It can be determined for individual time series, or averaged at local, regional, and global scales to detect long-term changes in geomagnetic activity, identify excursions, and transitional field behavior. For recent field models, Pi ranges from less than 0.05 to 0.30. Conventional definitions for geomagnetic excursions are characterized by Pi exceeding 0.5. Strong field intensities are associated with low Pi unless they are accompanied by large deviations from axial dipole field directions. σPi provides a measure of geomagnetic stability that is modulated by the level of PSV or frequency of excursional activity and reversal rate. We demonstrate uses of Pi for paleomagnetic observations and field models and show how it could be used to assess whether numerical simulations of the geodynamo exhibit Earth-like properties.
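
    A sketch of how such an index can be assembled from the two normalized quantities the abstract names. The equal weighting, the 90° colatitude normalization, and the reference dipole moment `vdm0` are assumptions of this sketch, not necessarily the normalization the authors adopt:

```python
# Pi-like paleosecular variation index: equal-weight average of the
# normalized VGP colatitude and the normalized VDM deviation. The
# normalization constants (theta_max, vdm0) are assumed, illustrative values.

def psv_index(vgp_colat_deg, vdm, vdm0=8.0e22, theta_max=90.0):
    """vgp_colat_deg: VGP angular deviation from the geographic pole, degrees;
    vdm: virtual dipole moment in A m^2. Returns a dimensionless activity value."""
    return 0.5 * (vgp_colat_deg / theta_max + abs(vdm - vdm0) / vdm0)

# Stable, near-axial-dipole field: low index value.
print(round(psv_index(5.0, 7.6e22), 3))
# Excursional field (large VGP swing, weak moment): index above 0.5.
print(round(psv_index(60.0, 3.0e22), 3))
```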

  14. Evidencing `Tight Bound States' in the Hydrogen Atom:. Empirical Manipulation of Large-Scale XD in Violation of QED

    NASA Astrophysics Data System (ADS)

    Amoroso, Richard L.; Vigier, Jean-Pierre

    2013-09-01

    In this work we extend Vigier's recent theory of `tight bound state' (TBS) physics and propose empirical protocols to test not only for their putative existence, but also for the claim that their existence, if demonstrated, provides the first empirical evidence of string theory, because it occurs in the context of large-scale extra dimensionality (LSXD) cast in a unique M-theoretic vacuum corresponding to the new Holographic Anthropic Multiverse (HAM) cosmological paradigm. Physicists generally consider spacetime as a stochastic foam containing a zero-point field (ZPF) from which virtual particles, restricted by the quantum uncertainty principle (to the Planck time), wink in and out of existence. According to the extended de Broglie-Bohm-Vigier causal stochastic interpretation of quantum theory, spacetime and the matter embedded within it are created, annihilated, and recreated as a virtual locus of reality with a continuous quantum evolution (de Broglie matter waves) governed by a pilot wave - a `super quantum potential' - extended in HAM cosmology to be synonymous with a `force of coherence' inherent in the Unified Field, UF. We consider this backcloth to be a covariant polarized vacuum of the (generally ignored by contemporary physicists) Dirac type. We discuss open questions of the physics of point particles (fermionic nilpotent singularities). We propose a new set of experiments to test for TBS in a Dirac covariant polarized vacuum LSXD hyperspace, suggestive of a recently tested special case of the Lorentz transformation put forth by Kowalski and Vigier. These protocols reach far beyond the recent battery of atomic spectral violations of QED performed through NIST.

  15. Robots integrated with virtual reality simulations for customized motor training in a person with upper extremity hemiparesis: a case report

    PubMed Central

    Fluet, Gerard G.; Merians, Alma S.; Qiu, Qinyin; Lafond, Ian; Saleh, Soha; Ruano, Viviana; Delmonico, Andrea R.; Adamovich, Sergei V.

    2014-01-01

    Background and Purpose A majority of studies examining repetitive task practice facilitated by robots for the treatment of upper extremity paresis utilize standardized protocols applied to large groups. Others utilize interventions tailored to patients but do not describe the clinical decision-making process used to develop and modify interventions. This case report describes a robot-based intervention customized to match the goals and clinical presentation of a man with upper extremity hemiparesis secondary to stroke. Methods PM is an 85-year-old man with left hemiparesis secondary to an intracerebral hemorrhage five years prior to examination. Outcomes were measured before and after a one-month period of home therapy and after a one-month robotic intervention. The intervention was designed to address specific impairments identified during his PT examination. When necessary, activities were modified based on the patient's response to his first week of treatment. Outcomes PM trained over twelve sessions using six virtually simulated activities. Modifications to the original configurations of these activities resulted in performance improvements in five of them. PM demonstrated a 35-second improvement in Jebsen Test of Hand Function time and a 44-second improvement in Wolf Motor Function Test time subsequent to the robotic training intervention. Reaching kinematics, 24-hour activity measurement, and the Hand and Activities of Daily Living scales of the Stroke Impact Scale all improved as well. Discussion A customized program of robotically facilitated rehabilitation resulted in large short-term improvements in several measurements of upper extremity function in a patient with chronic hemiparesis. PMID:22592063

  16. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator.

    PubMed

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane; Crochet, Patrice

    2018-01-01

    Total laparoscopic hysterectomy (LH) requires an advanced level of operative skill and training. The aim of this study was to develop an objective scale specific to the assessment of technical skills for LH (H-OSATS) and to demonstrate its feasibility of use and validity in a virtual reality setting. The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on the relevant steps that should be included in the H-OSATS scale for the assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators with different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. A total of 76 discrete steps were identified by the hierarchical task analysis. Fourteen experts completed the two rounds of the Delphi questionnaire; 64 steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups for total H-OSATS scores (133, 155.9 and 178.25 respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p < 0.001) and test-retest reliability (ICC = 0.877; p < 0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p < 0.001). The H-OSATS scale displayed evidence of validity for the assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room.
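    The reliability figures above are intraclass correlation coefficients. As a hedged illustration of how such a coefficient can be computed (the record does not state which ICC form the authors used; ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form, is a common choice for inter-rater reliability), a minimal sketch:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random-effects, absolute-agreement,
    single-rater intraclass correlation. x is subjects x raters."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Mean squares from the two-way ANOVA decomposition
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    sst = ((x - grand) ** 2).sum()
    mse = (sst - (n - 1) * msr - (k - 1) * msc) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters yields an ICC of 1.0
print(round(icc_2_1([[1, 1], [2, 2], [3, 3]]), 3))  # 1.0
```

    The data and the choice of ICC form are illustrative assumptions, not the study's actual analysis.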

  17. Structural Controllability and Controlling Centrality of Temporal Networks

    PubMed Central

    Pan, Yujian; Li, Xiang

    2014-01-01

    Temporal networks are networks whose nodes and interactions may appear and disappear at various time scales. Given the evidence of the ubiquity of temporal networks in our economy, nature and society, it is urgent and significant to focus on their structural controllability and its corresponding characteristics, which until now has remained an untouched topic. We develop graphic tools to study structural controllability and its characteristics, identifying the intrinsic mechanism behind the ability of individuals to control a dynamic and large-scale temporal network. Classifying the temporal trees of a temporal network into different types, we give analytical bounds (both upper and lower) on the controlling centrality, which are verified by numerical simulations of both artificial and empirical temporal networks. We find that the positive relationship between aggregated degree and controlling centrality, as well as the scale-free distribution of nodes' controlling centrality, are virtually independent of the time scale and the type of dataset, indicating the inherent robustness and heterogeneity of the controlling centrality of nodes within temporal networks. PMID:24747676

  18. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and improve simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information to tune the parameters of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their non-instrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
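    To make the rollback mechanism concrete, here is a hedged toy sketch (not ROSS code; the class name and the simplistic state model are illustrative) of how an optimistic logical process detects a straggler event and counts the resulting rollback, which is the kind of per-process metric the instrumentation described above collects:

```python
class TimeWarpLP:
    """Toy optimistic logical process: state is saved before each event,
    and a straggler (an event older than local virtual time) triggers a
    rollback that restores the saved state. Re-execution of undone
    events and antimessages are omitted for brevity."""

    def __init__(self):
        self.lvt = 0.0        # local virtual time
        self.state = 0
        self.history = []     # (event timestamp, state before event)
        self.rollbacks = 0    # metric a profiler would collect

    def process(self, ts, delta):
        if ts < self.lvt:     # out-of-timestamp-order event: roll back
            self.rollbacks += 1
            # undo every event processed at or after the straggler's time
            while self.history and self.history[-1][0] >= ts:
                _, self.state = self.history.pop()
        self.history.append((ts, self.state))
        self.state += delta
        self.lvt = ts

lp = TimeWarpLP()
for ts, delta in [(1.0, 5), (3.0, 2), (2.0, 1)]:  # 2.0 is a straggler
    lp.process(ts, delta)
print(lp.rollbacks)  # 1
```

    Tuning an optimistic simulation amounts to keeping this rollback counter low while still letting processes run ahead of each other.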

  19. Virtual Cultural Landscape Laboratory Based on Internet GIS Technology

    NASA Astrophysics Data System (ADS)

    Bill, R.

    2012-07-01

    In recent years, the transfer of old documents (books, paintings, maps, etc.) from analogue to digital form has gained enormous importance. Numerous efforts are concentrated on the digitisation of library collections, and commercial companies such as Microsoft and Google also convert large analogue stocks, such as books and paintings, into digital form. Data in digital form can be made accessible to a large user community much more easily, especially to the interested scientific community. The aim of the described research project is to set up a virtual research environment for interdisciplinary research focusing on the landscape of historical Mecklenburg in the north-east of Germany. Georeferenced old maps from 1786 and 1890 covering all of Mecklenburg are to be combined with current geo-information, satellite and aerial imagery to support spatio-temporal research at different scales in space (regional 1:200,000 to local 1:25,000) and time (nearly 250 years in three time steps, the last 30 years also in three time slices). The Virtual Laboratory for Cultural Landscape Research (VKLandLab) is designed and developed by the Chair of Geodesy and Geoinformatics, hosted at the Computing Centre (ITMZ) and linked to the Digital Library (UB) at Rostock University. VKLandLab includes new developments such as wikis, blogs and data tagging, as well as proven components already integrated in various data-related infrastructures, such as Internet GIS, data repositories and authentication structures. The focus is on building a data-related infrastructure and a work platform that supports students as well as researchers from different disciplines in their research in space and time.

  20. A Novel Treatment of Fear of Flying Using a Large Virtual Reality System.

    PubMed

    Czerniak, Efrat; Caspi, Asaf; Litvin, Michal; Amiaz, Revital; Bahat, Yotam; Baransi, Hani; Sharon, Hanania; Noy, Shlomo; Plotnik, Meir

    2016-04-01

    Fear of flying (FoF), a common phobia in the developed world, is usually treated with cognitive behavioral therapy, most efficiently when combined with exposure methods, e.g., virtual reality exposure therapy (VRET). We evaluated FoF treatment using VRET in a large motion-based VR system. The treated subjects were seated on a moving platform. The virtual scenery included the interior of an aircraft and a window view to the outside world accompanied by platform movements simulating, e.g., takeoff, landing, and air turbulence. Relevant auditory stimuli were also incorporated. Three male patients with FoF underwent a clinical interview followed by three VRETs in the presence and with the guidance of a therapist. Scores on the Flight Anxiety Situation (FAS) and Flight Anxiety Modality (FAM) questionnaires were obtained on the first and fourth visits. Anxiety levels were assessed using the subjective units of distress (SUDs) scale during the exposure. All three subjects expressed satisfaction regarding the procedure and did not skip or avoid any of its stages. Consistent improvement was seen in the SUDs throughout the VRET session and across sessions, while patients' scores on the FAS and FAM showed inconsistent trends. Two patients participated in actual flights in the months following the treatment, bringing 12 and 16 yr of avoidance to an end. This VR-based treatment includes critical elements for exposure of flying experience beyond visual and auditory stimuli. The current case reports suggest VRET sessions may have a meaningful impact on anxiety levels, yet additional research seems warranted.

  1. RADER: a RApid DEcoy Retriever to facilitate decoy based assessment of virtual screening.

    PubMed

    Wang, Ling; Pang, Xiaoqian; Li, Yecheng; Zhang, Ziying; Tan, Wen

    2017-04-15

    Evaluation of the capacity to separate actives from challenging decoys is a crucial metric of performance for molecular docking or a virtual screening workflow. The Directory of Useful Decoys (DUD) and its enhanced version (DUD-E) provide a benchmark for molecular docking, although they contain only a limited set of decoys for a limited number of targets. DecoyFinder was released to compensate for the limitations of DUD and DUD-E in building target-specific decoy sets. However, desirable query-template design, generation of multiple decoy sets of similar quality, and computational speed remain bottlenecks, particularly when the numbers of queried actives and retrieved decoys increase to hundreds or more. Here, we developed a program suite called RApid DEcoy Retriever (RADER) to facilitate the decoy-based assessment of virtual screening. The program adopts a novel database-management regime that supports rapid and large-scale retrieval of decoys, enables high portability of databases, and provides multifaceted options for designing initial query templates from a large number of active ligands and for generating subtle decoy sets. RADER provides two operational modes: a command-line tool and a web server. Validation of the performance and efficiency of RADER was also conducted and is described. The RADER web server and a local version are freely available at http://rcidm.org/rader/ . lingwang@scut.edu.cn or went@scut.edu.cn . Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
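    The record does not detail RADER's matching algorithm, but DUD-E-style decoy retrieval generally selects, for each active, candidate molecules whose physicochemical descriptors fall within tolerance windows around the active's values. A hedged sketch of that general idea (the descriptor names, tolerances, data and ranking rule are illustrative assumptions, not RADER's actual scheme):

```python
def find_decoys(active, candidates, tolerances, k=50):
    """Keep candidates whose descriptors lie within the tolerance
    window around the active, then rank by normalized distance."""
    matched = [c for c in candidates
               if all(abs(c[p] - active[p]) <= t
                      for p, t in tolerances.items())]
    matched.sort(key=lambda c: sum(abs(c[p] - active[p]) / t
                                   for p, t in tolerances.items()))
    return matched[:k]

active = {"mw": 320.0, "logp": 2.5}
candidates = [
    {"name": "c1", "mw": 330.0, "logp": 2.1},  # within both windows
    {"name": "c2", "mw": 450.0, "logp": 2.4},  # molecular weight too far
    {"name": "c3", "mw": 322.0, "logp": 2.6},  # closest match
]
decoys = find_decoys(active, candidates, {"mw": 25.0, "logp": 1.0})
print([c["name"] for c in decoys])  # ['c3', 'c1']
```

    A real tool would match many more descriptors (rotatable bonds, charge, hydrogen-bond donors/acceptors) over databases of millions of compounds, which is where the database-management regime mentioned above matters.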

  2. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    NASA Astrophysics Data System (ADS)

    Jiang, M.; de Vries, W.; Pertica, A.; Olivier, S.

    2011-09-01

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust vectors, thrust magnitudes and times of burn. At any given instant, the distribution of the "point cloud" of the virtual particles defines the RV. For short orbital time scales, the temporal evolution of the point cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight into and understanding of this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of the probability density distribution, including volume slicing, convex hulls and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we present some results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
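    As a hedged illustration of the sampling idea (a toy force-free linear propagation, not the orbital dynamics the paper uses; all parameter values are invented), the point cloud can be generated by randomizing thrust direction, thrust magnitude and elapsed time:

```python
import numpy as np

def sample_reachable_points(r0, v0, dv_max, t_max, n=10_000, seed=0):
    """Monte Carlo sketch of a reachable volume: each virtual particle
    gets a random impulsive thrust (direction and magnitude) and a
    random elapsed time, then drifts ballistically (no gravity)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # unit thrust directions
    dv = rng.uniform(0.0, dv_max, size=(n, 1)) * u  # random delta-v impulse
    t = rng.uniform(0.0, t_max, size=(n, 1))        # random time since burn
    return r0 + (v0 + dv) * t                       # the point cloud

cloud = sample_reachable_points(np.zeros(3), np.array([7.5, 0.0, 0.0]),
                                dv_max=0.1, t_max=60.0)
print(cloud.shape)  # (10000, 3)
```

    Density estimation, convex hulls or isosurfaces, as discussed in the abstract, would then be applied to such a cloud; the real system propagates each sample along orbital trajectories instead of straight lines.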

  3. Virtualization for the LHCb Online system

    NASA Astrophysics Data System (ADS)

    Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko

    2011-12-01

    Virtualization has long been advertised by the IT industry as a way to cut costs, optimise resource usage and manage complexity in large data centers. The great number and huge heterogeneity of hardware, both industrial and custom-made, has up to now led to reluctance to adopt virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and manageability of the whole system. We evaluated the available hypervisors / virtualization solutions and found that the Microsoft HV technology provides a high level of maturity and flexibility for our purposes. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure, with special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing; the costs and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, to the needs of virtual machines, which allows us to migrate services on physical machines smoothly to the virtualized infrastructure. The migration procedures are also described. In the final part of the document we describe our recent R&D activities aimed at replacing the SAN backend for virtualization with a cheaper iSCSI solution; this will allow all servers and related services to be moved to the virtualized infrastructure, except those doing hardware control via non-commodity PCI plug-in cards.

  4. Virtual Remediation Versus Methylphenidate to Improve Distractibility in Children With ADHD: A Controlled Randomized Clinical Trial Study.

    PubMed

    Bioulac, Stéphanie; Micoulaud-Franchi, Jean-Arthur; Maire, Jenna; Bouvard, Manuel P; Rizzo, Albert A; Sagaspe, Patricia; Philip, Pierre

    2018-03-01

    Virtual environments have been used to assess children with ADHD but have never been tested as therapeutic tools. We tested a new virtual classroom cognitive remediation program to improve symptoms in children with ADHD. In this randomized clinical trial, 51 children with ADHD (7-11 years) were assigned to a virtual cognitive remediation group, a methylphenidate group, or a psychotherapy group. All children were evaluated before and after therapy with an ADHD Rating Scale, a Continuous Performance Test (CPT), and a virtual classroom task. After therapy by virtual remediation, children exhibited significantly higher numbers of correct hits on the virtual classroom and CPT. These improvements were equivalent to those observed with methylphenidate treatment. Our study demonstrates for the first time that a cognitive remediation program delivered in a virtual classroom reduces distractibility in children with ADHD and could replace methylphenidate treatment in specific cases.

  5. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass ever broader computer networks, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are clearly insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment in which one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (useful for checking relationships among a large number of processes or processors) and the time chart (useful for checking precise timing for synchronization) into a single 3D space. The 3D representation provides a capability for direct and intuitive planning and understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction in programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D environment has considerable potential in the field of software engineering.

  6. Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Fazio, D.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Sedov, A.; Twomey, M. S.; Wang, F.; Zaytsev, A.

    2015-12-01

    During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High-Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited to running event generation and Monte Carlo production jobs that are mostly CPU- rather than I/O-bound. It is capable of running up to 2700 Virtual Machines (VMs), each with 8 CPU cores, for a total of up to 22000 parallel jobs. This contribution reviews the design, results, and evolution of the Sim@P1 project, which operates a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 33 million CPU-hours and generated more than 1.1 billion Monte Carlo events. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interference with TDAQ operations and guarantees the security and usability of the ATLAS private network. The cloud mechanism allows the separation of the required support at both the infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. This paper focuses on the operational aspects of such a large system during the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch between Sim@P1 and TDAQ modes, to exploit the resources when they are not used for data acquisition, even for short periods. The evolution of the central OpenStack infrastructure is described, including its upgrade from the Folsom to the Icehouse release and the scalability issues addressed.

  7. ACStor: Optimizing Access Performance of Virtual Disk Images in Clouds

    DOE PAGES

    Wu, Song; Wang, Yihong; Luo, Wei; ...

    2017-03-02

    In virtualized data centers, virtual disk images (VDIs) serve as the containers of the virtual environment, so their access performance is critical for overall system performance. Several distributed VDI chunk storage systems have been proposed to alleviate the I/O bottleneck of VM management. As the system scales up to a large number of running VMs, however, the overall network traffic inevitably becomes unbalanced, with hot spots on some VMs, leading to I/O performance degradation when accessing them. Here, we propose an adaptive and collaborative VDI storage system (ACStor) to resolve this performance issue. In comparison with existing research, our solution dynamically balances the traffic workload of VDI chunk accesses based on the run-time network state. Specifically, compute nodes with lightly loaded traffic are adaptively assigned more chunk access requests from remote VMs, and vice versa, which effectively eliminates the hot spots and thus improves the I/O performance of VMs. We also implement a prototype based on our ACStor design and evaluate it with various benchmarks on a real 32-node cluster and a simulated 256-node platform. Experiments show that under different network traffic patterns of data centers, our solution achieves up to a 2-8 times gain in VM booting time and VM I/O throughput in comparison with other state-of-the-art approaches.
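    The traffic-aware assignment described above can be sketched as a greedy least-loaded placement (a hedged toy, not ACStor's actual protocol; the request costs and load units are invented):

```python
import heapq

def assign_chunk_requests(requests, node_load):
    """Send each VDI chunk access request to the compute node with the
    lightest current traffic load, updating loads as we go."""
    heap = [(load, node) for node, load in sorted(node_load.items())]
    heapq.heapify(heap)
    placement = {}
    for req, cost in requests:
        load, node = heapq.heappop(heap)   # lightest-loaded node
        placement[req] = node
        heapq.heappush(heap, (load + cost, node))
    return placement

# Node "a" starts idle and "b" is busy, so early requests flow to "a"
# until its load catches up with "b".
plan = assign_chunk_requests([("r1", 3), ("r2", 3), ("r3", 3)],
                             {"a": 0, "b": 5})
print(plan)  # {'r1': 'a', 'r2': 'a', 'r3': 'b'}
```

    The real system would re-measure network state at run time rather than assume static per-request costs, but the balancing intuition is the same: lightly loaded nodes absorb more remote chunk traffic.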

  8. Integrating neuroinformatics tools in TheVirtualBrain.

    PubMed

    Woodman, M Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor

    2014-01-01

    TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting.

  9. Integrating neuroinformatics tools in TheVirtualBrain

    PubMed Central

    Woodman, M. Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A.; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R.; Jirsa, Viktor

    2014-01-01

    TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting. PMID:24795617

  10. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.

    2009-09-01

    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems, or enhancing existing systems, with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  11. Virtual Tissues and Developmental Systems Biology (book chapter)

    EPA Science Inventory

    Virtual tissue (VT) models provide an in silico environment to simulate cross-scale properties in specific tissues or organs based on knowledge of the underlying biological networks. These integrative models capture the fundamental interactions in a biological system and enable ...

  12. WC WAVE - Integrating Diverse Hydrological-Modeling Data and Services Into an Interoperable Geospatial Infrastructure

    NASA Astrophysics Data System (ADS)

    Hudspeth, W. B.; Baros, S.; Barrett, H.; Savickas, J.; Erickson, J.

    2015-12-01

    WC WAVE (Western Consortium for Watershed Analysis, Visualization and Exploration) is a collaborative research project between the states of Idaho, Nevada, and New Mexico that is funded under the National Science Foundation's Experimental Program to Stimulate Competitive Research (EPSCoR). The goal of the project is to understand and document the effects of climate change on interactions between precipitation, vegetation growth, soil moisture and other landscape properties. These interactions are modeled within a framework we refer to as a virtual watershed (VW), a computer infrastructure that simulates watershed dynamics by linking scientific modeling, visualization, and data management components into a coherent whole. Developed and hosted at the Earth Data Analysis Center, University of New Mexico, the virtual watershed has a number of core functions, which include: a) streamlined access to data required for model initialization and boundary conditions; b) the development of analytic scenarios through interactive visualization of available data and the storage of model configuration options; and c) the coupling of hydrological models through the rapid assimilation of model outputs into the data management system for access and use by subsequent models. The WC-WAVE virtual watershed accomplishes these functions by providing large-scale vector and raster data discovery, subsetting, and delivery via Open Geospatial Consortium (OGC) and REST web service standards. Central to the virtual watershed is the design and use of an innovative array of metadata elements that permits the stepwise coupling of diverse hydrological models (e.g. ISNOBAL, PRMS, CASiMiR) and input data to rapidly assess variation in outcomes under different climatic conditions. We present details on the architecture and functionality of the virtual watershed and results from three western U.S. watersheds, and discuss the realized benefits to watershed science of employing this integrated solution.

  13. Cybersickness and Anxiety During Simulated Motion: Implications for VRET.

    PubMed

    Bruck, Susan; Watters, Paul

    2009-01-01

    Some clinicians have suggested using virtual reality environments to deliver psychological interventions to treat anxiety disorders. However, given a significant body of work on cybersickness symptoms which may arise in virtual environments - especially those involving simulated motion - we tested (a) whether being exposed to a virtual reality environment alone causes anxiety to increase, and (b) whether exposure to simulated motion in a virtual reality environment increases anxiety. Using a repeated measures design, we used Kim's Anxiety Scale questionnaire to compare baseline anxiety, anxiety after virtual environment exposure, and anxiety after simulated motion. While there was no significant effect on anxiety for being in a virtual environment with no simulated motion, the introduction of simulated motion caused anxiety to significantly increase, but not to a severe or extreme level. The implications of this work for virtual reality exposure therapy (VRET) are discussed.

  14. Virtual Team Governance: Addressing the Governance Mechanisms and Virtual Team Performance

    NASA Astrophysics Data System (ADS)

    Zhan, Yihong; Bai, Yu; Liu, Ziheng

    As technology has improved and collaborative software has been developed, virtual teams with members dispersed across diverse physical locations have become increasingly prominent. Supported by advancing communication technologies, virtual teams are able to largely transcend time and space. They have changed the corporate landscape and are more complex and dynamic than traditional teams, since their members are spread across diverse geographical locations and play differing roles. How to achieve good governance of a virtual team, and thereby good virtual team performance, is therefore becoming a critical and challenging question, because good governance is essential for a high-performance virtual team. This paper explores the performance and governance mechanisms of virtual teams and establishes a model to explain the relationship between the two. Focusing on the management of virtual teams, it aims to identify strategies that help business organizations improve the performance of their virtual teams and meet the objectives of good virtual team management.

  15. Web-Based Virtual Laboratory for Food Analysis Course

    NASA Astrophysics Data System (ADS)

    Handayani, M. N.; Khoerunnisa, I.; Sugiarti, Y.

    2018-02-01

    Implementation of learning in the food analysis course of the Agro-industrial Technology Education study program faces several problems: the laboratory space and tools available are not commensurate with the number of students, and interactive learning tools are lacking. On the other hand, students' information technology literacy is quite high, and the internet is easily accessible on campus. This presents both a challenge and an opportunity for developing learning media that can help optimize learning in the laboratory. This study aims to develop a web-based virtual laboratory as an alternative learning medium for the food analysis course. The research follows the R&D (research and development) approach of the Borg & Gall model. The results showed that expert assessment of the developed web-based virtual laboratory, in terms of software engineering, visual communication, material relevance, usefulness, and language, found it feasible as a learning medium. The results of the small-scale and wide-scale tests show that students strongly agree with the development of a web-based virtual laboratory, and student response to it was positive. Suggestions from students provide opportunities for further improvement of the web-based virtual laboratory and should be considered in future research.

  16. Boundary-layer diabatic processes, the virtual effect, and convective self-aggregation

    NASA Astrophysics Data System (ADS)

    Yang, D.

    2017-12-01

    The atmosphere can self-organize into long-lasting large-scale overturning circulations over an ocean surface with uniform temperature. This phenomenon is referred to as convective self-aggregation and has been argued to be important for tropical weather and climate systems. Here we use a 1D shallow water model and a 2D cloud-resolving model (CRM) to show that boundary-layer diabatic processes are essential for convective self-aggregation. We will show that boundary-layer radiative cooling, convective heating, and surface buoyancy flux help convection self-aggregate because they generate available potential energy (APE), which sustains the overturning circulation. We will also show that evaporative cooling in the boundary layer (cold pool) inhibits convective self-aggregation by reducing APE. Both the shallow water model and CRM results suggest that the enhanced virtual effect of water vapor can lead to convective self-aggregation, and this effect is mainly in the boundary layer. This study proposes new dynamical feedbacks for convective self-aggregation and complements current studies that focus on thermodynamic feedbacks.

  17. Surviving at Any Cost: Guilt Expression Following Extreme Ethical Conflicts in a Virtual Setting

    PubMed Central

    Cristofari, Cécile; Guitton, Matthieu J.

    2014-01-01

    Studying human behavior in response to large-scale catastrophic events, particularly how moral challenges would be undertaken under extreme conditions, is an important preoccupation for contemporary scientists and decision leaders. However, researching this issue was hindered by the lack of readily available models. Immersive virtual worlds could represent a solution, by providing ways to test human behavior in controlled life-threatening situations. Using a massively multi-player zombie apocalypse setting, we analysed spontaneously reported feelings of guilt following ethically questionable actions related to survival. The occurrence and magnitude of guilt depended on the nature of the consequences of the action. Furthermore, feelings of guilt predicted long-lasting changes in behavior, displayed as compensatory actions. Finally, actions inflicting immediate harm to others appeared mostly prompted by panic and were more commonly regretted. Thus, extreme conditions trigger a reduction of the impact of ethical norms in decision making, although awareness of ethicality is retained to a surprising extent. PMID:25007261

  18. Surviving at any cost: guilt expression following extreme ethical conflicts in a virtual setting.

    PubMed

    Cristofari, Cécile; Guitton, Matthieu J

    2014-01-01

    Studying human behavior in response to large-scale catastrophic events, particularly how moral challenges would be undertaken under extreme conditions, is an important preoccupation for contemporary scientists and decision leaders. However, researching this issue was hindered by the lack of readily available models. Immersive virtual worlds could represent a solution, by providing ways to test human behavior in controlled life-threatening situations. Using a massively multi-player zombie apocalypse setting, we analysed spontaneously reported feelings of guilt following ethically questionable actions related to survival. The occurrence and magnitude of guilt depended on the nature of the consequences of the action. Furthermore, feelings of guilt predicted long-lasting changes in behavior, displayed as compensatory actions. Finally, actions inflicting immediate harm to others appeared mostly prompted by panic and were more commonly regretted. Thus, extreme conditions trigger a reduction of the impact of ethical norms in decision making, although awareness of ethicality is retained to a surprising extent.

  19. "Virtual shear box" experiments of stress and slip cycling within a subduction interface mélange

    NASA Astrophysics Data System (ADS)

    Webber, Sam; Ellis, Susan; Fagereng, Åke

    2018-04-01

    What role does the progressive geometric evolution of subduction-related mélange shear zones play in the development of strain transients? We use a "virtual shear box" experiment, based on outcrop-scale observations from an ancient exhumed subduction interface - the Chrystalls Beach Complex (CBC), New Zealand - to constrain numerical models of slip processes within a meters-thick shear zone. The CBC is dominated by large, competent clasts surrounded by interconnected weak matrix. Under constant slip velocity boundary conditions, models of the CBC produce stress cycling behavior, accompanied by mixed brittle-viscous deformation. This occurs as a consequence of the reorganization of competent clasts, and the progressive development and breakdown of stress bridges as clasts mutually obstruct one another. Under constant shear stress boundary conditions, the models show periods of relative inactivity punctuated by aseismic episodic slip at rapid rates (meters per year). Such a process may contribute to the development of strain transients such as slow slip.

  20. Research on the key technologies of 3D spatial data organization and management for virtual building environments

    NASA Astrophysics Data System (ADS)

    Gong, Jun; Zhu, Qing

    2006-10-01

    As a special case of VGE in the AEC (architecture, engineering, and construction) fields, the Virtual Building Environment (VBE) has attracted broad attention. Highly complex, large-scale 3D spatial data is the main bottleneck for VBE applications, so 3D spatial data organization and management is the core technology for VBE. This paper puts forward a high-performance 3D spatial data model for VBE. Because the inherent storage method of CAD data introduces redundancy and does not support efficient visualization, which is a practical bottleneck to integrating CAD models, an efficient method for integrating CAD model data is also put forward. Moreover, since 3D spatial indices based on the R-tree are usually limited by low efficiency due to severe overlap between sibling nodes and uneven node sizes, a new node-choosing algorithm for the R-tree is proposed.
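The abstract does not give details of the new node-choosing algorithm. For orientation, here is a minimal sketch of the classic Guttman-style heuristic such algorithms refine (choose the child whose bounding box needs the least area enlargement, breaking ties toward the smaller node to reduce overlap); all names are illustrative, not from the paper:

```python
# Minimal sketch of R-tree child selection ("least enlargement" heuristic).
# Rectangles are (x1, y1, x2, y2) tuples; names are illustrative only.

def area(rect):
    (x1, y1, x2, y2) = rect
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def enlarge(rect, item):
    # Smallest rectangle covering both rect and item.
    return (min(rect[0], item[0]), min(rect[1], item[1]),
            max(rect[2], item[2]), max(rect[3], item[3]))

def choose_child(children, item):
    """Pick the child MBR needing least area enlargement to cover item;
    break ties by smaller resulting area (this limits node overlap)."""
    def cost(c):
        grown = area(enlarge(c, item))
        return (grown - area(c), grown)
    return min(children, key=cost)

children = [(0, 0, 4, 4), (3, 3, 10, 10)]
print(choose_child(children, (1, 1, 2, 2)))  # -> (0, 0, 4, 4)
```

The item already fits inside the first child, so its enlargement cost is zero and it wins; overlap-aware variants additionally penalize growth that intersects siblings.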

  1. High-resolution digital brain atlases: a Hubble telescope for the brain.

    PubMed

    Jones, Edward G; Stone, James M; Karten, Harvey J

    2011-05-01

    We describe implementation of a method for digitizing at microscopic resolution brain tissue sections containing normal and experimental data and for making the content readily accessible online. Web-accessible brain atlases and virtual microscopes for online examination can be developed using existing computer and internet technologies. Resulting databases, made up of hierarchically organized, multiresolution images, enable rapid, seamless navigation through the vast image datasets generated by high-resolution scanning. Tools for visualization and annotation of virtual microscope slides enable remote and universal data sharing. Interactive visualization of a complete series of brain sections digitized at subneuronal levels of resolution offers fine grain and large-scale localization and quantification of many aspects of neural organization and structure. The method is straightforward and replicable; it can increase accessibility and facilitate sharing of neuroanatomical data. It provides an opportunity for capturing and preserving irreplaceable, archival neurohistological collections and making them available to all scientists in perpetuity, if resources could be obtained from hitherto uninterested agencies of scientific support. © 2011 New York Academy of Sciences.

  2. Analysis of Context Dependence in Social Interaction Networks of a Massively Multiplayer Online Role-Playing Game

    PubMed Central

    Son, Seokshin; Kang, Ah Reum; Kim, Hyun-chul; Kwon, Taekyoung; Park, Juyong; Kim, Huy Kang

    2012-01-01

    Rapid advances in modern computing and information technology have enabled millions of people to interact online via various social network and gaming services. The widespread adoption of such online services have made possible analysis of large-scale archival data containing detailed human interactions, presenting a very promising opportunity to understand the rich and complex human behavior. In collaboration with a leading global provider of Massively Multiplayer Online Role-Playing Games (MMORPGs), here we present a network science-based analysis of the interplay between distinct types of user interaction networks in the virtual world. We find that their properties depend critically on the nature of the context-interdependence of the interactions, highlighting the complex and multilayered nature of human interactions, a robust understanding of which we believe may prove instrumental in the designing of more realistic future virtual arenas as well as provide novel insights to the science of collective human behavior. PMID:22496771

  3. Measures against transmission of pandemic H1N1 influenza in Japan in 2009: simulation model.

    PubMed

    Yasuda, H; Suzuki, K

    2009-11-05

    The first outbreak of pandemic H1N1 influenza in Japan was contained in the Kansai region in May 2009 by social distancing measures. Modelling methods are needed to estimate the validity of these measures before their implementation on a large scale. We estimated the transmission coefficient from outbreaks of pandemic H1N1 influenza among school children in Japan in summer 2009; using this transmission coefficient, we simulated the spread of pandemic H1N1 influenza in a virtual community called the virtual Chuo Line which models an area to the west of metropolitan Tokyo. Measures evaluated in our simulation included: isolation at home, school closure, post-exposure prophylaxis and mass vaccinations of school children. We showed that post-exposure prophylaxis combined with isolation at home and school closure significantly decreases the total number of cases in the community and can mitigate the spread of pandemic H1N1 influenza, even when there is a delay in the availability of vaccine.
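The paper's community model is individual-based and its transmission coefficient is estimated from outbreak data. As a much-simplified illustration of the underlying principle, a compartmental SIR toy model shows how lowering the transmission coefficient (e.g., through school closure or isolation at home) reduces the total number of cases; all parameter values below are illustrative, not the paper's estimates:

```python
# Toy SIR model: illustrates how a smaller transmission coefficient beta
# (e.g., from school closure) reduces the final attack rate. Parameters
# are illustrative; the paper uses an individual-based community model.
def sir_total_cases(beta, gamma=0.5, s0=0.999, i0=0.001, days=200, dt=0.1):
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I
        new_rec = gamma * i * dt      # I -> R
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r + i  # cumulative fraction ever infected

baseline = sir_total_cases(beta=1.5)  # no intervention
closure = sir_total_cases(beta=0.9)   # reduced contact rate
print(f"attack rate: {baseline:.2f} without vs {closure:.2f} with intervention")
```

The qualitative conclusion matches the abstract: interventions that cut the effective transmission coefficient shrink the epidemic's final size, even without changing recovery dynamics.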

  4. Electro-textile garments for power and data distribution

    NASA Astrophysics Data System (ADS)

    Slade, Jeremiah R.; Winterhalter, Carole

    2015-05-01

    U.S. troops are increasingly being equipped with various electronic assets including flexible displays, computers, and communications systems. While these systems can significantly enhance operational capabilities, forming reliable connections between them poses a number of challenges in terms of comfort, weight, ergonomics, and operational security. IST has addressed these challenges by developing the technologies needed to integrate large-scale cross-seam electrical functionality into virtually any textile product, including the various garments and vests that comprise the warfighter's ensemble. Using this technology IST is able to develop textile products that do not simply support or accommodate a network but are the network.

  5. Avoiding the Pitfalls of Virtual Schooling

    ERIC Educational Resources Information Center

    Schachter, Ron

    2012-01-01

    Virtual school programs--especially online high school courses--are gaining traction in school districts around the country. While offering online courses was once the exclusive province of large state, nonprofit and for-profit organizations and companies, districts and even individual schools are now starting virtual schools of their own. Not…

  6. Active Gaming: Is "Virtual" Reality Right for Your Physical Education Program?

    ERIC Educational Resources Information Center

    Hansen, Lisa; Sanders, Stephen W.

    2012-01-01

    Active gaming is growing in popularity and the idea of increasing children's physical activity by using technology is largely accepted by physical educators. Teachers nationwide have been providing active gaming equipment such as virtual bikes, rhythmic dance machines, virtual sporting games, martial arts simulators, balance boards, and other…

  7. [Parallel virtual reality visualization of extreme large medical datasets].

    PubMed

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranets and commodity computers of hospitals. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing, and virtual reality visualization. The Maximum Intensity Projection algorithm is parallelized on a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated, and cut interactively and conveniently through a control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can play the role of a good assistant in making clinical diagnoses.
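The paper's cluster implementation is not published; the reason Maximum Intensity Projection parallelizes well can be sketched on a toy volume. Each node reduces its slab of slices with a per-pixel maximum, and the partial images combine with another element-wise maximum (cluster communication and load balancing omitted):

```python
# Why MIP parallelizes well: per-slab maxima combine associatively with an
# element-wise max. Toy pure-Python sketch; a real system distributes slabs
# across cluster nodes and merges the partial images.
def slab_mip(slab):
    # slab: list of 2D slices; reduce along the projection axis.
    return [[max(sl[y][x] for sl in slab) for x in range(len(slab[0][0]))]
            for y in range(len(slab[0]))]

def merge(a, b):
    # Element-wise max of two partial projection images.
    return [[max(p, q) for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

# Toy 4-slice, 2x2 volume split into two slabs (two "workers").
volume = [[[1, 0], [0, 2]], [[3, 1], [0, 0]],
          [[0, 5], [1, 0]], [[2, 2], [4, 1]]]
partials = [slab_mip(volume[:2]), slab_mip(volume[2:])]
mip = merge(*partials)
print(mip)  # -> [[3, 5], [4, 2]]
```

Because max is associative and commutative, the merged result is identical to projecting the whole volume on one machine, which is what makes the per-node split safe.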

  8. Effect of virtual reality distraction on pain among patients with hand injury undergoing dressing change.

    PubMed

    Guo, Chunlan; Deng, Hongyan; Yang, Jian

    2015-01-01

    To assess the effect of virtual reality distraction on pain among patients with a hand injury undergoing a dressing change. Virtual reality distraction can effectively alleviate pain among patients undergoing a dressing change. Clinical research has not addressed pain control during a dressing change. A randomised controlled trial was performed. In the first dressing change sequence, 98 patients were randomly divided into an experimental group and a control group, with 49 cases in each group. Pain levels were compared between the two groups before and after the dressing change using a visual analog scale. The sense of involvement in the virtual environment was also measured, and Pearson correlation coefficient analysis was used to determine the relationship between the sense of involvement and pain level. The difference in visual analog scale scores between the two groups before the dressing change was not statistically significant (t = 0·196, p > 0·05), but the scores became statistically significant after the dressing change (t = -30·792, p < 0·01). The correlation between the sense of involvement in a virtual environment and pain level during the dressing change was statistically significant (R(2) = 0·5538, p < 0·05). Virtual reality distraction can effectively alleviate pain among patients with a hand injury undergoing a dressing change. Better results can be obtained by increasing the sense of involvement in a virtual environment. Virtual reality distraction can effectively relieve pain without side effects and is not reliant on a doctor's prescription. This tool is convenient for nurses to use, especially when analgesics are unavailable. © 2014 John Wiley & Sons Ltd.

  9. Global Rating Scales and Motion Analysis Are Valid Proficiency Metrics in Virtual and Benchtop Knee Arthroscopy Simulators.

    PubMed

    Chang, Justues; Banaszek, Daniel C; Gambrel, Jason; Bardana, Davide

    2016-04-01

    Work-hour restrictions and fatigue management strategies in surgical training programs continue to evolve in an effort to improve the learning environment and promote safer patient care. In response, training programs must reevaluate how various teaching modalities such as simulation can augment the development of surgical competence in trainees. For surgical simulators to be most useful, it is important to determine whether surgical proficiency can be reliably differentiated using them. To our knowledge, performance on both virtual and benchtop arthroscopy simulators has not been concurrently assessed in the same subjects. (1) Do global rating scales and procedure time differentiate arthroscopic expertise in virtual and benchtop knee models? (2) Can commercially available built-in motion analysis metrics differentiate arthroscopic expertise? (3) How well are performance measures on virtual and benchtop simulators correlated? (4) Are these metrics sensitive enough to differentiate by year of training? In a cross-sectional study, 19 subjects (four medical students, 12 residents, and three staff) were recruited and divided into 11 novice arthroscopists (student to Postgraduate Year [PGY] 3) and eight proficient arthroscopists (PGY 4 to staff); each completed a diagnostic arthroscopy and loose-body retrieval in both virtual and benchtop knee models. Global rating scales (GRS), procedure times, and motion analysis metrics were used to evaluate performance. The proficient group scored higher on virtual (14 ± 6 [95% confidence interval {CI}, 10-18] versus 36 ± 5 [95% CI, 32-40], p < 0.001) and benchtop (16 ± 8 [95% CI, 11-21] versus 36 ± 5 [95% CI, 31-40], p < 0.001) GRS scales.
The proficient subjects completed nearly all tasks faster than novice subjects, including the virtual scope (579 ±169 [95% CI, 466-692] versus 358 ± 178 [95% CI, 210-507] seconds, p = 0.02) and benchtop knee scope + probe (480 ± 160 [95% CI, 373-588] versus 277 ± 64 [95% CI, 224-330] seconds, p = 0.002). The built-in motion analysis metrics also distinguished novices from proficient arthroscopists using the self-generated virtual loose body retrieval task scores (4 ± 1 [95% CI, 3-5] versus 6 ± 1 [95% CI, 5-7], p = 0.001). GRS scores between virtual and benchtop models were very strongly correlated (ρ = 0.93, p < 0.001). There was strong correlation between year of training and virtual GRS (ρ = 0.8, p < 0.001) and benchtop GRS (ρ = 0.87, p < 0.001) scores. To our knowledge, this is the first study to evaluate performance on both virtual and benchtop knee simulators. We have shown that subjective GRS scores and objective motion analysis metrics and procedure time are valid measures to distinguish arthroscopic skill on both virtual and benchtop modalities. Performance on both modalities is well correlated. We believe that training on artificial models allows acquisition of skills in a safe environment. Future work should compare different modalities in the efficiency of skill acquisition, retention, and transferability to the operating room.
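As a point of reference for the correlation statistics reported above, Spearman's ρ between two sets of scores is computed from rank differences. The scores below are made-up, and the closed-form formula assumes no tied ranks:

```python
# Illustrative computation of Spearman's rho, the rank correlation the study
# reports between virtual and benchtop GRS scores. Scores are hypothetical;
# the formula rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) assumes no ties.
def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for position, i in enumerate(order, start=1):
        ranks[i] = position
    return ranks

def spearman_rho(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

virtual_grs = [14, 18, 22, 30, 36]   # hypothetical scores
benchtop_grs = [16, 17, 25, 31, 36]
print(spearman_rho(virtual_grs, benchtop_grs))  # perfectly monotone -> 1.0
```

A ρ near 1 (the study reports 0.93) means subjects are ranked almost identically by the two simulators, which is the basis for the "well correlated" conclusion.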

  10. Application of the Virtual Fields Method to a relaxation behaviour of rubbers

    NASA Astrophysics Data System (ADS)

    Yoon, Sung-ho; Siviour, Clive R.

    2018-07-01

    This paper presents the application of the Virtual Fields Method (VFM) to the characterization of the viscoelastic behaviour of rubbers. The relaxation behaviour of the rubbers following a dynamic loading event is characterized using the dynamic VFM, in which full-field (two-dimensional) strain and acceleration data, obtained from high-speed imaging, are analysed by the principle of virtual work without traction force data, instead using the acceleration fields in the specimen to provide stress information. Two rubbers (silicone and nitrile) were tested in tension using a drop-weight apparatus. It is assumed that the dynamic behaviour is described by a combination of hyperelastic and Prony series models. A VFM-based procedure is designed and used to identify the modulus term of a hyperelastic model and the Prony series parameters within a time scale determined by two experimental factors: imaging speed and loading duration. The time range of the data is then extended using experiments at different temperatures combined with the time-temperature superposition principle. Prior to these experimental analyses, finite element simulations were performed to validate the proposed VFM analysis. Therefore, for the first time, it has been possible to identify the relaxation behaviour of a material following dynamic loading, using a technique that can be applied to both small and large deformations.
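For readers unfamiliar with the model forms named above, a minimal sketch of a Prony-series relaxation modulus and time-temperature shifting follows; the parameter values are illustrative, not the paper's identified ones:

```python
import math

# Prony-series relaxation modulus, the model form named in the abstract:
#   G(t) = G_inf + sum_i g_i * exp(-t / tau_i)
# Parameter values below are illustrative, not the paper's identified ones.
def prony_modulus(t, g_inf, terms):
    return g_inf + sum(g * math.exp(-t / tau) for g, tau in terms)

# Time-temperature superposition: data measured at another temperature map
# onto the reference master curve via a shift factor a_T, i.e. the
# effective (reduced) time is t / a_T.
def reduced_time(t, a_T):
    return t / a_T

terms = [(2.0, 1e-3), (1.0, 1e-1)]  # (g_i, tau_i) pairs: MPa, seconds
g0 = prony_modulus(0.0, 0.5, terms)  # instantaneous modulus = G_inf + sum(g_i)
print(g0)  # -> 3.5
# A colder test (a_T > 1) probes shorter reduced times, extending the
# master curve toward the glassy plateau:
print(prony_modulus(reduced_time(1e-2, a_T=100.0), 0.5, terms))
```

This is how experiments at several temperatures extend the identified relaxation curve beyond the time window set by imaging speed and loading duration.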

  11. Geometric and perceptual effects of the location of the observer vantage point for linear-perspective images.

    PubMed

    Todorović, Dejan

    2005-01-01

    New geometric analyses are presented of three impressive examples of the effects of location of the vantage point on virtual 3-D spaces conveyed by linear-perspective images. In the 'egocentric-road' effect, the perceived direction of the depicted road is always pointed towards the observer, for any position of the vantage point. It is shown that perspective images of real-observer-aimed roads are characterised by a specific, simple pattern of projected side lines. Given that pattern, the position of the observer, and certain assumptions and perspective arguments, the perceived direction of the virtual road towards the observer can be predicted. In the 'skewed balcony' and the 'collapsing ceiling' effects, the position of the vantage point affects the impression of alignment of the virtual architecture conveyed by large-scale illusionistic paintings and the real architecture surrounding them. It is shown that the dislocation of the vantage point away from the viewing position prescribed by the perspective construction induces a mismatch between the painted vanishing point of elements in the picture and the real vanishing point of corresponding elements of the actual architecture. This mismatch of vanishing points provides visual information that the elements of the two architectures are not mutually parallel.

  12. The Effect of Virtual Reality Distraction on Pain Relief During Dressing Changes in Children with Chronic Wounds on Lower Limbs.

    PubMed

    Hua, Yun; Qiu, Rong; Yao, Wen-Yan; Zhang, Qin; Chen, Xiao-Li

    2015-10-01

    It has been demonstrated that patients with chronic wounds experience the most pain during dressing changes. Currently, researchers focus mostly on analgesics and appropriate dressing materials to relieve pain during dressing changes of chronic wounds. However, the effect of nonpharmacologic interventions, such as virtual reality distraction, on pain management during dressing changes of pediatric chronic wounds remains poorly understood. To investigate the effect of virtual reality distraction on alleviating pain during dressing changes in children with chronic wounds on their lower limbs. A prospective randomized study. A pediatric center in a tertiary hospital. Sixty-five children, aged 4 to 16 years, with chronic wounds on their lower limbs. Pain and anxiety scores during dressing changes were recorded using the Wong-Baker Faces picture scale, visual analogue scale, and pain behavior scale, as well as physiological measurements including pulse rate and oxygen saturation. The duration of each dressing change was recorded. Virtual reality distraction significantly reduced pain and anxiety scores during dressing changes and reduced the duration of dressing changes as compared to standard distraction methods. The use of virtual reality as a distraction tool in a pediatric ward offered superior pain reduction to children as compared to standard distractions. This device can potentially improve clinical efficiency by reducing the time required for dressing changes. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  13. Effect of virtual reality-based rehabilitation on upper-extremity function in patients with brain tumor: controlled trial.

    PubMed

    Yoon, Jisun; Chun, Min Ho; Lee, Sook Joung; Kim, Bo Ryun

    2015-06-01

    The aim of this study was to evaluate the benefit of virtual reality-based rehabilitation on upper-extremity function in patients with brain tumor. Patients with upper-extremity dysfunction were divided into two age-matched and tumor type-matched groups. The intervention group performed the virtual reality program 30 mins per session for 9 sessions and conventional occupational therapy 30 mins per session for 6 sessions over 3 wks, whereas the control group received conventional occupational therapy alone 30 mins per session for 15 sessions over 3 wks. The Box and Block test, the Manual Function test, and the Fugl-Meyer scale were used to evaluate upper-extremity function. The Korean version of the Modified Barthel Index was used to assess activities of daily living. Forty patients completed the study (20 in each group). Each group exhibited significant posttreatment improvements in the Box and Block test, Manual Function test, Fugl-Meyer scale, and Korean version of the Modified Barthel Index scores. The Box and Block test, the Fugl-Meyer scale, and the Manual Function test showed greater improvements in shoulder/elbow/forearm function in the intervention group and in hand function in the control group. Virtual reality-based rehabilitation combined with conventional occupational therapy may be more effective than conventional occupational therapy alone, especially for proximal upper-extremity function in patients with brain tumor. Further studies addressing hand function, such as the use of virtual reality programs that target hand use, are required.

  14. Assessing the harms of cannabis cultivation in Belgium.

    PubMed

    Paoli, Letizia; Decorte, Tom; Kersten, Loes

    2015-03-01

    Since the 1990s, a shift from the importation of foreign cannabis to domestic cultivation has taken place in Belgium, as it has in many other countries. This shift has prompted Belgian policy-making bodies to prioritize the repression of cannabis cultivation. Against this background, the article aims to systematically map and assess for the first time ever the harms associated with cannabis cultivation, covering the whole spectrum of growers. This study is based on a web survey primarily targeting small-scale growers (N=1293) and on three interconnected sets of qualitative data on large-scale growers and traffickers (34 closed criminal proceedings, interviews with 32 criminal justice experts, and with 17 large-scale cannabis growers and three traffickers). The study relied on Greenfield and Paoli's (2013) harm assessment framework to identify the harms associated with cannabis cultivation and to assess the incidence, severity and causes of such harms. Cannabis cultivation has become endemic in Belgium. Despite that, it generates, for Belgium, limited harms of medium-low or medium priority. Large-scale growers tend to produce more harms than the small-scale ones. Virtually all the harms associated with cannabis cultivation are the result of the current criminalizing policies. Given the spread of cannabis cultivation and Belgium's position in Europe, reducing the supply of cannabis does not appear to be a realistic policy objective. Given the limited harms generated, there is scarce scientific justification to prioritize cannabis cultivation in Belgian law enforcement strategies. As most harms are generated by large-scale growers, it is this category of cultivator, if any, which should be the focus of law enforcement repression. Given the policy origin of most harms, policy-makers should seek to develop policies likely to reduce such harms. 
At the same time, further research is needed to comparatively assess the harms associated with cannabis cultivation (and trafficking) with those arising from use. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Immersive virtual reality for visualization of abdominal CT

    NASA Astrophysics Data System (ADS)

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A.; Bodenheimer, Robert E.

    2013-03-01

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two-dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  16. Immersive Virtual Reality for Visualization of Abdominal CT.

    PubMed

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A; Bodenheimer, Robert E

    2013-03-28

Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two-dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical applications. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  17. Software Architecture for a Virtual Environment for Nano Scale Assembly (VENSA).

    PubMed

    Lee, Yong-Gu; Lyons, Kevin W; Feng, Shaw C

    2004-01-01

A Virtual Environment (VE) uses multiple computer-generated media to let a user experience situations that are temporally and spatially prohibiting. The information flow between the user and the VE is bidirectional and the user can influence the environment. The software development of a VE requires orchestrating multiple peripherals and computers in a synchronized way in real time. Although a multitude of useful software components for VEs exists, many of these are packaged within a complex framework and cannot be used separately. In this paper, an architecture is presented which is designed to let multiple frameworks work together while being shielded from the application program. This architecture, which is called the Virtual Environment for Nano Scale Assembly (VENSA), has been constructed for interfacing with an optical tweezers instrument for nanotechnology development. However, this approach can be generalized for most virtual environments. Through the use of VENSA, the programmer can rely on existing solutions and concentrate more on the application software design.

  18. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.

    PubMed

    Villarrubia, J S; Tondare, V N; Vladár, A E

    2016-01-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples-mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within close to 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
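The paper's virtual-sample construction (a rough skin with a prescribed power spectral density) can be approximated in one dimension with the standard spectral-synthesis trick: set Fourier amplitudes from the target PSD and draw random phases. A minimal sketch, not the authors' code; the power-law PSD and function names are illustrative:

```python
import numpy as np

def rough_profile(n, dx, psd, seed=0):
    """Generate a 1-D random rough profile whose power spectral density
    follows the callable `psd(f)`, with f in cycles per length unit."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, d=dx)
    # Amplitude from the target PSD; random phases make each draw unique.
    amp = np.sqrt(psd(f) * n / dx)
    amp[0] = 0.0  # zero the DC term so the profile has zero mean
    phases = rng.uniform(0.0, 2.0 * np.pi, size=f.size)
    spectrum = amp * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n=n)

# Example: power-law PSD ~ f^-2, a common model for surface roughness.
power_law = lambda f: np.divide(1.0, f**2, out=np.zeros_like(f), where=f > 0)
z = rough_profile(4096, dx=1.0, psd=power_law)
```

A 2-D skin for wrapping around a line feature follows the same recipe with a 2-D FFT.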

  19. Software Architecture for a Virtual Environment for Nano Scale Assembly (VENSA)

    PubMed Central

    Lee, Yong-Gu; Lyons, Kevin W.; Feng, Shaw C.

    2004-01-01

    A Virtual Environment (VE) uses multiple computer-generated media to let a user experience situations that are temporally and spatially prohibiting. The information flow between the user and the VE is bidirectional and the user can influence the environment. The software development of a VE requires orchestrating multiple peripherals and computers in a synchronized way in real time. Although a multitude of useful software components for VEs exists, many of these are packaged within a complex framework and can not be used separately. In this paper, an architecture is presented which is designed to let multiple frameworks work together while being shielded from the application program. This architecture, which is called the Virtual Environment for Nano Scale Assembly (VENSA), has been constructed for interfacing with an optical tweezers instrument for nanotechnology development. However, this approach can be generalized for most virtual environments. Through the use of VENSA, the programmer can rely on existing solutions and concentrate more on the application software design. PMID:27366610

  20. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.
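Virtual time ordering of VMs, as described, amounts to always dispatching the VM whose virtual clock is smallest. A toy sketch of that scheduling loop (illustrative only; NetWarp's actual scheduler handles cores, I/O, and synchronization that this omits):

```python
import heapq

def run_virtual_time(vms, until):
    """Least-virtual-time-first scheduling: repeatedly dispatch the VM
    with the smallest virtual clock, advancing it by its time quantum.
    `vms` maps vm_id -> quantum (virtual-time cost of one slice)."""
    heap = [(0.0, vm) for vm in vms]   # (virtual clock, vm id)
    heapq.heapify(heap)
    trace = []
    while heap and heap[0][0] < until:
        t, vm = heapq.heappop(heap)
        trace.append((t, vm))          # "run" one slice of this VM
        heapq.heappush(heap, (t + vms[vm], vm))
    return trace

# A VM with half the quantum gets dispatched twice as often, keeping
# both virtual clocks in step.
trace = run_virtual_time({"vm0": 1.0, "vm1": 2.0}, until=4.0)
```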

  1. Developing science gateways for drug discovery in a grid environment.

    PubMed

    Pérez-Sánchez, Horacio; Rezaei, Vahid; Mezhuyev, Vitaliy; Man, Duhu; Peña-García, Jorge; den-Haan, Helena; Gesing, Sandra

    2016-01-01

Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly, we face a growing problem: providing the life-sciences research community with a convenient tool for high-throughput virtual screening on distributed computing resources. To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service, applicable for large-scale parallel screening and reusable in the context of scientific workflows. Our implementation is based on Pipeline Pilot and Simple Object Access Protocol and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.

  2. Expanding the user base beyond HEP for the Ganga distributed analysis user interface

    NASA Astrophysics Data System (ADS)

    Currie, R.; Egede, U.; Richards, A.; Slater, M.; Williams, M.

    2017-10-01

This document presents the result of recent developments within the Ganga[1] project to support users from new communities outside of HEP. In particular I will examine the case of users from the Large Synoptic Survey Telescope (LSST) group looking to use resources provided by the UK-based GridPP[2][3] DIRAC[4][5] instance. An example use case is work performed with users from the LSST Virtual Organisation (VO) to distribute the workflow used for galaxy shape identification analyses. This work highlighted some LSST-specific challenges which could be well solved by common tools within the HEP community. As a result of this work the LSST community was able to take advantage of GridPP[2][3] resources to perform large computing tasks within the UK.

  3. Master-slave system with force feedback based on dynamics of virtual model

    NASA Technical Reports Server (NTRS)

    Nojima, Shuji; Hashimoto, Hideki

    1994-01-01

    A master-slave system can extend manipulating and sensing capabilities of a human operator to a remote environment. But the master-slave system has two serious problems: one is the mechanically large impedance of the system; the other is the mechanical complexity of the slave for complex remote tasks. These two problems reduce the efficiency of the system. If the slave has local intelligence, it can help the human operator by using its good points like fast calculation and large memory. The authors suggest that the slave is a dextrous hand with many degrees of freedom able to manipulate an object of known shape. It is further suggested that the dimensions of the remote work space be shared by the human operator and the slave. The effect of the large impedance of the system can be reduced in a virtual model, a physical model constructed in a computer with physical parameters as if it were in the real world. A method to determine the damping parameter dynamically for the virtual model is proposed. Experimental results show that this virtual model is better than the virtual model with fixed damping.
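The abstract does not specify how the damping parameter is determined dynamically, so the sketch below uses one plausible stand-in: recomputing critical damping (c = 2√(mk)) from the virtual model's current mass and stiffness each step, so the virtual model settles without the oscillation a fixed small damping would allow. All parameter values are illustrative:

```python
import math

def simulate(m, k, x0, dt=1e-3, steps=5000, dynamic=True, c_fixed=0.5):
    """Mass-spring 'virtual model' relaxing toward the origin. With
    dynamic=True the damping is recomputed each step for critical
    damping, a hypothetical stand-in for the paper's dynamically
    determined damping parameter."""
    x, v = x0, 0.0
    for _ in range(steps):
        c = 2.0 * math.sqrt(m * k) if dynamic else c_fixed
        a = (-k * x - c * v) / m
        v += a * dt          # semi-implicit Euler: robust for stiff springs
        x += v * dt
    return x

# The critically damped virtual model settles to rest without overshoot;
# the fixed-damping variant rings for much longer.
```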

  4. Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model.

    PubMed

    Liu, Fang; Velikina, Julia V; Block, Walter F; Kijowski, Richard; Samsonov, Alexey A

    2017-02-01

    We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed  ∼ 200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure.
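For two exchanging pools and longitudinal relaxation only, the generalized multi-pool tissue model reduces to the Bloch-McConnell equations. A minimal forward-Euler sketch (not MRiLab code; pool parameters are illustrative, and the reverse rate is fixed by detailed balance):

```python
def relax_two_pool(Ma, Mb, M0a, M0b, T1a, T1b, ka, dt, steps):
    """Longitudinal Bloch-McConnell recovery for two exchanging proton
    pools: free water 'a' and macromolecular 'b'. The reverse exchange
    rate kb follows from detailed balance, ka*M0a = kb*M0b."""
    kb = ka * M0a / M0b
    for _ in range(steps):
        dMa = (M0a - Ma) / T1a - ka * Ma + kb * Mb
        dMb = (M0b - Mb) / T1b + ka * Ma - kb * Mb
        Ma += dMa * dt
        Mb += dMb * dt
    return Ma, Mb

# Starting from full saturation, both pools recover to their
# equilibrium magnetizations (M0a, M0b).
Ma, Mb = relax_two_pool(0.0, 0.0, M0a=1.0, M0b=0.1, T1a=1.0, T1b=0.3,
                        ka=2.0, dt=1e-3, steps=20000)
```

MRiLab's GPU speedup comes from running this kind of per-voxel update in parallel over millions of voxels.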

  5. Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model

    PubMed Central

    Velikina, Julia V.; Block, Walter F.; Kijowski, Richard; Samsonov, Alexey A.

    2017-01-01

We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed ∼200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure. PMID:28113746

  6. Hysteroscopic sterilization using a virtual reality simulator: assessment of learning curve.

    PubMed

    Janse, Juliënne A; Goedegebuure, Ruben S A; Veersema, Sebastiaan; Broekmans, Frank J M; Schreuder, Henk W R

    2013-01-01

    To assess the learning curve using a virtual reality simulator for hysteroscopic sterilization with the Essure method. Prospective multicenter study (Canadian Task Force classification II-2). University and teaching hospital in the Netherlands. Thirty novices (medical students) and five experts (gynecologists who had performed >150 Essure sterilization procedures). All participants performed nine repetitions of bilateral Essure placement on the simulator. Novices returned after 2 weeks and performed a second series of five repetitions to assess retention of skills. Structured observations on performance using the Global Rating Scale and parameters derived from the simulator provided measurements for analysis. The learning curve is represented by improvement per procedure. Two-way repeated-measures analysis of variance was used to analyze learning curves. Effect size (ES) was calculated to express the practical significance of the results (ES ≥ 0.50 indicates a large learning effect). For all parameters, significant improvements were found in novice performance within nine repetitions. Large learning effects were established for six of eight parameters (p < .001; ES, 0.50-0.96). Novices approached expert level within 9 to 14 repetitions. The learning curve established in this study endorses future implementation of the simulator in curricula on hysteroscopic skill acquisition for clinicians who are interested in learning this sterilization technique. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.

  7. Phase Transition of a Dynamical System with a Bi-Directional, Instantaneous Coupling to a Virtual System

    NASA Astrophysics Data System (ADS)

    Gintautas, Vadas; Hubler, Alfred

    2006-03-01

    As worldwide computer resources increase in power and decrease in cost, real-time simulations of physical systems are becoming increasingly prevalent, from laboratory models to stock market projections and entire ``virtual worlds'' in computer games. Often, these systems are meticulously designed to match real-world systems as closely as possible. We study the limiting behavior of a virtual horizontally driven pendulum coupled to its real-world counterpart, where the interaction occurs on a time scale that is much shorter than the time scale of the dynamical system. We find that if the physical parameters of the virtual system match those of the real system within a certain tolerance, there is a qualitative change in the behavior of the two-pendulum system as the strength of the coupling is increased. Applications include a new method to measure the physical parameters of a real system and the use of resonance spectroscopy to refine a computer model. As virtual systems better approximate real ones, even very weak interactions may produce unexpected and dramatic behavior. The research is supported by the National Science Foundation Grant No. NSF PHY 01-40179, NSF DMS 03-25939 ITR, and NSF DGE 03-38215.

  8. Is physiotherapy integrated virtual walking effective on pain, function, and kinesiophobia in patients with non-specific low-back pain? Randomised controlled trial.

    PubMed

    Yilmaz Yelvar, Gul Deniz; Çırak, Yasemin; Dalkılınç, Murat; Parlak Demir, Yasemin; Guner, Zeynep; Boydak, Ayşenur

    2017-02-01

According to the literature, virtual reality has been found to reduce pain and kinesiophobia in patients with chronic pain. The purpose of the study was to investigate the short-term effect of virtual reality on pain, function, and kinesiophobia in patients with subacute and chronic non-specific low-back pain. In this randomised controlled study, 44 patients were randomly assigned to traditional physiotherapy (control group, 22 subjects) or virtual walking integrated physiotherapy (experimental group, 22 subjects). Before and after treatment, Visual Analog Scale (VAS), TAMPA Kinesiophobia Scale (TKS), Oswestry Disability Index (ODI), Nottingham Health Profile (NHP), Timed-up and go Test (TUG), 6-Minute Walk Test (6MWT), and Single-Leg Balance Test were assessed. The interaction effect between group and time was assessed by using repeated-measures analysis of covariance. After treatment, both groups showed improvement in all parameters. However, VAS, TKS, TUG, and 6MWT scores showed significant differences in favor of the experimental group. Virtual walking integrated physiotherapy reduces pain and kinesiophobia, and improves function in patients with subacute and chronic non-specific low-back pain in the short term.

  9. Compiling and using input-output frameworks through collaborative virtual laboratories.

    PubMed

    Lenzen, Manfred; Geschke, Arne; Wiedmann, Thomas; Lane, Joe; Anderson, Neal; Baynes, Timothy; Boland, John; Daniels, Peter; Dey, Christopher; Fry, Jacob; Hadjikakou, Michalis; Kenway, Steven; Malik, Arunima; Moran, Daniel; Murray, Joy; Nettleton, Stuart; Poruschi, Lavinia; Reynolds, Christian; Rowley, Hazel; Ugon, Julien; Webb, Dean; West, James

    2014-07-01

Compiling, deploying and utilising large-scale databases that integrate environmental and economic data have traditionally been labour- and cost-intensive processes, hindered by the large amount of disparate and misaligned data that must be collected and harmonised. The Australian Industrial Ecology Virtual Laboratory (IELab) is a novel, collaborative approach to compiling large-scale environmentally extended multi-region input-output (MRIO) models. The utility of the IELab product is greatly enhanced by avoiding the need to lock in an MRIO structure at the time the MRIO system is developed. The IELab advances the idea of the "mother-daughter" construction principle, whereby a regionally and sectorally very detailed "mother" table is set up, from which "daughter" tables are derived to suit specific research questions. By introducing a third tier - the "root classification" - IELab users are able to define their own mother-MRIO configuration, at no additional cost in terms of data handling. Customised mother-MRIOs can then be built, which maximise disaggregation in aspects that are useful to a family of research questions. The second innovation in the IELab system is to provide a highly automated collaborative research platform in a cloud-computing environment, greatly expediting workflows and making these computational benefits accessible to all users. Combining these two aspects realises many benefits. The collaborative nature of the IELab development project allows significant savings in resources. Timely deployment is possible by coupling automation procedures with the comprehensive input from multiple teams. User-defined MRIO tables, coupled with high performance computing, mean that MRIO analysis will be useful and accessible for a great many more research applications than would otherwise be possible. By ensuring that a common set of analytical tools such as for hybrid life-cycle assessment is adopted, the IELab will facilitate the harmonisation of fragmented, dispersed and misaligned raw data for the benefit of all interested parties. Copyright © 2014 Elsevier B.V. All rights reserved.
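The mother-daughter construction can be illustrated with a concordance matrix: a daughter MRIO is the detailed mother table with rows and columns summed within user-chosen sector groups. A small sketch under that assumption (not IELab code; the 3-sector table is invented for illustration):

```python
import numpy as np

def aggregate(mother, groups):
    """Aggregate a detailed 'mother' IO table into a 'daughter' table.
    `groups[i]` is the daughter sector of mother sector i; flows are
    summed within each group via a 0/1 concordance matrix C, giving
    daughter = C @ mother @ C.T."""
    n_daughter = max(groups) + 1
    C = np.zeros((n_daughter, len(groups)))
    C[groups, np.arange(len(groups))] = 1.0
    return C @ mother @ C.T

# Toy 3-sector mother table; merge sectors 0 and 1 into one daughter
# sector, keeping sector 2 separate. Total flows are preserved.
mother = np.array([[1.0, 2.0, 0.0],
                   [0.0, 1.0, 3.0],
                   [4.0, 0.0, 1.0]])
daughter = aggregate(mother, [0, 0, 1])
```

Different research questions simply supply different `groups` mappings against the same root classification.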

  10. A Science Cloud: OneSpaceNet

    NASA Astrophysics Data System (ADS)

    Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.

    2010-12-01

Main methodologies of Solar-Terrestrial Physics (STP) so far are theoretical, experimental and observational, and computer simulation approaches. Recently, "informatics" has been proposed as a new (fourth) approach to STP studies. Informatics is a methodology to analyze large-scale data (observation data and computer simulation data) to obtain new findings using a variety of data processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are now developing a new research environment named "OneSpaceNet". The OneSpaceNet is a cloud-computing environment specialized for science works, which connects many researchers with a high-speed network (JGN: Japan Gigabit Network). The JGN is a wide-area backbone network operated by NICT; it provides a 10G network and many access points (AP) over Japan. The OneSpaceNet also provides rich computer resources for research studies, such as supercomputers, large-scale data storage, licensed applications, visualization devices (like a tiled display wall: TDW), database/DBMS, cluster computers (4-8 nodes) for data processing, and communication devices. What is remarkable about using the science cloud is that a user need only prepare a terminal (a low-cost PC). Once the PC is connected to JGN2plus, the user can make full use of the rich resources of the science cloud. Using communication devices, such as a video-conference system, streaming and reflector servers, and media players, the users on the OneSpaceNet can carry out research communications as if they belonged to a single laboratory: they are members of a virtual laboratory. The specification of the computer resources on the OneSpaceNet is as follows. The data storage we have developed so far is almost 1 PB in size, and the number of data files managed on the cloud storage now exceeds 40,000,000. Notably, the disks forming the large-scale storage are distributed across 5 data centers over Japan, yet the storage system performs as one disk. There are three supercomputers on the cloud: one in Tokyo, one in Osaka, and one in Nagoya. Simulation job data from any of the supercomputers are saved to the same directory on the cloud data storage; it is a kind of virtual computing environment. The tiled display wall has 36 panels acting as one display; its pixel (resolution) size is as large as 18000x4300. This size is enough to preview or analyze large-scale computer simulation data. It also allows many researchers together to view multiple images (e.g., 100 pictures) on one screen. In our talk we also present a brief report of initial results using the OneSpaceNet for Global MHD simulations as an example of successful use of our science cloud: (i) ultra-high time resolution visualization of Global MHD simulations on the large-scale storage and parallel processing system on the cloud, (ii) a database of real-time Global MHD simulations and statistical analyses of the data, and (iii) a 3D Web service of Global MHD simulations.

  11. A transparently scalable visualization architecture for exploring the universe.

    PubMed

    Fu, Chi-Wing; Hanson, Andrew J

    2007-01-01

    Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
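Power scaled coordinates can be sketched as a unit direction plus a logarithmic scale exponent, so magnitudes from human scale to billions of light years fit comfortably in floating point. An illustrative encoding only; the paper's actual PSC machinery covers the full modeling and rendering pipeline:

```python
import math

K = 10.0  # scale base; Fu and Hanson use powers of ten

def psc(x, y, z):
    """Encode a point as power scaled coordinates (x, y, z, s): the
    spatial part is normalised to unit length, and the true point is
    (x, y, z) * K**s."""
    r = math.sqrt(x * x + y * y + z * z)
    if r == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    s = math.log(r, K)
    return (x / r, y / r, z / r, s)

def psc_to_cartesian(p):
    """Recover ordinary Cartesian coordinates from PSC."""
    x, y, z, s = p
    f = K ** s
    return (x * f, y * f, z * f)

# One light year in metres survives the round trip without overflow;
# its scale exponent is just under 16.
ly = 9.46e15
p = psc(ly, 0.0, 0.0)
```

Rendering code can then compare and interpolate the exponents directly instead of the astronomically large raw magnitudes.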

  12. Direct virtual photon production in Au+Au collisions at √sNN = 200 GeV

    DOE PAGES

    Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; ...

    2017-04-27

Here we report the direct virtual photon invariant yields in the transverse momentum ranges 1 < pT < 3 GeV/c and 5 < pT < 10 GeV/c, for dielectron invariant mass Mee < 0.28 GeV/c², in 0-80% minimum-bias Au+Au collisions at √sNN = 200 GeV. A clear excess in the invariant yield compared to the p+p reference scaled by the nuclear overlap function TAA is observed in the pT range 1 < pT < 3 GeV/c. For pT > 6 GeV/c the production follows TAA scaling. In conclusion, model calculations with contributions from thermal radiation and initial hard parton scattering are consistent within uncertainties with the direct virtual photon invariant yield.

  13. Fractal multi-level organisation of human groups in a virtual world.

    PubMed

    Fuchs, Benedikt; Sornette, Didier; Thurner, Stefan

    2014-10-06

Humans are fundamentally social. They form societies which consist of hierarchically layered nested groups of various quality, size, and structure. The anthropologic literature has classified these groups as support cliques, sympathy groups, bands, cognitive groups, tribes, linguistic groups, and so on. Anthropologic data show that, on average, each group consists of approximately three subgroups. However, a general understanding of the structural dependence of groups at different layers is largely missing. We extend these early findings to a very large, high-precision, internet-based social network dataset. We analyse the organisational structure of a complete, multi-relational, large social multiplex network of a human society consisting of some 400,000 players of an open-ended massive multiplayer online game, for which we know all of their various group memberships at different layers. Remarkably, the online players' society exhibits the same type of structured hierarchical layers as found in hunter-gatherer societies. Our findings suggest that the hierarchical organisation of human society is deeply nested in human psychology.
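The "approximately three subgroups per group" finding corresponds to ratios of successive layer sizes clustering near 3. A trivial sketch of that statistic; the layer sizes below are idealised round numbers in the spirit of support clique → sympathy group → band → cognitive group, not the paper's data:

```python
def scaling_ratios(layer_sizes):
    """Ratio of each hierarchical layer's typical group size to the
    layer below it; values near 3 reproduce the anthropological
    finding of roughly three subgroups per group."""
    return [b / a for a, b in zip(layer_sizes, layer_sizes[1:])]

# Idealised layer sizes with an exact scaling ratio of 3.
ratios = scaling_ratios([5, 15, 45, 135])
```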

  14. Fractal multi-level organisation of human groups in a virtual world

    PubMed Central

    Fuchs, Benedikt; Sornette, Didier; Thurner, Stefan

    2014-01-01

Humans are fundamentally social. They form societies which consist of hierarchically layered nested groups of various quality, size, and structure. The anthropologic literature has classified these groups as support cliques, sympathy groups, bands, cognitive groups, tribes, linguistic groups, and so on. Anthropologic data show that, on average, each group consists of approximately three subgroups. However, a general understanding of the structural dependence of groups at different layers is largely missing. We extend these early findings to a very large, high-precision, internet-based social network dataset. We analyse the organisational structure of a complete, multi-relational, large social multiplex network of a human society consisting of some 400,000 players of an open-ended massive multiplayer online game, for which we know all of their various group memberships at different layers. Remarkably, the online players' society exhibits the same type of structured hierarchical layers as found in hunter-gatherer societies. Our findings suggest that the hierarchical organisation of human society is deeply nested in human psychology. PMID:25283998

  15. Fractal multi-level organisation of human groups in a virtual world

    NASA Astrophysics Data System (ADS)

    Fuchs, Benedikt; Sornette, Didier; Thurner, Stefan

    2014-10-01

Humans are fundamentally social. They form societies which consist of hierarchically layered nested groups of various quality, size, and structure. The anthropologic literature has classified these groups as support cliques, sympathy groups, bands, cognitive groups, tribes, linguistic groups, and so on. Anthropologic data show that, on average, each group consists of approximately three subgroups. However, a general understanding of the structural dependence of groups at different layers is largely missing. We extend these early findings to a very large, high-precision, internet-based social network dataset. We analyse the organisational structure of a complete, multi-relational, large social multiplex network of a human society consisting of some 400,000 players of an open-ended massive multiplayer online game, for which we know all of their various group memberships at different layers. Remarkably, the online players' society exhibits the same type of structured hierarchical layers as found in hunter-gatherer societies. Our findings suggest that the hierarchical organisation of human society is deeply nested in human psychology.

  16. Application of Virtual and Augmented reality to geoscientific teaching and research.

    NASA Astrophysics Data System (ADS)

    Hodgetts, David

    2017-04-01

The geological sciences are an ideal candidate for the application of Virtual Reality (VR) and Augmented Reality (AR). Digital data collection techniques such as laser scanning, digital photogrammetry and the increasing use of Unmanned Aerial Vehicle (UAV) or Small Unmanned Aircraft (SUA) technology allow us to collect large datasets efficiently and ever more affordably. This, linked with the recent resurgence of VR and AR technologies, makes these 3D digital datasets even more valuable. These advances in VR and AR have been further supported by rapid improvements in graphics card technologies, and by the development of high performance software applications to support them. Visualising data in VR is more complex than normal 3D rendering: consideration needs to be given to latency, frame rate and the comfort of the viewer to enable reasonably long immersion times. Each frame has to be rendered from two viewpoints (one for each eye), requiring twice the rendering of normal monoscopic views. Any unnatural effects (e.g. incorrect lighting) can lead to an uncomfortable VR experience, so these have to be minimised. With large digital outcrop datasets comprising tens to hundreds of millions of triangles this is challenging but achievable. Apart from the obvious "wow factor" of VR, there are some serious applications. It is often the case that users of digital outcrop data do not appreciate the size of the features they are dealing with. This is not the case when using correctly scaled VR, and a true sense of scale can be achieved. In addition, VR provides an excellent way of performing quality control on 3D models and interpretations, as errors are much more easily visible. VR models can then be used to create content for AR applications, closing the loop and taking interpretations back into the field.

  17. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneity and non-linearity of these processes make multiscale conceptualizations difficult to develop, so an understanding of scaling is a key issue in advancing the science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations in discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and averaging of the flow at the point scale. Results showed numerical stability issues under particular conditions, revealed the complex, non-linear relationships between the models' parameters at both scales, and indicated that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
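
As an illustration of the point-scale model mentioned above: the Green-Ampt relation for cumulative infiltration F under ponded conditions, F = Kt + ψΔθ·ln(1 + F/(ψΔθ)), is implicit in F and is typically solved iteratively. The sketch below uses fixed-point iteration with illustrative parameter values; it is a standard textbook formulation, not the paper's code.

```python
import math

def green_ampt_F(t, K, psi, dtheta, tol=1e-9):
    """Cumulative infiltration F(t) [cm] from the implicit Green-Ampt
    relation F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)), solved by
    fixed-point iteration. K: saturated conductivity [cm/h], psi:
    wetting-front suction head [cm], dtheta: moisture deficit [-]."""
    s = psi * dtheta                   # suction-moisture term [cm]
    F = K * t if K * t > 0 else tol    # initial guess
    for _ in range(200):
        F_new = K * t + s * math.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

def infiltration_rate(F, K, psi, dtheta):
    """Green-Ampt infiltration rate f = K*(1 + psi*dtheta/F) [cm/h]."""
    return K * (1.0 + psi * dtheta / F)
```

Averaging such point-scale solutions over a distribution of K and ψ values is the kind of upscaling exercise the virtual experiment performs.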

  18. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator

    PubMed Central

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane

    2018-01-01

    Introduction: Total laparoscopic hysterectomy (LH) requires an advanced level of operative skill and training. The aim of this study was to develop an objective scale specific to the assessment of technical skills for LH (H-OSATS) and to demonstrate its feasibility of use and validity in a virtual reality setting. Material and methods: The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on the relevant steps that should be included in the H-OSATS scale for assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators of different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. Results: A total of 76 discrete steps were identified by the hierarchical task analysis. 14 experts completed the two rounds of the Delphi questionnaire; 64 steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups for total H-OSATS scores (133, 155.9 and 178.25 respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test-retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). Conclusion: The H-OSATS scale displayed evidence of validity for the assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room. PMID:29293635

  19. Effectiveness of the Virtual Reality System Toyra on Upper Limb Function in People with Tetraplegia: A Pilot Randomized Clinical Trial.

    PubMed

    Dimbwadyo-Terrer, I; Gil-Agudo, A; Segura-Fragoso, A; de los Reyes-Guzmán, A; Trincado-Alonso, F; Piazza, S; Polonio-López, B

    2016-01-01

    The aim of this study was to investigate the effects of a virtual reality program combined with conventional therapy in upper limb function in people with tetraplegia and to provide data about patients' satisfaction with the virtual reality system. Thirty-one people with subacute complete cervical tetraplegia participated in the study. Experimental group received 15 sessions with Toyra® virtual reality system for 5 weeks, 30 minutes/day, 3 days/week in addition to conventional therapy, while control group only received conventional therapy. All patients were assessed at baseline, after intervention, and at three-month follow-up with a battery of clinical, functional, and satisfaction scales. Control group showed significant improvements in the manual muscle test (p = 0.043, partial η² = 0.22) in the follow-up evaluation. Both groups demonstrated clinical, but nonsignificant, changes to their arm function in 4 of the 5 scales used. All patients showed a high level of satisfaction with the virtual reality system. This study showed that virtual reality added to conventional therapy produces similar results in upper limb function compared to only conventional therapy. Moreover, the gaming aspects incorporated in conventional rehabilitation appear to produce high motivation during execution of the assigned tasks. This trial is registered with EudraCT number 2015-002157-35.

  20. Effectiveness of the Virtual Reality System Toyra on Upper Limb Function in People with Tetraplegia: A Pilot Randomized Clinical Trial

    PubMed Central

    Dimbwadyo-Terrer, I.; Gil-Agudo, A.; Segura-Fragoso, A.; de los Reyes-Guzmán, A.; Trincado-Alonso, F.; Piazza, S.; Polonio-López, B.

    2016-01-01

    The aim of this study was to investigate the effects of a virtual reality program combined with conventional therapy in upper limb function in people with tetraplegia and to provide data about patients' satisfaction with the virtual reality system. Thirty-one people with subacute complete cervical tetraplegia participated in the study. Experimental group received 15 sessions with Toyra® virtual reality system for 5 weeks, 30 minutes/day, 3 days/week in addition to conventional therapy, while control group only received conventional therapy. All patients were assessed at baseline, after intervention, and at three-month follow-up with a battery of clinical, functional, and satisfaction scales. Control group showed significant improvements in the manual muscle test (p = 0.043, partial η² = 0.22) in the follow-up evaluation. Both groups demonstrated clinical, but nonsignificant, changes to their arm function in 4 of the 5 scales used. All patients showed a high level of satisfaction with the virtual reality system. This study showed that virtual reality added to conventional therapy produces similar results in upper limb function compared to only conventional therapy. Moreover, the gaming aspects incorporated in conventional rehabilitation appear to produce high motivation during execution of the assigned tasks. This trial is registered with EudraCT number 2015-002157-35. PMID:26885511

  1. Artificial plasma cusp generated by upper hybrid instabilities in HF heating experiments at HAARP

    NASA Astrophysics Data System (ADS)

    Kuo, Spencer; Snyder, Arnold

    2013-05-01

    High Frequency Active Auroral Research Program digisonde was operated in a fast mode to record ionospheric modifications by the HF heating wave. With the O mode heater of 3.2 MHz turned on for 2 min, significant virtual height spread was observed in the heater off ionograms, acquired beginning the moment the heater turned off. Moreover, there is a noticeable bump in the virtual height spread of the ionogram trace that appears next to the plasma frequency (~ 2.88 MHz) of the upper hybrid resonance layer of the HF heating wave. The enhanced spread and the bump disappear in the subsequent heater off ionograms recorded 1 min later. The height distribution of the ionosphere in the spread situation indicates that both electron density and temperature increases exceed 10% over a large altitude region (> 30 km) from below to above the upper hybrid resonance layer. This "mini cusp" (bump) is similar to the cusp occurring in daytime ionograms at the F1-F2 layer transition, indicating that there is a small ledge in the density profile reminiscent of F1-F2 layer transitions. Two parametric processes exciting upper hybrid waves as the sidebands by the HF heating waves are studied. Field-aligned purely growing mode and lower hybrid wave are the respective decay modes. The excited upper hybrid and lower hybrid waves introduce the anomalous electron heating which results in the ionization enhancement and localized density ledge. The large-scale density irregularities formed in the heat flow, together with the density irregularities formed through the parametric instability, give rise to the enhanced virtual height spread. The results of upper hybrid instability analysis are also applied to explain the descending feature in the development of the artificial ionization layers observed in electron cyclotron harmonic resonance heating experiments.

  2. Changes to zooplankton community structure following colonization of a small lake by Leptodora kindti

    USGS Publications Warehouse

    McNaught, A.S.; Kiesling, R.L.; Ghadouani, A.

    2004-01-01

    The predaceous cladoceran Leptodora kindti (Focke) became established in Third Sister Lake, Michigan, after individuals escaped from experimental enclosures in 1987. By 1988, the Leptodora population exhibited seasonal dynamics characteristic of natural populations. The maximum seasonal abundance of Leptodora increased to 85 individuals m⁻³ three years following the introduction. After the appearance of Leptodora, small-bodied cladocerans (Ceriodaphnia and Bosmina) virtually disappeared from the lake. There were strong seasonal shifts in the dominance patterns of both cladocerans and copepods, and Daphnia species diversity increased. Results from this unplanned introduction suggest that invertebrate predators can have a rapid and lasting effect on prey populations, even in the presence of planktivorous fish. Small-scale (<20 km) geographic barriers might be as important as large-scale barriers to dispersal of planktonic animals.

  3. Concurrent access to a virtual microscope using a web service oriented architecture

    NASA Astrophysics Data System (ADS)

    Corredor, Germán.; Iregui, Marcela; Arias, Viviana; Romero, Eduardo

    2013-11-01

    Virtual microscopy (VM) facilitates the visualization and deployment of histopathological virtual slides (VS), a useful tool for education, research and diagnosis. In recent years it has become popular, yet its use is still limited, largely because of the very large sizes of VS, typically on the order of gigabytes. Such a volume of data requires efficacious and efficient strategies to access the VS content. In an educational or research scenario, several users may need to access and interact with a VS at the same time, so, due to the large data size, a very expensive and powerful infrastructure is usually required. This article introduces a novel JPEG2000-based service-oriented architecture for streaming and visualizing very large images under scalable strategies, which in addition does not require highly specialized infrastructure. Results suggest that the proposed architecture enables transmission and simultaneous visualization of large images while using resources efficiently and offering users proper response times.
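
The scalable access such an architecture relies on can be illustrated with JPEG2000's resolution pyramid: a client requests only the resolution level (and spatial region) that matches its viewport, rather than the whole gigabyte-scale slide. A minimal sketch of the level-selection arithmetic follows; the function and its parameters are hypothetical illustrations, not the paper's API.

```python
import math

def resolution_level(full_px, viewport_px, max_levels):
    """Pick the JPEG2000 resolution level to request: level 0 is full
    resolution and each level halves the image width, so choose the
    deepest level whose image still covers the viewport (sketch)."""
    if viewport_px >= full_px:
        return 0
    level = int(math.floor(math.log2(full_px / viewport_px)))
    return min(level, max_levels - 1)

# A 100,000-px-wide slide shown in a 1,024-px viewport needs only level 6:
# 100000 / 2**6 ~= 1562 px, the coarsest version still covering the view.
```

Serving a few kilobytes per request this way, instead of the full image, is what makes concurrent multi-user access tractable without exotic hardware.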

  4. Development of an Environmental Virtual Field Laboratory

    ERIC Educational Resources Information Center

    Ramasundaram, V.; Grunwald, S.; Mangeot, A.; Comerford, N. B.; Bliss, C. M.

    2005-01-01

    Laboratory exercises, field observations and field trips are a fundamental part of many earth science and environmental science courses. Field observations and field trips can be constrained because of distance, time, expense, scale, safety, or complexity of real-world environments. Our objectives were to develop an environmental virtual field…

  5. Virtual Teaching Dispositions in an Open Distance Learning Environment: Origins, Measurement and Impact

    ERIC Educational Resources Information Center

    Martins, Nico; Ungerer, Leona M.

    2017-01-01

    An understanding of the key characteristics and implicit competencies underlying online teaching is essential to distance education institutions that embark on the assertive use of technology in their tuition development and delivery. The Virtual Teaching Dispositions Scale (VTDS) assists in investigating professional teaching dispositions…

  6. The Benefits and Barriers of Using Virtual Worlds to Engage Healthcare Professionals on Distance Learning Programmes

    ERIC Educational Resources Information Center

    Hack, Catherine Jane

    2016-01-01

    Using the delivery of a large postgraduate distance learning module in bioethics to health professionals as an illustrative example, the type of learning activity that could be enhanced through delivery in an immersive virtual world (IVW) was explored. Several activities were repurposed from the "traditional" virtual learning environment…

  7. The Blended Classroom Revolution: Virtual Technology Goes to School

    ERIC Educational Resources Information Center

    Weil, Marty

    2009-01-01

    While virtual schools, which currently serve only a tiny fraction of the nation's 48 million K-12 students, get all the buzz, a much bigger, largely untold story of online learning is unfolding in America's brick-and-mortar classrooms: a simple yet profound merger of virtual-school technology and the traditional classroom is taking place. This…

  8. Characterizing Student Navigation in Educational Multiuser Virtual Environments: A Case Study Using Data from the River City Project

    ERIC Educational Resources Information Center

    Dukas, Georg

    2009-01-01

    Though research in emerging technologies is vital to fulfilling their incredible potential for educational applications, it is often fraught with analytic challenges related to large datasets. This thesis explores these challenges in researching multiuser virtual environments (MUVEs). In a MUVE, users assume a persona and traverse a virtual space…

  9. Developing standards for the development of glaucoma virtual clinics using a modified Delphi approach.

    PubMed

    Kotecha, Aachal; Longstaff, Simon; Azuara-Blanco, Augusto; Kirwan, James F; Morgan, James Edwards; Spencer, Anne Fiona; Foster, Paul J

    2018-04-01

    To obtain consensus opinion for the development of a standards framework for the development and implementation of virtual clinics for glaucoma monitoring in the UK using a modified Delphi methodology. A modified Delphi technique was used that involved sampling members of the UK Glaucoma and Eire Society (UKEGS). The first round scored the strength of agreement to a series of standards statements using a 9-point Likert scale. The revised standards were subjected to a second round of scoring and free-text comment. The final standards were discussed and agreed by an expert panel consisting of seven glaucoma subspecialists from across the UK. A version of the standards was submitted to external stakeholders for a 3-month consultation. There was a 44% response rate of UKEGS members to rounds 1 and 2, consisting largely of consultant ophthalmologists with a specialist interest in glaucoma. The final version of the standards document was validated by stakeholder consultation and contains four sections pertaining to the patient groups, testing methods, staffing requirements and governance structure of NHS secondary care glaucoma virtual clinic models. Use of a modified Delphi approach has provided consensus agreement for the standards required for the development of virtual clinics to monitor glaucoma in the UK. It is anticipated that this document will be useful as a guide for those implementing this model of service delivery. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Jacob; Edgar, Thomas W.; Daily, Jeffrey A.

    With an ever-evolving power grid, concerns regarding how to maintain system stability, efficiency, and reliability remain constant because of increasing uncertainties and decreasing rotating inertia. To alleviate some of these concerns, demand response represents a viable solution and is a virtually untapped resource in the current power grid. This work describes a hierarchical control framework that allows coordination between distributed energy resources and demand response. This control framework is composed of two control layers: a coordination layer that ensures aggregations of resources are coordinated to achieve system objectives, and a device layer that controls individual resources to assure the predetermined power profile is tracked in real time. Large-scale simulations are executed to study the hierarchical control, requiring advancements in simulation capabilities. Technical advancements necessary to investigate and answer control interaction questions, including the Framework for Network Co-Simulation (FNCS) platform and the Arion modeling capability, are detailed. Insights into the interdependencies of controls across a complex system and how they must be tuned, as well as validation of the effectiveness of the proposed control framework, are yielded using a large-scale integrated transmission system model coupled with multiple distribution systems.
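
The two-layer idea can be sketched with a toy model: a coordination layer splits an aggregate power target across resources in proportion to their flexible capacity, and a device layer tracks its assigned setpoint subject to local limits. This is an illustrative sketch only; the FNCS/Arion co-simulation described in the record is far more elaborate, and all numbers and field names here are hypothetical.

```python
def coordinate(target_kw, devices):
    """Coordination layer: split an aggregate power target across
    devices in proportion to their flexible capacity (toy model)."""
    total = sum(d["capacity_kw"] for d in devices)
    return [target_kw * d["capacity_kw"] / total for d in devices]

def track(setpoint_kw, device):
    """Device layer: track the assigned setpoint, clamped to the
    device's local operating limits."""
    return max(device["min_kw"], min(device["max_kw"], setpoint_kw))

fleet = [
    {"capacity_kw": 5.0,  "min_kw": 0.0, "max_kw": 4.0},
    {"capacity_kw": 15.0, "min_kw": 0.0, "max_kw": 20.0},
]
setpoints = coordinate(12.0, fleet)                     # [3.0, 9.0]
actual = [track(s, d) for s, d in zip(setpoints, fleet)]
```

In practice the coordination layer would re-dispatch when device-layer clamping leaves a tracking error, which is exactly the kind of cross-layer interaction the large-scale simulations are built to study.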

  11. Characterization and prediction of extreme events in turbulence

    NASA Astrophysics Data System (ADS)

    Fonda, Enrico; Iyer, Kartik P.; Sreenivasan, Katepalli R.

    2017-11-01

    Extreme events in Nature such as tornadoes, large floods and strong earthquakes are rare but can have devastating consequences, and their predictability is very limited at present. Extreme events in turbulence are the very large events at small scales that are intermittent in character. We examine events in the energy dissipation rate and enstrophy that are several tens to thousands of times the mean value. To this end we use our DNS database of homogeneous and isotropic turbulence, with Taylor Reynolds numbers spanning a decade, computed with different small-scale resolutions and different box sizes, and study the predictability of these events using machine learning. We start with aggressive data augmentation to virtually increase the number of these rare events by two orders of magnitude, and train a deep convolutional neural network to predict their occurrence in an independent data set. The goal of the work is to explore whether extreme events can be predicted with greater assurance than by conventional methods (e.g., D.A. Donzis & K.R. Sreenivasan, J. Fluid Mech. 647, 13-26, 2010).

  12. Properties of on-line social systems

    NASA Astrophysics Data System (ADS)

    Grabowski, A.; Kruszewska, N.; Kosiński, R. A.

    2008-11-01

    We study the properties of five different social systems: (i) an internet society of friends consisting of over 10⁶ people; (ii) a social network of 3 × 10⁴ individuals who interact in the large virtual world of a Massively Multiplayer Online Role-Playing Game (MMORPG); (iii) over 10⁶ users of a music community website; (iv) over 5 × 10⁶ users of a gamers' community server; and (v) over 0.25 × 10⁶ users of a book admirers' website. Individuals included in a large social network form an Internet community and organize themselves in groups of different sizes. The purposes of these systems, as well as the methods of creating new connections, differ, yet we found that the properties of these networks are very similar. We found that the network component size distribution follows a power-law scaling form. In all five systems we found interesting scaling laws concerning human dynamics. Our research shows how long people remain interested in a single task, how much time they devote to it and how fast they make friends. It is surprising that the time evolution of an individual's connectivity is very similar in each system.
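
Power-law scaling of the kind reported above is commonly checked with a maximum-likelihood fit of the tail exponent. The continuous-tail (Hill) estimator sketched below is a standard illustration of that procedure, not the authors' method; the synthetic "group sizes" are generated by inverse-transform sampling.

```python
import math
import random

def powerlaw_alpha(xs, xmin=1.0):
    """Maximum-likelihood (Hill) estimate of alpha for a continuous
    power-law tail p(x) ~ x^(-alpha), x >= xmin:
    alpha = 1 + n / sum(ln(x_i / xmin))."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic group sizes drawn from p(x) ~ x^(-2.5) by inverse transform:
# if u ~ U(0,1), then x = (1 - u)^(-1/(alpha - 1)) with alpha = 2.5.
random.seed(0)
sizes = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(20000)]
alpha_hat = powerlaw_alpha(sizes)   # close to the true exponent 2.5
```

Real component-size data are discrete and noisy, so in practice the fit is paired with a goodness-of-fit test and a data-driven choice of xmin.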

  13. The GENIUS Grid portal and robot certificates to perform phylogenetic analysis on a large scale: a success story from the International LIBI project

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; Rocca, Giuseppe La; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio

    This paper describes the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community; they were recently introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, and so on. In this paper, robot certificates have been used to allow bioinformaticians involved in the Italian LIBI project to perform large-scale phylogenetic analyses. The distributed environment set up in this work greatly simplifies grid access for occasional users and represents a valuable step towards widening the community of users.

  14. Light-focusing human micro-lenses generated from pluripotent stem cells model lens development and drug-induced cataract in vitro

    PubMed Central

    Murphy, Patricia; Kabir, Md Humayun; Srivastava, Tarini; Mason, Michele E.; Dewi, Chitra U.; Lim, Seakcheng; Yang, Andrian; Djordjevic, Djordje; Killingsworth, Murray C.; Ho, Joshua W. K.; Harman, David G.

    2018-01-01

    ABSTRACT Cataracts cause vision loss and blindness by impairing the ability of the ocular lens to focus light onto the retina. Various cataract risk factors have been identified, including drug treatments, age, smoking and diabetes. However, the molecular events responsible for these different forms of cataract are ill-defined, and the advent of modern cataract surgery in the 1960s virtually eliminated access to human lenses for research. Here, we demonstrate large-scale production of light-focusing human micro-lenses from spheroidal masses of human lens epithelial cells purified from differentiating pluripotent stem cells. The purified lens cells and micro-lenses display similar morphology, cellular arrangement, mRNA expression and protein expression to human lens cells and lenses. Exposing the micro-lenses to the emergent cystic fibrosis drug Vx-770 reduces micro-lens transparency and focusing ability. These human micro-lenses provide a powerful and large-scale platform for defining molecular disease mechanisms caused by cataract risk factors, for anti-cataract drug screening and for clinically relevant toxicity assays. PMID:29217756

  15. Light-focusing human micro-lenses generated from pluripotent stem cells model lens development and drug-induced cataract in vitro.

    PubMed

    Murphy, Patricia; Kabir, Md Humayun; Srivastava, Tarini; Mason, Michele E; Dewi, Chitra U; Lim, Seakcheng; Yang, Andrian; Djordjevic, Djordje; Killingsworth, Murray C; Ho, Joshua W K; Harman, David G; O'Connor, Michael D

    2018-01-09

    Cataracts cause vision loss and blindness by impairing the ability of the ocular lens to focus light onto the retina. Various cataract risk factors have been identified, including drug treatments, age, smoking and diabetes. However, the molecular events responsible for these different forms of cataract are ill-defined, and the advent of modern cataract surgery in the 1960s virtually eliminated access to human lenses for research. Here, we demonstrate large-scale production of light-focusing human micro-lenses from spheroidal masses of human lens epithelial cells purified from differentiating pluripotent stem cells. The purified lens cells and micro-lenses display similar morphology, cellular arrangement, mRNA expression and protein expression to human lens cells and lenses. Exposing the micro-lenses to the emergent cystic fibrosis drug Vx-770 reduces micro-lens transparency and focusing ability. These human micro-lenses provide a powerful and large-scale platform for defining molecular disease mechanisms caused by cataract risk factors, for anti-cataract drug screening and for clinically relevant toxicity assays. © 2018. Published by The Company of Biologists Ltd.

  16. Simultaneous cellular-resolution optical perturbation and imaging of place cell firing fields

    PubMed Central

    Rickgauer, John Peter; Deisseroth, Karl; Tank, David W.

    2015-01-01

    Linking neural microcircuit function to emergent properties of the mammalian brain requires fine-scale manipulation and measurement of neural activity during behavior, where each neuron’s coding and dynamics can be characterized. We developed an optical method for simultaneous cellular-resolution stimulation and large-scale recording of neuronal activity in behaving mice. Dual-wavelength two-photon excitation allowed largely independent functional imaging with a green fluorescent calcium sensor (GCaMP3, λ = 920 ± 6 nm) and single-neuron photostimulation with a red-shifted optogenetic probe (C1V1, λ = 1,064 ± 6 nm) in neurons coexpressing the two proteins. We manipulated task-modulated activity in individual hippocampal CA1 place cells during spatial navigation in a virtual reality environment, mimicking natural place-field activity, or ‘biasing’, to reveal subthreshold dynamics. Notably, manipulating single place-cell activity also affected activity in small groups of other place cells that were active around the same time in the task, suggesting a functional role for local place cell interactions in shaping firing fields. PMID:25402854

  17. Computational biomedicine: a challenge for the twenty-first century.

    PubMed

    Coveney, Peter V; Shublaq, Nour W

    2012-01-01

    With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge for computational science insofar as we need to provide seamless yet secure access to large-scale heterogeneous personal healthcare data in a facile way, typically integrated into complex workflows (some parts of which may need to be run on high-performance computers) within clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high-performance networks.

  18. Archive interoperability in the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Genova, Françoise

    2003-02-01

    The main goals of Virtual Observatory projects are to build interoperability between astronomical on-line services, observatory archives, databases and results published in journals, and to develop tools permitting the best scientific usage of the very large data sets stored in observatory archives and produced by large surveys. The different Virtual Observatory projects collaborate to define common exchange standards, which are the key to a truly international Virtual Observatory: for instance, their first common milestone was a standard allowing the exchange of tabular data, called VOTable. The Interoperability Work Area of the European Astrophysical Virtual Observatory project aims at networking European archives, by building a prototype using the CDS VizieR and Aladin tools, and at defining basic rules to help archive providers implement interoperability. The prototype is accessible for scientific usage, to get user feedback (and science results!) at an early stage of the project. The ISO archive participates very actively in this endeavour, and more generally in information networking. The ongoing inclusion of the ISO log in SIMBAD will allow higher-level links for users.

  19. The Landscape Evolution Observatory: a large-scale controllable infrastructure to study coupled Earth-surface processes

    USGS Publications Warehouse

    Pangle, Luke A.; DeLong, Stephen B.; Abramson, Nate; Adams, John; Barron-Gafford, Greg A.; Breshears, David D.; Brooks, Paul D.; Chorover, Jon; Dietrich, William E.; Dontsova, Katerina; Durcik, Matej; Espeleta, Javier; Ferré, T.P.A.; Ferriere, Regis; Henderson, Whitney; Hunt, Edward A.; Huxman, Travis E.; Millar, David; Murphy, Brendan; Niu, Guo-Yue; Pavao-Zuckerman, Mitch; Pelletier, Jon D.; Rasmussen, Craig; Ruiz, Joaquin; Saleska, Scott; Schaap, Marcel; Sibayan, Michael; Troch, Peter A.; Tuller, Markus; van Haren, Joost; Zeng, Xubin

    2015-01-01

    Zero-order drainage basins, and their constituent hillslopes, are the fundamental geomorphic unit comprising much of Earth's uplands. The convergent topography of these landscapes generates spatially variable substrate and moisture content, facilitating biological diversity and influencing how the landscape filters precipitation and sequesters atmospheric carbon dioxide. In light of these significant ecosystem services, refining our understanding of how these functions are affected by landscape evolution, weather variability, and long-term climate change is imperative. In this paper we introduce the Landscape Evolution Observatory (LEO): a large-scale controllable infrastructure consisting of three replicated artificial landscapes (each 330 m² surface area) within the climate-controlled Biosphere 2 facility in Arizona, USA. At LEO, experimental manipulation of rainfall, air temperature, relative humidity, and wind speed are possible at unprecedented scale. The Landscape Evolution Observatory was designed as a community resource to advance understanding of how topography, physical and chemical properties of soil, and biological communities coevolve, and how this coevolution affects water, carbon, and energy cycles at multiple spatial scales. With well-defined boundary conditions and an extensive network of sensors and samplers, LEO enables an iterative scientific approach that includes numerical model development and virtual experimentation, physical experimentation, data analysis, and model refinement. We plan to engage the broader scientific community through public dissemination of data from LEO, collaborative experimental design, and community-based model development.

  20. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Wes

    2016-07-24

    The primary challenge motivating this team's work is the widening gap between the ability to compute information and the ability to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to analyze only a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing as possible on data while it is still resident in memory, an approach known as in situ processing. The idea of in situ processing was not new at the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community aimed at fostering production-quality software tools suitable for use by DOE science projects. Broadly, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers from DOE national laboratories, academia, and industry, engaged in software technology R&D, and partnered closely with DOE science code teams to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.
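
    The in situ pattern described above, analysis routines invoked on memory-resident data during the simulation rather than after writing it to disk, can be illustrated with a minimal sketch. All class and function names here are hypothetical, not taken from any real in situ framework.

```python
# Minimal sketch of the in situ analysis pattern: analysis callbacks run on
# in-memory simulation state each step, so the raw data never hits disk.
# All names are illustrative, not from any real in situ framework.

class InSituSimulation:
    def __init__(self, steps):
        self.steps = steps
        self.analyzers = []          # callbacks invoked while data is resident

    def register_analyzer(self, fn):
        self.analyzers.append(fn)

    def run(self):
        state = [0.0] * 8            # stand-in for a large simulation array
        results = []
        for step in range(self.steps):
            state = [x + step for x in state]      # "compute" phase
            for analyze in self.analyzers:          # in situ "analyze" phase
                results.append(analyze(step, state))
        return results

# Reduce each step's state to one scalar instead of storing the full array.
sim = InSituSimulation(steps=3)
sim.register_analyzer(lambda step, s: (step, sum(s) / len(s)))
summaries = sim.run()   # [(0, 0.0), (1, 1.0), (2, 3.0)]
```

    Only the small per-step summaries survive, which is exactly the storage-gap trade-off the abstract describes.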

  1. A Computational Systems Biology Software Platform for Multiscale Modeling and Simulation: Integrating Whole-Body Physiology, Disease Biology, and Molecular Reaction Networks

    PubMed Central

    Eissing, Thomas; Kuepfer, Lars; Becker, Corina; Block, Michael; Coboeken, Katrin; Gaub, Thomas; Goerlitz, Linus; Jaeger, Juergen; Loosen, Roland; Ludewig, Bernd; Meyer, Michaela; Niederalt, Christoph; Sevestre, Michael; Siegmund, Hans-Ulrich; Solodenko, Juri; Thelen, Kirstin; Telle, Ulrich; Weiss, Wolfgang; Wendl, Thomas; Willmann, Stefan; Lippert, Joerg

    2011-01-01

    Today, in silico studies and trial simulations already complement experimental approaches in pharmaceutical R&D and have become indispensable tools for decision making and communication with regulatory agencies. While biology is multiscale by nature, project work and software tools usually focus on isolated aspects of drug action, such as pharmacokinetics at the organism scale or pharmacodynamic interaction on the molecular level. We present a modeling and simulation software platform consisting of PK-Sim® and MoBi® capable of building and simulating models that integrate across biological scales. A prototypical multiscale model for the progression of a pancreatic tumor and its response to pharmacotherapy is constructed, and virtual patients are treated with a prodrug activated by hepatic metabolization. Tumor growth is driven by signal transduction leading to cell cycle transition and proliferation. Free tumor concentrations of the active metabolite inhibit Raf kinase in the signaling cascade and thereby cell cycle progression. In a virtual clinical study, the individual therapeutic outcome of the chemotherapeutic intervention is simulated for a large population with heterogeneous genomic background. The platform thereby allows efficient model building and integration of biological knowledge and prior data from all biological scales. Experimental in vitro model systems can be linked with observations in animal experiments and clinical trials. The interplay between patients, diseases, and drugs and topics with high clinical relevance such as the role of pharmacogenomics, drug–drug, or drug–metabolite interactions can be addressed using this mechanistic, insight-driven multiscale modeling approach. PMID:21483730
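
    The prodrug mechanics in this abstract (hepatic activation into a metabolite that is itself eliminated) reduce, at their simplest, to two coupled first-order rate equations. The sketch below is purely illustrative, with invented rate constants and a plain Euler integrator; it is not the PK-Sim®/MoBi® model.

```python
# Illustrative sketch (not the PK-Sim/MoBi model): a prodrug is converted by
# hepatic metabolization into its active metabolite, which is itself
# eliminated. First-order kinetics, integrated with explicit Euler.

def simulate_prodrug(dose, k_met, k_elim, dt=0.01, t_end=10.0):
    """Return a (time, prodrug, metabolite) concentration time course."""
    prodrug, metabolite = dose, 0.0
    course = [(0.0, prodrug, metabolite)]
    t = 0.0
    while t < t_end:
        d_pro = -k_met * prodrug                    # conversion of prodrug
        d_met = k_met * prodrug - k_elim * metabolite  # formation - elimination
        prodrug += d_pro * dt
        metabolite += d_met * dt
        t += dt
        course.append((t, prodrug, metabolite))
    return course

# Hypothetical parameters: the prodrug decays monotonically while the active
# metabolite rises, peaks, and then declines.
course = simulate_prodrug(dose=100.0, k_met=0.5, k_elim=0.2)
```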

  2. Brain activity during a lower limb functional task in a real and virtual environment: A comparative study.

    PubMed

    Pacheco, Thaiana Barbosa Ferreira; Oliveira Rego, Isabelle Ananda; Campos, Tania Fernandes; Cavalcanti, Fabrícia Azevedo da Costa

    2017-01-01

    Virtual Reality (VR) has been contributing to neurological rehabilitation because of its interactive and multisensory nature, providing the potential for brain reorganization. Given the availability of mobile EEG devices, it is possible to investigate how the virtual therapeutic environment influences brain activity. The aim was to compare theta, alpha, beta and gamma power in healthy young adults during a lower limb motor task in a virtual and a real environment. Ten healthy adults underwent an EEG assessment while performing a one-minute task consisting of going up and down a step in a virtual environment (the Nintendo Wii game "Basic Step") and in a real environment. The real environment produced an increase in theta and alpha power, with small to large effect sizes, mainly in the frontal region. VR produced a greater increase in beta and gamma power, with small or negligible effect sizes across a variety of regions for the beta band, and medium to very large effect sizes in the frontal and occipital regions for the gamma band. Theta, alpha, beta and gamma activity during the execution of a motor task therefore differs according to the environment to which the individual is exposed, real or virtual, and the effect sizes vary with the brain area and frequency band considered.
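
    The band powers this study compares can be read off an EEG epoch with a periodogram summed over each band's frequency range. The sketch below uses a direct DFT and conventional band limits; real EEG pipelines typically use Welch's method with proper windowing, so treat this as a bare-bones illustration.

```python
# Minimal sketch of EEG band power: a one-sided periodogram via direct DFT,
# summed over conventional theta/alpha/beta/gamma frequency ranges.
import cmath
import math

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs):
    n = len(signal)
    spectrum = []
    for k in range(n // 2):          # O(n^2) DFT, fine for a short epoch
        coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signal))
        spectrum.append((k * fs / n, abs(coeff) ** 2 / n))
    return {name: sum(p for f, p in spectrum if lo <= f < hi)
            for name, (lo, hi) in BANDS.items()}

# A pure 10 Hz sinusoid should put essentially all of its power in alpha.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
powers = band_powers(sig, fs)
```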

  3. Virtual Congresses

    PubMed Central

    Lecueder, Silvia; Manyari, Dante E.

    2000-01-01

    A new form of scientific medical meeting has emerged in the last few years—the virtual congress. This article describes the general role of computer technologies and the Internet in the development of this new means of scientific communication, by reviewing the history of “cyber sessions” in medical education and the rationale, methods, and initial results of the First Virtual Congress of Cardiology. Instructions on how to participate in this virtual congress, either actively or as an observer, are included. Current advantages and disadvantages of virtual congresses, their impact on the scientific community at large, and future developments and possibilities in this area are discussed. PMID:10641960

  4. A Framework for Implementing Virtual Collaborative Networks - Case Study on Automobile Components Production Industry

    NASA Astrophysics Data System (ADS)

    Parvinnia, Elham; Khayami, Raouf; Ziarati, Koorush

    Virtual collaborative networks are composed of small companies that jointly exploit market opportunities and are thereby able to compete with large companies. Several frameworks have been introduced for implementing this type of collaboration, although none of them has been fully standardized. In this paper we identify aspects that need to be standardized for implementing virtual enterprises. We then propose a framework for implementing virtual collaborative networks and, as a case study, use it to design a virtual collaborative network in the automobile components production industry.

  5. High-numerical-aperture-based virtual point detectors for photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Li, Changhui; Wang, Lihong V.

    2008-07-01

    The focal point of a high-numerical-aperture (NA) ultrasonic transducer can be used as a virtual point detector. This virtual point detector detects omnidirectionally over a wide acceptance angle, and it combines a large active transducer surface with a small effective virtual detector size. Thus its sensitivity is high compared with that of a real point detector, and its aperture effect is small compared with that of a finite-size transducer. We present two kinds of high-NA-based virtual point detectors and their successful application in photoacoustic tomography. They can also be applied in other ultrasound-related fields.

  6. Opportunities for leveraging OS virtualization in high-end supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke

    2010-11-01

    This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.

  7. Home-based virtual reality balance training and conventional balance training in Parkinson's disease: A randomized controlled trial.

    PubMed

    Yang, Wen-Chieh; Wang, Hsing-Kuo; Wu, Ruey-Meei; Lo, Chien-Shun; Lin, Kwan-Hwa

    2016-09-01

    Virtual reality has the advantage of providing rich sensory feedback for training balance function. This study tested whether home-based virtual reality balance training is more effective than conventional home balance training in improving balance, walking, and quality of life in patients with Parkinson's disease (PD). Twenty-three patients with idiopathic PD were recruited and underwent twelve 50-minute training sessions during the 6-week training period. The experimental group (n = 11) was trained with a custom-made virtual reality balance training system, and the control group (n = 12) was trained by a licensed physical therapist. Outcomes were measured at Week 0 (pretest), Week 6 (posttest), and Week 8 (follow-up). The primary outcome was the Berg Balance Scale. The secondary outcomes included the Dynamic Gait Index, timed Up-and-Go test, Parkinson's Disease Questionnaire, and the motor score of the Unified Parkinson's Disease Rating Scale. The experimental and control groups were comparable at pretest. After training, both groups performed better in the Berg Balance Scale, Dynamic Gait Index, timed Up-and-Go test, and Parkinson's Disease Questionnaire at posttest and follow-up than at pretest. However, no significant differences were found between the two groups at posttest and follow-up. This study did not find any difference between the effects of home-based virtual reality balance training and conventional home balance training: the two training options were equally effective in improving balance, walking, and quality of life among community-dwelling patients with PD. Copyright © 2015. Published by Elsevier B.V.

  8. Virtually distortion-free imaging system for large field, high resolution lithography using electrons, ions or other particle beams

    DOEpatents

    Hawryluk, A.M.; Ceglio, N.M.

    1993-01-12

    Virtually distortion free large field high resolution imaging is performed using an imaging system which contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position. Particle beams, including electrons, ions and neutral particles, may be used as well as electromagnetic radiation.

  10. Conceptualizing Military Acceptance of Civilian Control: Ideological Cohesion, Military Responsibilities, and the Military’s Propensity for Subordination in Brazil and Venezuela

    DTIC Science & Technology

    1991-09-01

    that it is virtually impossible for the armed forces even to contemplate opposition to their civilian masters. This would likely result in a...exhibit elements of two or more of the four levels of involvement. Therefore, exact placement on Colton’s scale is a virtual impossibility. However...According to Huntington’s definition of professionalism, virtually all Latin militaries are professional to the extent that they exhibit expertness

  11. Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Lauer, Frank

    This work focuses on tools for investigating algorithm performance at extreme scale, with millions of concurrent threads, and for evaluating the impact of future architecture choices, in order to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper provides this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale and its performance properties can be evaluated. The results of an initial prototype are encouraging: a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
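
    The core mechanism behind oversubscribing a small cluster with over a million virtual MPI processes is discrete event simulation: each virtual rank's actions become timestamped events processed in time order. The toy sketch below is a centralized, sequential caricature of that idea (a real PDES engine partitions the event queue across physical processes); the names and the one-message workload are invented for illustration.

```python
import heapq

# Toy sketch of discrete event simulation of virtual MPI ranks: every "send"
# becomes a timestamped event, and the simulator pops events in time order,
# so far more virtual ranks can be modeled than physical cores exist.

def simulate(num_ranks, latency=1.0):
    # Each virtual rank sends one message to its ring neighbor at t = 0.
    events = [(0.0, rank, (rank + 1) % num_ranks) for rank in range(num_ranks)]
    heapq.heapify(events)
    delivered = 0
    clock = 0.0
    while events:
        t, src, dst = heapq.heappop(events)   # earliest event first
        clock = max(clock, t + latency)       # delivery time of this message
        delivered += 1
    return delivered, clock

# 1024 virtual ranks simulated on one core; all messages land at t = 1.0.
delivered, clock = simulate(num_ranks=1024)
```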

  12. [Psychometric properties of a Multidimensional Scale of Social Expression to assess social skills in the Internet context].

    PubMed

    Carballo, José L; Pérez-Jover, Ma Virtudes; Espada, José P; Orgilés, Mireia; Piqueras, José Antonio

    2012-02-01

    The increase in the use of new technologies in social relationships could be generating a new paradigm in social skills, and valid and reliable instruments to assess these changes are needed. The aim of this study is to analyze the psychometric properties of the Multidimensional Scale of Social Expression (EMES) in the assessment of social skills on the Internet and social networks. A total of 413 college students from the province of Alicante participated in this study. The scale was applied in two contexts: real and Internet/virtual. High internal consistency was shown. The 12-factor structure found for the virtual-context scale is similar to that of the original study, and the scale was shown to be a good predictor of hours of Internet use. In conclusion, the EMES-C is useful for assessing social skills in both the real context and the Internet context.
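
    The internal consistency reported for the EMES is conventionally quantified with Cronbach's alpha, which can be computed from per-item scores in a few lines. The data below are invented for illustration, not the study's.

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance).
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per scale item, all over the same respondents."""
    k = len(items)
    item_vars = sum(pvariance(scores) for scores in items)
    totals = [sum(col) for col in zip(*items)]    # each respondent's total score
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three identical items (hypothetical data) give an alpha of exactly 1.0;
# real scales fall below that as items diverge.
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
alpha = cronbach_alpha(items)
```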

  13. Virtual scarce water in China.

    PubMed

    Feng, Kuishuang; Hubacek, Klaus; Pfister, Stephan; Yu, Yang; Sun, Laixiang

    2014-07-15

    Water footprints and virtual water flows have been promoted as important indicators to characterize human-induced water consumption. However, the environmental impacts associated with water consumption are largely neglected in these analyses. Incorporating water scarcity into accounts of water consumption allows a better understanding of what is causing water scarcity and which regions are suffering from it. In this study, we incorporate water scarcity and ecosystem impacts into multiregional input-output analysis to assess virtual water flows and associated impacts among 30 provinces in China. China, in particular its water-scarce regions, is facing a serious water crisis driven by rapid economic growth. Our findings show that inter-regional flows of virtual water reveal additional insights when water scarcity is taken into account. Consumption in highly developed coastal provinces relies largely on water resources in the water-scarce northern provinces, such as Xinjiang, Hebei, and Inner Mongolia, thus significantly contributing to the water scarcity in these regions. In addition, many highly developed but water-scarce regions, such as Shanghai, Beijing, and Tianjin, are already large net importers of virtual water at the expense of water resource depletion in other water-scarce provinces. Thus, increasingly importing water-intensive goods from other water-scarce regions may just shift the pressure to other regions, while the overall water problems remain. Using the water footprint as a policy tool to alleviate water shortage may only work when water scarcity is taken into account and virtual water flows from water-poor regions are identified.
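
    The mechanics behind this kind of accounting are those of the Leontief input-output model: total output x = (I − A)⁻¹ y for final demand y, with water intensities (and a scarcity weight) applied to the output required. The two-region sketch below uses invented coefficients purely for illustration; the study itself uses a 30-province multiregional table.

```python
# Toy two-region Leontief model for scarcity-weighted virtual water.
# x = (I - A)^(-1) y; water use = intensity * x; scarcity weighting on top.
# All numbers are hypothetical.

def solve_2x2(A, y):
    """Solve (I - A) x = y for a 2x2 technical-coefficient matrix A."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [(d * y[0] - b * y[1]) / det, (-c * y[0] + a * y[1]) / det]

A = [[0.1, 0.2],               # inter-regional input coefficients (invented)
     [0.3, 0.1]]
y = [10.0, 5.0]                # final demand per region
water_intensity = [4.0, 1.0]   # m^3 of water per unit of output
scarcity_weight = [2.0, 1.0]   # region 0 is the water-scarce one

x = solve_2x2(A, y)
virtual_water = [w * xi for w, xi in zip(water_intensity, x)]
scarce_water = [s * v for s, v in zip(scarcity_weight, virtual_water)]
```

    Because region 1's demand pulls intermediate inputs from region 0, part of the scarcity-weighted water burden lands on the water-scarce region even though the consumption happens elsewhere, which is the shifting-of-pressure effect the abstract describes.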

  14. Virtual environment architecture for rapid application development

    NASA Technical Reports Server (NTRS)

    Grinstein, Georges G.; Southard, David A.; Lee, J. P.

    1993-01-01

    We describe the MITRE Virtual Environment Architecture (VEA), a product of nearly two years of investigations and prototypes of virtual environment technology. This paper discusses the requirements for rapid prototyping, and an architecture we are developing to support virtual environment construction. VEA supports rapid application development by providing a variety of pre-built modules that can be reconfigured for each application session. The modules supply interfaces for several types of interactive I/O devices, in addition to large-screen or head-mounted displays.

  15. Optimization of incremental structure from motion combining a random k-d forest and pHash for unordered images in a complex scene

    NASA Astrophysics Data System (ADS)

    Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi

    2018-01-01

    On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of matching is one of the main factors restricting the popularization of the method. To make the whole matching process more efficient, we propose a preprocessing method applied before matching: (1) we first construct a random k-d forest from the scale-invariant feature transform (SIFT) features in the images and combine it with the pHash method to obtain a relatedness value, (2) we then construct a connected weighted graph based on the relatedness values, and (3) we finally obtain a planned sequence for adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings while ensuring that the images remain well distributed. The experimental results show a great reduction in the number of matchings while retaining enough object points, with only a small influence on internal stability, which shows that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
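
    The spanning-tree planning step can be sketched directly: given pairwise relatedness scores between images (standing in for the k-d forest + pHash output), Kruskal's algorithm on the dissimilarity 1 − relatedness yields the minimum spanning tree, so only n − 1 image pairs need full matching instead of all n(n − 1)/2. The scores below are invented for illustration.

```python
# Sketch of the matching-order idea: build a spanning tree over pairwise
# relatedness (Kruskal on 1 - relatedness), so only tree edges are matched.

def spanning_tree(n, relatedness):
    """relatedness: dict {(i, j): score in [0, 1]} with i < j."""
    parent = list(range(n))

    def find(i):                         # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(relatedness, key=lambda e: 1.0 - relatedness[e])
    tree = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                     # edge joins two components: keep it
            parent[ri] = rj
            tree.append((i, j))
    return tree

# 4 images: 3 tree edges are matched instead of all 6 candidate pairs.
scores = {(0, 1): 0.9, (0, 2): 0.2, (0, 3): 0.1,
          (1, 2): 0.8, (1, 3): 0.3, (2, 3): 0.7}
tree = spanning_tree(4, scores)
```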

  16. WLCG Transfers Dashboard: a Unified Monitoring Tool for Heterogeneous Data Transfers

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Kadochnikov, I.; Saiz, P.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large: the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. A further challenge is the heterogeneity of data transfer technologies; currently there are two main alternatives for data transfers on the WLCG: the File Transfer Service and the XRootD protocol. Each LHC VO has its own monitoring system, limited to the scope of that particular VO, so there is a need for a global system providing a complete cross-VO and cross-technology picture of all WLCG data transfers. We present a unified monitoring tool, the WLCG Transfers Dashboard, in which all the VOs and technologies coexist and are monitored together. The scale of the activity and the heterogeneity of the system raise a number of technical challenges: each technology comes with its own monitoring specificities, and some of the VOs use several of these technologies. This paper describes the implementation of the system, with particular focus on the design principles applied to ensure the necessary scalability and performance, and to easily integrate any new technology while preserving functionality that might be specific to that technology.
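
    The unification problem the dashboard solves amounts to mapping per-technology transfer records onto one common schema before cross-VO aggregation. The sketch below illustrates that normalization step; the field names and record shapes are invented, not the actual FTS or XRootD message formats.

```python
# Sketch of normalizing heterogeneous transfer records (hypothetical shapes)
# into one schema so transfers can be aggregated across VOs and technologies.

def normalize(record):
    if record["source"] == "fts":
        return {"vo": record["vo"], "bytes": record["filesize"],
                "ok": record["final_state"] == "Ok"}
    if record["source"] == "xrootd":
        return {"vo": record["vo"], "bytes": record["read_bytes"],
                "ok": record["status"] == 0}
    raise ValueError("unknown technology: " + record["source"])

raw = [
    {"source": "fts", "vo": "atlas", "filesize": 100, "final_state": "Ok"},
    {"source": "xrootd", "vo": "cms", "read_bytes": 250, "status": 0},
]
unified = [normalize(r) for r in raw]
total_ok_bytes = sum(r["bytes"] for r in unified if r["ok"])
```

    Adding a new transfer technology then means adding one branch to the normalizer, leaving the aggregation layer untouched, which mirrors the extensibility goal stated above.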

  17. Direct virtual photon production in Au+Au collisions at √{sNN} = 200 GeV

    NASA Astrophysics Data System (ADS)

    Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; Aggarwal, M. M.; Ahammed, Z.; Ajitanand, N. N.; Alekseev, I.; Anderson, D. M.; Aoyama, R.; Aparin, A.; Arkhipkin, D.; Aschenauer, E. C.; Ashraf, M. U.; Attri, A.; Averichev, G. S.; Bai, X.; Bairathi, V.; Behera, A.; Bellwied, R.; Bhasin, A.; Bhati, A. K.; Bhattarai, P.; Bielcik, J.; Bielcikova, J.; Bland, L. C.; Bordyuzhin, I. G.; Bouchet, J.; Brandenburg, J. D.; Brandin, A. V.; Brown, D.; Bunzarov, I.; Butterworth, J.; Caines, H.; Calderón de la Barca Sánchez, M.; Campbell, J. M.; Cebra, D.; Chakaberia, I.; Chaloupka, P.; Chang, Z.; Chankova-Bunzarova, N.; Chatterjee, A.; Chattopadhyay, S.; Chen, X.; Chen, X.; Chen, J. H.; Cheng, J.; Cherney, M.; Christie, W.; Contin, G.; Crawford, H. J.; Das, S.; De Silva, L. C.; Debbe, R. R.; Dedovich, T. G.; Deng, J.; Derevschikov, A. A.; Didenko, L.; Dilks, C.; Dong, X.; Drachenberg, J. L.; Draper, J. E.; Dunkelberger, L. E.; Dunlop, J. C.; Efimov, L. G.; Elsey, N.; Engelage, J.; Eppley, G.; Esha, R.; Esumi, S.; Evdokimov, O.; Ewigleben, J.; Eyser, O.; Fatemi, R.; Fazio, S.; Federic, P.; Federicova, P.; Fedorisin, J.; Feng, Z.; Filip, P.; Finch, E.; Fisyak, Y.; Flores, C. E.; Fujita, J.; Fulek, L.; Gagliardi, C. A.; Garand, D.; Geurts, F.; Gibson, A.; Girard, M.; Grosnick, D.; Gunarathne, D. S.; Guo, Y.; Gupta, A.; Gupta, S.; Guryn, W.; Hamad, A. I.; Hamed, A.; Harlenderova, A.; Harris, J. W.; He, L.; Heppelmann, S.; Heppelmann, S.; Hirsch, A.; Hoffmann, G. W.; Horvat, S.; Huang, B.; Huang, T.; Huang, H. Z.; Huang, X.; Humanic, T. J.; Huo, P.; Igo, G.; Jacobs, W. W.; Jentsch, A.; Jia, J.; Jiang, K.; Jowzaee, S.; Judd, E. G.; Kabana, S.; Kalinkin, D.; Kang, K.; Kauder, K.; Ke, H. W.; Keane, D.; Kechechyan, A.; Khan, Z.; Kikoła, D. P.; Kisel, I.; Kisiel, A.; Kochenda, L.; Kocmanek, M.; Kollegger, T.; Kosarzewski, L. K.; Kraishan, A. F.; Kravtsov, P.; Krueger, K.; Kulathunga, N.; Kumar, L.; Kvapil, J.; Kwasizur, J. H.; Lacey, R.; Landgraf, J. M.; Landry, K. 
D.; Lauret, J.; Lebedev, A.; Lednicky, R.; Lee, J. H.; Li, W.; Li, X.; Li, C.; Li, Y.; Lidrych, J.; Lin, T.; Lisa, M. A.; Liu, Y.; Liu, F.; Liu, H.; Liu, P.; Ljubicic, T.; Llope, W. J.; Lomnitz, M.; Longacre, R. S.; Luo, S.; Luo, X.; Ma, G. L.; Ma, Y. G.; Ma, L.; Ma, R.; Magdy, N.; Majka, R.; Mallick, D.; Margetis, S.; Markert, C.; Matis, H. S.; Meehan, K.; Mei, J. C.; Miller, Z. W.; Minaev, N. G.; Mioduszewski, S.; Mishra, D.; Mizuno, S.; Mohanty, B.; Mondal, M. M.; Morozov, D. A.; Mustafa, M. K.; Nasim, Md.; Nayak, T. K.; Nelson, J. M.; Nie, M.; Nigmatkulov, G.; Niida, T.; Nogach, L. V.; Nonaka, T.; Nurushev, S. B.; Odyniec, G.; Ogawa, A.; Oh, K.; Okorokov, V. A.; Olvitt, D.; Page, B. S.; Pak, R.; Pandit, Y.; Panebratsev, Y.; Pawlik, B.; Pei, H.; Perkins, C.; Pile, P.; Pluta, J.; Poniatowska, K.; Porter, J.; Posik, M.; Poskanzer, A. M.; Pruthi, N. K.; Przybycien, M.; Putschke, J.; Qiu, H.; Quintero, A.; Ramachandran, S.; Ray, R. L.; Reed, R.; Rehbein, M. J.; Ritter, H. G.; Roberts, J. B.; Rogachevskiy, O. V.; Romero, J. L.; Roth, J. D.; Ruan, L.; Rusnak, J.; Rusnakova, O.; Sahoo, N. R.; Sahu, P. K.; Salur, S.; Sandweiss, J.; Saur, M.; Schambach, J.; Schmah, A. M.; Schmidke, W. B.; Schmitz, N.; Schweid, B. R.; Seger, J.; Sergeeva, M.; Seyboth, P.; Shah, N.; Shahaliev, E.; Shanmuganathan, P. V.; Shao, M.; Sharma, A.; Sharma, M. K.; Shen, W. Q.; Shi, Z.; Shi, S. S.; Shou, Q. Y.; Sichtermann, E. P.; Sikora, R.; Simko, M.; Singha, S.; Skoby, M. J.; Smirnov, N.; Smirnov, D.; Solyst, W.; Song, L.; Sorensen, P.; Spinka, H. M.; Srivastava, B.; Stanislaus, T. D. S.; Strikhanov, M.; Stringfellow, B.; Sugiura, T.; Sumbera, M.; Summa, B.; Sun, Y.; Sun, X. M.; Sun, X.; Surrow, B.; Svirida, D. N.; Tang, A. H.; Tang, Z.; Taranenko, A.; Tarnowsky, T.; Tawfik, A.; Thäder, J.; Thomas, J. H.; Timmins, A. R.; Tlusty, D.; Todoroki, T.; Tokarev, M.; Trentalange, S.; Tribble, R. E.; Tribedy, P.; Tripathy, S. K.; Trzeciak, B. A.; Tsai, O. D.; Ullrich, T.; Underwood, D. 
G.; Upsal, I.; Van Buren, G.; van Nieuwenhuizen, G.; Vasiliev, A. N.; Videbæk, F.; Vokal, S.; Voloshin, S. A.; Vossen, A.; Wang, G.; Wang, Y.; Wang, F.; Wang, Y.; Webb, J. C.; Webb, G.; Wen, L.; Westfall, G. D.; Wieman, H.; Wissink, S. W.; Witt, R.; Wu, Y.; Xiao, Z. G.; Xie, W.; Xie, G.; Xu, J.; Xu, N.; Xu, Q. H.; Xu, Y. F.; Xu, Z.; Yang, Y.; Yang, Q.; Yang, C.; Yang, S.; Ye, Z.; Ye, Z.; Yi, L.; Yip, K.; Yoo, I.-K.; Yu, N.; Zbroszczyk, H.; Zha, W.; Zhang, Z.; Zhang, X. P.; Zhang, J. B.; Zhang, S.; Zhang, J.; Zhang, Y.; Zhang, J.; Zhang, S.; Zhao, J.; Zhong, C.; Zhou, L.; Zhou, C.; Zhu, X.; Zhu, Z.; Zyzak, M.

    2017-07-01

    We report the direct virtual photon invariant yields in the transverse momentum ranges 1 6 GeV / c the production follows TAA scaling. Model calculations with contributions from thermal radiation and initial hard parton scattering are consistent within uncertainties with the direct virtual photon invariant yield.

  18. ViDI: Virtual Diagnostics Interface. Volume 2; Unified File Format and Web Services as Applied to Seamless Data Transfer

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A. (Technical Monitor); Schwartz, Richard J.

    2004-01-01

    The desire to revolutionize the aircraft design cycle from its currently lethargic pace to a fast turn-around operation enabling the optimization of non-traditional configurations is a critical challenge facing the aeronautics industry. In response, a large-scale effort is underway not only to advance the state of the art in wind tunnel testing, computational modeling, and information technology, but to unify these often disparate elements into a cohesive design resource. This paper addresses Seamless Data Transfer, the critical central nervous system that will enable a wide variety of components to work together.

  19. Sleep enhances a spatially mediated generalization of learned values

    PubMed Central

    Tolat, Anisha; Spiers, Hugo J.

    2015-01-01

    Sleep is thought to play an important role in memory consolidation. Here we tested whether sleep alters the subjective value associated with objects located in spatial clusters that were navigated to in a large-scale virtual town. We found that sleep enhances a generalization of the value of high-value objects to the value of locally clustered objects, resulting in an impaired memory for the value of high-valued objects. Our results are consistent with (a) spatial context helping to bind items together in long-term memory and serve as a basis for generalizing across memories and (b) sleep mediating memory effects on salient/reward-related items. PMID:26373834

  20. Large scale GW calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  3. Shader Lamps Virtual Patients: the physical manifestation of virtual patients.

    PubMed

    Rivera-Gutierrez, Diego; Welch, Greg; Lincoln, Peter; Whitton, Mary; Cendan, Juan; Chesnutt, David A; Fuchs, Henry; Lok, Benjamin

    2012-01-01

    We introduce the notion of Shader Lamps Virtual Patients (SLVP): the combination of projector-based Shader Lamps Avatars and interactive virtual humans. This paradigm uses Shader Lamps Avatars technology to give a 3D physical presence to conversational virtual humans, improving their social interactivity and enabling them to share the physical space with the user. The paradigm scales naturally to multiple viewers, allowing for scenarios where an instructor and multiple students are involved in the training. We have developed a physical-virtual patient on which medical students conduct ophthalmic exams in an interactive training experience. In this experience, the trainee practices multiple skills simultaneously, including using a surrogate optical instrument in front of a physical head, conversing with the patient about his fears, observing realistic head motion, and practicing patient safety. We present a prototype system and results from a preliminary formative evaluation of the system.

  4. Dynamic Test Generation for Large Binary Programs

    DTIC Science & Technology

    2009-11-12

    the fuzzing@whitestar.linuxbox.org mailing list, including Jared DeMott, Disco Jonny, and Ari Takanen, for discussions on fuzzing tradeoffs. Martin...as is the case for large applications where exercising all execution paths is virtually hopeless anyway. This point will be further discussed in...consumes trace files generated by iDNA and virtually re-executes the recorded runs. TruScan offers several features that substantially simplify symbolic

  5. Novel Directional Protection Scheme for the FREEDM Smart Grid System

    NASA Astrophysics Data System (ADS)

    Sharma, Nitish

    This research deals primarily with the design and validation of the protection system for a large-scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model used to validate component models for different time-scale platforms and to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases of power system protection, renewable energy integration and storage, and load profiles. Protecting the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and a power-electronic-based solid state transformer adds to the complexity of the protection. The FREEDM loop system has a fault current limiter, and in addition the Solid State Transformer (SST) limits the fault current to 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded that system as incompatible with the long distances involved. Hence, a new protection scheme based on wireless communication is presented in this thesis. Wireless communication is used to protect the large-scale meshed distributed generation from any fault. The trip signal generated by the pilot protection system triggers the FID (fault isolation device), an electronic circuit breaker, switching off/opening the FIDs. The trip signal must also be received and accepted by the SST, which must block SST operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. Validation of the protection system is performed by building a hardware model using commercial relays at the ASU power laboratory.

  6. Fast evaluation of scaled opposite spin second-order Møller-Plesset correlation energies using auxiliary basis expansions and exploiting sparsity.

    PubMed

    Jung, Yousung; Shao, Yihan; Head-Gordon, Martin

    2007-09-01

    The scaled opposite spin Møller-Plesset method (SOS-MP2) is an economical way of obtaining correlation energies that are computationally cheaper, and yet, in a statistical sense, of higher quality than standard MP2 theory, by introducing one empirical parameter. But SOS-MP2 still has a fourth-order scaling step that makes the method inapplicable to very large molecular systems. We reduce the scaling of SOS-MP2 by exploiting the sparsity of expansion coefficients and local integral matrices, by performing local auxiliary basis expansions for the occupied-virtual product distributions. To exploit sparsity of 3-index local quantities, we use a blocking scheme in which entire zero-rows and columns, for a given third global index, are deleted by comparison against a numerical threshold. This approach minimizes sparse matrix book-keeping overhead, and also provides sufficiently large submatrices after blocking, to allow efficient matrix-matrix multiplies. The resulting algorithm is formally cubic scaling, and requires only moderate computational resources (quadratic memory and disk space) and, in favorable cases, is shown to yield effective quadratic scaling behavior in the size regime we can apply it to. Errors associated with local fitting using the attenuated Coulomb metric and numerical thresholds in the blocking procedure are found to be insignificant in terms of the predicted relative energies. A diverse set of test calculations shows that the size of system where significant computational savings can be achieved depends strongly on the dimensionality of the system, and the extent of localizability of the molecular orbitals. Copyright 2007 Wiley Periodicals, Inc.
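    The blocking idea, deleting entire rows and columns that fall below a numerical threshold so that the surviving submatrices stay dense and can use efficient matrix-matrix multiplies, can be sketched in a few lines of NumPy. This is a toy illustration under invented names, not the SOS-MP2 implementation:

```python
import numpy as np

def blocked_matmul(A, B, thresh=1e-10):
    """Dense multiply after deleting numerically zero rows/columns.

    Illustrative sketch of the blocking scheme described above: whole
    rows/columns below a numerical threshold are removed so the surviving
    submatrices stay dense. Not the authors' SOS-MP2 code.
    """
    # A contraction index k can be dropped if either A[:, k] or B[k, :]
    # is entirely below the threshold: its contribution is numerically zero.
    keep = (np.abs(A).max(axis=0) > thresh) & (np.abs(B).max(axis=1) > thresh)
    return A[:, keep] @ B[keep, :]

A = np.zeros((4, 6)); A[:, [1, 3]] = 1.0   # sparse factors with only a few
B = np.zeros((6, 5)); B[[1, 4], :] = 2.0   # non-negligible rows/columns
C = blocked_matmul(A, B)
assert np.allclose(C, A @ B)  # dropped indices contributed nothing
```

    Because only numerically zero rows and columns are removed, the blocked product agrees with the full product to within the chosen threshold.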

  7. Virtual memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Virtual memory was conceived as a way to automate overlaying of program segments. Modern computers have very large main memories, but need automatic solutions to the relocation and protection problems. Virtual memory serves this need as well and is thus useful in computers of all sizes. The history of the idea is traced, showing how it has become a widespread, little noticed feature of computers today.

  8. Prism adaptation in virtual and natural contexts: Evidence for a flexible adaptive process.

    PubMed

    Veilleux, Louis-Nicolas; Proteau, Luc

    2015-01-01

    Prism exposure when aiming at a visual target in a virtual condition (e.g., when the hand is represented by a video representation) produces no or only small adaptations (after-effects), whereas prism exposure in a natural condition produces large after-effects. Some researchers suggested that this difference may arise from distinct adaptive processes, but other studies suggested a unique process. The present study reconciled these conflicting interpretations. Forty participants were divided into two groups: One group used visual feedback of their hand (natural context), and the other group used computer-generated representational feedback (virtual context). Visual feedback during adaptation was concurrent or terminal. All participants underwent laterally displacing prism perturbation. The results showed that the after-effects were twice as large in the "natural context" as in the "virtual context". No significant differences were observed between the concurrent and terminal feedback conditions. The after-effects generalized to untested targets and workspace. These results suggest that prism adaptation in virtual and natural contexts involves the same process. The smaller after-effects in the virtual context suggest that the depth of adaptation is a function of the degree of convergence between the proprioceptive and visual information that arises from the hand.

  9. Low Quality Natural Gas Sulfur Removal and Recovery CNG Claus Sulfur Recovery Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klint, V.W.; Dale, P.R.; Stephenson, C.

    1997-10-01

    Increased use of natural gas (methane) in the domestic energy market will force the development of large non-producing gas reserves now considered to be low quality. Large reserves of low quality natural gas (LQNG) contaminated with hydrogen sulfide (H{sub 2}S), carbon dioxide (CO{sub 2}) and nitrogen (N{sub 2}) are available but not suitable for treatment using current conventional gas treating methods due to economic and environmental constraints. A group of three technologies has been integrated to allow processing of these LQNG reserves: the Controlled Freeze Zone (CFZ) process for hydrocarbon / acid gas separation; the Triple Point Crystallizer (TPC) process for H{sub 2}S / CO{sub 2} separation; and the CNG Claus process for recovery of elemental sulfur from H{sub 2}S. The combined CFZ/TPC/CNG Claus group of processes is one program aimed at developing an alternative gas treating technology which is both economically and environmentally suitable for developing these low quality natural gas reserves. The CFZ/TPC/CNG Claus process is capable of treating low quality natural gas containing >10% CO{sub 2} and measurable levels of H{sub 2}S and N{sub 2} to pipeline specifications. The integrated CFZ / CNG Claus process or the stand-alone CNG Claus process has a number of attractive features for treating LQNG. The processes are capable of treating raw gas with a variety of trace contaminant components. The processes can also accommodate large changes in raw gas composition and flow rates. The combined processes are capable of achieving virtually undetectable levels of H{sub 2}S and significantly less than 2% CO in the product methane. The separation processes operate at pressure and deliver a high pressure (ca. 100 psia) acid gas (H{sub 2}S) stream for processing in the CNG Claus unit. This allows for substantial reductions in plant vessel size as compared to conventional Claus / tail gas treating technologies.
A close integration of the components of the CNG Claus process also allows for use of the methane/H{sub 2}S separation unit as a Claus tail gas treating unit by recycling the CNG Claus tail gas stream. This allows for virtually 100 percent sulfur recovery efficiency (virtually zero SO{sub 2} emissions) by recycling the sulfur-laden tail gas to extinction. The tail gas recycle scheme also deemphasizes the conventional requirement in Claus units for high unit conversion efficiency, and thereby makes the operation much less affected by process upsets and feed gas composition changes. The development of these technologies has been ongoing for many years, and both the CFZ and the TPC processes have been demonstrated at large pilot plant scales. On the other hand, prior to this project, the CNG Claus process had not been proven at any scale. Therefore, the primary objective of this portion of the program was to design, build and operate a pilot scale CNG Claus unit and demonstrate both the required fundamental reaction chemistry and the viability of a reasonably sized working unit.

  10. An Exploration of Desktop Virtual Reality and Visual Processing Skills in a Technical Training Environment

    ERIC Educational Resources Information Center

    Ausburn, Lynna J.; Ausburn, Floyd B.; Kroutter, Paul

    2010-01-01

    Virtual reality (VR) technology has demonstrated effectiveness in a variety of technical learning situations, yet little is known about its differential effects on learners with different levels of visual processing skill. This small-scale exploratory study tested VR through quasi-experimental methodology and a theoretical/conceptual framework…

  11. Medical Students' Attitudes towards the Use of Virtual Patients

    ERIC Educational Resources Information Center

    Sobocan, M.; Klemenc-Ketis, Z.

    2017-01-01

    An increasing number of virtual patients (VPs) are being used in the classroom, which raises questions about how to implement VPs to improve students' satisfaction and enhance their learning. This study developed and validated a scale that measures acceptability and attitudes of medical students towards the use of the VP education tool in the…

  12. Online Teacher Development: Collaborating in a Virtual Learning Environment

    ERIC Educational Resources Information Center

    Ernest, Pauline; Guitert Catasús, Montse; Hampel, Regine; Heiser, Sarah; Hopkins, Joseph; Murphy, Linda; Stickler, Ursula

    2013-01-01

    Over recent years, educational institutions have been making increasing use of virtual environments to set up collaborative activities for learners. While it is recognized that teachers play an important role in facilitating learner collaboration online, they may not have the necessary skills to do so successfully. Thus, a small-scale professional…

  13. Cloud services for the Fermilab scientific stakeholders

    DOE PAGES

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...

    2015-12-23

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  14. Cloud services for the Fermilab scientific stakeholders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, S.; Garzoglio, G.; Mhashilkar, P.

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  15. A Computational framework for telemedicine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.

    1998-07-01

    Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze requirements necessary for a telemedical computing infrastructure and compare them with requirements found in a typical metacomputing environment. We will show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements for telemedicine.

  16. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run-time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes.
This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and describe the potential deployment of this information technology with other NASA applications.

  17. A large-scale integrated karst-vegetation recharge model to understand the impact of climate and land cover change

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Hartmann, Andreas; Pianosi, Francesca; Wagener, Thorsten

    2017-04-01

    Karst aquifers are an important source of drinking water in many regions of the world, but their resources are likely to be affected by changes in climate and land cover. Karst areas are highly permeable and produce large amounts of groundwater recharge, while surface runoff is typically negligible. As a result, recharge in karst systems may be particularly sensitive to environmental changes compared to other less permeable systems. However, current large-scale hydrological models poorly represent karst specificities. They tend to provide an erroneous water balance and to underestimate groundwater recharge over karst areas. A better understanding of karst hydrology and estimating karst groundwater resources at a large-scale is therefore needed for guiding water management in a changing world. The first objective of the present study is to introduce explicit vegetation processes into a previously developed karst recharge model (VarKarst) to better estimate evapotranspiration losses depending on the land cover characteristics. The novelty of the approach for large-scale modelling lies in the assessment of model output uncertainty, and parameter sensitivity to avoid over-parameterisation. We find that the model so modified is able to produce simulations consistent with observations of evapotranspiration and soil moisture at Fluxnet sites located in carbonate rock areas. Secondly, we aim to determine the model sensitivities to climate and land cover characteristics, and to assess the relative influence of changes in climate and land cover on aquifer recharge. We perform virtual experiments using synthetic climate inputs, and varying the value of land cover parameters. In this way, we can control for variations in climate input characteristics (e.g. precipitation intensity, precipitation frequency) and vegetation characteristics (e.g. canopy water storage capacity, rooting depth), and we can isolate the effect that each of these quantities has on recharge. 
Our results show that these factors are strongly interacting and are generating non-linear responses in recharge.

  18. Rapid prototyping and stereolithography in dentistry

    PubMed Central

    Nayar, Sanjna; Bhuminathan, S.; Bhat, Wasim Manzoor

    2015-01-01

    The word rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna. PMID:26015715

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Song; Wang, Yihong; Luo, Wei

    In virtualized data centers, virtual disk images (VDIs) serve as the containers in the virtual environment, so their access performance is critical for overall system performance. Distributed VDI chunk storage systems have been proposed to alleviate the I/O bottleneck for VM management. As the system scales up to a large number of running VMs, however, the overall network traffic inevitably becomes unbalanced, with hot spots on some VMs, leading to I/O performance degradation when accessing the VMs. Here, we propose an adaptive and collaborative VDI storage system (ACStor) to resolve this performance issue. In comparison with existing research, our solution is able to dynamically balance the traffic workloads in accessing VDI chunks, based on the run-time network state. Specifically, compute nodes with lightly loaded traffic are adaptively assigned more chunk access requests from remote VMs and vice versa, which effectively eliminates the above problem and thus improves the I/O performance of VMs. We also implement a prototype based on our ACStor design, and evaluate it with various benchmarks on a real cluster with 32 nodes and a simulated platform with 256 nodes. Experiments show that under different network traffic patterns of data centers, our solution achieves up to a 2-8x gain in VM booting time and VM I/O throughput, in comparison with other state-of-the-art approaches.
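    The load-aware assignment described above can be sketched as a greedy rule: send each chunk request to the least-loaded node holding a replica of that chunk. The sketch below is hypothetical in all its names and uses a simple traffic counter, whereas ACStor itself balances on the run-time network state:

```python
def assign_requests(requests, replicas, load):
    """Greedy, load-aware placement of VDI chunk read requests.

    Toy sketch of the traffic-balancing idea: each request goes to the
    replica-holding node currently carrying the least traffic. Data
    structures and names are hypothetical, not ACStor's.
    """
    placement = []
    for chunk, cost in requests:
        node = min(replicas[chunk], key=lambda n: load[n])  # least-loaded replica
        load[node] += cost                                  # account for new traffic
        placement.append((chunk, node))
    return placement

load = {"n1": 0, "n2": 0, "n3": 0}
replicas = {"c1": ["n1", "n2"], "c2": ["n2", "n3"], "c3": ["n1", "n3"]}
requests = [("c1", 4), ("c2", 4), ("c3", 4), ("c1", 4)]
placement = assign_requests(requests, replicas, load)
# ties go to the first replica in the list; traffic ends up spread across nodes
```

    A production system would replace the counter with measured link utilization and decay old traffic over time; the greedy rule itself stays the same.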

  20. Geovisualisation of relief in a virtual reality system on the basis of low-level aerial imagery

    NASA Astrophysics Data System (ADS)

    Halik, Łukasz; Smaczyński, Maciej

    2017-12-01

    The aim of the following paper was to present the geomatic process of transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAV) into a digital terrain model (DTM) and implementing the model into a virtual reality (VR) system. The object of the study was a natural aggregate heap of an irregular shape and denivelations of up to 11 m. Based on the obtained photos, three point clouds (varying in the level of detail) were generated for the 20,000 m² area. For further analyses, the researchers selected the point cloud with the best ratio of accuracy to output file size. This choice was made based on seven control points of the heap surveyed in the field and the corresponding points in the generated 3D model. The obtained several-centimetre differences between the control points in the field and the ones from the model might testify to the usefulness of the described algorithm for creating large-scale DTMs for engineering purposes. Finally, the chosen model was implemented into the VR system, which enables lifelike exploration of 3D terrain plasticity in real time thanks to the first-person view (FPV) mode. In this mode, the user observes an object with the aid of a head-mounted display (HMD), experiencing the geovisualisation from the inside and virtually analysing the terrain as a direct animator of the observations.
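    The accuracy check reported above, comparing surveyed control points against the corresponding points in the generated model, amounts to a point-wise 3D error summary such as an RMSE. A minimal sketch, with invented coordinates standing in for the study's seven control points:

```python
import math

def rmse_3d(field_pts, model_pts):
    """Root-mean-square 3D distance between surveyed control points and
    the corresponding points picked from the generated model."""
    sq = [sum((a - b) ** 2 for a, b in zip(f, m))
          for f, m in zip(field_pts, model_pts)]
    return math.sqrt(sum(sq) / len(sq))

# Invented coordinates for illustration (metres); the study surveyed
# seven control points on the heap.
field = [(0.0, 0.0, 0.0), (10.0, 0.0, 2.0)]
model = [(0.03, 0.0, 0.0), (10.0, 0.04, 2.0)]
error = rmse_3d(field, model)  # centimetre-level, as reported in the study
```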

  1. Rapid prototyping and stereolithography in dentistry.

    PubMed

    Nayar, Sanjna; Bhuminathan, S; Bhat, Wasim Manzoor

    2015-04-01

    The word rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna.

  2. Constructing Social Networks from Unstructured Group Dialog in Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Shah, Fahad; Sukthankar, Gita

    Virtual worlds and massively multi-player online games are rich sources of information about large-scale teams and groups, offering the tantalizing possibility of harvesting data about group formation, social networks, and network evolution. However, these environments lack many of the cues that facilitate natural language processing in other conversational settings and different types of social media. Public chat data often features players who speak simultaneously, use jargon and emoticons, and only erratically adhere to conversational norms. In this paper, we present techniques for inferring the existence of social links from unstructured conversational data collected from groups of participants in the Second Life virtual world. We present an algorithm for addressing this problem, Shallow Semantic Temporal Overlap (SSTO), that combines temporal and language information to create directional links between participants, and a second approach that relies on temporal overlap alone to create undirected links between participants. Relying on temporal overlap is noisy, resulting in a low precision and networks with many extraneous links. In this paper, we demonstrate that we can ameliorate this problem by using network modularity optimization to perform community detection in the noisy networks and severing cross-community links. Although using the content of the communications still results in the best performance, community detection is effective as a noise reduction technique for eliminating the extra links created by temporal overlap alone.
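    The temporal-overlap baseline described above can be sketched directly: link any two participants who post within a short time window of each other. The window value and names below are illustrative assumptions, not the paper's settings, and the content-based SSTO step is not reproduced here:

```python
from itertools import combinations

def temporal_overlap_links(messages, window=30.0):
    """Infer undirected social links from chat timing alone.

    Sketch of the noisy temporal-overlap variant: two participants are
    linked if they post within `window` seconds of each other. SSTO
    additionally uses message content, which this toy omits.
    """
    links = set()
    for (t1, s1), (t2, s2) in combinations(sorted(messages), 2):
        if s1 != s2 and abs(t2 - t1) <= window:
            links.add(frozenset((s1, s2)))
    return links

chat = [(0.0, "alice"), (10.0, "bob"), (200.0, "carol"), (215.0, "alice")]
links = temporal_overlap_links(chat)
# alice-bob overlap at t=0..10 and alice-carol at t=200..215; bob-carol never
assert links == {frozenset({"alice", "bob"}), frozenset({"alice", "carol"})}
```

    The paper's follow-up step, community detection by modularity optimization, would then prune cross-community links from the resulting noisy network.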

  3. The (human) science of medical virtual learning environments.

    PubMed

    Stone, Robert J

    2011-01-27

    The uptake of virtual simulation technologies in both military and civilian surgical contexts has been both slow and patchy. The failure of the virtual reality community in the 1990s and early 2000s to deliver affordable and accessible training systems stems not only from an obsessive quest to develop the 'ultimate' in so-called 'immersive' hardware solutions, from head-mounted displays to large-scale projection theatres, but also from a comprehensive lack of attention to the needs of the end users. While many still perceive the science of simulation to be defined by technological advances, such as computing power, specialized graphics hardware, advanced interactive controllers, displays and so on, the true science underpinning simulation--the science that helps to guarantee the transfer of skills from the simulated to the real--is that of human factors, a well-established discipline that focuses on the abilities and limitations of the end user when designing interactive systems, as opposed to the more commercially explicit components of technology. Based on three surgical simulation case studies, the importance of a human factors approach to the design of appropriate simulation content and interactive hardware for medical simulation is illustrated. The studies demonstrate that it is unnecessary to pursue real-world fidelity in all instances in order to achieve psychological fidelity--the degree to which the simulated tasks reproduce and foster knowledge, skills and behaviours that can be reliably transferred to real-world training applications.

  4. Virtual Patients and Sensitivity Analysis of the Guyton Model of Blood Pressure Regulation: Towards Individualized Models of Whole-Body Physiology

    PubMed Central

    Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S. Randall

    2012-01-01

    Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a “virtual population” from which “virtual individuals” can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the “virtual individuals” that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models. PMID:22761561
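    The "virtual population" idea can be sketched as parameter perturbation plus selection: sample many parameter sets around a baseline, run the model on each, and keep the individuals whose output matches a patient's condition. The code below uses a toy one-line model in place of the Guyton model; all names and the ±30% spread are assumptions for illustration:

```python
import random

def virtual_population(model, baseline, n=1000, spread=0.3, seed=1):
    """Draw a 'virtual population' by randomly perturbing model parameters.

    Minimal sketch: each virtual individual is a parameter set sampled
    uniformly within +/- `spread` of the baseline. `model` is a toy
    stand-in for the full Guyton circulatory regulation model.
    """
    rng = random.Random(seed)
    population = []
    for _ in range(n):
        params = {k: v * (1.0 + rng.uniform(-spread, spread))
                  for k, v in baseline.items()}
        population.append((params, model(params)))
    return population

def toy_pressure(p):
    # Toy relation: "pressure" rises with resistance and circulating volume.
    return p["resistance"] * p["volume"]

pop = virtual_population(toy_pressure, {"resistance": 1.0, "volume": 100.0})
# Select "virtual individuals" exhibiting a condition similar to a patient's.
hypertensive = [params for params, out in pop if out > 120.0]
```

    With the real multi-scale model in place of `toy_pressure`, the same selection step yields individuals on which pathophysiological states and treatment strategies can be explored in silico.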

  5. The Cold Gas History of the Universe as seen by the ngVLA

    NASA Astrophysics Data System (ADS)

    Riechers, Dominik A.; Carilli, Chris Luke; Casey, Caitlin; da Cunha, Elisabete; Hodge, Jacqueline; Ivison, Rob; Murphy, Eric J.; Narayanan, Desika; Sargent, Mark T.; Scoville, Nicholas; Walter, Fabian

    2017-01-01

    The Next Generation Very Large Array (ngVLA) will fundamentally advance our understanding of the formation processes that lead to the assembly of galaxies throughout cosmic history. The combination of large bandwidth with unprecedented sensitivity to the critical low-level CO lines over virtually the entire redshift range will open up the opportunity to conduct large-scale, deep cold molecular gas surveys, mapping the fuel for star formation in galaxies over substantial cosmic volumes. Informed by the first efforts with the Karl G. Jansky Very Large Array (COLDz survey) and the Atacama Large (sub)Millimeter Array (ASPECS survey), we here present initial predictions and possible survey strategies for such "molecular deep field" observations with the ngVLA. These investigations will provide a detailed measurement of the volume density of molecular gas in galaxies as a function of redshift, the "cold gas history of the universe". This will crucially complement studies of the neutral gas, star formation and stellar mass histories with large low-frequency arrays, the Large UV/Optical/Infrared Surveyor, and the Origins Space Telescope, providing the means to obtain a comprehensive picture of galaxy evolution through cosmic times.

  6. Scientific Workflows and the Sensor Web for Virtual Environmental Observatories

    NASA Astrophysics Data System (ADS)

    Simonis, I.; Vahed, A.

    2008-12-01

    Virtual observatories have matured beyond their original domain and are becoming common practice for earth observation research and policy building. The term Virtual Observatory originally came from the astronomical research community, where virtual observatories provide universal access to the available astronomical data archives of space- and ground-based observatories. Because those virtual observatories aim at integrating heterogeneous resources provided by a number of participating organizations, the virtual observatory acts as a coordinating entity that strives for common data analysis techniques and tools based on common standards. The Sensor Web is on its way to becoming one of the major virtual observatories outside of the astronomical research community. Like the original observatory, which consists of a number of telescopes, each observing a specific part of the electromagnetic spectrum with a collection of astronomical instruments, the Sensor Web provides a many-eyed perspective on the current, past, and future situation of our planet and its surrounding spheres. The current view of the Sensor Web is that of a single worldwide collaborative, coherent, consistent, and consolidated sensor data collection, fusion, and distribution system. The Sensor Web can perform as an extensive monitoring and sensing system that provides timely, comprehensive, continuous, and multi-mode observations. This technology is key to monitoring and understanding our natural environment, including key areas such as climate change, biodiversity, and natural disasters on local, regional, and global scales. The Sensor Web concept is well established, with ongoing global research and deployment of Sensor Web middleware and standards, and represents the foundation layer of systems like the Global Earth Observation System of Systems (GEOSS).
The Sensor Web consists of a huge variety of physical and virtual sensors as well as observational data, made available on the Internet at standardized interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically follows in exploration, discovery and, ultimately, transformation of raw data into publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Such platforms support the entire process, from capturing data, through sharing and integrating it, to requesting additional observations. Multiple sites and organizations will participate on single platforms, and scientists from different countries and organizations will interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction free scientists from dealing with underlying data, processing, or storage peculiarities. The vision is automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn identifies potentially related resources, schedules processing tasks, and assembles all parts into workflows that may satisfy the query.
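The standardized interfaces mentioned above are typically queried via key-value-pair requests; as a sketch, a GetObservation request to an OGC Sensor Observation Service can be assembled as below. The endpoint, offering, and property names are hypothetical placeholders, and the exact temporal-filter syntax should be checked against the SOS 2.0 specification.

```python
from urllib.parse import urlencode

def sos_get_observation_url(endpoint, offering, observed_property,
                            t_begin, t_end):
    """Build a KVP GetObservation request for an SOS 2.0 endpoint.
    All argument values here are illustrative placeholders."""
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        # filter observations to the requested time window
        "temporalFilter": f"om:phenomenonTime,{t_begin}/{t_end}",
    }
    return endpoint + "?" + urlencode(params)

url = sos_get_observation_url("https://example.org/sos", "air-quality",
                              "AirTemperature",
                              "2008-12-01T00:00:00Z", "2008-12-02T00:00:00Z")
```

A workflow engine would issue many such requests, then hand the returned observations to downstream processing steps.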

  7. Virtual agents in a simulated virtual training environment

    NASA Technical Reports Server (NTRS)

    Achorn, Brett; Badler, Norman L.

    1993-01-01

    A drawback to live-action training simulations is the need to gather a large group of participants in order to train a few individuals. One solution to this difficulty is the use of computer-controlled agents in a virtual training environment. This allows a human participant to be replaced by a virtual, or simulated, agent when only limited responses are needed. Each agent possesses a specified set of behaviors and is capable of limited autonomous action in response to its environment or the direction of a human trainee. The paper describes these agents in the context of a simulated hostage rescue training session, involving two human rescuers assisted by three virtual (computer-controlled) agents and opposed by three other virtual agents.

  8. [Self-perception and life satisfaction in video game addiction in young adolescents (11-14 years old)].

    PubMed

    Gaetan, S; Bonnet, A; Pedinielli, J-L

    2012-12-01

    Video games are among our society's major entertainments and are now a global industry centered on one of adolescents' preferred activities. For some adolescents, however, the practice goes beyond play and becomes an addictive pattern of functioning, confronting clinical practice with a new problem. It is important to understand the special bond that develops between a player and his/her video game in order to understand the addictive process. The game consists of a virtual world, a graphical construction that simulates reality and reinvents the laws that govern it. It also consists of a character embodied and controlled by the player: the avatar. Through the virtual world and avatar, the game offers the player a virtual personification that matches his/her expectations and projected ideal. The avatar allows the subject to compensate for, or even modify, some aspects of the Self and thus enhance his/her self-perception; virtual life becomes more satisfying than real life. The aim of this research is to propose, from the study of the relationship between psychosocial variables (self-perception and life satisfaction) and adolescents' video game practice, elements for constructing an explanatory model of video game addiction. The population of this research is composed of 74 adolescents aged 11-14 years (mean age = 12.78, SD = 0.92). Fourteen are identified as addicted to video games by the results of the Game Addiction Scale. The quantitative methodology allows measurement of the different psychosocial variables that appear important in the addictive process. The instruments used are the Game Addiction Scale, the Self-Perception Profile, and the Satisfaction with Life Scale. The results show that adolescents addicted to video games see their virtual and current Self as being less proficient than other teenagers do.
Furthermore, teenagers addicted to video games see their virtual Self as more proficient and better adapted to the environment than their current Self. Moreover, addicted adolescents perceive their lives as less satisfying than others do. Hence, virtual life is perceived as more satisfying than real life among teenagers addicted to video games. This virtual experience is thus one of the factors that explain the addiction to video games. Through the game, the teenager can "live" a new version of him/herself, which can secondarily become alienating. The virtual world supplants real life and becomes the source of an identity conflict. Copyright © 2012 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  9. Mosaic construction, processing, and review of very large electron micrograph composites

    NASA Astrophysics Data System (ADS)

    Vogt, Robert C., III; Trenkle, John M.; Harmon, Laurel A.

    1996-11-01

    A system of programs is described for acquisition, mosaicking, cueing and interactive review of large-scale transmission electron micrograph composite images. This work was carried out as part of a final-phase clinical analysis study of a drug for the treatment of diabetic peripheral neuropathy. More than 500 nerve biopsy samples were prepared, digitally imaged, processed, and reviewed. For a given sample, typically 1000 or more 1.5 megabyte frames were acquired, for a total of between 1 and 2 gigabytes of data per sample. These frames were then automatically registered and mosaicked together into a single virtual image composite, which was subsequently used to perform automatic cueing of axons and axon clusters, as well as review and marking by qualified neuroanatomists. Statistics derived from the review process were used to evaluate the efficacy of the drug in promoting regeneration of myelinated nerve fibers. This effort demonstrates a new, entirely digital capability for doing large-scale electron micrograph studies, in which all of the relevant specimen data can be included at high magnification, as opposed to simply taking a random sample of discrete locations. It opens up the possibility of a new era in electron microscopy--one which broadens the scope of questions that this imaging modality can be used to answer.
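The automatic frame registration underlying such mosaicking is commonly done by phase correlation; a minimal sketch for integer translations between overlapping frames (the paper does not state its registration algorithm, so this is an illustrative standard technique, not the authors' method):

```python
import numpy as np

def phase_correlation_shift(ref, frame):
    """Estimate the integer (dy, dx) translation of `frame` relative to
    `ref` from the peak of the normalized cross-power spectrum."""
    F_ref = np.fft.fft2(ref)
    F_frm = np.fft.fft2(frame)
    cross = np.conj(F_ref) * F_frm
    cross /= np.abs(cross) + 1e-12      # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame into negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Chaining pairwise shifts across all overlapping frames places every frame in a single global coordinate system, producing the virtual composite.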

  10. Evaluation of the cognitive effects of travel technique in complex real and virtual environments.

    PubMed

    Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F

    2010-01-01

    We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.

  11. Networking of Bibliographical Information: Lessons learned for the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Genova, Françoise; Egret, Daniel

    Networking of bibliographic information is particularly remarkable in astronomy. On-line journals, the ADS bibliographic database, SIMBAD, and NED are everyday tools for research, and provide easy navigation from one resource to another. Tables are published on line, in close collaboration with data centers. Recent developments include the links between observatory archives and the ADS, as well as the large-scale prototyping of object links between Astronomy and Astrophysics and SIMBAD, following those implemented a few years ago with New Astronomy and the International Bulletin of Variable Stars. This networking has been made possible by close collaboration between the ADS, data centers such as the CDS and NED, and the journals, and this partnership is now being extended to observatory archives. Simple, de facto exchange standards, like the bibcode used to refer to a published paper, have been the key to building links and exchanging data. This partnership, in which practitioners from different disciplines agree to link their resources and work together to define useful and usable standards, has produced a revolution in scientists' practice. It is an excellent model for the Virtual Observatory projects.

  12. Simultaneous neural and movement recording in large-scale immersive virtual environments.

    PubMed

    Snider, Joseph; Plank, Markus; Lee, Dongpyo; Poizner, Howard

    2013-10-01

    Virtual reality (VR) allows precise control and manipulation of rich, dynamic stimuli that, when coupled with on-line motion capture and neural monitoring, can provide a powerful means both of understanding brain behavioral relations in the high dimensional world and of assessing and treating a variety of neural disorders. Here we present a system that combines state-of-the-art, fully immersive, 3D, multi-modal VR with temporally aligned electroencephalographic (EEG) recordings. The VR system is dynamic and interactive across visual, auditory, and haptic interactions, providing sight, sound, touch, and force. Crucially, it does so with simultaneous EEG recordings while subjects actively move about a 20 × 20 ft² space. The overall end-to-end latency between real movement and its simulated movement in the VR is approximately 40 ms. Spatial precision of the various devices is on the order of millimeters. The temporal alignment with the neural recordings is accurate to within approximately 1 ms. This powerful combination of systems opens up a new window into brain-behavioral relations and a new means of assessment and rehabilitation of individuals with motor and other disorders.
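With sub-millisecond temporal alignment, each motion-capture sample can be matched to its nearest EEG sample on the shared clock; a minimal sketch of that alignment step (sampling rates and clock model are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def align_nearest(eeg_times, mocap_times):
    """For each motion-capture timestamp, return the index of the EEG
    sample closest in time; both streams share one clock and
    `eeg_times` must be sorted ascending."""
    idx = np.searchsorted(eeg_times, mocap_times)
    idx = np.clip(idx, 1, len(eeg_times) - 1)
    left = eeg_times[idx - 1]
    right = eeg_times[idx]
    # step back one sample where the left neighbor is closer
    idx -= (mocap_times - left) < (right - mocap_times)
    return idx
```

This pairing lets each movement sample be annotated with simultaneous neural activity for joint brain-behavior analyses.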

  13. Grid-cell representations in mental simulation

    PubMed Central

    Bellmund, Jacob LS; Deuker, Lorena; Navarro Schröder, Tobias; Doeller, Christian F

    2016-01-01

    Anticipating the future is a key motif of the brain, possibly supported by mental simulation of upcoming events. Rodent single-cell recordings suggest the ability of spatially tuned cells to represent subsequent locations. Grid-like representations have been observed in the human entorhinal cortex during virtual and imagined navigation. However, it has hitherto remained unknown whether grid-like representations contribute to mental simulation in the absence of imagined movement. Participants imagined directions between building locations in a large-scale virtual-reality city while undergoing fMRI, without re-exposure to the environment. Using multi-voxel pattern analysis, we provide evidence for representations of absolute imagined direction at a resolution of 30° in the parahippocampal gyrus, consistent with the head-direction system. Furthermore, we capitalize on the six-fold rotational symmetry of grid-cell firing to demonstrate a 60° periodic pattern-similarity structure in the entorhinal cortex. Our findings imply a role of the entorhinal grid-system in mental simulation and future thinking beyond spatial navigation. DOI: http://dx.doi.org/10.7554/eLife.17089.001 PMID:27572056
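The 60° periodicity analysis can be sketched as a contrast between trial pairs whose angular difference is aligned with a 60° grid and pairs that are misaligned; the ±15° binning below is a common convention and an assumption here, not necessarily the paper's exact analysis:

```python
import numpy as np

def hexadirectional_contrast(ang_diff_deg, similarity):
    """Mean pattern similarity for pairs whose angular difference lies
    within 15° of a multiple of 60° (aligned) minus the mean for the
    remaining, misaligned pairs -- a probe of six-fold coding."""
    d = np.asarray(ang_diff_deg) % 60.0
    aligned = (d < 15.0) | (d >= 45.0)
    sim = np.asarray(similarity)
    return sim[aligned].mean() - sim[~aligned].mean()
```

A positive contrast on entorhinal voxel patterns, but not at control periodicities, is the signature of grid-like coding.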

  14. Bats' avoidance of real and virtual objects: implications for the sonar coding of object size.

    PubMed

    Goerlitz, Holger R; Genzel, Daria; Wiegrebe, Lutz

    2012-01-01

    Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object-echoes (e.g., echo amplitude) that correlate with this object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video-recordings, we first show that the bats evaded a small real object (ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and played from the loudspeaker. Contrary to the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of real and virtual object echoes. Yet, their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion and suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size. Copyright © 2011 Elsevier B.V. All rights reserved.
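The virtual-echo generation described above is linear convolution of the emitted call with the object's acoustic impulse response; a toy sketch (the signals below are illustrative placeholders, not measured bat calls):

```python
import numpy as np

def virtual_echo(call, impulse_response):
    """Simulate the echo a real object would return by convolving the
    emitted call with the object's acoustic impulse response."""
    return np.convolve(call, impulse_response)

call = np.array([1.0, 0.5, 0.25])     # toy sonar call
ir = np.array([0.0, 1.0, 0.0, 0.3])   # toy reflection pattern: main + weaker delayed
echo = virtual_echo(call, ir)
```

Played from a single loudspeaker, such an echo preserves the spectro-temporal cues of a large object but not its spatial extent, which is exactly the mismatch the study exploits.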

  15. Opportunities and challenges in industrial plantation mapping in big data era

    NASA Astrophysics Data System (ADS)

    Dong, J.; Xiao, X.; Qin, Y.; Chen, B.; Wang, J.; Kou, W.; Zhai, D.

    2017-12-01

    With the increasing demand for timber, rubber, and palm oil on the world market, industrial plantations have expanded dramatically, especially in Southeast Asia, affecting ecosystem services and human wellbeing. However, existing efforts on plantation mapping are still limited and have constrained our understanding of the magnitude of plantation expansion and its potential environmental effects. Here we present a literature review of existing efforts on plantation mapping based on one or multiple remote sensing sources, covering rubber, oil palm, and eucalyptus plantations. The biophysical features and spectral characteristics of plantations are introduced first, followed by a comparison of existing algorithms across the different plantation types. On that basis, we propose potential improvements in large-scale plantation mapping drawing on the virtual constellation of multiple sensors, citizen science tools, and cloud computing technology. Finally, we discuss a series of issues for future large-scale operational plantation mapping.

  16. Scientific Services on the Cloud

    NASA Astrophysics Data System (ADS)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications of parallel and distributed computation. To this day, scientific applications remain some of the most compute-intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals: the hardware is provided, maintained, and administered by a third party; software abstraction and virtualization provide reliability and fault tolerance; and graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  17. Forming an ad-hoc nearby storage, based on IKAROS and social networking services

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos; Cotronis, Yiannis; Markou, Christos

    2014-06-01

    We present an ad-hoc "nearby" storage system, based on IKAROS and social networking services such as Facebook. By design, IKAROS can increase or decrease the number of nodes of an I/O system instance on the fly, without bringing everything down or losing data. IKAROS can decide the file partition distribution schema by taking into account requests from the user or an application, as well as domain or Virtual Organization policies. In this way, it is possible to form multiple instances of smaller-capacity, higher-bandwidth storage utilities capable of responding in an ad-hoc manner. This approach, focusing on flexibility, can scale both up and down, and so can provide more cost-effective infrastructures for both large-scale and smaller-size systems. A set of experiments is performed comparing IKAROS with PVFS2, using multiple client requests under the HPC IOR benchmark and MPICH2.
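A policy-aware file partition distribution schema of the kind described can be sketched as chunk-to-node assignment with an optional policy filter; this is an illustrative round-robin sketch, not IKAROS's actual placement algorithm:

```python
def partition_plan(file_size, chunk_size, nodes, policy=None):
    """Assign file chunks to storage nodes round-robin; an optional
    policy callable (e.g., a Virtual Organization rule) can veto nodes."""
    eligible = [n for n in nodes if policy is None or policy(n)]
    if not eligible:
        raise ValueError("no node satisfies the placement policy")
    n_chunks = -(-file_size // chunk_size)  # ceiling division
    return {i: eligible[i % len(eligible)] for i in range(n_chunks)}

# 10 MiB file in 4 MiB chunks across three hypothetical nodes
plan = partition_plan(10 * 2**20, 4 * 2**20, ["n1", "n2", "n3"])
```

Because the node list is just an input, nodes discovered via a social-network service can be added or removed between plans without disturbing data already placed elsewhere.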

  18. ngVLA Key Science Goal 3: Charting the Assembly, Structure, and Evolution of Galaxies Over Cosmic Time

    NASA Astrophysics Data System (ADS)

    Riechers, Dominik A.; Bolatto, Alberto D.; Carilli, Chris; Casey, Caitlin M.; Decarli, Roberto; Murphy, Eric Joseph; Narayanan, Desika; Walter, Fabian; ngVLA Galaxy Assembly through Cosmic Time Science Working Group, ngVLA Galaxy Ecosystems Science Working Group

    2018-01-01

    The Next Generation Very Large Array (ngVLA) will fundamentally advance our understanding of the formation processes that lead to the assembly of galaxies throughout cosmic history. The combination of large bandwidth with unprecedented sensitivity to the critical low-level CO lines over virtually the entire redshift range will open up the opportunity to conduct large-scale, deep cold molecular gas surveys, mapping the fuel for star formation in galaxies over substantial cosmic volumes. Imaging of the sub-kiloparsec scale distribution and kinematic structure of molecular gas in both normal main-sequence galaxies and large starbursts back to early cosmic epochs will reveal the physical processes responsible for star formation and black hole growth in galaxies over a broad range in redshifts. In the nearby universe, the ngVLA has the capability to survey the structure of the cold, star-forming interstellar medium at parsec-resolution out to the Virgo cluster. A range of molecular tracers will be accessible to map the motion, distribution, and physical and chemical state of the gas as it flows in from the outer disk, assembles into clouds, and experiences feedback due to star formation or accretion into central super-massive black holes. These investigations will crucially complement studies of the star formation and stellar mass histories with the Large UV/Optical/Infrared Surveyor and the Origins Space Telescope, providing the means to obtain a comprehensive picture of galaxy evolution through cosmic times.

  19. Improvement in balance using a virtual reality-based stepping exercise: a randomized controlled trial involving individuals with chronic stroke.

    PubMed

    Lloréns, Roberto; Gil-Gómez, José-Antonio; Alcañiz, Mariano; Colomer, Carolina; Noé, Enrique

    2015-03-01

    To study the clinical effectiveness and the usability of a virtual reality-based intervention compared with conventional physical therapy in the balance recovery of individuals with chronic stroke. Randomized controlled trial. Outpatient neurorehabilitation unit. A total of 20 individuals with chronic stroke. The intervention consisted of 20 one-hour sessions, five sessions per week. The experimental group combined 30 minutes of the virtual reality-based intervention with 30 minutes of conventional training; the control group underwent one hour of conventional therapy. Balance performance was assessed at the beginning and at the end of the trial using the Berg Balance Scale, the balance and gait subscales of the Tinetti Performance-Oriented Mobility Assessment, the Brunel Balance Assessment, and the 10-m Walking Test. Subjective data on the virtual reality-based intervention were collected from the experimental group using a feedback questionnaire at the end of the trial. The results revealed a significant group-by-time interaction in the scores of the Berg Balance Scale (p < 0.05) and the 10-m Walking Test (p < 0.05). Post-hoc analyses showed greater improvement in the experimental group: 3.8 ± 2.6 vs. 1.8 ± 1.4 points in the Berg Balance Scale, -1.9 ± 1.6 seconds vs. 0.0 ± 2.3 seconds in the 10-m Walking Test, and also in the number of participants who increased a level in the Brunel Balance Assessment (χ² = 2.5, p < 0.01). Virtual reality interventions can be an effective resource to enhance the improvement of balance in individuals with chronic stroke. © The Author(s) 2014.

  20. Evaluation of an interactive web-based nursing course with streaming videos for medication administration skills.

    PubMed

    Sowan, Azizeh K; Idhail, Jamila Abu

    2014-08-01

    Nursing students should exhibit competence in nursing skills in order to provide safe, high-quality patient care. This study describes the design of, and students' response to, an interactive web-based course using streaming video technology, tailored to students' needs and the course objectives of the fundamentals of nursing skills clinical course. A mixed-methodology design was used to describe the experience of 102 first-year undergraduate nursing students at a school of nursing in Jordan who were enrolled in the course. A virtual course with streaming videos was designed to demonstrate fundamental medication administration skills. The videos recorded both the ideal lab demonstration of the skills and real-world practice performed by registered nurses for patients in a hospital setting. After course completion, students completed a 30-item satisfaction questionnaire, 8 self-efficacy scales, and a 4-item scale soliciting their preferences for using the virtual course as a substitute for or replacement of the lab demonstration. Students' grades in the skill examination of the procedures were measured, and relationships between the main variables and predictors of satisfaction and self-efficacy were examined. Students were satisfied with the virtual course (3.9 ± 0.56 on a 5-point scale) and reported high overall self-efficacy (4.38 ± 0.42 on a 5-point scale). Data showed a significant correlation between student satisfaction, self-efficacy, and achievement in the virtual course (r = 0.45-0.49, p < 0.01). The majority of students accessed the course from home, and some faced technical difficulties. Significant predictors of satisfaction were ease of accessing the course and gender (B = 0.35 and 0.25; CI = 0.12-0.57 and 0.02-0.48, respectively). The mean achievement score of students in the virtual class (7.5 ± 0.34) was significantly higher than that of a previous comparable cohort taught with the traditional method (6.0 ± 0.23) (p < 0.05).
Nearly 40% of the students believed that the virtual course is a sufficient replacement for the lab demonstration. The use of multimedia within an interactive online learning environment is a valuable teaching strategy that yields high levels of nursing student satisfaction, self-efficacy, and achievement. The creation and delivery of a virtual learning environment with streaming videos for clinical courses is a complex process that should be carefully designed to positively influence the learning experience. However, the learning benefits gained from such a pedagogical approach are worth the effort of faculty, institutions, and students. Published by Elsevier Ireland Ltd.

  1. DPubChem: a web tool for QSAR modeling and high-throughput virtual screening.

    PubMed

    Soufan, Othman; Ba-Alawi, Wail; Magana-Mora, Arturo; Essack, Magbubah; Bajic, Vladimir B

    2018-06-14

    High-throughput screening (HTS) performs experimental testing of a large number of chemical compounds, aiming to identify those active in the considered assay. Alternatively, faster and cheaper large-scale virtual screening is performed computationally through quantitative structure-activity relationship (QSAR) models. However, the vast amount of available heterogeneous HTS data and the imbalanced ratio of active to inactive compounds in an assay make this a challenging problem. Although different QSAR models have been proposed, they have certain limitations, e.g., high false positive rates, complicated user interfaces, and limited utilization options. Therefore, we developed DPubChem, a novel web tool for deriving QSAR models that implements state-of-the-art machine-learning techniques to enhance the precision of the models and enable efficient analyses of experiments from the PubChem BioAssay database. DPubChem also has a simple interface that provides various options to users. DPubChem predicted active compounds for 300 datasets with an average geometric mean and F1 score of 76.68% and 76.53%, respectively. Furthermore, DPubChem builds interaction networks that highlight novel predicted links between chemical compounds and biological assays. Using such a network, DPubChem successfully suggested a novel drug for the Niemann-Pick type C disease. DPubChem is freely available at www.cbrc.kaust.edu.sa/dpubchem .
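The two metrics reported above suit imbalanced assays because the geometric mean weighs sensitivity and specificity equally while F1 balances precision and recall; both follow directly from a confusion matrix (the counts below are illustrative):

```python
def gmean_and_f1(tp, fp, tn, fn):
    """Geometric mean of sensitivity and specificity, and the F1 score,
    computed from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall on actives
    specificity = tn / (tn + fp)   # recall on inactives
    precision = tp / (tp + fp)
    gmean = (sensitivity * specificity) ** 0.5
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return gmean, f1

g, f1 = gmean_and_f1(tp=80, fp=10, tn=90, fn=20)
```

Unlike plain accuracy, both metrics collapse toward zero if a model ignores the rare active class, which is why they are preferred for HTS-style data.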

  2. Building Modelling Methodologies for Virtual District Heating and Cooling Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saurav, Kumar; Choudhury, Anamitra R.; Chandan, Vikas

    District heating and cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water, and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivates the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as they have several components interacting with each other. In this paper we present two methodologies for modelling the consumer buildings. These models will be further integrated with the network model and the control system layer to create a virtual test bed for the entire DHC system. The model is validated using data collected from a real-life DHC system located at Luleå, a city on the coast of northern Sweden. The test bed will then be used for simulating various test cases, such as peak energy reduction and overall demand reduction.
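Consumer-building models in such test beds are often first-order lumped-capacitance (1R1C) thermal models; the sketch below illustrates that idea with forward-Euler integration, with all parameter values chosen for illustration only (the paper does not specify its model structure):

```python
def simulate_room(T0, T_amb, q_heat, R, C, dt, steps):
    """1R1C room model: C * dT/dt = (T_amb - T) / R + q_heat,
    integrated with forward Euler. Temperatures in °C, R in K/W,
    C in J/K, q_heat in W, dt in seconds."""
    T = T0
    traj = []
    for _ in range(steps):
        T += dt * ((T_amb - T) / R + q_heat) / C
        traj.append(T)
    return traj

# unheated room cooling toward a 0 °C ambient over ~33 hours
traj = simulate_room(T0=20.0, T_amb=0.0, q_heat=0.0,
                     R=0.05, C=1.0e6, dt=60.0, steps=2000)
```

Coupling many such building models to a network model of supply temperatures and flows yields the virtual DHC test bed used for demand-reduction experiments.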

  3. The complete O (αs2) non-singlet heavy flavor corrections to the structure functions g1,2ep (x ,Q2), F1,2,Lep (x ,Q2), F1,2,3ν (ν bar) (x ,Q2) and the associated sum rules

    NASA Astrophysics Data System (ADS)

    Blümlein, Johannes; Falcioni, Giulio; De Freitas, Abilio

    2016-09-01

    We calculate analytically the flavor non-singlet O (αs2) massive Wilson coefficients for the inclusive neutral current non-singlet structure functions F1,2,Lep (x ,Q2) and g1,2ep (x ,Q2) and charged current non-singlet structure functions F1,2,3ν (ν bar) p (x ,Q2), at general virtualities Q2 in the deep-inelastic region. Numerical results are presented. We illustrate the transition from low to large virtualities for these observables, which may be contrasted to basic assumptions made in the so-called variable flavor number scheme. We also derive the corresponding results for the Adler sum rule, the unpolarized and polarized Bjorken sum rules and the Gross-Llewellyn Smith sum rule. There are no logarithmic corrections at large scales Q2 and the effects of the power corrections due to the heavy quark mass are of the size of the known O (αs4) corrections in the case of the sum rules. The complete charm and bottom corrections are compared to the approach using asymptotic representations in the region Q2 ≫mc,b2. We also study the target mass corrections to the above sum rules.
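As a reminder of the structure of the sum rules discussed above, the Gross-Llewellyn Smith sum rule takes the schematic form below; only the leading αs correction is written out, with the O(αs²) terms computed in the paper and the heavy-quark power corrections summarized in the order symbols.

```latex
\int_0^1 \frac{1}{2}\left[F_3^{\nu p}(x,Q^2) + F_3^{\bar{\nu} p}(x,Q^2)\right]\, dx
  = 3\left[1 - \frac{\alpha_s(Q^2)}{\pi} + O\!\left(\alpha_s^2\right)\right]
  + O\!\left(\frac{m_{c,b}^2}{Q^2}\right)
```

The absence of logarithmic corrections at large Q², noted in the abstract, means the heavy-quark effects enter only through the power-suppressed last term.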

  4. Technology advancing the study of animal cognition: using virtual reality to present virtually simulated environments to investigate nonhuman primate spatial cognition

    PubMed Central

    Schweller, Kenneth; Milne, Scott

    2017-01-01

    Virtual simulated environments provide multiple ways of testing cognitive function and evaluating problem solving with humans (e.g., Woollett et al. 2009). The use of such interactive technology has increasingly become an essential part of modern life (e.g., autonomously driving vehicles, global positioning systems (GPS), and touchscreen computers; Chinn and Fairlie 2007; Brown 2011). While many nonhuman animals have their own forms of "technology", such as chimpanzees who create and use tools, in captive animal environments the opportunity to actively participate with interactive technology is not often made available. Exceptions can be found in some state-of-the-art zoos and laboratory facilities (e.g., Mallavarapu and Kuhar 2005). When interactive technology is available, captive animals often selectively choose to engage with it. This enhances the animal’s sense of control over their immediate surroundings (e.g., Clay et al. 2011; Ackerman 2012). Such self-efficacy may help to fulfill basic requirements in a species’ daily activities using problem solving that can involve foraging and other goal-oriented behaviors. It also assists in fulfilling the strong underlying motivation for contrafreeloading and exploration expressed behaviorally by many species in captivity (Young 1999). Moreover, being able to present nonhuman primates virtual reality environments under experimental conditions provides the opportunity to gain insight into their navigational abilities and spatial cognition. It allows for insight into the generation and application of internal mental representations of landmarks and environments under multiple conditions (e.g., small- and large-scale space) and subsequent spatial behavior. This paper reviews methods using virtual reality developed to investigate the spatial cognitive abilities of nonhuman primates, and great apes in particular, in comparison with that of humans of multiple age groups. 
We make recommendations about training, best practices, and also pitfalls to avoid. PMID:29491967

  5. Technology advancing the study of animal cognition: using virtual reality to present virtually simulated environments to investigate nonhuman primate spatial cognition.

    PubMed

    Dolins, Francine L; Schweller, Kenneth; Milne, Scott

    2017-02-01

    Virtual simulated environments provide multiple ways of testing cognitive function and evaluating problem solving with humans (e.g., Woollett et al. 2009). The use of such interactive technology has increasingly become an essential part of modern life (e.g., autonomously driving vehicles, global positioning systems (GPS), and touchscreen computers; Chinn and Fairlie 2007; Brown 2011). While many nonhuman animals have their own forms of "technology", such as chimpanzees who create and use tools, in captive animal environments the opportunity to actively participate with interactive technology is not often made available. Exceptions can be found in some state-of-the-art zoos and laboratory facilities (e.g., Mallavarapu and Kuhar 2005). When interactive technology is available, captive animals often selectively choose to engage with it. This enhances the animal's sense of control over their immediate surroundings (e.g., Clay et al. 2011; Ackerman 2012). Such self-efficacy may help to fulfill basic requirements in a species' daily activities using problem solving that can involve foraging and other goal-oriented behaviors. It also assists in fulfilling the strong underlying motivation for contrafreeloading and exploration expressed behaviorally by many species in captivity (Young 1999). Moreover, being able to present nonhuman primates virtual reality environments under experimental conditions provides the opportunity to gain insight into their navigational abilities and spatial cognition. It allows for insight into the generation and application of internal mental representations of landmarks and environments under multiple conditions (e.g., small- and large-scale space) and subsequent spatial behavior. This paper reviews methods using virtual reality developed to investigate the spatial cognitive abilities of nonhuman primates, and great apes in particular, in comparison with that of humans of multiple age groups. We make recommendations about training, best practices, and also pitfalls to avoid.

  6. Design of focused and restrained subsets from extremely large virtual libraries.

    PubMed

    Jamois, Eric A; Lin, Chien T; Waldman, Marvin

    2003-11-01

    With the current and ever-growing offering of reagents along with the vast palette of organic reactions, virtual libraries accessible to combinatorial chemists can reach sizes of billions of compounds or more. Extracting practical size subsets for experimentation has remained an essential step in the design of combinatorial libraries. A typical approach to computational library design involves enumeration of structures and properties for the entire virtual library, which may be unpractical for such large libraries. This study describes a new approach termed as on the fly optimization (OTFO) where descriptors are computed as needed within the subset optimization cycle and without intermediate enumeration of structures. Results reported herein highlight the advantages of coupling an ultra-fast descriptor calculation engine to subset optimization capabilities. We also show that enumeration of properties for the entire virtual library may not only be unpractical but also wasteful. Successful design of focused and restrained subsets can be achieved while sampling only a small fraction of the virtual library. We also investigate the stability of the method and compare results obtained from simulated annealing (SA) and genetic algorithms (GA).
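
The core of the on-the-fly idea is to compute a descriptor only when the optimizer first requests it, caching the result, so the full virtual library is never enumerated. The following sketch illustrates this with simulated annealing; the library, descriptor function, and objective are invented stand-ins, not the paper's OTFO engine:

```python
import math
import random
from functools import lru_cache

random.seed(0)

# Hypothetical virtual library of 10,000 products, identified by index.
LIBRARY_SIZE = 10_000
SUBSET_SIZE = 20

@lru_cache(maxsize=None)
def descriptor(i):
    """Compute a stand-in descriptor only when first requested, then cache it."""
    return 50.0 + 50.0 * math.sin(i)  # placeholder for a real property engine

def score(subset):
    """Toy design objective: keep the subset's mean descriptor near a target."""
    mean = sum(descriptor(i) for i in subset) / len(subset)
    return -abs(mean - 75.0)

def anneal(steps=2000, t0=10.0):
    """Simulated-annealing subset selection over the virtual library."""
    subset = random.sample(range(LIBRARY_SIZE), SUBSET_SIZE)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9
        cand = list(subset)
        new = random.randrange(LIBRARY_SIZE)
        while new in cand:  # keep the candidate subset free of duplicates
            new = random.randrange(LIBRARY_SIZE)
        cand[random.randrange(SUBSET_SIZE)] = new
        delta = score(cand) - score(subset)
        if delta > 0 or random.random() < math.exp(delta / t):
            subset = cand
    return subset

subset = anneal()
# Only descriptors the optimizer actually touched were ever computed:
print(len(subset), descriptor.cache_info().currsize < LIBRARY_SIZE)
```

The cache size after optimization stays far below the library size, which is the claimed advantage: only a small fraction of the library's properties are ever evaluated.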

  7. Virtual Environment Interpersonal Trust Scale: Validity and Reliability Study

    ERIC Educational Resources Information Center

    Usta, Ertugrul

    2012-01-01

    The purpose of this study is to develop a measurement tool for the problem of interpersonal trust that arises in communication within virtual environments. Trust has become a factor that needs to be investigated in distance education today. People who take distance education courses may remain within the process…

  8. Blended Inquiry with Hands-On and Virtual Laboratories: The Role of Perceptual Features during Knowledge Construction

    ERIC Educational Resources Information Center

    Toth, Eva Erdosne; Ludvico, Lisa R.; Morrow, Becky L.

    2014-01-01

    This study examined the characteristics of virtual and hands-on inquiry environments for the development of blended learning in a popular domain of bio-nanotechnology: the separation of different-sized DNA fragments using gel-electrophoresis, also known as DNA-fingerprinting. Since the latest scientific developments in nano- and micro-scale tools…

  9. Validation of Virtual Learning Team Competencies for Individual Students in a Distance Education Setting

    ERIC Educational Resources Information Center

    Topchyan, Ruzanna; Zhang, Jie

    2014-01-01

    The purpose of this study was twofold. First, the study aimed to validate the scale of the Virtual Team Competency Inventory in distance education, which had initially been designed for a corporate setting. Second, the methodological advantages of Exploratory Structural Equation Modeling (ESEM) framework over Confirmatory Factor Analysis (CFA)…

  10. Virtual reality in the treatment of persecutory delusions: randomised controlled experimental study testing how to reduce delusional conviction.

    PubMed

    Freeman, Daniel; Bradley, Jonathan; Antley, Angus; Bourke, Emilie; DeWeever, Natalie; Evans, Nicole; Černis, Emma; Sheaves, Bryony; Waite, Felicity; Dunn, Graham; Slater, Mel; Clark, David M

    2016-07-01

    Persecutory delusions may be unfounded threat beliefs maintained by safety-seeking behaviours that prevent disconfirmatory evidence being successfully processed. Use of virtual reality could facilitate new learning. To test the hypothesis that enabling patients to test the threat predictions of persecutory delusions in virtual reality social environments with the dropping of safety-seeking behaviours (virtual reality cognitive therapy) would lead to greater delusion reduction than exposure alone (virtual reality exposure). Conviction in delusions and distress in a real-world situation were assessed in 30 patients with persecutory delusions. Patients were then randomised to virtual reality cognitive therapy or virtual reality exposure, both with 30 min in graded virtual reality social environments. Delusion conviction and real-world distress were then reassessed. In comparison with exposure, virtual reality cognitive therapy led to large reductions in delusional conviction (reduction 22.0%, P = 0.024, Cohen's d = 1.3) and real-world distress (reduction 19.6%, P = 0.020, Cohen's d = 0.8). Cognitive therapy using virtual reality could prove highly effective in treating delusions. © The Royal College of Psychiatrists 2016.
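
The effect sizes quoted above are Cohen's d values, i.e. the between-group mean difference standardized by the pooled standard deviation. A minimal computation with invented numbers (not the trial's data), shown only to make the statistic concrete:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Illustrative values only: two equal groups of 15, as in the
# 30-patient randomisation, with made-up means and SDs.
d = cohens_d(mean1=22.0, mean2=2.0, sd1=16.0, sd2=15.0, n1=15, n2=15)
print(round(d, 2))  # → 1.29
```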

  11. Virtual reality in the treatment of persecutory delusions: randomised controlled experimental study testing how to reduce delusional conviction

    PubMed Central

    Freeman, Daniel; Bradley, Jonathan; Antley, Angus; Bourke, Emilie; DeWeever, Natalie; Evans, Nicole; Černis, Emma; Sheaves, Bryony; Waite, Felicity; Dunn, Graham; Slater, Mel; Clark, David M.

    2016-01-01

    Background Persecutory delusions may be unfounded threat beliefs maintained by safety-seeking behaviours that prevent disconfirmatory evidence being successfully processed. Use of virtual reality could facilitate new learning. Aims To test the hypothesis that enabling patients to test the threat predictions of persecutory delusions in virtual reality social environments with the dropping of safety-seeking behaviours (virtual reality cognitive therapy) would lead to greater delusion reduction than exposure alone (virtual reality exposure). Method Conviction in delusions and distress in a real-world situation were assessed in 30 patients with persecutory delusions. Patients were then randomised to virtual reality cognitive therapy or virtual reality exposure, both with 30 min in graded virtual reality social environments. Delusion conviction and real-world distress were then reassessed. Results In comparison with exposure, virtual reality cognitive therapy led to large reductions in delusional conviction (reduction 22.0%, P = 0.024, Cohen's d = 1.3) and real-world distress (reduction 19.6%, P = 0.020, Cohen's d = 0.8). Conclusion Cognitive therapy using virtual reality could prove highly effective in treating delusions. PMID:27151071

  12. A virtual simulator designed for collision prevention in proton therapy.

    PubMed

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho

    2015-10-01

    In proton therapy, collisions between the patient and nozzle potentially occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using treatment virtual simulation. Three-dimensional (3D) modeling of a gantry inner-floor, nozzle, and robotic-couch was performed using SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D-scanner was utilized right before CT scanning. Using the acquired images, a 3D-image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined on the treatment-room coordinate system, resulting in a virtual simulator. The simulator simulated the motion of its components such as rotation and translation of the gantry, nozzle, and couch in real scale. A collision, if any, was examined both in static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D-shape of the phantom agreed with CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and clinical efficiency of proton therapy.
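
The collision test described (a collision is identified if any voxels of two components overlap) reduces to a boolean intersection of occupancy grids. A minimal numpy sketch with toy block geometry standing in for the real nozzle and patient volumes:

```python
import numpy as np

def collides(grid_a, grid_b):
    """Report whether two boolean occupancy grids share any voxel,
    and return the coordinates of the overlapping voxels."""
    overlap = grid_a & grid_b
    return overlap.any(), np.argwhere(overlap)

# Toy 3-D occupancy grids standing in for voxelized nozzle and patient.
nozzle = np.zeros((20, 20, 20), dtype=bool)
patient = np.zeros((20, 20, 20), dtype=bool)
nozzle[5:12, 5:12, 5:12] = True      # nozzle occupies one block
patient[10:18, 10:18, 10:18] = True  # patient occupies another

hit, points = collides(nozzle, patient)
print(hit, len(points))  # overlapping region is the 2x2x2 corner: 8 voxels
```

In a real simulator the grids would come from voxelizing the CAD models and 3D-scanned body contour at each gantry/couch pose, and the overlapping voxel list would drive the collision report.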

  13. A Case Study of Using Online Communities and Virtual Environment in Massively Multiplayer Role Playing Games (MMORPGs) as a Learning and Teaching Tool for Second Language Learners

    ERIC Educational Resources Information Center

    Kongmee, Isara; Strachan, Rebecca; Pickard, Alison; Montgomery, Catherine

    2012-01-01

    Massively Multiplayer Online Role Playing Games (MMORPGs) create large virtual communities. Online gaming shows potential not just for entertaining, but also in education. This research investigates the use of commercial MMORPGs to support second language teaching. MMORPGs offer virtual safe spaces in which students can communicate by using their…

  14. Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Han, Fang; Scott, Stephen L

    Cloud Computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R with respect to virtual machines (VM) providing excellent solutions to reduce either checkpoint latency or checkpoint file size. However, these approaches do not provide both. This paper presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the usage of VMI, we are able to determine which pages of memory within the guest are used or free and are better able to reduce the amount of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM's default method.
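
The central trick, writing only the guest pages that introspection reports as in use, can be sketched independently of any hypervisor. The free-page set below is a stand-in for what a real VMI tool would extract from the guest kernel's page allocator, not an actual VMI API:

```python
# Sketch: checkpoint only guest pages that introspection reports as used.
PAGE_SIZE = 4096

def checkpoint_used_pages(memory, free_pfns):
    """Return {pfn: bytes} for every page NOT reported free by introspection."""
    n_pages = len(memory) // PAGE_SIZE
    image = {}
    for pfn in range(n_pages):
        if pfn in free_pfns:
            continue  # skip pages the guest kernel marks as free
        image[pfn] = memory[pfn * PAGE_SIZE:(pfn + 1) * PAGE_SIZE]
    return image

memory = bytes(16 * PAGE_SIZE)            # 16-page toy guest address space
free_pfns = {1, 2, 3, 7, 8, 9, 10, 11}    # 8 pages reported free by "VMI"
image = checkpoint_used_pages(memory, free_pfns)
print(len(image))  # only the 8 used pages are written
```

Half the toy guest's memory is skipped, which is exactly how the checkpoint file shrinks toward the guest's actual working set.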

  15. Understanding virtual water flows: A multiregion input-output case study of Victoria

    NASA Astrophysics Data System (ADS)

    Lenzen, Manfred

    2009-09-01

    This article explains and interprets virtual water flows from the well-established perspective of input-output analysis. Using a case study of the Australian state of Victoria, it demonstrates that input-output analysis can enumerate virtual water flows without systematic and unknown truncation errors, an issue which has been largely absent from the virtual water literature. Whereas a simplified flow analysis from a producer perspective would portray Victoria as a net virtual water importer, enumerating the water embodiments across the full supply chain using input-output analysis shows Victoria as a significant net virtual water exporter. This study has succeeded in informing government policy in Australia, which is an encouraging sign that input-output analysis will be able to contribute much value to other national and international applications.
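
Input-output enumeration avoids supply-chain truncation by solving the Leontief system, so indirect water use at every upstream tier is captured at once. A toy two-sector sketch (all coefficients invented, not Victorian data):

```python
import numpy as np

# Toy 2-sector economy (e.g. agriculture, manufacturing); numbers invented.
A = np.array([[0.2, 0.1],    # inter-industry requirements per unit output
              [0.3, 0.2]])
d = np.array([100.0, 5.0])   # direct water use per unit output
y = np.array([10.0, 20.0])   # final demand, e.g. exports

# Total (direct + indirect) output needed across the full supply chain,
# i.e. x = (I - A)^-1 y, solved without forming the inverse explicitly:
x = np.linalg.solve(np.eye(2) - A, y)

# Water embodied in final demand, with no truncation of the chain:
virtual_water = d @ x
direct_only = d @ y  # what a simple producer-perspective count would report

print(virtual_water > direct_only)  # indirect use raises the total
```

The gap between `virtual_water` and `direct_only` is precisely the truncation error a simplified flow analysis would commit.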

  16. Successful contracting of prevention services: fighting malnutrition in Senegal and Madagascar.

    PubMed

    Marek, T; Diallo, I; Ndiaye, B; Rakotosalama, J

    1999-12-01

    There are very few documented large-scale successes in nutrition in Africa, and virtually no consideration of contracting for preventive services. This paper describes two successful large-scale community nutrition projects in Africa as examples of what can be done in prevention using the contracting approach in rural as well as urban areas. The two case-studies are the Secaline project in Madagascar, and the Community Nutrition Project in Senegal. The article explains what is meant by 'success' in the context of these two projects, how these results were achieved, and how certain bottlenecks were avoided. Both projects are very similar in the type of service they provide, and in combining private administration with public finance. The article illustrates that contracting out is a feasible option to be seriously considered for organizing certain prevention programmes on a large scale. There are strong indications from these projects of success in terms of reducing malnutrition, replicability and scale, and community involvement. When choosing that option, a government can tap available private local human resources through contracting out, rather than delivering those services by the public sector. However, as was done in both projects studied, consideration needs to be given to using a contract management unit for execution and monitoring, which costs 13-17% of the total project's budget. Rigorous assessments of the cost-effectiveness of contracted services are not available, but improved health outcomes, targeting of the poor, and basic cost data suggest that the programmes may well be relatively cost-effective. Although the contracting approach is not presented as the panacea to solve the malnutrition problem faced by Africa, it can certainly provide an alternative in many countries to increase coverage and quality of services.

  17. Distributed attitude synchronization of formation flying via consensus-based virtual structure

    NASA Astrophysics Data System (ADS)

    Cong, Bing-Long; Liu, Xiang-Dong; Chen, Zhen

    2011-06-01

    This paper presents a general framework for synchronized multiple spacecraft rotations via consensus-based virtual structure. In this framework, attitude control systems for formation spacecrafts and virtual structure are designed separately. Both parametric uncertainty and external disturbance are taken into account. A time-varying sliding mode control (TVSMC) algorithm is designed to improve the robustness of the actual attitude control system. As for the virtual attitude control system, a behavioral consensus algorithm is presented to accomplish the attitude maneuver of the entire formation and guarantee a consistent attitude among the local virtual structure counterparts during the attitude maneuver. A multiple virtual sub-structures (MVSSs) system is introduced to enhance current virtual structure scheme when large amounts of spacecrafts are involved in the formation. The attitude of spacecraft is represented by modified Rodrigues parameter (MRP) for its non-redundancy. Finally, a numerical simulation with three synchronization situations is employed to illustrate the effectiveness of the proposed strategy.
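
The consensus mechanism underlying such schemes (each local virtual-structure counterpart nudges its state toward its neighbours') can be illustrated with a scalar discrete-time iteration. Attitudes here are reduced to scalars on a ring network for clarity, unlike the paper's MRP representation and sliding-mode control:

```python
# Discrete-time consensus on a ring of 4 agents: each agent moves its
# state toward its neighbours'. Scalar states stand in for attitudes.
def consensus_step(states, neighbours, gain=0.2):
    return [
        s + gain * sum(states[j] - s for j in neighbours[i])
        for i, s in enumerate(states)
    ]

states = [0.0, 1.0, 2.0, 3.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

for _ in range(100):
    states = consensus_step(states, ring)

print(all(abs(s - 1.5) < 1e-6 for s in states))  # agree on the average
```

With symmetric weights the iteration preserves the mean, so all agents converge to the average initial state; this is the "consistent attitude among the local virtual structure counterparts" property in miniature.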

  18. Revisiting Parametric Types and Virtual Classes

    NASA Astrophysics Data System (ADS)

    Madsen, Anders Bach; Ernst, Erik

    This paper presents a conceptually oriented updated view on the relationship between parametric types and virtual classes. The traditional view is that parametric types excel at structurally oriented composition and decomposition, and virtual classes excel at specifying mutually recursive families of classes whose relationships are preserved in derived families. Conversely, while class families can be specified using a large number of F-bounded type parameters, this approach is complex and fragile; and it is difficult to use traditional virtual classes to specify object composition in a structural manner, because virtual classes are closely tied to nominal typing. This paper adds new insight about the dichotomy between these two approaches; it illustrates how virtual constraints and type refinements, as recently introduced in gbeta and Scala, enable structural treatment of virtual types; finally, it shows how a novel kind of dynamic type check can detect compatibility among entire families of classes.

  19. CycloPs: generating virtual libraries of cyclized and constrained peptides including nonnatural amino acids.

    PubMed

    Duffy, Fergal J; Verniere, Mélanie; Devocelle, Marc; Bernard, Elise; Shields, Denis C; Chubb, Anthony J

    2011-04-25

    We introduce CycloPs, software for the generation of virtual libraries of constrained peptides including natural and nonnatural commercially available amino acids. The software is written in the cross-platform Python programming language, and features include generating virtual libraries in one-dimensional SMILES and three-dimensional SDF formats, suitable for virtual screening. The stand-alone software is capable of filtering the virtual libraries using empirical measurements, including peptide synthesizability by standard peptide synthesis techniques, stability, and the druglike properties of the peptide. The software and accompanying Web interface is designed to enable the rapid generation of large, structurally diverse, synthesizable virtual libraries of constrained peptides quickly and conveniently, for use in virtual screening experiments. The stand-alone software, and the Web interface for evaluating these empirical properties of a single peptide, are available at http://bioware.ucd.ie .
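
Combinatorial enumeration with property filtering, the core of such library generators, can be sketched without any chemistry toolkit. The monomer alphabet and per-residue "synthesis cost" below are toy stand-ins, not CycloPs' actual residues or synthesizability rules:

```python
from itertools import product

# Toy monomer alphabet with an invented difficulty cost per residue;
# a real tool would use amino-acid SMILES and empirical synthesis rules.
monomers = {"A": 1, "G": 1, "P": 2, "W": 3, "X": 4}  # X: a nonnatural residue

def enumerate_library(length, max_cost):
    """Yield every sequence of the given length whose total cost passes the filter."""
    for seq in product(monomers, repeat=length):
        if sum(monomers[m] for m in seq) <= max_cost:
            yield "".join(seq)

library = list(enumerate_library(length=3, max_cost=5))
print(len(library), library[0])
```

Filtering during generation, rather than after full enumeration, is what keeps such libraries tractable as the alphabet and peptide length grow.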

  20. Nucleon spin-averaged forward virtual Compton tensor at large Q 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, Richard J.; Paz, Gil

    The nucleon spin-averaged forward virtual Compton tensor determines important physical quantities such as electromagnetically-induced mass differences of nucleons, and two-photon exchange contributions in hydrogen spectroscopy. It depends on two kinematic variables:
