Prototyping Faithful Execution in a Java Virtual Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarman, Thomas David; Campbell, Philip LaRoche; Pierson, Lyndon George
2003-09-01
This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, "Principles of Faithful Execution in the Implementation of Trusted Objects" (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. We refer to the extended class loader and JVM collectively as the Sandia Faithfully Executing Java architecture (JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques, which we intend to implement in hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.
The FY17Q2 milestone of the ECP/VTK-m project, which is the first milestone, includes the completion of design documents for the introduction of virtual methods into the VTK-m framework. Specifically, the ability, from within code executing on a device (e.g., a GPU or Xeon Phi), to jump to a virtual method specified at run time. This change will enable us to drastically reduce the compile time and the executable code size of the VTK-m library. Our first design introduced the idea of adding virtual functions to classes that are used during algorithm execution. (Virtual methods were previously banned from the so-called execution environment.) The design was straightforward. VTK-m already has the generic concepts of an “array handle” that provides a uniform interface to memory of different structures and an “array portal” that provides generic access to said memory. These array handles and portals use C++ templating to adapt them to different memory structures. This composition provides a powerful ability to adapt to data sources, but requires knowing static types. The proposed design creates a template specialization of an array portal that decorates another array handle while hiding its type. In this way we can wrap any type of static array handle and then feed it to a single compiled instance of a function. The second design focused on the mechanics of implementing virtual methods on parallel devices, with a focus on CUDA. Our initial experiments on CUDA showed a very large overhead for the standard approach of using C++ classes with virtual methods. Instead, we are using an alternate method, provided by C, that uses function pointers. With the completion of this milestone, we are able to move to the implementation of objects with virtual(-like) methods. The upshot will be much faster compile times and much smaller library/executable sizes.
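The dispatch idea described above, hiding a concrete array portal behind stored function references instead of virtual methods, can be sketched in Python. All names here (`ArrayPortalBasic`, `ArrayPortalVirtual`) are illustrative and do not reflect VTK-m's actual API:

```python
class ArrayPortalBasic:
    """A concrete, statically-typed portal over a flat list."""
    def __init__(self, data):
        self._data = data
    def get(self, index):
        return self._data[index]
    def number_of_values(self):
        return len(self._data)

class ArrayPortalVirtual:
    """Type-erasing wrapper: stores plain function references to the
    concrete portal's methods, so a routine written once against this
    interface works with any portal type it never sees statically."""
    def __init__(self, portal):
        self._get = portal.get                # stored function reference
        self._size = portal.number_of_values  # stored function reference
    def get(self, index):
        return self._get(index)
    def number_of_values(self):
        return self._size()

def total(portal):
    # Compiled (here: defined) once against the erased interface.
    return sum(portal.get(i) for i in range(portal.number_of_values()))

erased = ArrayPortalVirtual(ArrayPortalBasic([1.0, 2.0, 3.0]))
print(total(erased))  # -> 6.0
```

The same shape carries over to C function pointers on a device: the wrapper holds pointers to the concrete type's accessors, avoiding the overhead the abstract reports for C++ virtual dispatch on CUDA.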
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low-level quantum assembly codes and returns the results of such executions.
The effective use of virtualization for selection of data centers in a cloud computing environment
NASA Astrophysics Data System (ADS)
Kumar, B. Santhosh; Parthiban, Latha
2018-04-01
Data centers are facilities that house networks of remote servers to store, access and process data. Cloud computing is a technology in which users worldwide submit tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers employ virtualization so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, thereby minimizing the operational expenses of the service provider.
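A minimal sketch of the selection rule described above, assuming each data center's energy is simply the sum of its servers' virtualization energies (names and numbers are illustrative):

```python
def data_center_energy(server_energies):
    """Total energy of a data center: sum of its per-server energies."""
    return sum(server_energies)

def select_data_center(data_centers):
    """Route to the data center with the least total energy.
    data_centers: dict mapping center name -> list of per-server energies."""
    return min(data_centers,
               key=lambda dc: data_center_energy(data_centers[dc]))

centers = {
    "dc_east": [120.0, 95.5, 110.2],   # total 325.7
    "dc_west": [100.0, 90.0, 99.9],    # total 289.9
}
print(select_data_center(centers))  # -> dc_west
```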
Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason
2010-01-01
Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters, however, the execution time for the DOCK VS increased. 
In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.
Efficient operating system level virtualization techniques for cloud resources
NASA Astrophysics Data System (ADS)
Ansu, R.; Samiksha; Anju, S.; Singh, K. John
2017-11-01
Cloud computing is an advancing technology which provides the services of Infrastructure, Platform and Software. Virtualization and computer utility are the keys to cloud computing. The number of cloud users is increasing day by day, so it is the need of the hour to make resources available on demand to satisfy user requirements. Virtualization is the technique by which resources, namely storage, processing power, memory and network or I/O, are abstracted. Various virtualization techniques are available for executing operating systems: Full System Virtualization and Para Virtualization. In Full Virtualization, the whole hardware architecture is duplicated virtually; no modifications are required in the guest OS, as the OS deals with the VM hypervisor directly. In Para Virtualization, the guest OS must be modified to run in parallel with other OSes, and for the guest OS to access the hardware, the host OS must provide a Virtual Machine Interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance and security. This paper briefs both virtualization techniques and discusses the issues in OS-level virtualization.
A virtual data language and system for scientific workflow management in data grid environments
NASA Astrophysics Data System (ADS)
Zhao, Yong
With advances in scientific instrumentation and simulation, scientific data is growing fast in both size and analysis complexity. So-called Data Grids aim to provide high performance, distributed data analysis infrastructure for data-intensive sciences, where scientists distributed worldwide need to extract information from large collections of data, and to share both data products and the resources needed to produce and store them. However, the description, composition, and execution of even logically simple scientific workflows are often complicated by the need to deal with "messy" issues like heterogeneous storage formats and ad-hoc file system structures. We show how these difficulties can be overcome via a typed workflow notation called the virtual data language, within which issues of physical representation are cleanly separated from logical typing, and by the implementation of this notation within the context of a powerful virtual data system that supports distributed execution. The resulting language and system are capable of expressing complex workflows in a simple compact form, enacting those workflows in distributed environments, monitoring and recording the execution processes, and tracing the derivation history of data products. We describe the motivation, design, implementation, and evaluation of the virtual data language and system, and the application of the virtual data paradigm in various science disciplines, including astronomy and cognitive neuroscience.
iRODS: A Distributed Data Management Cyberinfrastructure for Observatories
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; Vernon, F.
2007-12-01
Large-scale and long-term preservation of both observational and synthesized data requires a system that virtualizes data management concepts. A methodology is needed that can work across long distances in space (distribution) and long periods in time (preservation). The system needs to manage data stored on multiple types of storage systems, including new systems that become available in the future. This concept is called infrastructure independence, and is typically implemented through virtualization mechanisms. Data grids are built upon concepts of data and trust virtualization. These concepts enable the management of collections of data that are distributed across multiple institutions, stored on multiple types of storage systems, and accessed by multiple types of clients. Data virtualization ensures that the name spaces used to identify files, users, and storage systems are persistent, even when files are migrated onto future technology. This is required to preserve authenticity, the link between the record and its descriptive and provenance metadata. Trust virtualization ensures that access controls remain invariant as files are moved within the data grid. This is required to track the chain of custody of records over time. The Storage Resource Broker (http://www.sdsc.edu/srb) is one such data grid used in a wide variety of applications in earth and space sciences such as ROADNet (roadnet.ucsd.edu), SEEK (seek.ecoinformatics.org), GEON (www.geongrid.org) and NOAO (www.noao.edu). Recent extensions to data grids provide one more level of virtualization: policy or management virtualization. Management virtualization ensures that execution of management policies can be automated, and that rules can be created that verify assertions about the shared collections of data. When dealing with distributed large-scale data over long periods of time, the policies used to manage the data and provide assurances about the authenticity of the data become paramount.
The integrated Rule-Oriented Data System (iRODS) (http://irods.sdsc.edu) provides the mechanisms needed not only to describe management policies, but also to track how the policies are applied and their execution results. The iRODS data grid maps management policies to rules that control the execution of remote micro-services. As an example, a rule can be created that automatically creates a replica whenever a file is added to a specific collection, or that extracts its metadata automatically and registers it in a searchable catalog. For the replication operation, the persistent state information consists of the replica location, the creation date, the owner, the replica size, etc. The mechanism used by iRODS for providing policy virtualization is based on well-defined functions, called micro-services, which are chained into alternative workflows using rules. A rule engine, based on the event-condition-action paradigm, executes the rule-based workflows after an event. Rules can be deferred to a pre-determined time or executed on a periodic basis. As the data management policies evolve, the iRODS system can implement new rules, new micro-services, and new state information (metadata content) needed to manage the new policies. Each sub-collection can be managed using a different set of policies. The discussion of the concepts in rule-based policy virtualization and its application to long-term and large-scale data management for observatories such as ORION and NEON will be the basis of the paper.
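The event-condition-action mechanism described above can be sketched as follows. This is an illustrative Python model, not iRODS's actual rule language; micro-services are plain functions chained by rules that fire on events such as a file being added:

```python
replicas = {}

def replicate(event):
    # Micro-service: record a replica along with its persistent state
    # information (location and size; a real system would also keep
    # creation date, owner, etc.).
    replicas.setdefault(event["path"], []).append(
        {"location": "backup_site", "size": event["size"]})

rules = [
    # (event type, condition, chain of micro-services to execute)
    ("file_added", lambda e: e["collection"] == "observatory", [replicate]),
]

def fire(event_type, event):
    """Rule engine: on an event, run every matching rule whose
    condition holds, executing its micro-service chain in order."""
    for etype, condition, actions in rules:
        if etype == event_type and condition(event):
            for action in actions:
                action(event)

fire("file_added", {"path": "/observatory/a.dat",
                    "collection": "observatory", "size": 4096})
print(len(replicas["/observatory/a.dat"]))  # -> 1
```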
Software attribute visualization for high integrity software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-03-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
NASA Astrophysics Data System (ADS)
Ge, Yuanzheng; Chen, Bin; Liu, Liang; Qiu, Xiaogang; Song, Hongbin; Wang, Yong
2018-02-01
Individual-based computational environment provides an effective solution to study complex social events by reconstructing scenarios. Challenges remain in reconstructing the virtual scenarios and reproducing the complex evolution. In this paper, we propose a framework to reconstruct a synthetic computational environment, reproduce the epidemic outbreak, and evaluate management interventions in a virtual university. The reconstructed computational environment includes 4 fundamental components: the synthetic population, behavior algorithms, multiple social networks, and geographic campus environment. In the virtual university, influenza H1N1 transmission experiments are conducted, and gradually enhanced interventions are evaluated and compared quantitatively. The experiment results indicate that the reconstructed virtual environment provides a solution to reproduce complex emergencies and evaluate policies to be executed in the real world.
Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong
2014-01-01
The cloud platform provides various services to users. More and more cloud centers provide infrastructure as the main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharing the resources through virtualization. Considering both QoS for users and cost saving for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on the cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts. Then it continues to execute the genetic algorithm of the second stage with the solutions obtained from the first stage as the initial population. The solution calculated by the genetic algorithm of the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform. PMID:25097872
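A toy Python sketch of the two-stage scheme described above (not the paper's actual DPGA): stage one runs an independent genetic algorithm for each selected host, and stage two seeds a final run with the stage-one solutions. The fitness used here, load imbalance across physical hosts, is a stand-in for the paper's performance/energy objective:

```python
import random

def cost(placement, n_hosts):
    """Toy fitness: load imbalance across physical hosts (lower is better)."""
    loads = [0] * n_hosts
    for host in placement:
        loads[host] += 1
    return max(loads) - min(loads)

def ga(population, n_hosts, generations=30, rng=None):
    """One genetic-algorithm run with elitism, crossover and mutation."""
    rng = rng or random.Random(0)
    for _ in range(generations):
        population.sort(key=lambda p: cost(p, n_hosts))
        parents = population[:len(population) // 2]   # elitism: keep best half
        children = []
        for _ in range(len(population) - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(len(a))               # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                    # occasional mutation
                child[rng.randrange(len(child))] = rng.randrange(n_hosts)
            children.append(child)
        population = parents + children
    return min(population, key=lambda p: cost(p, n_hosts))

def dpga(n_vms=12, n_hosts=4, n_stage1_hosts=3):
    rng = random.Random(42)
    def random_population():
        return [[rng.randrange(n_hosts) for _ in range(n_vms)]
                for _ in range(10)]
    # Stage 1: independent GA runs on several selected physical hosts.
    stage1 = [ga(random_population(), n_hosts, rng=rng)
              for _ in range(n_stage1_hosts)]
    # Stage 2: stage-1 solutions seed the initial population of a final GA.
    return ga(stage1 + random_population(), n_hosts, rng=rng)

best = dpga()
print(cost(best, 4))  # 0 would mean a perfectly balanced placement
```

Because the best half of each population survives unchanged, the best solution never degrades between generations, so the stage-2 result is at least as good as any stage-1 output.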
Paging memory from random access memory to backing storage in a parallel computer
Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E
2013-05-21
Paging memory from random access memory ('RAM') to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
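The arrangement claimed above can be modeled with a toy Python sketch, in which a page of RAM on a first compute node is swapped to backing storage provided by a second node (all class and function names here are hypothetical, for illustration only):

```python
class ComputeNode:
    def __init__(self, name):
        self.name = name
        self.ram = {}      # page id -> page contents (local RAM)
        self.backing = {}  # backing storage offered to other nodes

def swap_out(first, second, page_id):
    """Move one page of RAM on the first node to backing storage
    provided by the second node."""
    second.backing[page_id] = first.ram.pop(page_id)

def swap_in(first, second, page_id):
    """Bring a previously swapped-out page back into the first node's RAM."""
    first.ram[page_id] = second.backing.pop(page_id)

node1, node2 = ComputeNode("node1"), ComputeNode("node2")
node1.ram[0] = b"application data"
swap_out(node1, node2, 0)
print(0 in node1.ram, 0 in node2.backing)  # -> False True
```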
Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure
Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.
2015-09-29
In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices as well as the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework verified by theoretically correct results expected from analytical models of the same scenarios.
In the largest high fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.
Adamovich, S.V.; August, K.; Merians, A.; Tunik, E.
2017-01-01
Purpose: Emerging evidence shows that interactive virtual environments (VEs) may be a promising tool for studying sensorimotor processes and for rehabilitation. However, the potential of VEs to recruit action observation-execution neural networks is largely unknown. For the first time, a functional MRI-compatible virtual reality (VR) system has been developed to provide a window into studying brain-behavior interactions. This system is capable of measuring the complex span of hand-finger movements and simultaneously streaming this kinematic data to control the motion of representations of human hands in virtual reality. Methods: In a blocked fMRI design, thirteen healthy subjects observed, with the intent to imitate (OTI), finger sequences performed by a virtual hand avatar seen from a first-person perspective and animated by pre-recorded kinematic data. Following this, subjects imitated the observed sequence while viewing the virtual hand avatar animated by their own movement in real time. These blocks were interleaved with rest periods, during which subjects viewed static virtual hand avatars, and control trials, in which the avatars were replaced with moving non-anthropomorphic objects. Results: We show three main findings. First, both observation with intent to imitate and imitation with real-time virtual avatar feedback were associated with activation in a distributed frontoparietal network typically recruited for observation and execution of real-world actions. Second, we noted a time-variant increase in activation in the left insular cortex for observation with intent to imitate actions performed by the virtual avatar. Third, imitation with virtual avatar feedback (relative to the control condition) was associated with localized recruitment of the angular gyrus, precuneus, and extrastriate body area, regions which are (along with the insular cortex) associated with the sense of agency.
Conclusions: Our data suggest that the virtual hand avatars may have served as disembodied training tools in the observation condition and as embodied “extensions” of the subject’s own body (pseudo-tools) in the imitation condition. These data advance our understanding of brain-behavior interactions when performing actions in VEs and have implications for the development of observation- and imitation-based VR rehabilitation paradigms. PMID:19531876
Providing Assistive Technology Applications as a Service Through Cloud Computing.
Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio
2015-01-01
Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.
Malware detection and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Ken; Lloyd, Levi; Crussell, Jonathan
Embodiments of the invention describe systems and methods for malicious software detection and analysis. A binary executable comprising obfuscated malware on a host device may be received, and incident data indicating a time when the binary executable was received and identifying processes operating on the host device may be recorded. The binary executable is analyzed via a scalable plurality of execution environments, including one or more non-virtual execution environments and one or more virtual execution environments, to generate runtime data and deobfuscation data attributable to the binary executable. At least some of the runtime data and deobfuscation data attributable to the binary executable is stored in a shared database, while at least some of the incident data is stored in a private, non-shared database.
Review of Enabling Technologies to Facilitate Secure Compute Customization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine
High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches, and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand the performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VEs), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VMs), e.g., Xen, KVM.
This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provides the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs in the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources, while maintaining good performance. In Section 4.1 we describe our tests with LXC as a non-root user and leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file-systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations that were performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. 
In Section 4.2 we performed tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time and MFLOPS when running in hypervisor-based environments (VMs), as compared to near-native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The native and Docker-based tests achieved roughly 9 Gbit/s or more, while the KVM configuration only achieved 2.5 Gbit/s (Table 4.6 on page 32). This may be a configuration issue with our KVM installation and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: Section 1 introduces the report and clarifies the scope of the proj...
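The 2-4% overhead figures quoted above reduce to a one-line relative-difference calculation over wall-clock timings. A small sketch with hypothetical timing values, not the report's measured data:

```python
# Illustrative only: computing the overhead of a virtualized run against a
# native baseline, as in the HPCCG comparison described above. The timing
# values below are hypothetical, not taken from the report.

def relative_overhead(native_seconds: float, virtualized_seconds: float) -> float:
    """Return the overhead of a virtualized run as a percentage of native time."""
    return (virtualized_seconds - native_seconds) / native_seconds * 100.0

native = 100.0   # native (host) wall-clock time, seconds
docker = 100.5   # VE (container) run: near-native
kvm = 103.0      # VM (hypervisor) run: a few percent slower

print(f"Docker overhead: {relative_overhead(native, docker):.1f}%")
print(f"KVM overhead:    {relative_overhead(native, kvm):.1f}%")
```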
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offer functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and performance monitoring, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability into a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
Validation of the Virtual MET as an assessment tool for executive functions.
Rand, Debbie; Basha-Abu Rukan, Soraya; Weiss, Patrice L Tamar; Katz, Noomi
2009-08-01
The purpose of this study was to establish ecological validity and initial construct validity of a Virtual Multiple Errands Test (VMET) as an assessment tool for executive functions. It was implemented within the Virtual Mall (VMall), a novel functional video-capture virtual shopping environment. The main objectives were (1) to examine the relationships between the performance of three groups of participants in the Multiple Errands Test (MET) carried out in a real shopping mall and their performance in the VMET, (2) to assess the relationships of MET and VMET performance with the post-stroke participants' level of executive functioning and independence in instrumental activities of daily living, and (3) to compare the performance of post-stroke participants to that of healthy young and older controls in both the MET and VMET. The study population included three groups: post-stroke participants (n = 9), healthy young participants (n = 20), and healthy older participants (n = 20). The VMET was able to differentiate between two age groups of healthy participants and between healthy and post-stroke participants, thus demonstrating that it is sensitive to brain injury and ageing and supporting known-groups construct validity. In addition, significant correlations were found between the MET and the VMET for both the post-stroke participants and the older healthy participants. This provides initial support for the ecological validity of the VMET as an assessment tool of executive functions. However, further psychometric data on temporal stability are needed, namely test-retest reliability and responsiveness, before it is ready for clinical application. Further research using the VMET as an assessment tool within the VMall with larger groups and in additional populations is also recommended.
LHCb experience with running jobs in virtual machines
NASA Astrophysics Data System (ADS)
McNab, A.; Stagni, F.; Luzzi, C.
2015-12-01
The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch-queue-based sites. We describe our operational experiences in running production on VM-based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.
Bio-Docklets: virtualization containers for single-step execution of NGS pipelines.
Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis; Krampis, Konstantinos
2017-08-01
Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint, so from a user perspective, running a pipeline is as simple as running a single bioinformatics tool. This is achieved using a "meta-script" that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, Bio-Docklets also enable developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. © The Authors 2017. Published by Oxford University Press.
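The single input/output endpoint pattern described above can be pictured as mounting exactly one input and one output directory into the container. A hedged sketch; the image name and mount paths here are hypothetical, not the actual Bio-Docklets conventions:

```python
# Sketch of the single input/output endpoint pattern: each containerized
# pipeline is driven by one read-only input mount and one output mount.
# Image name and container paths are illustrative, not Bio-Docklets' own.

def build_docker_cmd(image: str, input_dir: str, output_dir: str) -> list[str]:
    """Build a `docker run` command exposing one input and one output endpoint."""
    return [
        "docker", "run", "--rm",
        "-v", f"{input_dir}:/data/input:ro",   # single data input endpoint
        "-v", f"{output_dir}:/data/output",    # single data output endpoint
        image,
    ]

cmd = build_docker_cmd("biodocklet/rnaseq", "/home/user/reads", "/home/user/results")
print(" ".join(cmd))
```

A meta-script following this pattern only ever needs to know two paths per run, which is what lets a multistep pipeline feel like a single tool.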
Bedside wellness--development of a virtual forest rehabilitation system.
Ohsuga, M; Tatsuno, Y; Shimono, F; Hirasawa, K; Oyama, H; Okamura, H
1998-01-01
The present study aims at the development of a new concept system that will contribute toward improving the quality of life for bedridden patients and the elderly. The results of a basic study showed the possibility of a virtual reality system reducing stress and pain, provided VR sickness does not occur. Based on that basic study, a Bedside Wellness System that lets a person experience a virtual forest walk and provides a rehabilitation facility was proposed and developed. An experiment to assess the developed system was conducted with healthy subjects. The data suggested positive effects of the system; however, some points to be improved were also identified. After a few improvements, the system will be available for clinical use.
Chip architecture - A revolution brewing
NASA Astrophysics Data System (ADS)
Guterl, F.
1983-07-01
Techniques being explored by microchip designers and manufacturers to speed up memory access and instruction execution while protecting memory are discussed. Attention is given to hardwiring control logic, pipelining for parallel processing, devising orthogonal instruction sets with interchangeable instruction fields, and the development of hardware implementations of virtual memory and multiuser systems to provide memory management and protection. The inclusion of microcode in mainframes eliminated logic circuits that control timing and gating of the CPU. However, improvements in memory architecture have reduced access time to below that needed for instruction execution. Hardwiring the functions of a virtual memory enhances memory protection. Parallelism involves a redundant architecture, which allows identical operations to be performed simultaneously, and can be directed with microcode to avoid aborting intermediate instructions once one set of instructions has been completed.
Multiplexing Low and High QoS Workloads in Virtual Environments
NASA Astrophysics Data System (ADS)
Verboven, Sam; Vanmechelen, Kurt; Broeckhove, Jan
Virtualization technology has introduced new ways of managing IT infrastructure. The flexible deployment of applications through self-contained virtual machine images has removed the barriers to multiplexing, suspending, and migrating applications with their entire execution environment, allowing for a more efficient use of the infrastructure. These developments have given rise to an important challenge regarding the optimal scheduling of virtual machine workloads. In this paper, we specifically address the VM scheduling problem in which workloads that require guaranteed levels of CPU performance are mixed with workloads that do not require such guarantees. We introduce a framework to analyze this scheduling problem and evaluate to what extent such mixed service delivery is beneficial for a provider of virtualized IT infrastructure. Traditionally, providers offer IT resources under a guaranteed and fixed performance profile, which can lead to underutilization. The findings of our simulation study show that through proper tuning of a limited set of parameters, the proposed scheduling algorithm allows for a significant increase in utilization without sacrificing performance dependability.
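One way to picture the mixed service delivery discussed above is a greedy admission step that packs best-effort workloads into CPU capacity left unused by guaranteed reservations. This is an illustrative toy, not the paper's scheduling algorithm:

```python
# Illustrative sketch (not the paper's algorithm): multiplexing best-effort
# VM loads into capacity left unused by guaranteed-QoS reservations.

def schedule(capacity: float, guaranteed: list[float], best_effort: list[float]):
    """Admit all guaranteed loads, then fill remaining capacity greedily
    (smallest first) with best-effort loads.
    Returns (admitted_best_effort, utilization)."""
    used = sum(guaranteed)
    if used > capacity:
        raise ValueError("guaranteed demand exceeds capacity")
    admitted = []
    for load in sorted(best_effort):
        if used + load <= capacity:
            admitted.append(load)
            used += load
    return admitted, used / capacity

admitted, util = schedule(8.0, [2.0, 3.0], [1.5, 2.0, 4.0])
print(admitted, util)  # best-effort jobs packed into the slack
```

Utilization rises above what the guaranteed reservations alone would achieve, while no guaranteed load is ever displaced, which is the intuition behind the paper's result.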
2004-02-09
VIRTUAL COLLABORATION: ADVANTAGES AND DISADVANTAGES IN THE PLANNING AND EXECUTION OF OPERATIONS. Naval War College, Newport, R.I. ...warfare is not one system; it is a system of systems from sensors to information flow. In analyzing the specific advantages and disadvantages of one of...
Vourvopoulos, Athanasios; Bermúdez I Badia, Sergi
2016-08-09
The use of Brain-Computer Interface (BCI) technology in neurorehabilitation provides new strategies to overcome stroke-related motor limitations. Recent studies demonstrated the brain's capacity for functional and structural plasticity through BCI. However, it is not fully clear how we can take full advantage of the neurobiological mechanisms underlying recovery and how to maximize restoration through BCI. In this study we investigate the role of multimodal virtual reality (VR) simulations and motor priming (MP) in an upper limb motor-imagery BCI task in order to maximize the engagement of sensory-motor networks in a broad range of patients who can benefit from virtual rehabilitation training. In order to investigate how different BCI paradigms impact brain activation, we designed three experimental conditions in a within-subject design: an immersive Multimodal Virtual Reality with Motor Priming (VRMP) condition where users had to perform motor execution before BCI training, an immersive Multimodal VR condition, and a control condition with standard 2D feedback. Further, these were also compared to overt motor execution. Finally, a set of questionnaires was used to gather subjective data on workload, kinesthetic imagery, and presence. Our findings show increased capacity to modulate and enhance brain activity patterns in all extracted EEG rhythms, matching more closely those present during motor execution, and also a strong relationship between electrophysiological data and subjective experience. Our data suggest that both VR and particularly MP can enhance the activation of brain patterns present during overt motor execution. Further, we show changes in the interhemispheric EEG balance, which might play an important role in the promotion of neural activation and neuroplastic changes in stroke patients in a motor-imagery neurofeedback paradigm.
In addition, electrophysiological correlates of psychophysiological responses provide valuable information about the motor and affective state of the user that could be used to predict MI-BCI training outcome based on the user's profile. Finally, we propose a BCI paradigm in VR that enables motor priming for patients with a low level of motor control.
μπ: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution and to the scalability and efficiency of the underlying parallel discrete event simulation engine, µsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
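The purely discrete event style of execution mentioned above can be sketched as a priority queue of time-stamped events processed in strict virtual-time order. A toy sequential event loop, not the paper's parallel simulation engine, whose synchronization across host ranks is far more involved:

```python
# Minimal discrete-event sketch: actions of virtual MPI ranks are events
# ordered by virtual time, not by wall-clock arrival. Toy example only.
import heapq

def simulate(events):
    """events: list of (virtual_time, rank, action) tuples.
    Processes them in strict virtual-time order and returns the trace."""
    heapq.heapify(events)
    trace = []
    while events:
        t, rank, action = heapq.heappop(events)  # lowest virtual time first
        trace.append((t, rank, action))
    return trace

trace = simulate([(5.0, 1, "recv"), (1.0, 0, "send"), (3.0, 2, "compute")])
print(trace)  # events come out ordered by virtual time, regardless of input order
```

Because every rank's actions are replayed in the same virtual-time order on every run, this style of execution is what makes repeatability possible.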
Computing Services and Assured Computing
2006-05-01
...fighters' ability to execute the mission." We run IT systems that: provide medical care; pay the warfighters; manage maintenance... users • 1,400 applications • 18 facilities • 180 software vendors • 18,000+ copies of executive software products • virtually every type of mainframe and...
Software as a service approach to sensor simulation software deployment
NASA Astrophysics Data System (ADS)
Webster, Steven; Miller, Gordon; Mayott, Gregory
2012-05-01
Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute it for the purpose at hand. This approach leads to rigid system integrations that require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision Electronic Sensors (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, the enabled and managed system of simulations yields durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and let the domain community benefit from immediate deployment of lessons learned.
Introduction of Virtualization Technology to Multi-Process Model Checking
NASA Technical Reports Server (NTRS)
Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu
2009-01-01
Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.
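The channel cache described above can be reduced to memoizing peer replies keyed by the request history seen so far, with the peer invoked only on a cache miss. A toy sketch with illustrative names; JPF's actual cache and restart logic are more involved:

```python
# Toy version of a communication-channel cache: replies from the peer
# process are memoized by request history, so a re-explored schedule can
# be served without re-running the peer. Names are illustrative only.

class ChannelCache:
    def __init__(self, peer):
        self.peer = peer   # callable standing in for the real peer process
        self.cache = {}

    def request(self, history: tuple):
        """Return the peer's reply for a request history, caching it."""
        if history not in self.cache:
            self.cache[history] = self.peer(history)  # peer must actually run
        return self.cache[history]

calls = []
def peer(history):
    calls.append(history)          # record each real peer invocation
    return f"ack:{len(history)}"

cc = ChannelCache(peer)
cc.request(("GET a",))
cc.request(("GET a",))             # same schedule re-explored: served from cache
print(len(calls))                  # → 1
```

A non-deterministic target breaks this scheme exactly as the abstract says: a different output sequence means a different history key, forcing a fresh peer run, which is the cost the proposed virtualization snapshots aim to avoid.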
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) the ability for VMs to take arbitrary leaps in virtual time to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1%, with almost the same run-time efficiency as that of the highly efficient non-simulation VM schedulers.
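A minimal version of simulation time-ordered scheduling always dispatches the virtual core holding the lowest virtual clock. A toy sketch; the quantum and core names are illustrative, and the paper's scheduler additionally supports virtual-time leaps for idle cores:

```python
# Toy sketch of simulation time-ordered scheduling: each step runs the
# virtual core with the smallest virtual clock for one quantum, keeping
# all virtual clocks advancing in simulation-time order.

def run_time_ordered(clocks: dict[str, float], quantum: float, steps: int):
    """Advance the virtual core with the smallest virtual clock each step;
    return the dispatch order."""
    order = []
    for _ in range(steps):
        vcore = min(clocks, key=clocks.get)  # lowest virtual time runs next
        clocks[vcore] += quantum
        order.append(vcore)
    return order

order = run_time_ordered({"vm0.core0": 0.0, "vm0.core1": 0.5, "vm1.core0": 0.2}, 1.0, 4)
print(order)  # cores are dispatched in virtual-clock order, not round-robin
```

Unlike a fairness-oriented non-simulation scheduler, this policy never lets one virtual core's clock run far ahead of another's, which is what keeps the time-ordering error small.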
A virtual shopping task for the assessment of executive functions: Validity for people with stroke.
Nir-Hadad, Shira Yama; Weiss, Patrice L; Waizman, Anna; Schwartz, Natalia; Kizony, Rachel
2017-07-01
The importance of assessing executive functions (EF) using ecologically valid assessments has been discussed extensively. Due to the difficulty of carrying out such assessments in real-world settings on a regular basis, virtual reality has been proposed as a technique to provide complex functional tasks under a variety of differing conditions while measuring various aspects of performance and controlling for stimuli. The main goal of this study was to examine the discriminant, construct-convergent and ecological validity of the Adapted Four-Item Shopping Task, an assessment of the Instrumental Activity of Daily Living (IADL) of shopping. Nineteen people with stroke, aged 50-85 years, and 20 age- and gender-matched healthy participants performed the shopping task in both the SeeMe Virtual Interactive Shopping environment and a real shopping environment (the hospital cafeteria) in a counterbalanced order. The shopping task outcomes were compared to clinical measures of EF. The findings provided good initial support for the validity of the Adapted Four-Item Shopping Task as an IADL assessment that requires the use of EF for people with stroke. Further studies should examine this task with a larger sample of people with stroke as well as with other populations who have deficits in EF.
The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties
NASA Astrophysics Data System (ADS)
Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña
2008-08-01
A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine preserving the non-functional properties of the system, and especially its real-time properties, all the way from high-level design models down to executable code. The paper describes one instance of the virtual machine concept that provides support for the preservation of temporal properties both at the source code level, by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour, and at run time, by monitoring the temporal behaviour of the system. The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.
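The run-time monitoring of temporal behaviour described above can be pictured as checking each job's measured execution time against its statically declared budget. A hedged sketch; the job names and budget values are invented, and the ASSERT kernel's actual mechanism is not reproduced here:

```python
# Hedged sketch of run-time temporal monitoring: flag any job whose
# observed execution time exceeds its statically declared budget (WCET).
# Job names and numbers below are invented for illustration.

def check_budgets(jobs: list[tuple[str, float, float]]) -> list[str]:
    """jobs: (name, wcet_budget, measured_time). Return names that overran."""
    return [name for name, budget, measured in jobs if measured > budget]

overruns = check_budgets([("telemetry", 2.0, 1.7), ("guidance", 1.0, 1.3)])
print(overruns)  # jobs whose observed time exceeded their declared budget
```

The static-analysis side of the approach plays the complementary role: only components whose budgets can be derived analytically are admitted in the first place.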
Robitaille, Nicolas; Jackson, Philip L; Hébert, Luc J; Mercier, Catherine; Bouyer, Laurent J; Fecteau, Shirley; Richards, Carol L; McFadyen, Bradford J
2017-10-01
This proof-of-concept study tested the ability of a dual-task walking protocol, using a recently developed avatar-based virtual reality (VR) platform, to detect differences between military personnel post mild traumatic brain injury (mTBI) and healthy controls. The VR platform coordinated motion capture, an interaction and rendering system, and a projection system to present first-person (participant-controlled) and third-person avatars within the context of a specific military patrol scene. A divided-attention task was also added. A healthy control group was compared to a group with previous mTBI (both groups comprised six military personnel), and a repeated-measures ANOVA tested for differences between conditions and groups based on recognition errors, walking speed, walking fluidity, and obstacle clearance. The VR platform was well tolerated by both groups. Walking fluidity was degraded for the control group within the more complex navigational dual tasking involving avatars, and the degradation appeared greatest in the dual tasking with the interacting avatar. This navigational behaviour was not seen in the mTBI group. The present findings show proof of concept for using avatars, particularly more interactive avatars, to expose differences in executive functioning when applying context-specific protocols (here, for the military). Implications for rehabilitation: Virtual reality provides a means to control context-specific factors for assessment and intervention. Adding human interaction and agency through avatars increases the ecological nature of the virtual environment. Avatars in the present application of the virtual reality avatar interaction platform appear to provide a better ability to reveal differences between trained military personnel with and without mTBI.
Dynamic Test Generation for Large Binary Programs
2009-11-12
the fuzzing@whitestar.linuxbox.org mailing list, including Jared DeMott, Disco Jonny, and Ari Takanen, for discussions on fuzzing tradeoffs. Martin... as is the case for large applications where exercising all execution paths is virtually hopeless anyway. This point will be further discussed in... consumes trace files generated by iDNA and virtually re-executes the recorded runs. TruScan offers several features that substantially simplify symbolic
Distributed collaborative environments for virtual capability-based planning
NASA Astrophysics Data System (ADS)
McQuay, William K.
2003-09-01
Distributed collaboration is an emerging technology that will significantly change how decisions are made in the 21st century. Collaboration involves two or more geographically dispersed individuals working together to share and exchange data, information, knowledge, and actions. The marriage of information, collaboration, and simulation technologies provides the decision maker with a collaborative virtual environment for planning and decision support. This paper reviews research focusing on applying an open-standards, agent-based framework with integrated modeling and simulation to a new Air Force initiative in capability-based planning and the ability to implement it in a distributed virtual environment. The Virtual Capability Planning effort will provide decision-quality knowledge for Air Force resource allocation and investment planning, including examination of proposed capabilities and the cost of alternative approaches, the impact of technologies, identification of primary risk drivers, and creation of executable acquisition strategies. The transformed Air Force business processes are enabled by iterative use of constructive and virtual modeling, simulation, and analysis together with information technology. These tools are applied collaboratively via a technical framework by all the affected stakeholders: warfighter, laboratory, product center, logistics center, test center, and primary contractor.
Soar, K; Chapman, E; Lavan, N; Jansari, A S; Turner, J J D
2016-10-01
Caffeine has been shown to have effects on certain areas of cognition, but research on executive functioning is limited and inconsistent. One reason could be the need for a more sensitive measure to detect the effects of caffeine on executive function. This study used a new non-immersive virtual reality assessment of executive functions known as JEF(©) (the Jansari Assessment of Executive Function) alongside the 'classic' Stroop Colour-Word task to assess the effects of a normal dose of caffeinated coffee on executive function. Using a double-blind, counterbalanced, within-participants procedure, 43 participants were administered either caffeinated or decaffeinated coffee and completed the JEF(©) and Stroop tasks, as well as a subjective mood scale and blood pressure measurements pre- and post-condition, on two separate occasions a week apart. JEF(©) yields measures for eight separate aspects of executive function, in addition to a total average score. Findings indicate that performance was significantly improved on the planning, creative thinking, and event-, time- and action-based prospective memory components, as well as the total JEF(©) score, following caffeinated coffee relative to decaffeinated coffee. The caffeinated beverage significantly decreased reaction times on the Stroop task, but there was no effect on Stroop interference. The results provide further support for the effects of a caffeinated beverage on cognitive functioning. In particular, the study demonstrated the ability of JEF(©) to detect the effects of caffeine across a number of executive functioning constructs, which were not shown in the Stroop task, suggesting that executive functioning improvements as a result of a 'typical' dose of caffeine may only be detected by the use of more real-world, ecologically valid tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
Cloud resources nowadays contribute an essential share of resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine, since version 3, is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype into a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on upcoming developments, including support for Scientific Linux 7, the use of container virtualization such as that provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.
Virtual reality measures in neuropsychological assessment: a meta-analytic review.
Neguț, Alexandra; Matu, Silviu-Andrei; Sava, Florin Alin; David, Daniel
2016-02-01
Virtual reality-based assessment is a new paradigm for neuropsychological evaluation that may provide a more ecological assessment than paper-and-pencil or computerized testing. Previous research has explored the use of virtual reality in neuropsychological assessment, but no meta-analysis has focused on the sensitivity of virtual reality-based measures of cognitive processes across various populations. We found eighteen studies that compared cognitive performance between clinical patients and healthy controls on virtual reality measures. Based on a random effects model, the results indicated a large effect size in favor of healthy controls (g = .95). For executive functions, memory, and visuospatial analysis, subgroup analysis revealed moderate to large effect sizes, with superior performance by healthy controls. Participants' mean age, type of clinical condition, type of exploration within the virtual environment, and the presence of distractors were significant moderators. Our findings support the sensitivity of virtual reality-based measures in detecting cognitive impairment. They highlight the possibility of using virtual reality measures for neuropsychological assessment in research applications as well as in clinical practice.
The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2012-03-20
The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that help enhance the overall productivity of application science on diverse HPC platforms. Two complementary frameworks were designed: first, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and second, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance the portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (GAMESS-US) on the Cray XT5 machine. The second facet, termed Unibus, aims to facilitate the provisioning and aggregation of multifaceted resources from both resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems, and computing clouds.
Formation of Virtual Organizations in Grids: A Game-Theoretic Approach
NASA Astrophysics Data System (ADS)
Carroll, Thomas E.; Grosu, Daniel
The execution of large scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.
77 FR 54919 - National Institute on Drug Abuse; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-06
... Institutes of Health; Neuroscience Center, 6001 Executive Boulevard, Rockville, MD 20852. Contact Person... Institutes of Health, Neuroscience Center, 6001 Executive Boulevard, Rockville, MD 20852. Contact Person...: National Institutes of Health, Neuroscience Center, 6001 Executive Boulevard, Rockville, MD 20852, (Virtual...
Design, Development, and Automated Verification of an Integrity-Protected Hypervisor
2012-07-16
Hypervisors are a popular mechanism for implementing software virtualization. Since hypervisors execute at a very high privilege level, they must be secure. A fundamental security...using the CBMC model checker. CBMC verified XMHF's implementation, about 4700 lines of C code, in about 80 seconds using less than 2GB of RAM.
Renison, Belinda; Ponsford, Jennie; Testa, Renee; Richardson, Barry; Brownfield, Kylie
2012-05-01
Virtual reality (VR) assessment paradigms have the potential to address the limited ecological validity of pen-and-paper measures of executive function (EF) and the pragmatic and reliability issues associated with functional measures. To investigate the ecological validity and construct validity of a newly developed VR measure of EF, the Virtual Library Task (VLT), it was administered to 30 patients with traumatic brain injury (TBI) and 30 healthy controls, alongside an analogous real-life task (the Real Library Task, RLT) and five neuropsychological measures of EF. Significant others for each participant also completed the Dysexecutive Questionnaire (DEX), a behavioral rating scale of everyday EF. Performances on the VLT and the RLT were significantly positively correlated, indicating that VR performance resembles real-world performance. The TBI group performed significantly worse than the control group on the VLT and the Modified Six Elements Test (MSET), but the other four neuropsychological measures of EF failed to differentiate the groups. Both the MSET and the VLT significantly predicted everyday EF, suggesting that both are ecologically valid tools for the assessment of EF. The VLT has the advantage over the MSET of providing objective measurement of individual components of EF.
Direct access inter-process shared memory
Brightwell, Ronald B; Pedretti, Kevin; Hudson, Trammell B
2013-10-22
A technique for directly sharing physical memory between processes executing on processor cores is described. The technique includes loading a plurality of processes into the physical memory for execution on a corresponding plurality of processor cores sharing the physical memory. An address space is mapped to each of the processes by populating a first entry in a top level virtual address table for each of the processes. The address space of each of the processes is cross-mapped into each of the processes by populating one or more subsequent entries of the top level virtual address table with the first entry in the top level virtual address table from other processes.
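The patented mechanism operates at the page-table level (populating top-level virtual address table entries), which has no direct user-space equivalent. Still, its observable effect, two processes reading and writing the same physical pages, can be sketched with POSIX shared memory. The following is an illustrative analogue only, not the patented technique:

```python
# Illustrative analogue: POSIX shared memory gives two processes a view
# of the same physical region, mimicking the effect of the cross-mapped
# address spaces described in the abstract.
from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to the same physical region from a second process.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42   # the write lands in shared physical memory
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=4096)
    p = Process(target=writer, args=(shm.name,))
    p.start()
    p.join()
    assert shm.buf[0] == 42   # the child's write is visible here
    shm.close()
    shm.unlink()
```

Unlike this sketch, which requires an explicit attach call per process, the patent's cross-mapping makes every process's address space visible in every other process automatically via the shared top-level table entries.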
Besnard, Jeremy; Richard, Paul; Banville, Frederic; Nolin, Pierre; Aubin, Ghislaine; Le Gall, Didier; Richard, Isabelle; Allain, Phillippe
2016-01-01
Traumatic brain injury (TBI) causes impairments affecting instrumental activities of daily living (IADL). However, few studies have considered virtual reality as an ecologically valid tool for the assessment of IADL in patients who have sustained a TBI. The main objective of the present study was to examine the use of the Nonimmersive Virtual Coffee Task (NI-VCT) for IADL assessment in patients with TBI. We analyzed the performance of 19 adults suffering from TBI and 19 healthy controls (HCs) in the real and virtual tasks of making coffee with a coffee machine, as well as in global IQ and executive functions. Patients performed worse than HCs on both real and virtual tasks and on all tests of executive functions. Correlation analyses revealed that NI-VCT scores were related to scores on the real task. Moreover, regression analyses demonstrated that performance on NI-VCT matched real-task performance. Our results support the idea that the virtual kitchen is a valid tool for IADL assessment in patients who have sustained a TBI.
The road to virtual: the Sauls Memorial Virtual Library journey.
Waddell, Stacie; Harkness, Amy; Cohen, Mark L
2014-01-01
The Sauls Memorial Virtual Library closed its physical space in 2012. This article outlines the reasons for this change and how the library staff and hospital leadership planned and executed the enormous undertaking. Outcomes of the change and lessons learned from the process are discussed.
NASA Technical Reports Server (NTRS)
Hammrs, Stephan R.
2008-01-01
Virtual Satellite (VirtualSat) is a computer program that creates an environment that facilitates the development, verification, and validation of flight software for a single spacecraft or for multiple spacecraft flying in formation. In this environment, enhanced functionality and autonomy of the navigation, guidance, and control systems of a spacecraft are provided by a virtual satellite, that is, a computational model that simulates the dynamic behavior of the spacecraft. Within this environment, it is possible to execute any associated software whose development could benefit from knowledge of, and possible interaction (typically, exchange of data) with, the virtual satellite. Examples of associated software include programs for simulating spacecraft power and thermal-management systems. The environment is independent of the flight hardware that will eventually host the flight software, making it possible to develop the software at the same time as, or even before, the hardware is delivered. Optionally, by use of interfaces included in VirtualSat, real hardware can be used in place of simulations. The flight software, coded in the C or C++ programming language, is compilable and loadable into VirtualSat without any special modifications. Thus, VirtualSat can serve as a relatively inexpensive software test-bed for the development, testing, integration, and post-launch maintenance of spacecraft flight software.
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users because it offers a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images, which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Global Team Development. Symposium 7. [AHRD Conference, 2001].
ERIC Educational Resources Information Center
2001
This document contains three papers on global team development. "Virtual Executives: A Paradox with Implications for Development" (Andrea Hornett), which is based on a case study exploring power relationships among members of a virtual team, demonstrates that members of a virtual team describe power differently for situations inside…
Cognitive ability predicts motor learning on a virtual reality game in patients with TBI.
O'Neil, Rochelle L; Skeel, Reid L; Ustinova, Ksenia I
2013-01-01
Virtual reality games and simulations have been utilized successfully for the motor rehabilitation of individuals with traumatic brain injury (TBI). Little is known, however, about how TBI-related cognitive decline affects the learning of motor tasks in virtual environments. To fill this gap, we examined learning within a virtual reality game involving various reaching motions in 14 patients with TBI and 15 healthy individuals with different cognitive abilities. All participants practiced ten 90-second gaming trials to assess various aspects of motor learning. Cognitive abilities were assessed with a battery of tests including measures of memory, executive functioning, and visuospatial ability. Overall, participants with TBI showed both reduced performance and a slower learning rate in the virtual reality game compared to healthy individuals. Numerous correlations between overall performance and several of the cognitive ability domains were revealed for both the patient and control groups, with the best predictor being overall cognitive ability. The results may provide a starting point for rehabilitation programs regarding which cognitive domains interact with motor learning.
Virtual Network Embedding via Monte Carlo Tree Search.
Haeri, Soroush; Trajkovic, Ljiljana
2018-02-01
Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables the coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate the performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both acceptance and revenue-to-cost ratios. Simulation results show that the proposed algorithms find more profitable solutions than existing algorithms. Given additional computation time, they further improve embedding solutions.
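The node-mapping search described in the abstract can be illustrated with a toy Monte Carlo search over candidate mappings. Everything below is an invented simplification: the function names, the scoring callback, and the greedy tree policy are assumptions, whereas MaVEn scores candidate mappings with multicommodity-flow or shortest-path virtual link mapping.

```python
# Toy Monte Carlo search over virtual-node mappings, in the spirit of
# MaVEn's MDP formulation (a state is a partial mapping; an action
# assigns the next virtual node to a free substrate node).
import random

def rollout(mapping, vnodes, snodes, score):
    """Complete a partial mapping with random assignments; return its score."""
    used = set(mapping.values())
    free = [s for s in snodes if s not in used]
    random.shuffle(free)
    full = dict(mapping)
    for v, s in zip(vnodes[len(mapping):], free):
        full[v] = s
    return score(full) if len(full) == len(vnodes) else float("-inf")

def mcts_map(vnodes, snodes, score, budget=200):
    """At each depth, sample rollouts for every candidate substrate node,
    commit to the most promising one, and descend."""
    mapping = {}
    for v in vnodes:
        used = set(mapping.values())
        candidates = [s for s in snodes if s not in used]
        best, best_val = None, float("-inf")
        samples = max(1, budget // max(1, len(candidates)))
        for s in candidates:
            trial = dict(mapping)
            trial[v] = s
            val = max(rollout(trial, vnodes, snodes, score)
                      for _ in range(samples))
            if val > best_val:
                best, best_val = s, val
        mapping[v] = best
    return mapping
```

With a score that sums the capacities of the chosen substrate nodes, the search steers virtual nodes toward high-capacity hosts; the real algorithms instead evaluate leaves by the revenue and cost of the induced link mapping.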
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Qiang
At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future-proofing.
We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
Taillade, Mathieu; Sauzéon, Hélène; Dejos, Marie; Pala, Prashant Arvind; Larrue, Florian; Wallet, Grégory; Gross, Christian; N'Kaoua, Bernard
2013-01-01
The aim of this study was to evaluate wayfinding and spatial learning difficulties in large-scale spaces for older adults, in relation to the executive and memory decline associated with aging. We compared virtual reality (VR)-based wayfinding and spatial memory performances between young and older adults. Wayfinding and spatial memory performances were correlated with classical measures of executive and visuo-spatial memory functions, but also with self-reported estimates of wayfinding difficulties. We obtained a significant effect of age on wayfinding performances but not on spatial memory performances. The overall correlations showed significant correlations between the wayfinding performances and the classical measures of both executive and visuo-spatial memory, but only when the age factor was not partialled out. Also, older adults underestimated their wayfinding difficulties. A significant relationship between the wayfinding performances and self-reported wayfinding difficulty estimates was found, but only when the age effect was partialled out. These results show that, even when older adults have spatial knowledge equivalent to that of young adults, they have greater difficulties with the wayfinding task, supporting an executive-decline view of age-related wayfinding difficulties. However, the correlation results are in favor of both the memory and executive decline views as mediators of age-related differences in wayfinding performances. This is discussed in terms of the relationships between memory and executive functioning in wayfinding task orchestration. Our results also favor the use of objective assessments of everyday navigation difficulties in virtual applications, instead of self-reported questionnaires, since older adults showed difficulties in estimating their everyday wayfinding problems.
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
75 FR 25277 - National Institute on Drug Abuse; Notice of Closed Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-07
... applications. Place: National Institutes of Health, 6101 Executive Boulevard, Rockville, MD 20852 (Virtual..., Rockville, MD 20852 (Virtual Meeting). Contact Person: Gerald L. McLaughlin, PhD, Scientific Review...
Request queues for interactive clients in a shared file system of a parallel computing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin
Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
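The two-proxy design can be sketched as two queues merged into a metadata queue under an allocation policy. The class name and the fixed interactive share below are invented for illustration; in the patent, the allocation is decided by the virtual machine monitor rather than a hard-coded ratio.

```python
# Minimal sketch of the interactive/batch metadata queueing described
# in the abstract: two proxy queues feed one metadata queue, with
# interactive requests favored by a configurable allocation.
from collections import deque

class MetadataServer:
    def __init__(self, interactive_share=0.75):
        self.interactive = deque()  # filled by the interactive client proxy
        self.batch = deque()        # filled by the batch client proxy
        self.share = interactive_share

    def submit(self, request, interactive):
        (self.interactive if interactive else self.batch).append(request)

    def drain(self, budget):
        """Move up to `budget` requests into the metadata queue, serving
        interactive clients first up to their allocated share."""
        out = []
        take_interactive = round(budget * self.share)
        for _ in range(budget):
            if len(out) < take_interactive and self.interactive:
                q = self.interactive
            else:
                q = self.batch or self.interactive  # fall through if empty
            if not q:
                break
            out.append(q.popleft())
        return out
```

For example, with the default 0.75 share, a drain of four requests takes three from the interactive queue and one from the batch queue when both queues are full.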
General-Purpose Front End for Real-Time Data Processing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, "front end" signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable and is executed in an embedded virtual machine, and each component can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.
Banakou, Domna; Kishore, Sameer; Slater, Mel
2018-01-01
The brain's body representation is amenable to rapid change, even though we tend to think of our bodies as relatively fixed and stable. For example, it has been shown that a life-sized body perceived in virtual reality as substituting the participant's real body, can be felt as if it were their own, and that the body type can induce perceptual, attitudinal and behavioral changes. Here we show that changes can also occur in cognitive processing and specifically, executive functioning. Fifteen male participants were embodied in a virtual body that signifies super-intelligence (Einstein) and 15 in a (Normal) virtual body of similar age to their own. The Einstein body participants performed better on a cognitive task than the Normal body, considering prior cognitive ability (IQ), with the improvement greatest for those with low self-esteem. Einstein embodiment also reduced implicit bias against older people. Hence virtual body ownership may additionally be used to enhance executive functioning. PMID:29942270
Korthauer, L E; Nowak, N T; Frahmand, M; Driscoll, I
2017-01-15
Although effective spatial navigation requires memory for objects and locations, navigating a novel environment may also require considerable executive resources. The present study investigated associations between performance on the virtual Morris Water Task (vMWT), an analog version of a nonhuman spatial navigation task, and neuropsychological tests of executive functioning and spatial performance in 75 healthy young adults. More effective vMWT performance (e.g., lower latency and distance to reach hidden platform, greater distance in goal quadrant on a probe trial, fewer path intersections) was associated with better verbal fluency, set switching, response inhibition, and ability to mentally rotate objects. Findings also support a male advantage in spatial navigation, with sex moderating several associations between vMWT performance and executive abilities. Overall, we report a robust relationship between executive functioning and navigational skill, with some evidence that men and women may differentially recruit cognitive abilities when navigating a novel environment. Copyright © 2016 Elsevier B.V. All rights reserved.
Network testbed creation and validation
Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John
2017-03-21
Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions, which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.
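The core idea, deriving virtual testbed nodes from a target network description and placing them on physical infrastructure without touching the fabric, can be sketched as follows. The description schema and the round-robin placement are invented for illustration; the patented process supports far richer topologies and placement logic.

```python
# Illustrative sketch: build a virtual testbed from a target network
# description. Reconfiguring the testbed only rewrites this placement
# mapping; the physical infrastructure nodes are never reconfigured.
def build_testbed(target, infrastructure):
    """Place each virtual testbed node onto a physical infrastructure
    node round-robin, and carry the target topology's links over."""
    placement = {}
    for i, node in enumerate(sorted(target["nodes"])):
        placement[node] = infrastructure[i % len(infrastructure)]
    return {"placement": placement, "links": list(target["links"])}
```

Rebuilding a different target network is then just another call to `build_testbed` over the same unchanged infrastructure list.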
GO, an exec for running the programs: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoaee, H.
1982-05-01
An exec has been written and placed on the PEP group's public disk (PUBRL 192) to facilitate the use of several PEP-related computer programs available on VM. The exec's program list currently includes: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE. In addition, provisions have been made to allow the addition of new programs to this list as they become available. The GO exec is directly callable from inside the Wylbur editor (in fact, currently this is the only way to use the GO exec). It provides the option of running any of the above programs in either interactive or batch mode. In the batch mode, the GO exec sends the data in the Wylbur active file, along with the information required to run the job, to the batch monitor (BMON, a virtual machine that schedules and controls the execution of batch jobs). This enables the user to proceed with other VM activities at his/her terminal while the job executes, making it of particular interest to users with jobs requiring much CPU time to execute and/or those wishing to run multiple jobs independently. In the interactive mode, useful for small jobs requiring less CPU time, the job is executed by the user's own virtual machine using the data in the active file as input. At the termination of an interactive job, the GO exec facilitates examination of the output by placing it in the Wylbur active file.
A teleoperation training simulator with visual and kinesthetic force virtual reality
NASA Technical Reports Server (NTRS)
Kim, Won S.; Schenker, Paul
1992-01-01
A force-reflecting teleoperation training simulator with a high-fidelity real-time graphics display has been developed for operator training. A novel feature of this simulator is that it enables the operator to feel contact forces and torques through a force-reflecting controller during the execution of the simulated peg-in-hole task, providing the operator with the feel of visual and kinesthetic force virtual reality. A peg-in-hole task is used in our simulated teleoperation trainer as a generic teleoperation task. A quasi-static analysis of a two-dimensional peg-in-hole task model has been extended to a three-dimensional model analysis to compute contact forces and torques for a virtual realization of kinesthetic force feedback. The simulator allows the user to specify force reflection gains and stiffness (compliance) values of the manipulator hand for both the three translational and the three rotational axes in Cartesian space. Three viewing modes are provided for graphics display: single view, two split views, and stereoscopic view.
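The quasi-static contact model described above can be sketched as a stiffness mapping from peg displacement to reflected wrench. A minimal sketch, assuming a diagonal Cartesian stiffness matrix and a force-reflection gain; the numeric values are illustrative, not the simulator's actual settings:

```python
import numpy as np

# Hypothetical per-axis stiffness (N/m for translation, N·m/rad for rotation),
# mirroring the simulator's user-specified Cartesian stiffness values.
K = np.diag([500.0, 500.0, 800.0, 5.0, 5.0, 5.0])

def contact_wrench(displacement, force_reflection_gain=1.0):
    """Quasi-static contact model: wrench = gain * K @ displacement.

    displacement: 6-vector of peg penetration/misalignment
    (x, y, z, roll, pitch, yaw) relative to the hole surface.
    """
    return force_reflection_gain * K @ np.asarray(displacement)

# 2 mm penetration along z with a small pitch misalignment:
w = contact_wrench([0.0, 0.0, 0.002, 0.0, 0.01, 0.0])
```

The reflected wrench is then rendered to the operator through the force-reflecting controller at each simulation step.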
Survey of Command Execution Systems for NASA Spacecraft and Robots
NASA Technical Reports Server (NTRS)
Verma, Vandi; Jonsson, Ari; Simmons, Reid; Estlin, Tara; Levinson, Rich
2005-01-01
NASA spacecraft and robots operate at long distances from Earth. Command sequences generated manually, or by automated planners on Earth, must eventually be executed autonomously onboard the spacecraft or robot. Software systems that execute commands onboard are known variously as execution systems, virtual machines, or sequence engines. Every robotic system requires some sort of execution system, but the level of autonomy and the type of control they are designed for vary greatly. This paper presents a survey of execution systems with a focus on systems relevant to NASA missions.
Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.
Troshin, Peter V; Procter, James B; Barton, Geoffrey J
2011-07-15
JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients full access to each application's parameters and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.
The ALICE Software Release Validation cluster
NASA Astrophysics Data System (ADS)
Berzano, D.; Krzewicki, M.
2015-12-01
One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, all of which must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service; in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are stored externally on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how Release Validation cluster deployment and disposal are completely transparent to the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long-Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.
Vieira, Ágata; Melo, Cristina; Machado, Jorge; Gabriel, Joaquim
2018-02-01
To analyse the effect of a six-month home-based phase III cardiac rehabilitation (CR) specific exercise program, performed in a virtual reality (Kinect) or conventional (booklet) environment, on executive function, quality of life and depression, anxiety and stress of subjects with coronary artery disease. A randomized controlled trial was conducted with subjects who had completed phase II, randomly assigned to intervention group 1 (IG1), whose program encompassed the use of the Kinect (n = 11); intervention group 2 (IG2), a paper booklet (n = 11); or a control group (CG), subjected only to the usual care (n = 11). The three groups received education on cardiovascular risk factors. The parameters assessed at baseline (M0), 3 (M1) and 6 months (M2) were executive function (control and integration in the implementation of adequate behaviour in relation to a certain objective), specifically the ability to switch information (Trail Making Test), working memory (Verbal Digit Span test), and selective attention and conflict resolution ability (Stroop test); quality of life (MacNew questionnaire); and depression, anxiety and stress (Depression, Anxiety and Stress Scale 21). Descriptive and inferential statistical measures were used; the significance level was set at .05. The IG1 revealed significant improvements in selective attention and conflict resolution ability in comparison with the CG in the variable difference M0 - M2 (p = .021) and in comparison with the IG2 in the variable differences M1 - M2 and M0 - M2 (p = .001 and p = .002, respectively). No significant differences were found in quality of life or in depression, anxiety and stress. The virtual reality format improved selective attention and conflict resolution ability, revealing the potential of CR, specifically with virtual reality exercise, for executive function.
Implications for rehabilitation: In cardiac rehabilitation, especially in phase III, it is important to develop and present alternative strategies, such as virtual reality using the Kinect in a home context. Given the relationship between improvement in executive function and physical exercise, it is relevant to assess the impact of a cardiac rehabilitation program on executive function, enhancing the value of phase III of cardiac rehabilitation.
[Neuropsychological evaluation of the executive functions by means of virtual reality].
Climent-Martínez, Gema; Luna-Lario, Pilar; Bombín-González, Igor; Cifuentes-Rodríguez, Alicia; Tirapu-Ustárroz, Javier; Díaz-Orueta, Unai
2014-05-16
Executive functions include a wide range of self-regulatory functions that allow control, organization and coordination of other cognitive functions, emotional responses and behaviours. The traditional approach to evaluating these functions, by means of paper-and-pencil neuropsychological tests, shows a greater than expected performance within the normal range for patients whose daily-life difficulties would predict an inferior performance. These discrepancies suggest that classical neuropsychological tests may not adequately reproduce the complexity and dynamic nature of real-life situations. The latest developments in the field of virtual reality offer interesting options for the neuropsychological assessment of many cognitive processes. Virtual reality reproduces three-dimensional environments with which the patient interacts in a dynamic way, with a sense of immersion in the environment similar to the presence and exposure to a real environment. Furthermore, the presentation of these stimuli, as well as distractors and other variables, may be controlled in a systematic way. Moreover, more consistent and precise answers may be obtained, and an in-depth analysis of them is possible. The present review shows current problems in the neuropsychological evaluation of executive functions and the latest advances toward greater precision and validity of the evaluation by means of new technologies and virtual reality, with special mention of some developments carried out in Spain.
Development of Virtual Blade Model for Modelling Helicopter Rotor Downwash in OpenFOAM
2013-12-01
Development of Virtual Blade Model for Modelling Helicopter Rotor Downwash in OpenFOAM. Stefano Wahono, Aerospace... Georgia Institute of Technology. The OpenFOAM predicted result was also shown to compare favourably with ANSYS Fluent predictions.
Buzzi, Jacopo; Ferrigno, Giancarlo; Jansma, Joost M.; De Momi, Elena
2017-01-01
Teleoperated robotic systems are spreading widely across many different fields, from hazardous-environment exploration to surgery. In teleoperation, users directly manipulate a master device to achieve task execution at the slave robot side; this interaction is fundamental to guarantee both system stability and task execution performance. In this work, we propose a non-disruptive method to study the arm endpoint stiffness. We evaluate how users exploit the kinetic redundancy of the arm to achieve stability and precision during the execution of different tasks with different master devices. Four users were asked to perform two planar trajectory-following virtual tasks using both a serial and a parallel link master device. Users' arm kinematics and muscular activation were acquired and combined with a user-specific musculoskeletal model to estimate the joint stiffness. Using the arm kinematic Jacobian, the arm endpoint stiffness was derived. The proposed non-disruptive method is capable of estimating the arm endpoint stiffness during the execution of virtual teleoperated tasks. The obtained results are in accordance with the existing literature in human motor control and show, throughout the tested trajectory, a modulation of the arm endpoint stiffness that is affected by task characteristics and hand speed and acceleration. PMID:29018319
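The mapping from estimated joint stiffness to endpoint stiffness via the kinematic Jacobian can be sketched with the standard congruence transform K_x = J^{-T} K_q J^{-1}. A minimal sketch, assuming a square (non-redundant) 2-DOF planar Jacobian and illustrative joint stiffness values:

```python
import numpy as np

def endpoint_stiffness(J, Kq):
    """Map joint-space stiffness Kq to Cartesian endpoint stiffness:
    K_x = J^{-T} @ Kq @ J^{-1} (valid for a square, invertible Jacobian)."""
    J_inv = np.linalg.inv(J)
    return J_inv.T @ Kq @ J_inv

# Illustrative planar 2-link arm Jacobian at some posture
# (rows: x, y velocity; columns: joint rates).
J = np.array([[-0.3, -0.1],
              [ 0.4,  0.2]])
Kq = np.diag([20.0, 10.0])  # hypothetical joint stiffness estimates (N·m/rad)

Kx = endpoint_stiffness(J, Kq)
```

The resulting Kx is symmetric for a symmetric Kq, and pulling it back through the Jacobian (J^T Kx J) recovers the joint stiffness, which is a convenient sanity check on the transform.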
Witjes, Max J H; Schepers, Rutger H; Kraeima, Joep
2018-04-01
This review describes the advances in 3D virtual planning for mandibular and maxillary reconstruction surgical defects with full prosthetic rehabilitation. The primary purpose is to provide an overview of various techniques that apply 3D technology safely in primary and secondary reconstructive cases of patients suffering from head and neck cancer. Methods have been developed to overcome the problem of control over the margin during surgery while the crucial decision with regard to resection margin and planning of osteotomies were predetermined by virtual planning. The unlimited possibilities of designing patient-specific implants can result in creative uniquely applied solutions for single cases but should be applied wisely with knowledge of biomechanical engineering principles. The high surgical accuracy of an executed 3D virtual plan provides tumor margin control during ablative surgery and the possibility of planned combined use of osseus free flaps and dental implants in the reconstruction in one surgical procedure. A thorough understanding of the effects of radiotherapy on the reconstruction, soft tissue management, and prosthetic rehabilitation is imperative in individual cases when deciding to use dental implants in patients who received radiotherapy.
Developing an effective business plan.
Lehman, L B
1996-06-01
At some time, virtually all managed care executives, and most physician executives, will be asked to develop business plans. Business plans are thoughtful, comprehensive, and realistic descriptions of the many aspects of the formulation of a new business product or line for market. The author describes what goes into the writing of a business plan and how the physician executive should approach this task.
Virtual Sensor Test Instrumentation
NASA Technical Reports Server (NTRS)
Wang, Roy
2011-01-01
Virtual Sensor Test Instrumentation is based on the concept of smart sensor technology for testing with intelligence needed to perform self-diagnosis of health, and to participate in a hierarchy of health determination at sensor, process, and system levels. A virtual sensor test instrumentation consists of five elements: (1) a common sensor interface, (2) microprocessor, (3) wireless interface, (4) signal conditioning and ADC/DAC (analog-to-digital conversion/digital-to-analog conversion), and (5) onboard EEPROM (electrically erasable programmable read-only memory) for metadata storage and executable software to create powerful, scalable, reconfigurable, and reliable embedded and distributed test instruments. In order to maximize the efficient data conversion through the smart sensor node, plug-and-play functionality is required to interface with traditional sensors to enhance their identity and capabilities for data processing and communications. Virtual sensor test instrumentation can be accessible wirelessly via a Network Capable Application Processor (NCAP) or a Smart Transducer Interface Module (STIM) that may be managed under real-time rule engines for mission-critical applications. The transducer senses the physical quantity being measured and converts it into an electrical signal. The signal is fed to an A/D converter, and is ready for use by the processor to execute functional transformation based on the sensor characteristics stored in a Transducer Electronic Data Sheet (TEDS). Virtual sensor test instrumentation is built upon an open-system architecture with standardized protocol modules/stacks to interface with industry standards and commonly used software.
One major benefit for deploying the virtual sensor test instrumentation is the ability, through a plug-and-play common interface, to convert raw sensor data in either analog or digital form, to an IEEE 1451 standard-based smart sensor, which has instructions to program sensors for a wide variety of functions. The sensor data is processed in a distributed fashion across the network, providing a large pool of resources in real time to meet stringent latency requirements.
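The TEDS-driven conversion from raw ADC counts to engineering units described above can be sketched as metadata plus a transfer function. A minimal sketch, assuming illustrative field names rather than the actual IEEE 1451 binary TEDS layout:

```python
from dataclasses import dataclass

# TEDS-style metadata driving a raw-to-engineering-units conversion.
# Field names and values are illustrative assumptions, not the IEEE 1451 format.
@dataclass
class Teds:
    units: str
    adc_bits: int
    v_ref: float      # ADC reference voltage
    scale: float      # engineering units per volt
    offset: float     # engineering-unit offset

    def convert(self, raw_counts: int) -> float:
        # counts -> volts -> engineering units, per the stored characteristics
        volts = raw_counts * self.v_ref / (2 ** self.adc_bits - 1)
        return volts * self.scale + self.offset

# Hypothetical temperature channel: 12-bit ADC, 3.3 V reference.
temp_teds = Teds(units="degC", adc_bits=12, v_ref=3.3, scale=100.0, offset=-50.0)
reading = temp_teds.convert(2048)  # mid-scale raw sample
```

In a plug-and-play node, the processor would read this record from the onboard EEPROM and apply the same transformation to every sample before publishing it on the network.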
Pacheco, Thaiana Barbosa Ferreira; Oliveira Rego, Isabelle Ananda; Campos, Tania Fernandes; Cavalcanti, Fabrícia Azevedo da Costa
2017-01-01
Virtual Reality (VR) has been contributing to Neurological Rehabilitation because of its interactive and multisensory nature, providing the potential for brain reorganization. Given the availability of mobile EEG devices, it is possible to investigate how the virtual therapeutic environment influences brain activity. To compare theta, alpha, beta and gamma power in healthy young adults during a lower limb motor task in a virtual and a real environment. Ten healthy adults were submitted to an EEG assessment while performing a one-minute task consisting of going up and down a step in a virtual environment - the Nintendo Wii virtual game "Basic step" - and in a real environment. The real environment caused an increase in theta and alpha power, with small to large size effects mainly in the frontal region. VR caused a greater increase in beta and gamma power, however, with small or negligible effects on a variety of regions regarding beta frequency, and medium to very large effects on the frontal and the occipital regions considering gamma frequency. Theta, alpha, beta and gamma activity during the execution of a motor task differs according to the environment the individual is exposed to - real or virtual - and may have varying size effects if brain area activation and frequency spectrum in each environment are taken into consideration.
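The band-power comparison described above can be sketched as integrating an EEG periodogram over the theta, alpha, beta and gamma ranges. A minimal sketch, assuming conventional band edges and a synthetic 10 Hz test signal; the sampling rate and band limits are illustrative, not the study's recording parameters:

```python
import numpy as np

FS = 250.0  # assumed sampling rate, Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs=FS):
    """Sum periodogram power inside each conventional EEG band."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic check: a pure 10 Hz oscillation should land in the alpha band.
t = np.arange(0, 2, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t)
powers = band_powers(eeg)
```

Comparing such per-band power between the real and virtual conditions, channel by channel, is the kind of analysis the abstract reports.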
Moles: Tool-Assisted Environment Isolation with Closures
NASA Astrophysics Data System (ADS)
de Halleux, Jonathan; Tillmann, Nikolai
Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.
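The Moles idea of redirecting a hard-to-test dependency to a test-supplied closure can be illustrated outside .NET. A minimal sketch in Python, where attribute patching plays the role of Moles' instrumented detours; the class and method names are hypothetical stand-ins, not the SharePoint API:

```python
class SharePointClient:          # stand-in for a sealed/non-virtual API
    def fetch_items(self, list_name):
        raise RuntimeError("requires a live server")

def count_items(client, list_name):
    """Code under test: calls the environment-dependent method."""
    return len(client.fetch_items(list_name))

# Test isolation: capture local state in a closure-like list and
# redirect the method to an alternative implementation.
calls = []
def fake_fetch(self, list_name):
    calls.append(list_name)          # record the interaction for assertions
    return ["a", "b", "c"]           # canned data instead of a server round-trip

SharePointClient.fetch_items = fake_fetch   # the "detour", as Moles does via instrumentation
n = count_items(SharePointClient(), "Tasks")
```

In C#/Moles the same redirection is expressed as a delegate assigned to a generated mole type, with local variables captured automatically in the closure; the mechanics here are only an analogy.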
ERIC Educational Resources Information Center
Rajendran, Gnanathusharan; Law, Anna S.; Logie, Robert H.; van der Meulen, Marian; Fraser, Diane; Corley, Martin
2011-01-01
Using a modified version of the Virtual Errands Task (VET; McGeorge et al. in "Presence-Teleop Virtual Environ" 10(4):375-383, 2001), we investigated the executive ability of multitasking in 18 high-functioning adolescents with ASD and 18 typically developing adolescents. The VET requires multitasking (Law et al. in "Acta Psychol" 122(1):27-44,…
Shema-Shiratzky, Shirley; Brozgol, Marina; Cornejo-Thumm, Pablo; Geva-Dayan, Karen; Rotstein, Michael; Leitner, Yael; Hausdorff, Jeffrey M; Mirelman, Anat
2018-05-17
To examine the feasibility and efficacy of combined motor-cognitive training using virtual reality to enhance behavior, cognitive function and dual-tasking in children with Attention-Deficit/Hyperactivity Disorder (ADHD). Fourteen non-medicated school-aged children with ADHD received 18 training sessions during 6 weeks. Training included walking on a treadmill while negotiating virtual obstacles. Behavioral symptoms, cognition and gait were tested before and after the training and at 6-weeks follow-up. Based on parental report, there was a significant improvement in children's social problems and psychosomatic behavior after the training. Executive function and memory improved post-training while attention was unchanged. Gait regularity significantly increased during dual-task walking. Long-term training effects were maintained in memory and executive function. Treadmill training augmented with virtual reality is feasible and may be an effective treatment to enhance behavior, cognitive function and dual-tasking in children with ADHD.
Dynamic visualization techniques for high consequence software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-02-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification. The prototype tool is described along with the requirements constraint language after a brief literature review is presented. Examples of how the tool can be used are also presented. In conclusion, the most significant advantage of this tool is to provide a first step in evaluating specification completeness, and to provide a more productive method for program comprehension and debugging. The expected payoff is increased software surety confidence, increased program comprehension, and reduced development and debugging time.
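The core mechanism (checking an executing program's state against prespecified requirements constraints) can be sketched as a runtime monitor. A minimal sketch, assuming Python predicates as a stand-in for constraints written in SAGE; the constraint names and state fields are illustrative:

```python
class Monitor:
    """Evaluate prespecified constraints against snapshots of program state."""
    def __init__(self):
        self.constraints = []   # (name, predicate over a state dict)
        self.violations = []

    def require(self, name, predicate):
        self.constraints.append((name, predicate))

    def check(self, state):
        # Record every constraint the current state violates.
        for name, pred in self.constraints:
            if not pred(state):
                self.violations.append((name, dict(state)))

mon = Monitor()
mon.require("queue bounded", lambda s: s["queue_len"] <= 10)
mon.require("temp in range", lambda s: 0 <= s["temp"] < 100)

# Feed the monitor successive state snapshots from the executing program.
for step in [{"queue_len": 3, "temp": 20}, {"queue_len": 12, "temp": 20}]:
    mon.check(step)
```

A visualization front end like SAVAnT's would then render the violation log rather than just accumulate it.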
Hansen, Margaret M
2008-09-01
The author provides a critical overview of three-dimensional (3-D) virtual worlds and "serious gaming" that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debatable and variables influencing adoption, such as increased knowledge, self-directed learning, and peer collaboration, by academics, healthcare professionals, and business executives are examined while looking at various Web 2.0/3.0 applications. There is a need for more empirical research in order to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Roger's Diffusion of Innovations Theory and Siemens' Connectivism Theory for today's learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare.
Sweeney, Siobhan; Kersel, Denyse; Morris, Robin G; Manly, Tom; Evans, Jonathan J
2010-04-01
Executive functions have been argued to be the most vulnerable to brain injury. In providing an analogue of everyday situations amenable to control and management, virtual reality (VR) may offer better insights into planning deficits consequent upon brain injury. Here, 17 participants with a non-progressive brain injury and reported executive difficulties in everyday life were asked to perform a VR task (working in a furniture storage unit) that emphasised planning, rule following and prospective memory tasks. When compared with an age- and IQ-matched control group, the patients were significantly poorer in terms of their strategy, their time-based prospective memory, the overall time required and their propensity to break rules. An examination of sensitivity and specificity of the VR task to group membership (brain-injured or control) showed that, with specificity set at maximum, sensitivity was only modest (at just over 50%). A second component to the study investigated whether the patients' performance could be improved by periodic auditory alerts. Previous studies have demonstrated that such cues can improve performance on laboratory tests, executive tests and everyday prospective memory tasks. Here, no significant changes in performance were detected. Potential reasons for this finding are discussed, including symptom severity and differences in the tasks employed in previous studies.
A Digital Repository and Execution Platform for Interactive Scholarly Publications in Neuroscience.
Hodge, Victoria; Jessop, Mark; Fletcher, Martyn; Weeks, Michael; Turner, Aaron; Jackson, Tom; Ingram, Colin; Smith, Leslie; Austin, Jim
2016-01-01
The CARMEN Virtual Laboratory (VL) is a cloud-based platform which allows neuroscientists to store, share, develop, execute, reproduce and publicise their work. This paper describes new functionality in the CARMEN VL: an interactive publications repository. This new facility allows users to link data and software to publications. This enables other users to examine data and software associated with the publication and execute the associated software within the VL using the same data as the authors used in the publication. The cloud-based architecture and SaaS (Software as a Service) framework allows vast data sets to be uploaded and analysed using software services. Thus, this new interactive publications facility allows others to build on research results through reuse. This aligns with recent developments by funding agencies, institutions, and publishers with a move to open access research. Open access provides reproducibility and verification of research resources and results. Publications and their associated data and software will be assured of long-term preservation and curation in the repository. Further, analysing research data and the evaluations described in publications frequently requires a number of execution stages many of which are iterative. The VL provides a scientific workflow environment to combine software services into a processing tree. These workflows can also be associated with publications and executed by users. The VL also provides a secure environment where users can decide the access rights for each resource to ensure copyright and privacy restrictions are met.
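The scientific-workflow idea (combining software services into a processing tree whose children consume a parent's output) can be sketched generically. A minimal sketch with hypothetical neuroscience-flavored services; the service names and tree encoding are illustrative assumptions, not the CARMEN VL API:

```python
# Two toy "services": a band filter and a threshold-based spike detector.
def bandpass(data):
    return [x for x in data if 1 <= x <= 100]

def detect_spikes(data):
    return [x for x in data if x > 50]

def run_workflow(tree, data):
    """tree: (service, [child subtrees]); each child consumes the parent output."""
    service, children = tree
    out = service(data)
    if not children:
        return out
    return [run_workflow(child, out) for child in children]

# A two-stage processing tree: filter first, then detect spikes.
workflow = (bandpass, [(detect_spikes, [])])
result = run_workflow(workflow, [0.5, 20, 60, 150])
```

In the VL, such a tree would be stored alongside a publication so other users can re-execute it against the authors' data.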
Cooperating reduction machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kluge, W.E.
1983-11-01
This paper presents a concept and a system architecture for the concurrent execution of program expressions of a concrete reduction language based on lambda-expressions. If formulated appropriately, these expressions are well-suited for concurrent execution, following a demand-driven model of computation. In particular, recursive program expressions with nonlinear expansion may, at run time, recursively be partitioned into a hierarchy of independent subexpressions which can be reduced by a corresponding hierarchy of virtual reduction machines. This hierarchy unfolds and collapses dynamically, with virtual machines recursively assuming the role of masters that create and eventually terminate, or synchronize with, slaves. The paper also proposes a nonhierarchically organized system of reduction machines, each featuring a stack architecture, that effectively supports the allocation of virtual machines to the real machines of the system in compliance with their hierarchical order of creation and termination. 25 references.
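The master/slave unfolding of independent subexpressions can be sketched with futures: each "virtual reduction machine" reduces one subexpression, spawning further slaves for its operands and synchronizing on their results. A minimal sketch using threads as the virtual machines; the expression encoding is an illustrative assumption:

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=8)

def reduce_expr(expr):
    """Demand-driven reduction: expr is an atom (int) or ("add", left, right)."""
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    # The master spawns two slave machines for the independent operands...
    lf = pool.submit(reduce_expr, left)
    rf = pool.submit(reduce_expr, right)
    # ...and synchronizes with them before collapsing this level of the hierarchy.
    return lf.result() + rf.result()

# (1 + 2) + (3 + (4 + 5)) reduced by a dynamically unfolding tree of workers.
result = reduce_expr(("add", ("add", 1, 2), ("add", 3, ("add", 4, 5))))
```

The hierarchy of futures mirrors the paper's dynamic hierarchy of virtual machines: it unfolds as subexpressions are partitioned and collapses as results propagate back to each master.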
Cipresso, Pietro; Albani, Giovanni; Serino, Silvia; Pedroli, Elisa; Pallavicini, Federica; Mauro, Alessandro; Riva, Giuseppe
2014-01-01
Introduction: Several recent studies have pointed out that early impairment of executive functions (EFs) in Parkinson’s Disease (PD) may be a crucial marker to detect patients at risk for developing dementia. The main objective of this study was to compare the performances of PD patients with mild cognitive impairment (PD-MCI) with PD patients with normal cognition (PD-NC) and a control group (CG) using a traditional assessment of EFs and the Virtual Multiple Errands Test (VMET), a virtual reality (VR)-based tool. In order to understand which subcomponents of EFs are early impaired, this experimental study aimed to investigate specifically which instrument best discriminates among these three groups. Materials and methods: The study included three groups of 15 individuals each (for a total of 45 participants): 15 PD-NC; 15 PD-MCI, and 15 cognitively healthy individuals (CG). To assess the global neuropsychological functioning and the EFs, several tests (including the Mini Mental State Examination (MMSE), Clock Drawing Test, and Tower of London test) were administered to the participants. The VMET was used for a more ecologically valid neuropsychological evaluation of EFs. Results: Findings revealed significant differences in the VMET scores between the PD-NC patients vs. the controls. In particular, patients made more errors in the tasks of the VMET, and showed a poorer ability to use effective strategies to complete the tasks. This VMET result seems to be more sensitive in the early detection of executive deficits because these two groups did not differ in the traditional assessment of EFs (neuropsychological battery). Conclusion: This study offers initial evidence that a more ecologically valid evaluation of EFs is more likely to lead to detection of subtle executive deficits. PMID:25538578
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1994-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, Clifford B.
1995-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
Distributed Virtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
Managing virtual machines with Vac and Vcycle
NASA Astrophysics Data System (ADS)
McNab, A.; Love, P.; MacMahon, E.
2015-12-01
We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines to achieve the desired target shares. Both systems allow unused shares for one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.
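The target-share behavior described above can be thought of as a greedy backfill: when capacity frees up, start the next virtual machine for the experiment furthest below its target share, skipping experiments with no work so their share spills over to others. The following Python sketch is purely illustrative; the experiment names, numbers, and the deficit rule are invented and are not the actual Vac or Vcycle algorithm.

```python
# Illustrative target-share scheduling: pick the experiment furthest
# below its target share (among those with work) to start the next VM.
# Hypothetical names and numbers; not Vac/Vcycle code.

def next_vm(targets, running, has_work):
    """targets: {experiment: fraction}, running: {experiment: count},
    has_work: {experiment: bool}. Returns the experiment to start, or None."""
    total = sum(running.values()) + 1  # include the VM about to start
    candidates = [e for e in targets if has_work[e]]
    if not candidates:
        return None
    # deficit = target share minus actual share after starting one more VM
    def deficit(e):
        return targets[e] - running[e] / total
    return max(candidates, key=deficit)

shares = {"atlas": 0.4, "cms": 0.3, "lhcb": 0.3}
now = {"atlas": 4, "cms": 1, "lhcb": 0}
work = {"atlas": True, "cms": True, "lhcb": False}
print(next_vm(shares, now, work))  # cms: below target, and lhcb has no work
```

Because idle experiments are excluded from the candidate list, their unused share is automatically taken up by experiments that still have jobs, mirroring the backfill behavior the paper describes.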
Ouellet, Émilie; Boller, Benjamin; Corriveau-Lecavalier, Nick; Cloutier, Simon; Belleville, Sylvie
2018-06-01
Assessing and predicting memory performance in everyday life is a common assignment for neuropsychologists. However, most traditional neuropsychological tasks are not conceived to capture everyday memory performance. The Virtual Shop is a fully immersive task developed to assess memory in a more ecological way than traditional neuropsychological assessments. Two studies were undertaken to assess the feasibility of the Virtual Shop and to appraise its ecological and construct validity. In study 1, 20 younger and 19 older adults completed the Virtual Shop task to evaluate its level of difficulty and the way the participants interacted with the VR material. The construct validity was examined with the contrasted-group method, by comparing the performance of younger and older adults. In study 2, 35 individuals with subjective cognitive decline completed the Virtual Shop task. Performance was correlated with an existing questionnaire evaluating everyday memory in order to appraise its ecological validity. To add further support to its construct validity, performance was correlated with traditional episodic memory and executive tasks. All participants successfully completed the Virtual Shop. The task had an appropriate level of difficulty that helped differentiate younger and older adults, supporting the feasibility and construct validity of the task. The performance on the Virtual Shop was significantly and moderately correlated with the performance on the questionnaire and on the traditional memory and executive tasks. Results support the feasibility and both the ecological and construct validity of the Virtual Shop. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
A Virtual Laboratory for Digital Signal Processing
ERIC Educational Resources Information Center
Dow, Chyi-Ren; Li, Yi-Hsung; Bai, Jin-Yu
2006-01-01
This work designs and implements a virtual digital signal processing laboratory, VDSPL. VDSPL consists of four parts: mobile agent execution environments, mobile agents, DSP development software, and DSP experimental platforms. The network capability of VDSPL is created by using mobile agent and wrapper techniques without modifying the source code…
sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.
Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael
2017-01-01
High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has shifted to the analysis of such huge amounts of data. One of the reasons is that many analysis types require a Linux environment, but computers, system administrators, and bioinformaticians entail additional costs that often cannot be afforded by small to mid-sized groups or laboratories. Web servers are an alternative that can be used if the data is not subject to privacy issues (which is very often an important concern with medical data). However, in any case they are less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome these problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a high number of preinstalled programs like sRNAbench for small RNA analysis without the need to maintain additional servers and/or operating systems.
76 FR 16432 - National Institute of Neurological Disorders and Stroke; Notice of Closed Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-23
..., Neuroscience Center, 6001 Executive Boulevard, Rockville, MD 20852 (Virtual Meeting). Contact Person: JoAnn.../ NIH/DHHS/Neuroscience Center, 6001 Executive Boulevard, Suite 3208, MSC 9529, Bethesda, MD 20892-9529..., Clinical Research Related to Neurological Disorders; 93.854, Biological Basis Research in the Neurosciences...
2015-09-28
the performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication...whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups. The backup VM sits in the memory of a...efficiently. SUBJECT TERMS: High-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java
Using PVM to host CLIPS in distributed environments
NASA Technical Reports Server (NTRS)
Myers, Leonard; Pohl, Kym
1994-01-01
It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributed computing using multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message passing functions in CLIPS and enable the full range of PVM facilities.
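The message-passing model PVM exposes, and which such a CLIPS integration builds on, is a pack/send/receive/unpack cycle on typed buffers addressed by task id and message tag (cf. the real C routines pvm_initsend, pvm_pkint, pvm_send, pvm_recv). The sketch below mimics that cycle in-process in Python to illustrate the pattern only; it is not the PVM library, and the task ids and tags are invented.

```python
# In-process illustration of PVM-style typed message passing between
# tasks: pack integers into a buffer, deliver it to a task's mailbox
# under a message tag, then receive and unpack on the other side.
# Not PVM itself; mailbox dict stands in for the PVM daemons.

import queue
import struct

mailboxes = {}  # task id -> queue of (msgtag, packed buffer)

def spawn(tid):
    mailboxes[tid] = queue.Queue()

def send(tid, msgtag, *ints):
    buf = struct.pack(f"{len(ints)}i", *ints)  # pack, like pvm_pkint
    mailboxes[tid].put((msgtag, buf))          # deliver, like pvm_send

def recv(tid, msgtag):
    tag, buf = mailboxes[tid].get()            # block, like pvm_recv
    assert tag == msgtag, "unexpected message tag"
    return list(struct.unpack(f"{len(buf) // 4}i", buf))

spawn(1)
spawn(2)
send(2, 7, 10, 20)
print(recv(2, 7))  # [10, 20]
```

In the real system each mailbox would live behind a PVM daemon on a separate UNIX host, and a CLIPS expert system would call the send/receive functions from its rule actions.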
Assessment of executive functions in patients with obsessive compulsive disorder by NeuroVR.
La Paglia, Filippo; La Cascia, Caterina; Rizzo, Rosalinda; Riva, Giuseppe; La Barbera, Daniele
2012-01-01
Executive functions are often impaired in obsessive-compulsive disorder (OCD). We used a Virtual Reality version of the Multiple Errands Test (VMET), developed using the free NeuroVR software (http://www.neurovr.org), to evaluate executive functions in daily life in 10 OCD patients and 10 controls. The test is performed in a shopping setting where there are items to be bought and information to be obtained. The execution time for the whole task was higher in patients with OCD than in controls, suggesting that patients with OCD need more time for planning than controls. The same difference was found in the partial errors during the task. Furthermore, the mean rank for interpretation failures was higher for controls, while the values for divided attention and self-correction appeared lower in controls. We think that obsessive patients tend to work with greater diligence and observance of rules than controls. In conclusion, these results provide initial support for the feasibility of the VMET as an assessment tool for executive functions. Specifically, the significant correlation found between the VMET and the neuropsychological battery supports the ecological validity of the VMET as an instrument for the evaluation of executive functions in patients with OCD.
VIPER: Virtual Intelligent Planetary Exploration Rover
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Flueckiger, Lorenzo; Nguyen, Laurent; Washington, Richard
2001-01-01
Simulation and visualization of rover behavior are critical capabilities for scientists and rover operators to construct, test, and validate plans for commanding a remote rover. The VIPER system links these capabilities, using a high-fidelity virtual-reality (VR) environment, a kinematically accurate simulator, and a flexible plan executive to allow users to simulate and visualize possible execution outcomes of a plan under development. This work is part of a larger vision of a science-centered rover control environment, where a scientist may inspect and explore the environment via VR tools, specify science goals, and visualize the expected and actual behavior of the remote rover. The VIPER system is constructed from three generic systems, linked together via a minimal amount of customization into the integrated system. The complete system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies, which can be combined in novel and useful ways.
Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing
NASA Technical Reports Server (NTRS)
Sterling, T. L.; Zima, H. P.
2002-01-01
Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.
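The macroserver idea sketched in the abstract, a middleware object that owns distributed data and schedules threads near the data they touch, can be illustrated with a toy Python class. This is purely illustrative under invented names; Gilgamesh's actual execution model (locality control, dynamic load balancing, latency tolerance) is far richer than this sketch.

```python
# Toy macroserver: an object that owns partitioned data (standing in
# for per-PIM-node memory) and runs one worker thread per partition,
# "near" the data it operates on. Illustrative only.

from concurrent.futures import ThreadPoolExecutor

class Macroserver:
    def __init__(self, partitions):
        # partition id -> local data chunk
        self.partitions = partitions
        self.pool = ThreadPoolExecutor(max_workers=len(partitions))

    def map_local(self, fn):
        """Run fn once per partition on its local data; gather results."""
        futures = {pid: self.pool.submit(fn, data)
                   for pid, data in self.partitions.items()}
        return {pid: f.result() for pid, f in futures.items()}

ms = Macroserver({0: [1, 2, 3], 1: [4, 5]})
print(ms.map_local(sum))  # {0: 6, 1: 9}
```

The design point this illustrates is that placement is explicit: each unit of work is bound to the partition whose data it reads, rather than to an arbitrary free processor.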
Methods For Self-Organizing Software
Bouchard, Ann M.; Osbourn, Gordon C.
2005-10-18
A method for dynamically self-assembling and executing software is provided, containing machines that self-assemble execution sequences and data structures. In addition to ordered function calls (commonly found in other software methods), mutual selective bonding between bonding sites of machines actuates one or more of the bonding machines. Two or more machines can be virtually isolated by a construct, called an encapsulant, containing a population of machines and potentially other encapsulants that can only bond with each other. A hierarchical software structure can be created using nested encapsulants. Multi-threading is implemented by populations of machines in different encapsulants that are interacting concurrently. Machines and encapsulants can move in and out of other encapsulants, thereby changing the functionality. Bonding between machines' sites can be deterministic or stochastic, with bonding triggering a sequence of actions that can be implemented by each machine. A self-assembled execution sequence occurs as a sequence of stochastic binding between machines followed by their deterministic actuation. It is the sequence of bonding of machines that determines the execution sequence, so that the sequence of instructions need not be contiguous in memory.
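The core mechanism above, stochastic encounters followed by selective bonding and deterministic actuation, can be sketched minimally: machines meet in a random order, bond only when their sites match, and each bond actuates both partners, so the bond order determines the execution trace. The machine names, site labels, and actions below are invented for illustration and are not from the patent.

```python
# Minimal sketch of self-assembling execution: machines bond selectively
# on matching sites in a stochastic encounter order; each bond actuates
# both machines deterministically. Names/sites/actions are hypothetical.

import random

class Machine:
    def __init__(self, name, site, action):
        self.name, self.site, self.action = name, site, action

def self_assemble(machines, seed=0):
    rng = random.Random(seed)
    unbound = machines[:]
    rng.shuffle(unbound)              # stochastic encounter order
    trace = []
    while unbound:
        m = unbound.pop(0)
        partner = next((p for p in unbound if p.site == m.site), None)
        if partner is not None:       # mutual selective bond: sites match
            unbound.remove(partner)
            trace.append(m.action())  # bonding actuates both machines
            trace.append(partner.action())
    return trace

ms = [Machine("A", "x", lambda: "load"), Machine("B", "x", lambda: "add"),
      Machine("C", "y", lambda: "mul"), Machine("D", "y", lambda: "store")]
print(self_assemble(ms))
```

With a fixed seed the trace is reproducible, but different seeds yield different legal execution orders, which is the sense in which the instruction sequence need not be contiguous or predetermined.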
Serino, Silvia; Morganti, Francesca; Colombo, Desirée; Pedroli, Elisa; Cipresso, Pietro; Riva, Giuseppe
2018-06-01
A growing body of evidence has pointed out that a decline in effectively using spatial reference frames for categorizing information occurs in both normal and pathological aging. Moreover, it is also known that executive deficits primarily characterize the cognitive profile of older individuals. Acknowledging this literature, the current study aimed to disentangle the contribution of the cognitive abilities related to the use of spatial reference frames to executive functioning in both healthy and pathological aging. 48 healthy elderly individuals and 52 elderly individuals suffering from probable Alzheimer's Disease (AD) took part in the study. We exploited the potential of Virtual Reality to specifically measure the abilities in retrieving and syncing between different spatial reference frames, and then we administered different neuropsychological tests evaluating executive functions. Our results indicated that allocentric functions contributed significantly to planning abilities, while syncing abilities influenced attentional ones. The findings are discussed in terms of previous literature exploring the relationships between cognitive deficits in the first phase of AD.
Nirme, Jens; Haake, Magnus; Lyberg Åhlander, Viveka; Brännström, Jonas; Sahlén, Birgitta
2018-04-05
Seeing a speaker's face facilitates speech recognition, particularly under noisy conditions. Evidence for how it might affect comprehension of the content of the speech is more sparse. We investigated how children's listening comprehension is affected by multi-talker babble noise, with or without presentation of a digitally animated virtual speaker, and whether successful comprehension is related to performance on a test of executive functioning. We performed a mixed-design experiment with 55 (34 female) participants (8- to 9-year-olds), recruited from Swedish elementary schools. The children were presented with four different narratives, each in one of four conditions: audio-only presentation in a quiet setting, audio-only presentation in noisy setting, audio-visual presentation in a quiet setting, and audio-visual presentation in a noisy setting. After each narrative, the children answered questions on the content and rated their perceived listening effort. Finally, they performed a test of executive functioning. We found significantly fewer correct answers to explicit content questions after listening in noise. This negative effect was only mitigated to a marginally significant degree by audio-visual presentation. Strong executive function only predicted more correct answers in quiet settings. Altogether, our results are inconclusive regarding how seeing a virtual speaker affects listening comprehension. We discuss how methodological adjustments, including modifications to our virtual speaker, can be used to discriminate between possible explanations to our results and contribute to understanding the listening conditions children face in a typical classroom.
AliEn—ALICE environment on the GRID
NASA Astrophysics Data System (ADS)
Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration
2003-04-01
AliEn ( http://alien.cern.ch) (ALICE Environment) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets and a number of collaborating Web services which implement the authentication, job execution, file transport, performance monitor and event logging. In the paper we will present the architecture and components of the system.
Kizony, R; Zeilig, G; Krasovsky, T; Bondi, M; Weiss, P L; Kodesh, E; Kafri, M
2017-01-01
Navigation skills are required for the performance of complex functional tasks and may decline with aging. Investigation of navigation skills should include measurement of the cognitive-executive and motor aspects that are part of complex tasks. The aim was to compare young and older healthy adults navigating within a simulated environment with and without a functional-cognitive task. Ten young adults (25.6±4.3 years) and seven community-dwelling older men (69.9±3.8 years) were tested during a single session. After training on a self-paced treadmill to navigate in a non-functional simulation, they performed the Virtual Multiple Errands Test (VMET) in a mall simulation. Outcome measures included cognitive-executive aspects of performance and gait parameters. Younger adults' performance of the VMET was more efficient (1.8±1.0) than that of older adults (5.3±2.7; p < 0.05) and faster (younger 478.1±141.5 s, older 867.6±393.5 s; p < 0.05). There were no differences between the groups in gait parameters. Both groups walked more slowly in the mall simulation. The shopping simulation provided a paradigm to assess the interplay between motor and cognitive aspects involved in the efficient performance of a complex task. The study emphasized the role of the cognitive-executive aspect of task performance in healthy older adults.
Schuster-Amft, Corina; Henneke, Andrea; Hartog-Keisker, Birgit; Holper, Lisa; Siekierka, Ewa; Chevrier, Edith; Pyk, Pawel; Kollias, Spyros; Kiper, Daniel; Eng, Kynan
2015-01-01
To evaluate feasibility and neurophysiological changes after virtual reality (VR)-based training of upper limb (UL) movements. Single-case A-B-A design with two male stroke patients (P1: 67 y, P2: 50 y; 3.5 and 3 y after onset) with UL motor impairments, 45-min therapy sessions 5×/week over 4 weeks. Patients, facing a screen, used bimanual data gloves to control virtual arms. Three applications trained bimanual reaching, grasping, and hand opening. Assessments during the 2-week baseline, weekly during intervention, and at 3-month follow-up (FU): Goal Attainment Scale (GAS), Chedoke Arm and Hand Activity Inventory (CAHAI), Chedoke-McMaster Stroke Assessment (CMSA), Extended Barthel Index (EBI), Motor Activity Log (MAL). Functional magnetic resonance imaging (fMRI) scans before, immediately after treatment, and at FU. P1 executed 5478 grasps (paretic arm). Improvements in the CAHAI (+4) were maintained at FU. GAS changed to +1 at post-test and +2 at FU. P2 executed 9835 grasps (paretic arm). CAHAI improvements (+13) were maintained at FU. GAS scores changed to -1 at post-test and +1 at FU. MAL scores changed from 3.7 at pre-test to 5.5 at post-test and 3.3 at FU. The VR-based intervention was feasible, safe, and intense. Adjustable application settings maintained training challenge and patient motivation. ADL-relevant UL functional improvements persisted at FU and were related to changed cortical activation patterns. Implications for Rehabilitation: YouGrabber trains uni- and bimanual upper motor function. Its application is feasible, safe, and intense. The control of the virtual arms can be done in three main ways: (a) normal, (b) virtual mirror therapy, or (c) virtual following. The mirroring feature provides an illusion of affected limb movements during the period when the affected upper limb (UL) is resting.
The YouGrabber training led to ADL-relevant UL functional improvements that were still assessable 12 weeks after intervention finalization and were related to changed cortical activation patterns.
la Paglia, Filippo; la Cascia, Caterina; Rizzo, Rosalinda; Riva, Giuseppe; la Barbera, Daniele
2015-01-01
Neuropsychological disorders are common in Obsessive-Compulsive Disorder (OCD) patients. The most affected cognitive domains are executive functions: verbal fluency and verbal memory, shifting attention from one aspect of stimuli to another, mental flexibility, and engaging in executive planning and decision making. We focus on two aspects of neuropsychological function, decision making and cognitive-behavioral flexibility, assessed through a virtual version of the Multiple Errands Test (V-MET), developed using the NeuroVR software. Thirty OCD patients were compared with thirty matched control subjects. The results showed the presence of difficulties in OCD patients with tasks where the goal is not clear, the information is incomplete, or the parameters are ill-defined.
A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.
Shankaranarayanan, Avinas; Amaldas, Christine
2010-11-01
With the proliferation of Quad/Multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve upon scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple virtual node setup and present our findings.
This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.
The Virtual Data Center Tagged-Format Tool - Introduction and Executive Summary
Evans, John R.; Squibb, Melinda; Stephens, Christopher D.; Savage, W.U.; Haddadi, Hamid; Kircher, Charles A.; Hachem, Mahmoud M.
2008-01-01
This Report introduces and summarizes the new Virtual Data Center (VDC) Tagged Format (VTF) Tool, which was developed by a diverse group of seismologists, earthquake engineers, and information technology professionals for internal use by the COSMOS VDC and other interested parties for the exchange, archiving, and analysis of earthquake strong-ground-motion data.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
Global tree network for computing structures enabling global processing operations
Blumrich; Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.
2010-01-19
A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
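The collective operations listed above can be illustrated as dataflow on a tree of nodes: a broadcast flows from the root down to the leaves, and a reduction (here a sum) flows from the leaves back up to the root. The sketch below shows only the dataflow shape; it says nothing about the router hardware, link protocol, or the barrier/interrupt functionality of the actual network.

```python
# Dataflow sketch of tree collectives: broadcast pushes a payload from
# the root to every node; reduce_sum combines leaf values upward to the
# root. Illustrative of the pattern only, not the router hardware.

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)
        self.received = None

def broadcast(node, payload):
    node.received = payload           # every node gets a copy
    for child in node.children:
        broadcast(child, payload)

def reduce_sum(node):
    # combine this node's contribution with its subtrees'
    return node.value + sum(reduce_sum(c) for c in node.children)

leaves = [Node(1), Node(2), Node(3), Node(4)]
root = Node(0, [Node(0, leaves[:2]), Node(0, leaves[2:])])
broadcast(root, "go")
print(reduce_sum(root))  # 10
```

In hardware the same shape is what makes these collectives low-latency: each router combines or forwards values as they pass, so the cost grows with tree depth rather than node count.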
Robotics virtual rail system and method
Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID
2011-07-05
A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path comprised of at least one segment representative of the virtual track for the robot. Start and end points are assigned to the desired path and velocities are also associated with each of the at least one segment of the desired path. A waypoint file is generated including positions along the virtual track representing the desired path with the positions beginning from the start point to the end point including the velocities of each of the at least one segment. The waypoint file is sent to the robot for traversing along the virtual track.
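The waypoint-file generation described above might look like the following sketch: each segment of the desired path (start point, end point, velocity) is sampled into evenly spaced positions, each tagged with its segment's velocity, from the path's start point to its end point. The sampling scheme and record layout here are invented for illustration and are not the patented format.

```python
# Hypothetical waypoint generation: sample each path segment into
# evenly spaced (x, y, velocity) records. Format is illustrative only.

def waypoints(segments, spacing=1.0):
    """segments: list of ((x0, y0), (x1, y1), velocity)."""
    rows = []
    for (x0, y0), (x1, y1), v in segments:
        length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        steps = max(1, int(length / spacing))
        for i in range(steps + 1):
            t = i / steps  # interpolate along the segment
            rows.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), v))
    return rows

# a two-segment path: fast straight run, then a slower climb
path = [((0, 0), (2, 0), 0.5), ((2, 0), (2, 2), 0.25)]
for x, y, v in waypoints(path):
    print(f"{x:.1f} {y:.1f} {v}")
```

A file of such records is what the robot would traverse: position after position, at the velocity assigned to the segment each position came from.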
Ueki, Satoshi; Nishimoto, Yutaka; Abe, Motoyuki; Kawasaki, Haruhisa; Ito, Satoshi; Ishigure, Yasuhiko; Mizumoto, Jun; Ojika, Takeo
2008-01-01
This paper presents a virtual reality-enhanced hand rehabilitation support system with a symmetric master-slave motion assistant for independent rehabilitation therapies. Our aim is to provide fine motion exercise for the hand and fingers, which allows the impaired hand of a patient to be driven by his or her healthy hand on the opposite side. Since most disabilities caused by cerebral vascular accidents or bone fractures are hemiplegic, we adopted a symmetric master-slave motion assistant system in which the impaired hand is driven by the healthy hand on the opposite side. A VR environment displaying an effective exercise was created in consideration of the system's characteristics. To verify the effectiveness of this system, a clinical test was carried out with six patients.
Dimbwadyo-Terrer, I; Gil-Agudo, A; Segura-Fragoso, A; de los Reyes-Guzmán, A; Trincado-Alonso, F; Piazza, S; Polonio-López, B
2016-01-01
The aim of this study was to investigate the effects of a virtual reality program combined with conventional therapy in upper limb function in people with tetraplegia and to provide data about patients' satisfaction with the virtual reality system. Thirty-one people with subacute complete cervical tetraplegia participated in the study. Experimental group received 15 sessions with Toyra® virtual reality system for 5 weeks, 30 minutes/day, 3 days/week in addition to conventional therapy, while control group only received conventional therapy. All patients were assessed at baseline, after intervention, and at three-month follow-up with a battery of clinical, functional, and satisfaction scales. Control group showed significant improvements in the manual muscle test (p = 0.043, partial η² = 0.22) in the follow-up evaluation. Both groups demonstrated clinical, but nonsignificant, changes to their arm function in 4 of the 5 scales used. All patients showed a high level of satisfaction with the virtual reality system. This study showed that virtual reality added to conventional therapy produces similar results in upper limb function compared to only conventional therapy. Moreover, the gaming aspects incorporated in conventional rehabilitation appear to produce high motivation during execution of the assigned tasks. This trial is registered with EudraCT number 2015-002157-35.
A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment
Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae
2015-01-01
User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609
PISCES: An environment for parallel scientific computation
NASA Technical Reports Server (NTRS)
Pratt, T. W.
1985-01-01
The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. The Pisces 1 user programs in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.
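The abstract mentions an example algorithm for the iterative solution of a system of equations. The original example is written in Pisces FORTRAN; the sketch below shows the same kind of iteration (Jacobi, a common choice for such examples) in plain Python, as an illustration rather than the paper's actual code.

```python
def jacobi(A, b, iterations=50):
    """Jacobi iteration for A x = b; assumes A is diagonally dominant
    so that the iteration converges."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        # each component is updated from the previous iterate only,
        # which is what makes the sweep trivially parallelizable
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# 4x + y = 9, x + 3y = 5  has the exact solution x = 2, y = 1
solution = jacobi([[4.0, 1.0], [1.0, 3.0]], [9.0, 5.0])
```

In a Pisces-style environment, each task would own a block of the unknowns and exchange boundary values each sweep; the window mechanism mentioned above would give tasks distributed access to the shared arrays.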
Behavioral and neural effects of congruency of visual feedback during short-term motor learning.
Ossmy, Ori; Mukamel, Roy
2018-05-15
Visual feedback can facilitate or interfere with movement execution. Here, we describe behavioral and neural mechanisms by which the congruency of visual feedback during physical practice of a motor skill modulates subsequent performance gains. 18 healthy subjects learned to execute rapid sequences of right hand finger movements during fMRI scans either with or without visual feedback. Feedback consisted of a real-time, movement-based display of virtual hands that was either congruent (right virtual hand movement), or incongruent (left virtual hand movement yoked to the executing right hand). At the group level, right hand performance gains following training with congruent visual feedback were significantly higher relative to training without visual feedback. Conversely, performance gains following training with incongruent visual feedback were significantly lower. Interestingly, across individual subjects these opposite effects correlated. Activation in the Supplementary Motor Area (SMA) during training corresponded to individual differences in subsequent performance gains. Furthermore, functional coupling of SMA with visual cortices predicted individual differences in behavior. Our results demonstrate that some individuals are more sensitive than others to congruency of visual feedback during short-term motor learning and that neural activation in SMA correlates with such inter-individual differences. Copyright © 2017 Elsevier Inc. All rights reserved.
Luce, Bryan R; Connor, Jason T; Broglio, Kristine R; Mullins, C Daniel; Ishak, K Jack; Saunders, Elijah; Davis, Barry R
2016-09-20
Bayesian and adaptive clinical trial designs offer the potential for more efficient processes that result in lower sample sizes and shorter trial durations than traditional designs. To explore the use and potential benefits of Bayesian adaptive clinical trial designs in comparative effectiveness research. Virtual execution of ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) as if it had been done according to a Bayesian adaptive trial design. Comparative effectiveness trial of antihypertensive medications. Patient data sampled from the more than 42 000 patients enrolled in ALLHAT with publicly available data. Number of patients randomly assigned between groups, trial duration, observed numbers of events, and overall trial results and conclusions. The Bayesian adaptive approach and original design yielded similar overall trial conclusions. The Bayesian adaptive trial randomly assigned more patients to the better-performing group and would probably have ended slightly earlier. This virtual trial execution required limited resampling of ALLHAT patients for inclusion in RE-ADAPT (REsearch in ADAptive methods for Pragmatic Trials). Involvement of a data monitoring committee and other trial logistics were not considered. In a comparative effectiveness research trial, Bayesian adaptive trial designs are a feasible approach and potentially generate earlier results and allocate more patients to better-performing groups. National Heart, Lung, and Blood Institute.
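The adaptive-allocation idea described above, assigning more patients to the better-performing group as evidence accumulates, can be sketched with a Beta-Bernoulli model and Thompson sampling. This is a hedged illustration of the general technique, with invented response rates; it is not the RE-ADAPT design or the ALLHAT analysis.

```python
import random

def adaptive_trial(true_rates, n_patients, seed=1):
    """Response-adaptive randomization via Thompson sampling on two arms.
    true_rates are hypothetical success probabilities, for simulation only."""
    random.seed(seed)
    successes, failures, assigned = [0, 0], [0, 0], [0, 0]
    for _ in range(n_patients):
        # draw from each arm's Beta posterior; assign to the larger draw
        draws = [random.betavariate(1 + successes[a], 1 + failures[a])
                 for a in (0, 1)]
        arm = draws.index(max(draws))
        assigned[arm] += 1
        if random.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return assigned

# with a clearly better arm 1, allocation drifts toward it over the trial
counts = adaptive_trial([0.3, 0.6], 500)
```

A real design would add interim stopping rules and a data monitoring committee, which the abstract notes were out of scope.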
An integrated pipeline to create and experience compelling scenarios in virtual reality
NASA Astrophysics Data System (ADS)
Springer, Jan P.; Neumann, Carsten; Reiners, Dirk; Cruz-Neira, Carolina
2011-03-01
One of the main barriers to creating and using compelling scenarios in virtual reality is the complexity and time-consuming effort of modeling, element integration, and the software development needed to properly display and interact with the content in the available systems. Still today, most virtual reality applications are tedious to create and they are hard-wired to the specific display and interaction system available to the developers when creating the application. Furthermore, it is not possible to alter the content or the dynamics of the content once the application has been created. We present our research on designing a software pipeline that enables the creation of compelling scenarios with a fair degree of visual and interaction complexity in a semi-automated way. Specifically, we are targeting drivable urban scenarios, ranging from large cities to sparsely populated rural areas, that incorporate both static components (e. g., houses, trees) and dynamic components (e. g., people, vehicles) as well as events, such as explosions or ambient noise. Our pipeline has four basic components. First, an environment designer, where users sketch the overall layout of the scenario, and an automated method constructs the 3D environment from the information in the sketch. Second, a scenario editor used for authoring the complete scenario: incorporating the dynamic elements and events, fine-tuning the automatically generated environment, defining the execution conditions of the scenario, and setting up any data gathering that may be necessary during the execution of the scenario. Third, a run-time environment for different virtual-reality systems provides users with the interactive experience as designed with the designer and the editor. And fourth, a bi-directional monitoring system that allows for capturing and modification of information from the virtual environment.
One of the interesting capabilities of our pipeline is that scenarios can be built and modified on-the-fly as they are being presented in the virtual-reality systems. Users can quickly prototype the basic scene using the designer and the editor on a control workstation. More elements can then be introduced into the scene from both the editor and the virtual-reality display. In this manner, users are able to gradually increase the complexity of the scenario with immediate feedback. The main use of this pipeline is the rapid development of scenarios for human-factors studies. However, it is applicable in a much more general context.
Collective communications apparatus and method for parallel systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knies, Allan D.; Keppel, David Pardo; Woo, Dong Hyuk
A collective communication apparatus and method for parallel computing systems. For example, one embodiment of an apparatus comprises a plurality of processor elements (PEs); collective interconnect logic to dynamically form a virtual collective interconnect (VCI) between the PEs at runtime without global communication among all of the PEs, the VCI defining a logical topology between the PEs in which each PE is directly communicatively coupled to only a subset of the remaining PEs; and execution logic to execute collective operations across the PEs, wherein one or more of the PEs receive first results from a first portion of the subset of the remaining PEs, perform a portion of the collective operations, and provide second results to a second portion of the subset of the remaining PEs.
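A collective in which each PE exchanges data with only a subset of the others can be sketched with recursive doubling, where each of the n PEs talks to just log2(n) partners yet every PE ends with the global result. This is an illustration of that topology style, not the patented interconnect logic, which forms the topology in hardware at runtime.

```python
def allreduce(values, op=lambda a, b: a + b):
    """Recursive-doubling all-reduce. values[pe] is PE pe's local value;
    len(values) must be a power of two. At step k, PE pe exchanges with
    partner pe ^ (2**k), so each PE touches only log2(n) other PEs."""
    n = len(values)
    vals = list(values)
    step = 1
    while step < n:
        # all exchanges at this step happen from the previous round's values
        vals = [op(vals[pe], vals[pe ^ step]) for pe in range(n)]
        step *= 2
    return vals

result = allreduce([1, 2, 3, 4])  # every PE ends holding the global sum, 10
```

Swapping `op` for `min`/`max` gives other reductions over the same sparse logical topology.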
Nomura, Tsutomu; Mamada, Yasuhiro; Nakamura, Yoshiharu; Matsutani, Takeshi; Hagiwara, Nobutoshi; Fujita, Isturo; Mizuguchi, Yoshiaki; Fujikura, Terumichi; Miyashita, Masao; Uchida, Eiji
2015-11-01
Definitive assessment of laparoscopic skill improvement after virtual reality simulator training is best obtained during an actual operation. However, this is impossible in medical students. Therefore, we developed an alternative assessment technique using an augmented reality simulator. Nineteen medical students completed a 6-week training program using a virtual reality simulator (LapSim). The pretest and post-test were performed using an object-positioning module and cholecystectomy on an augmented reality simulator (ProMIS). The mean performance measures between pre- and post-training on the LapSim were compared with a paired t-test. In the object-positioning module, the execution time of the task (P < 0.001), left and right instrument path length (P = 0.001), and left and right instrument economy of movement (P < 0.001) were significantly shorter after than before the LapSim training. With respect to improvement in laparoscopic cholecystectomy using a gallbladder model, the execution time to identify, clip, and cut the cystic duct and cystic artery as well as the execution time to dissect the gallbladder away from the liver bed were both significantly shorter after than before the LapSim training (P = 0.01). Our training curriculum using a virtual reality simulator improved the operative skills of medical students as objectively evaluated by assessment using an augmented reality simulator instead of an actual operation. We hope that these findings help to establish an effective training program for medical students. © 2015 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and Wiley Publishing Asia Pty Ltd.
Polley, John W; Figueroa, Alvaro A
2013-05-01
To introduce the concept and use of an occlusal-based "orthognathic positioning system" (OPS) to be used during orthognathic surgery. The OPS consists of intraoperative occlusal-based devices that transfer virtual surgical planning to the operating field for repositioning of the osteotomized dentoskeletal segments. The system uses detachable guides connected to an occlusal splint. An initial drilling guide is used to establish stable references or landmarks. These are drilled on the bone that will not be repositioned adjacent to the osteotomy line. After mobilization of the skeletal segment, a final positioning guide, referenced to the drilled landmarks, is used to transfer the skeletal segment according to the virtual surgical planning. The OPS is digitally designed using 3-dimensional computer-aided design/computer-aided manufacturing technology and manufactured with stereolithographic techniques. Virtual surgical planning has improved the preoperative assessment and, in conjunction with the OPS, the execution of orthognathic surgery. The OPS has the possibility to eliminate the inaccuracies commonly associated with traditional orthognathic surgery planning and to simplify the execution by eliminating surgical steps such as intraoperative measuring, determining the condylar position, the use of bulky intermediate splints, and the use of intermaxillary wire fixation. The OPS attempts precise translation of the virtual plan to the operating field, bridging the gap between virtual and actual surgery. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Integration of virtualized worker nodes in standard batch systems
NASA Astrophysics Data System (ADS)
Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen
2010-04-01
Current experiments in HEP only use a limited number of operating system flavours. Their software might only be validated on one single OS platform. Resource providers might have other operating systems of choice for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or communities that have stricter security requirements. One solution would be to statically divide the cluster into separated sub-clusters. In such a scenario, no opportunistic distribution of the load can be achieved, resulting in a poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization, and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in a better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there. No meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local job as well as Grid job submission. The hypervisors currently used are Xen and KVM, a port to another system is easily envisageable. To better handle different virtual machines on the physical host, the management solution VmImageManager is developed. We will present first experience from running the two prototype implementations. In a last part, we will show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) work-flows.
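The scheduling flexibility described above can be sketched as follows: because each job's required OS comes from a VM image booted just before execution, any worker can take any job, rather than being statically partitioned by OS. Image names and job fields below are invented for illustration; the actual prototypes integrate with SGE and Maui/Torque.

```python
# Hypothetical image catalog: community OS requirement -> VM image to boot
IMAGES = {"sl5": "sl5.img", "debian": "debian.img"}

def dispatch(jobs, workers):
    """Greedy assignment: every free worker takes the next queued job,
    whatever OS it needs, since the OS is supplied by the VM image
    started on the worker node just before the job runs."""
    plan = []
    for worker, job in zip(workers, jobs):
        plan.append((worker, job["name"], IMAGES[job["os"]]))
    return plan

plan = dispatch(
    [{"name": "cms-reco", "os": "sl5"}, {"name": "theory-mc", "os": "debian"}],
    ["node01", "node02"],
)
```

With static sub-clusters, the second job could only run on Debian-installed nodes; here both nodes stay usable for both communities, which is the utilization gain the abstract argues for.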
Dynamically programmable cache
NASA Astrophysics Data System (ADS)
Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas
1998-10-01
Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with the above approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors which offers solutions to the above problems. To solve memory access problems, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create a multi-level reconfigurable machine. As a result, DPC machines have both higher data accessibility and FPGA memory bandwidth. To solve the limited FPGA resource problem, DPC processors implement the multi-context switching (virtualization) concept. Virtualization allows implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, the speedup improvement for DPC machines is shown to be 5X over an Altera FLEX10K FPGA chip and 2X over a Sun Ultra 1 SPARCstation for two different algorithms (convolution and motion estimation).
Implementation of relational data base management systems on micro-computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, C.L.
1982-01-01
This dissertation describes an implementation of a Relational Data Base Management System on a microcomputer. A specific floppy-disk-based hardware platform called TERAK is used, and a high-level query interface similar to a subset of the SEQUEL language is provided. The system contains sub-systems such as I/O, file management, virtual memory management, query system, B-tree management, scanner, command interpreter, expression compiler, garbage collection, linked list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) It is highly modularized. (2) The system is physically segmented into 16 logically independent, overlayable segments, in a way such that a minimal amount of memory is needed at execution time. (3) A virtual memory system is simulated that provides the system with seemingly unlimited memory space. (4) A language translator is applied to recognize user requests in the query language. The code generation of this translator produces compact code for the execution of UPDATE, DELETE, and QUERY commands. (5) A complete set of basic functions needed for on-line data base manipulations is provided through the use of a friendly query interface. (6) Dependency on the environment (both software and hardware) is eliminated as much as possible, so that it would be easy to transplant the system to other computers. (7) Each relation is simulated as a sequential file. It is intended to be a highly efficient, single-user system suited for use by small or medium-sized organizations for, say, administrative purposes. Experiments show that quite satisfying results have indeed been achieved.
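Goal (7) above, simulating each relation as a sequential file, can be sketched as a sequential scan over a stored file. The file layout, relation name, and predicate style below are illustrative assumptions, not the dissertation's SEQUEL-subset syntax.

```python
import csv
import os
import tempfile

def store(relation, rows, fields):
    """Write a relation to disk as a sequential (CSV) file; return its path."""
    path = os.path.join(tempfile.gettempdir(), relation + ".rel")
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(fields)   # schema header
        w.writerows(rows)
    return path

def select(path, predicate):
    """Sequential scan: read records one by one, keep those matching."""
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f) if predicate(row)]

p = store("emp", [("al", "10"), ("bo", "20")], ["name", "dept"])
hits = select(p, lambda row: row["dept"] == "20")
```

A B-tree index (one of the sub-systems listed) would replace this full scan with a keyed lookup; the sequential file remains the base storage either way.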
Staghorn: An Automated Large-Scale Distributed System Analysis Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabert, Kasimir; Burns, Ian; Elliott, Steven
2016-09-01
Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.
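The snapshot-and-fork primitive described above can be illustrated in miniature: freeze the full state of a running model, then explore two different futures from the same instant. A real snapshot of this kind also captures VM memory and packets in transit; here the "model" is just a dictionary, as a toy stand-in.

```python
import copy

# toy model state: simulated clock, in-flight traffic, host status
model = {"time": 5.0,
         "in_flight": [("hostA", "hostB", b"syn")],
         "hosts": {"hostA": "up", "hostB": "up"}}

snapshot = copy.deepcopy(model)        # "freeze time": immutable copy of everything

# fork 1: replay execution unchanged from the frozen instant
fork1 = copy.deepcopy(snapshot)
fork1["time"] += 1.0

# fork 2: modify an arbitrary part of the model, then resume
fork2 = copy.deepcopy(snapshot)
fork2["hosts"]["hostA"] = "down"       # inject a failure post-snapshot
fork2["time"] += 1.0
```

Because each fork starts from the same frozen state, differences between forks are attributable solely to the injected change, which is what enables the state-space exploration and debugging uses listed in the abstract.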
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.
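Virtual-time ordering of VMs, component (1) above, can be sketched as a priority queue keyed by each VM's virtual clock: the scheduler always dispatches the VM that is furthest behind in virtual time. This is a minimal sketch of the ordering discipline, assuming a fixed quantum; NetWarp's scheduler controls real VMs on a multi-core host.

```python
import heapq

def run(vms, quantum, horizon):
    """vms: dict of name -> initial virtual clock. Dispatch VMs in virtual
    time order until every VM's clock reaches `horizon`; return the order."""
    queue = [(clock, name) for name, clock in vms.items()]
    heapq.heapify(queue)
    order = []
    while queue and queue[0][0] < horizon:
        clock, name = heapq.heappop(queue)   # the most-lagging VM runs next
        order.append(name)
        heapq.heappush(queue, (clock + quantum, name))  # it advances one quantum
    return order

order = run({"vm0": 0.0, "vm1": 0.0, "vm2": 0.5}, quantum=1.0, horizon=2.0)
```

Note that vm2, starting ahead in virtual time, is always dispatched after its peers catch up, which is exactly the causality guarantee virtual-time ordering provides.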
Simulation of Attacks for Security in Wireless Sensor Network.
Diaz, Alvaro; Sanchez, Pablo
2016-11-18
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.
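The kind of estimate the simulator produces can be sketched with a simple node energy model: CPU time plus per-packet radio costs, with a flooding-style attacker modeled as extra received traffic. All coefficients and figures below are invented for illustration; the paper's virtual platform derives these from the node's actual embedded software and hardware model.

```python
def node_energy(cpu_seconds, packets_rx, packets_tx,
                p_cpu=0.01, e_rx=0.0002, e_tx=0.0005):
    """Estimated node energy in joules: CPU draw (W * s) plus
    per-packet receive/transmit costs (J/packet). Coefficients are
    hypothetical placeholders, not measured values."""
    return cpu_seconds * p_cpu + packets_rx * e_rx + packets_tx * e_tx

# baseline operation over some observation window
baseline = node_energy(cpu_seconds=60, packets_rx=100, packets_tx=50)

# a flooding attacker forces the node to receive, and spend CPU
# processing, many extra packets over the same window
attacked = node_energy(cpu_seconds=75, packets_rx=2100, packets_tx=50)
```

Comparing the two estimates is the simulator's core output here: it quantifies how much faster an attack drains the node, information a developer can use to size countermeasures.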
Mobility for GCSS-MC through virtual PCs
2017-06-01
their productivity. Mobile device access to GCSS-MC would allow Marines to access a required program for their mission using a form of computing ... network throughput applications with a device running on various operating systems with limited computational ability. The use of VPCs leads to a ... reduced need for network throughput and faster overall execution. Subject terms: GCSS-MC, enterprise resource planning, virtual personal computer
Safe, Multiphase Bounds Check Elimination in Java
2010-01-28
production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses ... invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual ... unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds
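The snippet above concerns proving array bounds checks redundant. A toy sketch of the interval-analysis idea: if the proven range of an index lies within [0, length), the runtime check can be safely removed. This illustrates the general technique only, not the paper's multiphase analysis or its speculation/fallback machinery.

```python
def index_interval(start, stop):
    """Interval of i over `for i in range(start, stop)`; None if the loop
    body never executes."""
    return (start, stop - 1) if stop > start else None

def bounds_check_redundant(interval, array_length):
    """A bounds check on an index with proven interval [lo, hi] is
    redundant iff 0 <= lo and hi < array_length."""
    if interval is None:       # loop never runs: check trivially removable
        return True
    lo, hi = interval
    return 0 <= lo and hi < array_length

# `for i in range(0, n): a[i]` with len(a) == n is provably in bounds,
# so the per-access check can be eliminated at JIT time
assert bounds_check_redundant(index_interval(0, 10), 10)
# a loop starting at -1 cannot drop the check
assert not bounds_check_redundant(index_interval(-1, 10), 10)
```

The "safe, multiphase" aspect in the title is the part this sketch omits: the producer's proof must still be cheaply revalidated in the VM before the check is dropped.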
ERIC Educational Resources Information Center
Caeyenberghs, Karen; Wilson, Peter H.; van Roon, Dominique; Swinnen, Stephan P.; Smits-Engelsman, Bouwien C. M.
2009-01-01
Motor imagery (MI) has become a principal focus of interest in studies on brain and behavior. However, changes in MI across development have received virtually no attention so far. In the present study, children (N = 112, 6 to 16 years old) performed a new, computerized Virtual Radial Fitts Task (VRFT) to determine their MI ability as well as the…
NASA Technical Reports Server (NTRS)
Mehlitz, Peter
2005-01-01
JPF is an explicit state software model checker for Java bytecode. Today, JPF is a swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.
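The "execute the program in all possible ways" idea can be illustrated with a toy explorer that runs a program under every sequence of nondeterministic choices and records any choice path that violates a property. Real JPF does this at the Java bytecode level with state matching and backtracking; this sketch merely enumerates choice vectors.

```python
import itertools

def explore(program, choice_points, n_choices=2):
    """Run program(choices) for every possible choice vector; return the
    list of paths (choice vectors) on which an assertion fails."""
    failures = []
    for choices in itertools.product(range(n_choices), repeat=choice_points):
        try:
            program(choices)
        except AssertionError:
            failures.append(choices)   # the full path that leads to the defect
    return failures

def program(choices):
    # two independent nondeterministic branches; the "defect" occurs only
    # on the path where both are taken
    x = 0
    if choices[0]:
        x += 1
    if choices[1]:
        x += 2
    assert x != 3

bad_paths = explore(program, choice_points=2)
```

Like JPF, the explorer reports not just that a defect exists but the exact execution, here the choice vector, that reaches it.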
DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphrey, Marty A.
The goal of this 3-year project was to facilitate a more productive dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. There were broadly two problems being addressed by this project. First, there was a lack of an Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing and retrieving user policies and Virtual Organization (VO) policies. Second, there was a lack of tools to resolve and enforce policies in the Open Services Grid Architecture. To address these problems, our overall approach in this project was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, thereby providing a Grid user to centralize (where appropriate) and manage his/her policies. Organizationally, the corresponding service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of different topics: General security for Grids, credential management, Web services/OGSA/OGSI, policy-based grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.
Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a network of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analyses were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We describe the architecture, operation and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs.
Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
A heterogeneous computing environment for simulating astrophysical fluid flows
NASA Technical Reports Server (NTRS)
Cazes, J.
1994-01-01
In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the Parallel Virtual Machine (PVM) language, data are automatically transferred at periodic intervals from the MP-1 to a cluster of workstations, where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.
Demonstration of the Low-Cost Virtual Collaborative Environment (VCE)
NASA Technical Reports Server (NTRS)
Bowers, David; Montes, Leticia; Ramos, Angel; Joyce, Brendan; Lumia, Ron
1997-01-01
This paper demonstrates the feasibility of a low-cost approach to remotely controlling equipment. Our demonstration system consists of a PC, the PUMA 560 robot with Barrett hand, and commercially available controller and teleconferencing software. The system provides a graphical user interface which allows a user to program equipment tasks and preview motions, i.e., simulate the results. Once satisfied that the actions are both safe and accomplish the task, the remote user sends the data over the Internet to the local site for execution on the real equipment. A video link provides visual feedback to the remote site. This technology lends itself readily to NASA's upcoming Mars expeditions by providing remote simulation and control of equipment.
Virtual action and real action have different impacts on comprehension of concrete verbs
Repetto, Claudia; Cipresso, Pietro; Riva, Giuseppe
2015-01-01
In the last decade, many results have been reported supporting the hypothesis that language has an embodied nature. According to this theory, the sensorimotor system is involved in linguistic processes such as semantic comprehension. One of the cognitive processes emerging from the interplay between action and language is motor simulation. The aim of the present study is to deepen the knowledge about the simulation of action verbs during comprehension in a virtual reality setting. We compared two experimental conditions with different motor tasks: one in which the participants ran in a virtual world by moving the joypad knob with their left hand (virtual action performed with their feet plus real action performed with the hand) and one in which they only watched a video of runners and executed an attentional task by moving the joypad knob with their left hand (no virtual action plus real action performed with the hand). In both conditions, participants had to perform a concomitant go/no-go semantic task, in which they were asked to press a button (with their right hand) when presented with a sentence containing a concrete verb, and to refrain from providing a response when the verb was abstract. Action verbs described actions performed with hand, foot, or mouth. We recorded electromyography (EMG) latencies to measure reaction times of the linguistic task. We wanted to test if the simulation occurs, whether it is triggered by the virtual or the real action, and which effect it produces (facilitation or interference). Results underlined that those who virtually ran in the environment were faster in understanding foot-action verbs; no simulation effect was found for the real action. The present findings are discussed in the light of the embodied language framework, and a hypothesis is provided that integrates our results with those in literature. PMID:25759678
Casap, Nardy; Nadel, Sahar; Tarazi, Eyal; Weiss, Ervin I
2011-10-01
This study evaluated the benefits of a virtual reality navigation system for teaching the surgical stage of dental implantation to final-year dental students. The study aimed to assess the students' performance in dental implantation assignments by comparing freehand protocols with virtual reality navigation. Forty final-year dentistry students without previous experience in dental implantation surgery were given an implantation assignment comprising 3 tasks. Marking, drilling, and widening of implant holes were executed with a freehand protocol on the 2 mandibular sides by 1 group, and with virtual reality navigation on 1 side and the freehand protocol on the contralateral side by the other group. Subjective and objective assessments of the students' performance were graded. Marking with the navigation system was more accurate than with the standard protocol. The 2 groups performed similarly in the 2-mm drilling on the 2 mandibular sides. Widening of the 2 mesial holes to 3 mm was significantly better with the second execution in the standard protocol group, but not in the navigation group. The navigation group's second-site freehand drilling of the molar was significantly worse than the first. The execution of all assignments was significantly faster in the freehand group than in the navigation group (60.75 vs 77.25 minutes, P = .02). Self-assessment only partly matched the objective measurements and was more realistic in the standard protocol group. Despite the improved performance with the navigation system, the added value of training in dental implantation surgery with virtual reality navigation was minimal. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the poor scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over a 20-fold reduction in run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.
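The mismatch described above can be made concrete with a toy model. The sketch below is not the paper's scheduler; it is an illustrative comparison, under made-up workload numbers, of a fairness-based (round-robin) dispatch policy versus a PDES-aware policy that always runs the VM whose logical process has the lowest local virtual time (LVT), which keeps the simulated processes closely synchronized.

```python
# Toy comparison of VM dispatch policies for PDES (illustrative only, not the
# paper's implementation). VM i advances its local virtual time by i+1 each
# time it runs, modeling uneven per-VM workloads.
import heapq

def final_lvt_spread(policy, n_vms, steps):
    """Return max(LVT) - min(LVT) after `steps` dispatch decisions."""
    lvt = [0] * n_vms
    heap = [(0, i) for i in range(n_vms)]
    for step in range(steps):
        if policy == "fair":                    # round-robin time slices
            vm = step % n_vms
        else:                                   # "pdes": run farthest-behind VM
            _, vm = heapq.heappop(heap)
        lvt[vm] += vm + 1                       # uneven virtual-time advance
        if policy != "fair":
            heapq.heappush(heap, (lvt[vm], vm))
    return max(lvt) - min(lvt)

fair_spread = final_lvt_spread("fair", 8, 800)
pdes_spread = final_lvt_spread("pdes", 8, 800)
```

Under the fair policy the virtual-time spread between VMs grows without bound, while the LVT-first policy keeps the spread bounded by the largest single advance, which is the kind of synchronization a PDES run needs.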
NASA Technical Reports Server (NTRS)
Riedel, Joseph E.; Grasso, Christopher A.
2012-01-01
VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping, issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which, in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required before taking transitions.
Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that signal is raised. The selected signal then causes all identically named transitions in all present state machines to be taken simultaneously. VML 2.1 has relevance to all potential space missions, both manned and unmanned. It was under consideration for use on Orion.
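The signal-coordination mechanism described above can be sketched in a few lines. This is a hypothetical Python analogue, not actual VML syntax or flight code: each machine declares guarded transitions keyed by signal name, and once every guard for a signal is true across all machines holding that signal, the signal is raised and all identically named transitions are taken together.

```python
# Hypothetical sketch of signal-coordinated state machines (illustrative
# names and structure; not VML itself).
class Machine:
    def __init__(self, name, state, transitions):
        self.name, self.state = name, state
        self.transitions = transitions   # {signal: (from_state, to_state, guard)}

def step(machines, globs):
    """Evaluate each signal's conditions across all machines; once every
    guard holds, raise the signal and take all identically named transitions."""
    for sig in sorted({s for m in machines for s in m.transitions}):
        armed = [m for m in machines if sig in m.transitions]
        if all(m.state == m.transitions[sig][0] and
               m.transitions[sig][2](globs) for m in armed):
            for m in armed:                       # simultaneous transitions
                m.state = m.transitions[sig][1]
            return sig
    return None

# Two machines coordinate on "GO": neither moves until both guards hold.
power = Machine("power", "off",  {"GO": ("off", "on", lambda g: g["volts"] > 24)})
comm  = Machine("comm",  "idle", {"GO": ("idle", "tx", lambda g: True)})
first  = step([power, comm], {"volts": 20})   # power's guard false: no signal
second = step([power, comm], {"volts": 28})   # signal raised: both transition
```

The key property mirrored here is that the "GO" transition in `comm` cannot fire alone; the distributed conditions must all be true before any machine moves.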
Bashiri, Azadeh; Ghazisaeedi, Marjan
2017-01-01
Attention deficit hyperactivity disorder (ADHD) is one of the most common psychiatric disorders in childhood. This disorder, in addition to its main symptoms, creates significant difficulties in education, social performance, and personal relationships. Given the importance of rehabilitation for these patients to combat the above issues, the use of virtual reality (VR) technology is helpful. The aim of this study was to highlight the opportunities for VR in the rehabilitation of children with ADHD. This narrative review was conducted by searching for articles in scientific databases and e-Journals, using keywords including VR, children, and ADHD. Various studies have shown that VR capabilities in the rehabilitation of children with ADHD include providing flexibility in accordance with the patients' requirements; removing distractions and creating an effective and safe environment away from real-life dangers; saving time and money; increasing patients' incentives based on their interests; providing suitable tools to perform different behavioral tests and increase ecological validity; facilitating better understanding of individuals' cognitive deficits and improving them; helping therapists with accurate diagnosis, assessment, and rehabilitation; and improving working memory, executive function, and cognitive processes such as attention in these children. Rehabilitation of children with ADHD is based on behavior and physical patterns and is thus suitable for VR interventions. This technology, by simulating and providing a virtual environment for diagnosis, training, monitoring, assessment and treatment, is effective in providing optimal rehabilitation of children with ADHD. PMID:29234356
NASA Astrophysics Data System (ADS)
Takács, Ondřej; Kostolányová, Kateřina
2016-06-01
This paper describes the Virtual Teacher, which uses a set of rules to automatically adapt the way of teaching. These rules consist of two parts: conditions on various students' properties or learning situations, and conclusions that specify different adaptation parameters. The rules can be used for general adaptation of each subject, or they can be specific to one subject. The rule-based system of the Virtual Teacher is intended for use in pedagogical experiments in adaptive e-learning and is therefore designed for users without education in computer science. The Virtual Teacher was used in the dissertation theses of two students, who executed two pedagogical experiments. This paper also describes the phase of simulating and modeling the theoretically prepared adaptive process in a modeling tool, which has all the required parameters and was created especially for this purpose. The experiments are being conducted on groups of virtual students and using virtual study material.
Dockres: a computer program that analyzes the output of virtual screening of small molecules
2010-01-01
Background This paper describes a computer program named Dockres that is designed to analyze and summarize the results of virtual screening of small molecules. The program is supplemented with utilities that support the screening process. Foremost among these utilities are scripts that run the virtual screening of a chemical library on a large number of processors in parallel. Methods Dockres and some of its supporting utilities are written in Fortran-77; other utilities are written as C-shell scripts. They support the parallel execution of the screening. The current implementation of the program handles virtual screening with Autodock-3 and Autodock-4, but can be extended to work with the output of other programs. Results Analysis of virtual screening by Dockres led to both active and selective lead compounds. Conclusions Analysis of virtual screening was facilitated and enhanced by Dockres in both the authors' laboratories as well as laboratories elsewhere. PMID:20205801
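A minimal sketch of the kind of parallel-screening driver described above, with the screening fanned out over a worker pool and the results ranked. The `dock` function here is a stand-in placeholder, not Autodock or Dockres; ligand names and scores are made up for illustration.

```python
# Sketch of a parallel virtual-screening driver (illustrative only; the
# scoring function is a placeholder, not an Autodock run).
from concurrent.futures import ThreadPoolExecutor

def dock(ligand):
    """Placeholder for one docking run; returns (ligand, score)."""
    return ligand, -len(ligand)          # pretend lower score = better binding

def screen(ligands, workers=4):
    """Fan the chemical library out over workers, then rank the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(dock, ligands))
    return sorted(results, key=lambda r: r[1])   # best-scoring first

ranked = screen(["aspirin", "ibuprofen", "caffeine"])
```

A real driver would replace `dock` with a subprocess invocation of the docking engine and `screen` with per-processor job scripts, but the fan-out/collect/rank shape is the same.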
Graphical Language for Data Processing
NASA Technical Reports Server (NTRS)
Alphonso, Keith
2011-01-01
A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as lidar data, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
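The interpreted execution model described above can be sketched compactly. This is an illustrative Python analogue, not the .NET implementation: nodes are processing functions, wires name each node's upstream data flows, and the executor traverses nodes in dependency order.

```python
# Illustrative sketch of interpreted dataflow-graph execution (not the
# .NET system described above).
from graphlib import TopologicalSorter  # Python 3.9+

def execute(nodes, wires, inputs):
    """nodes: {name: fn}; wires: {name: [upstream names]}; inputs: sources."""
    values = dict(inputs)
    for name in TopologicalSorter(wires).static_order():
        if name in values:                      # externally supplied input
            continue
        values[name] = nodes[name](*[values[u] for u in wires.get(name, [])])
    return values

# Hypothetical three-node graph: two branches off one source, then a merge.
nodes = {"scale": lambda x: x * 2, "offset": lambda x: x + 1,
         "sum": lambda a, b: a + b}
wires = {"scale": ["src"], "offset": ["src"], "sum": ["scale", "offset"]}
out = execute(nodes, wires, {"src": 10})
```

Nesting falls out naturally: a whole subgraph can be wrapped as one node, e.g. `lambda x: execute(sub_nodes, sub_wires, {"in": x})["out"]`, mirroring the graph-in-graph feature above.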
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
Virtual machine (VM) technologies, especially those offered via Cloud platforms, present new dimensions with respect to performance and cost in executing parallel discrete event simulation (PDES) applications. Due to the introduction of overall cost as a metric, the choice of the highest-end computing configuration is no longer the most economical one. Moreover, runtime dynamics unique to VM platforms introduce new performance characteristics, and the variety of possible VM configurations give rise to a range of choices for hosting a PDES run. Here, an empirical study of these issues is undertaken to guide an understanding of the dynamics, trends and trade-offs in executing PDES on VM/Cloud platforms. Performance results and cost measures are obtained from actual execution of a range of scenarios in two PDES benchmark applications on the Amazon Cloud offerings and on a high-end VM host machine. The data reveals interesting insights into the new VM-PDES dynamics that come into play and also leads to counter-intuitive guidelines with respect to choosing the best and second-best configurations when overall cost of execution is considered. In particular, it is found that choosing the highest-end VM configuration guarantees neither the best runtime nor the least cost. Interestingly, choosing a (suitably scaled) low-end VM configuration provides the least overall cost without adversely affecting the total runtime.
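The cost-versus-runtime trade-off described above reduces to simple arithmetic: overall cost is hourly price times total runtime, so the fastest configuration need not be the cheapest. The numbers below are made up for illustration and are not the paper's measurements.

```python
# Illustrative cost arithmetic only: prices and runtimes are hypothetical,
# not the paper's measurements. cost = hourly rate x runtime.
configs = {                       # name: ($/hr, runtime in hours)
    "high-end":   (3.20, 1.0),
    "mid-range":  (0.80, 1.8),
    "low-end x4": (0.30 * 4, 1.1),    # suitably scaled-out low-end VMs
}
costs = {name: rate * hours for name, (rate, hours) in configs.items()}
fastest  = min(configs, key=lambda n: configs[n][1])
cheapest = min(costs, key=costs.get)
```

With these made-up figures the high-end configuration is fastest but costs the most, while the scaled-out low-end configuration is cheapest with only a modest runtime penalty, matching the qualitative guideline in the abstract.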
Intelligent Tutoring Methods for Optimizing Learning Outcomes with Embedded Training
2009-10-01
after action review. Particularly with free-play virtual environments, it is important to constrain the development task for constructing an...evaluation approach. Attempts to model all possible variations of correct performance can be prohibitive in free-play scenarios, and so for such conditions...member R for proper execution during free-play execution. In the first tier, the evaluation must know when it applies, or more specifically, when
Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.
2018-01-01
Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370
NASA Astrophysics Data System (ADS)
Schmeil, Andreas; Eppler, Martin J.
Although virtual worlds and other types of multi-user 3D collaboration spaces have long been subjects of research and practical application, it still remains unclear how best to benefit from meeting with colleagues and peers in a virtual environment with the aim of working together. Making use of the potential of virtual embodiment, i.e. being immersed in a space as a personal avatar, allows for innovative new forms of collaboration. In this paper, we present a framework that serves as a systematic formalization of collaboration elements in virtual environments. The framework is based on the semiotic distinctions among pragmatic, semantic and syntactic perspectives. It serves as a blueprint to guide users in designing, implementing, and executing virtual collaboration patterns tailored to their needs. We present two team and two community collaboration pattern examples resulting from the application of the framework: Virtual Meeting, Virtual Design Studio, Spatial Group Configuration, and Virtual Knowledge Fair. In conclusion, we also point out future research directions for this emerging domain.
Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
The next generation of scalable network simulators employ virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations could be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on actual prototyped implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
An Analysis of VR Technology Used in Immersive Simulations with a Serious Game Perspective.
Menin, Aline; Torchelsen, Rafael; Nedel, Luciana
2018-03-01
Using virtual environments (VEs) is a safer and cost-effective alternative to executing dangerous tasks, such as training firefighters and industrial operators. Immersive virtual reality (VR) combined with game aspects has the potential to improve the user experience in the VE by increasing realism, engagement, and motivation. This article investigates the impact of VR technology on 46 immersive gamified simulations with serious purposes and organizes them into a taxonomy. Our findings suggest that immersive VR improves simulation outcomes, such as increasing learning gain and knowledge retention and improving clinical outcomes for rehabilitation. However, it also has limitations such as motion sickness and restricted access to VR hardware. Our contributions are to provide a better understanding of the benefits and limitations of using VR in immersive simulations with serious purposes, to propose a taxonomy that classifies them, and to discuss whether methods and participant profiles influence results.
Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Lauer, Frank
This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale, and its performance properties can be evaluated. The results of an initial prototype are encouraging as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models, with event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iii) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.
Associative programming language and virtual associative access manager
NASA Technical Reports Server (NTRS)
Price, C.
1978-01-01
APL provides convenient associative data manipulation functions in a high-level language. Six statements were added to PL/1 via a preprocessor: CREATE, INSERT, FIND, FOR EACH, REMOVE, and DELETE. They allow complete control of all data base operations. During execution, data base management programs perform the functions required to support the APL language. VAAM is the data base management system designed to support the APL language. APL/VAAM is used by CADANCE, an interactive graphic computer system. VAAM is designed to support heavily referenced files. Virtual memory files, which utilize the paging mechanism of the operating system, are used. VAAM supports a full network data structure. The two basic blocks in a VAAM file are entities and sets. Entities are the basic information element and correspond to PL/1-based structures defined by the user. Sets contain the relationship information and are implemented as arrays.
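The entity/set model and the six statements described above can be mirrored in a short sketch. This is a hypothetical Python analogue of the semantics only; the real system added these statements to PL/1 via a preprocessor, and all names below are illustrative.

```python
# Hypothetical analogue of the six APL statements (CREATE, INSERT, FIND,
# FOR EACH, REMOVE, DELETE) over entities and named sets; not the PL/1 system.
class Database:
    def __init__(self):
        self.entities = []              # basic information elements
        self.sets = {}                  # named sets hold relationship info

    def create(self, **fields):                  # CREATE an entity
        ent = dict(fields)
        self.entities.append(ent)
        return ent

    def insert(self, set_name, ent):             # INSERT entity into a set
        self.sets.setdefault(set_name, []).append(ent)

    def find(self, set_name, **match):           # FIND first matching member
        return next((e for e in self.sets.get(set_name, [])
                     if all(e.get(k) == v for k, v in match.items())), None)

    def for_each(self, set_name):                # FOR EACH member of a set
        yield from self.sets.get(set_name, [])

    def remove(self, set_name, ent):             # REMOVE from set; entity kept
        self.sets[set_name].remove(ent)

    def delete(self, ent):                       # DELETE entity everywhere
        self.entities.remove(ent)
        for members in self.sets.values():
            while ent in members:
                members.remove(ent)
```

The REMOVE/DELETE distinction mirrors the network model: REMOVE breaks one set relationship, while DELETE destroys the entity and every relationship referencing it.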
Liberating Virtual Machines from Physical Boundaries through Execution Knowledge
2015-12-01
trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings...hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decoupling the user's workload from host systems and...experiment set-up. VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network for
AstroGrid: Taverna in the Virtual Observatory.
NASA Astrophysics Data System (ADS)
Benson, K. M.; Walton, N. A.
This paper reports on the implementation of the Taverna workbench by AstroGrid, a tool for designing and executing workflows of tasks in the Virtual Observatory. The workflow approach helps astronomers perform complex task sequences with little technical effort. The visual approach to workflow construction streamlines highly complex analysis over public and private data and requires computational resources as minimal as a desktop computer. Some integration issues and future work are discussed in this article.
Controlling mechanisms over the internet
NASA Astrophysics Data System (ADS)
Lumia, Ronald
1997-01-01
The internet, widely available throughout the world, can be used to control robots, machine tools, and other mechanisms. This paper will describe a low-cost virtual collaborative environment (VCE) which will connect users with distant equipment. The system is based on PC technology, and incorporates off-line programming with on-line execution. A remote user programs the system graphically and simulates the motions and actions of the mechanism until satisfied with the functionality of the program. The program is then transferred from the remote site to the local site where the real equipment exists. At the local site, the simulation is run again to check the program from a safety standpoint. Then, the local user runs the program on the real equipment. During execution, a camera in the real workspace provides an image back to the remote user through a teleconferencing system. The system costs approximately $12,500 and represents a low-cost alternative to the Sandia National Laboratories VCE.
Intelligent Tutoring Methods for Optimizing Learning Outcomes with Embedded Training
2009-01-01
used to stimulate learning activities, from practice events with real-time coaching, to exercises with after action review. Particularly with free-play virtual...variations of correct performance can be prohibitive in free-play scenarios, and so for such conditions this has led to a state-based approach for...tiered logic that evaluates team member R for proper execution during free-play execution. In the first tier, the evaluation must know when it
Simulation of Attacks for Security in Wireless Sensor Network
Diaz, Alvaro; Sanchez, Pablo
2016-01-01
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. PMID:27869710
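The three-attacker-type idea above can be sketched as attack functions that perturb a node's execution time and energy, so a simulation can report each attack's impact. The type names and numeric effects below are assumptions for illustration, not the paper's actual model.

```python
# Illustrative attacker model: each attack changes a node's software
# execution time and/or energy budget; a simulator compares an attacked
# node against a baseline. All figures are made-up illustrative values.

class Node:
    def __init__(self):
        self.exec_time_ms = 100.0   # embedded software execution time
        self.energy_mj = 1000.0     # remaining battery energy

def compromise(node):
    """Attacker runs extra code on a captured node."""
    node.exec_time_ms *= 1.5
    node.energy_mj -= 50.0

def jam(node):
    """Radio jamming forces retransmissions that drain energy."""
    node.energy_mj -= 200.0

def inject(node):
    """Bogus injected packets add processing load."""
    node.exec_time_ms += 20.0
    node.energy_mj -= 30.0

baseline, attacked = Node(), Node()
for attack in (compromise, jam, inject):
    attack(attacked)
```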
Intelligent cloud computing security using genetic algorithm as a computational tools
NASA Astrophysics Data System (ADS)
Razuky AL-Shaikhly, Mazin H.
2018-05-01
An essential change has occurred in the field of Information Technology, represented by cloud computing: the cloud provides virtual resources via the web, but this raises great challenges in the fields of information security and privacy protection. Currently the main problem with cloud computing is how to improve privacy and security, since the cloud is security-critical. This paper attempts to address cloud security by using an intelligent system with a genetic algorithm as a wall to keep cloud data secure: every service provided by the cloud must detect who receives it and register that user in a list of trusted or un-trusted users, depending on behavior. Execution of the present proposal has shown good outcomes.
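The abstract above gives no algorithmic detail, but the general idea of evolving a behavior-based trusted/un-trusted classifier with a genetic algorithm can be sketched. The feature encoding, samples, and fitness function below are all illustrative assumptions, not the paper's design.

```python
import random

# Minimal GA sketch: evolve a threshold vector that separates trusted
# from un-trusted user behaviour. Samples and features are made up.
random.seed(0)

# labelled behaviour samples: (failed_logins, bulk_downloads) -> trusted?
SAMPLES = [((0, 1), True), ((1, 0), True), ((5, 9), False), ((7, 4), False)]

def classify(thresholds, features):
    # a user staying at or under every threshold is considered trusted
    return all(f <= t for f, t in zip(features, thresholds))

def fitness(thresholds):
    return sum(classify(thresholds, f) == label for f, label in SAMPLES)

def evolve(pop_size=20, generations=30):
    pop = [(random.randint(0, 9), random.randint(0, 9)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # selection (elitist)
        children = [(random.choice(parents)[0],             # crossover
                     random.choice(parents)[1])
                    for _ in range(pop_size - len(parents))]
        pop = parents + [(max(0, a + random.randint(-1, 1)), b)  # mutation
                         for a, b in children]
    return max(pop, key=fitness)

best = evolve()
```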
2016-05-05
Training for IND Response Decision-Making: Models for Government–Industry Collaboration for the Development of Game-Based Training Tools R.M. Seater...Skill Transfer and Virtual Training for IND Response Decision-Making: Models for Government–Industry Collaboration for the Development of Game-Based...unlimited. EXECUTIVE SUMMARY Game-based training tools, sometimes called “serious games,” are becoming
Implementation of NASTRAN on the IBM/370 CMS operating system
NASA Technical Reports Server (NTRS)
Britten, S. S.; Schumacker, B.
1980-01-01
The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.
Performance Evaluation of Resource Management in Cloud Computing Environments.
Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci
2015-01-01
Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
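The resource-management idea above (scale the dimension that actually influences client requests, with a corresponding price change) can be sketched as a simple decision rule. The thresholds, utilization signals, and prices are illustrative assumptions, not the module's actual logic.

```python
# Illustrative on-the-fly scaling rule: when measured response time
# violates the SLA, scale the bottleneck dimension and adjust the price.

def scale_decision(avg_response_ms, sla_ms, cpu_util, mem_util):
    """Pick which resource dimension to scale, if any."""
    if avg_response_ms <= sla_ms:
        return "none"                       # SLA met, no change
    return "cpu" if cpu_util >= mem_util else "memory"

PRICE_PER_UNIT = {"cpu": 0.05, "memory": 0.02}   # assumed $/hour per extra unit

def new_price(base_price, action):
    """Allocated-resource change comes with a corresponding price change."""
    return base_price + PRICE_PER_UNIT.get(action, 0.0)

action = scale_decision(avg_response_ms=450, sla_ms=300, cpu_util=0.9, mem_util=0.6)
price = new_price(0.10, action)
```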
A methodological, task-based approach to Procedure-Specific Simulations training.
Setty, Yaki; Salzman, Oren
2016-12-01
Procedure-Specific Simulations (PSS) are 3D realistic simulations that provide a platform to practice complete surgical procedures in a virtual-reality environment. While PSS have the potential to improve surgeons' proficiency, there are no existing standards or guidelines for PSS development in a structured manner. We employ a unique platform inspired by game design to develop virtual reality simulations in three dimensions of urethrovesical anastomosis during radical prostatectomy. 3D visualization is supported by a stereo vision, providing a fully realistic view of the simulation. The software can be executed for any robotic surgery platform. Specifically, we tested the simulation under windows environment on the RobotiX Mentor. Using urethrovesical anastomosis during radical prostatectomy simulation as a representative example, we present a task-based methodological approach to PSS training. The methodology provides tasks in increasing levels of difficulty from a novice level of basic anatomy identification, to an expert level that permits testing new surgical approaches. The modular methodology presented here can be easily extended to support more complex tasks. We foresee this methodology as a tool used to integrate PSS as a complementary training process for surgical procedures.
Multilevel Parallelization of AutoDock 4.2.
Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P
2011-04-28
Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
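The grid-map reuse described above is essentially per-node caching: consecutive dockings against the same receptor skip the expensive disk reads. A minimal sketch of that idea follows; the function and variable names are illustrative, not mpAD4's actual API.

```python
# Sketch of grid-map reuse: maps are 'read from disk' at most once per
# receptor, so back-to-back dockings against one receptor share them.

_grid_cache = {}
load_count = 0          # counts simulated disk reads

def load_grid_maps(receptor):
    """Return grid maps for a receptor, loading them only once."""
    global load_count
    if receptor not in _grid_cache:
        load_count += 1                       # stands in for expensive I/O
        _grid_cache[receptor] = f"maps:{receptor}"
    return _grid_cache[receptor]

def dock(ligand, receptor):
    maps = load_grid_maps(receptor)
    return (ligand, receptor, maps)           # stands in for a docking result

# three docking jobs, two of which share a receptor
results = [dock(l, r) for l, r in [("lig1", "recA"), ("lig2", "recA"), ("lig3", "recB")]]
```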
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmid, Beat; Tomlinson, Jason M.; Hubbe, John M.
2014-05-01
The Department of Energy Atmospheric Radiation Measurement (ARM) Program is a climate research user facility operating stationary ground sites that provide long-term measurements of climate relevant properties, mobile ground- and ship-based facilities to conduct shorter field campaigns (6-12 months), and the ARM Aerial Facility (AAF). The airborne observations acquired by the AAF enhance the surface-based ARM measurements by providing high-resolution in-situ measurements for process understanding, retrieval-algorithm development, and model evaluation that are not possible using ground- or satellite-based techniques. Several ARM aerial efforts were consolidated into the AAF in 2006. With the exception of a small aircraft used for routine measurements of aerosols and carbon cycle gases, AAF at the time had no dedicated aircraft and only a small number of instruments at its disposal. In this "virtual hangar" mode, AAF successfully carried out several missions contracting with organizations and investigators who provided their research aircraft and instrumentation. In 2009, AAF started managing operations of the Battelle-owned Gulfstream I (G-1) large twin-turboprop research aircraft. Furthermore, the American Recovery and Reinvestment Act of 2009 provided funding for the procurement of over twenty new instruments to be used aboard the G-1 and other AAF virtual-hangar aircraft. AAF now executes missions in the virtual- and real-hangar mode producing freely available datasets for studying aerosol, cloud, and radiative processes in the atmosphere. AAF is also engaged in the maturation and testing of newly developed airborne sensors to help foster the next generation of airborne instruments.
A 3D character animation engine for multimodal interaction on mobile devices
NASA Astrophysics Data System (ADS)
Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo
2005-03-01
Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques allow these issues to be overcome, guaranteeing a smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted to the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).
Centrally managed unified shared virtual address space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkes, John
Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.
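The scheme above can be illustrated with a toy model: the host installs mappings into a node's page table, and the node's CPU-style MMU consults its internal TLB before walking that host-managed table. Page size, table contents, and class names are illustrative assumptions.

```python
# Toy host-managed address translation: the host owns the page table;
# the node's MMU caches translations in a small internal TLB.

PAGE = 4096

class NodeMMU:
    def __init__(self, page_table):
        self.page_table = page_table   # externally managed by the host
        self.tlb = {}                  # internal translation lookaside buffer

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE)
        if vpn not in self.tlb:        # TLB miss: walk the host's table
            self.tlb[vpn] = self.page_table[vpn]
        return self.tlb[vpn] * PAGE + offset

host_table = {0: 7, 1: 3}              # host controls the node's view
mmu = NodeMMU(host_table)
addr = mmu.translate(0x10)             # virtual page 0 -> physical frame 7
```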
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-20
... Executive Order 13335 and the Virtual Lifetime Electronic Record initiative, a strategic initiative that... microbiology laboratory tests. To delay the effective date would hamper the electronic exchange of health...
Maintaining consistency in distributed systems
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1991-01-01
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operation are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
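The first style mentioned above, concurrent access within an individual program controlled by mutual exclusion constructs, can be shown in a few lines. This is a generic illustration of the concept, not code from the systems discussed.

```python
import threading

# Mutual exclusion with a lock (the analogue of a semaphore/monitor):
# four threads increment a shared counter; the lock serializes updates.

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:              # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 40,000 because every update was mutually exclusive
```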
Handling debugger breakpoints in a shared instruction system
Gooding, Thomas Michael; Shok, Richard Michael
2014-01-21
A debugger debugs processes that execute shared instructions so that a breakpoint set for one process will not cause a breakpoint to occur in the other processes. A breakpoint is set by recording the original instruction at the desired location and writing a trap instruction to the shared instructions at that location. When a process encounters the breakpoint, the process passes control to the debugger for breakpoint processing if the breakpoint was set at that location for that process. If the trap was not set at that location for that process, the cacheline containing the trap is copied to a small scratchpad memory, and the virtual memory mappings are changed to translate the virtual address of the cacheline to the scratchpad. The original instruction is then written to replace the trap instruction in the scratchpad, so that the process can execute the instructions in the scratchpad, thereby avoiding the trap instruction.
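The record-and-patch scheme above can be simulated directly: setting a breakpoint saves the original instruction and writes a trap into the shared code; a process hitting a trap not set for it executes the original instruction from a private copy (the scratchpad). The names below are illustrative.

```python
# Simulation of per-process breakpoints in shared instruction memory.

TRAP = "TRAP"
shared_code = ["add", "mul", "sub"]        # instructions shared by all processes
breakpoints = {}                           # pc -> {"orig": ..., "pids": ...}

def set_breakpoint(pc, pid):
    if pc not in breakpoints:
        breakpoints[pc] = {"orig": shared_code[pc], "pids": set()}
        shared_code[pc] = TRAP             # patch the shared instruction
    breakpoints[pc]["pids"].add(pid)

def fetch(pc, pid):
    instr = shared_code[pc]
    if instr == TRAP:
        bp = breakpoints[pc]
        if pid in bp["pids"]:
            return "ENTER_DEBUGGER"        # breakpoint belongs to this process
        return bp["orig"]                  # scratchpad: run the original instruction
    return instr

set_breakpoint(1, pid=1)
hit_owner = fetch(1, pid=1)                # process 1 stops in the debugger
hit_other = fetch(1, pid=2)                # process 2 is unaffected
```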
Hardware assisted hypervisor introspection.
Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan
2016-01-01
In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. What's more, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experiment results show that our method can effectively detect hypercall-based attacks with some performance cost. Lastly, we discuss our future approaches for reducing the performance cost and preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.
An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data
2016-01-01
these technologies. 4.1 Backend Technologies • Java 1.8 • mysql-connector-java-5.0.8.jar • Tomcat • VirtualBox • Kali MANET Virtual Machine 4.2...Frontend Technologies • LAMPP 4.3 Database • MySQL Server 5. Database The SEDAP database settings and structure are described in this section...contains all the backend Java functionality including the web services, should be placed in the webapps directory inside the Tomcat installation
Simulation Testing of Embedded Flight Software
NASA Technical Reports Server (NTRS)
Shahabuddin, Mohammad; Reinholtz, William
2004-01-01
Virtual Real Time (VRT) is a computer program for testing embedded flight software by computational simulation in a workstation, in contradistinction to testing it in its target central processing unit (CPU). The disadvantages of testing in the target CPU include the need for an expensive test bed, the necessity for testers and programmers to take turns using the test bed, and the lack of software tools for debugging in a real-time environment. By virtue of its architecture, most of the flight software of the type in question is amenable to development and testing on workstations, for which there is an abundance of commercially available debugging and analysis software tools. Unfortunately, the timing of a workstation differs from that of a target CPU in a test bed. VRT, in conjunction with closed-loop simulation software, provides a capability for executing embedded flight software on a workstation in a close-to-real-time environment. A scale factor is used to convert between execution time in VRT on a workstation and execution on a target CPU. VRT includes high-resolution operating-system timers that enable the synchronization of flight software with simulation software and ground software, all running on different workstations.
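The scale-factor conversion mentioned above is a simple linear mapping between workstation time and target-CPU time. The factor value below is an assumed illustration, not VRT's actual calibration.

```python
# Scale-factor conversion between workstation and target-CPU execution time.

SCALE = 4.0   # assumed: target CPU runs 4x slower than the workstation

def target_time(workstation_seconds):
    """Estimate target-CPU execution time from a workstation measurement."""
    return workstation_seconds * SCALE

def workstation_budget(target_deadline_seconds):
    """How long a workstation run may take and still meet a target deadline."""
    return target_deadline_seconds / SCALE
```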
Virtual Observatory Interfaces to the Chandra Data Archive
NASA Astrophysics Data System (ADS)
Tibbetts, M.; Harbo, P.; Van Stone, D.; Zografou, P.
2014-05-01
The Chandra Data Archive (CDA) plays a central role in the operation of the Chandra X-ray Center (CXC) by providing access to Chandra data. Proprietary interfaces have been the backbone of the CDA throughout the Chandra mission. While these interfaces continue to provide the depth and breadth of mission specific access Chandra users expect, the CXC has been adding Virtual Observatory (VO) interfaces to the Chandra proposal catalog and observation catalog. VO interfaces provide standards-based access to Chandra data through simple positional queries or more complex queries using the Astronomical Data Query Language. Recent development at the CDA has generalized our existing VO services to create a suite of services that can be configured to provide VO interfaces to any dataset. This approach uses a thin web service layer for the individual VO interfaces, a middle-tier query component which is shared among the VO interfaces for parsing, scheduling, and executing queries, and existing web services for file and data access. The CXC VO services provide Simple Cone Search (SCS), Simple Image Access (SIA), and Table Access Protocol (TAP) implementations for both the Chandra proposal and observation catalogs within the existing archive architecture. Our work with the Chandra proposal and observation catalogs, as well as additional datasets beyond the CDA, illustrates how we can provide configurable VO services to extend core archive functionality.
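A Simple Cone Search request like the positional queries mentioned above is an HTTP GET carrying RA, DEC (degrees), and SR (search radius, degrees) parameters. The base URL below is a placeholder, not the actual CXC service endpoint.

```python
from urllib.parse import urlencode

# Build an IVOA Simple Cone Search (SCS) query URL for a positional query.

def cone_search_url(base, ra_deg, dec_deg, radius_deg):
    """Return an SCS GET URL with the standard RA/DEC/SR parameters."""
    return base + "?" + urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})

# a cone around the Crab Nebula, 0.1 degree radius (placeholder endpoint)
url = cone_search_url("https://example.org/scs", 83.63, 22.01, 0.1)
```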
DAMT - DISTRIBUTED APPLICATION MONITOR TOOL (HP9000 VERSION)
NASA Technical Reports Server (NTRS)
Keith, B.
1994-01-01
Typical network monitors measure status of host computers and data traffic among hosts. A monitor to collect statistics about individual processes must be unobtrusive and possess the ability to locate and monitor processes, locate and monitor circuits between processes, and report traffic back to the user through a single application program interface (API). DAMT, Distributed Application Monitor Tool, is a distributed application program that will collect network statistics and make them available to the user. This distributed application has one component (i.e., process) on each host the user wishes to monitor as well as a set of components at a centralized location. DAMT provides the first known implementation of a network monitor at the application layer of abstraction. Potential users only need to know the process names of the distributed application they wish to monitor. The tool locates the processes and the circuit between them, and reports any traffic between them at a user-defined rate. The tool operates without the cooperation of the processes it monitors. Application processes require no changes to be monitored by this tool. Neither does DAMT require the UNIX kernel to be recompiled. The tool obtains process and circuit information by accessing the operating system's existing process database. This database contains all information available about currently executing processes. Expanding the information monitored by the tool can be done by utilizing more information from the process database. Traffic on a circuit between processes is monitored by a low-level LAN analyzer that has access to the raw network data. The tool also provides features such as dynamic event reporting and virtual path routing. A reusable object approach was used in the design of DAMT. The tool has four main components: the Virtual Path Switcher, the Central Monitor Complex, the Remote Monitor, and the LAN Analyzer.
All of DAMT's components are independent, asynchronously executing processes. The independent processes communicate with each other via UNIX sockets through a Virtual Path router, or Switcher. The Switcher maintains a routing table showing the host of each component process of the tool, eliminating the need for each process to do so. The Central Monitor Complex provides the single application program interface (API) to the user and coordinates the activities of DAMT. The Central Monitor Complex is itself divided into independent objects that perform its functions. The component objects are the Central Monitor, the Process Locator, the Circuit Locator, and the Traffic Reporter. Each of these objects is an independent, asynchronously executing process. User requests to the tool are interpreted by the Central Monitor. The Process Locator identifies whether a named process is running on a monitored host and which host that is. The circuit between any two processes in the distributed application is identified using the Circuit Locator. The Traffic Reporter handles communication with the LAN Analyzer and accumulates traffic updates until it must send a traffic report to the user. The Remote Monitor process is replicated on each monitored host. It serves the Central Monitor Complex processes with application process information. The Remote Monitor process provides access to operating systems information about currently executing processes. It allows the Process Locator to find processes and the Circuit Locator to identify circuits between processes. It also provides lifetime information about currently monitored processes. The LAN Analyzer consists of two processes. Low-level monitoring is handled by the Sniffer. The Sniffer analyzes the raw data on a single, physical LAN. It responds to commands from the Analyzer process, which maintains the interface to the Traffic Reporter and keeps track of which circuits to monitor. 
DAMT is written in C-language for HP-9000 series computers running HP-UX and Sun 3 and 4 series computers running SunOS. DAMT requires 1Mb of disk space and 4Mb of RAM for execution. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. The HP-9000 version (GSC-13589) includes sample HP-9000/375 and HP-9000/730 executables which were compiled under HP-UX, and the Sun version (GSC-13559) includes sample Sun3 and Sun4 executables compiled under SunOS. The standard distribution medium for the HP version of DAMT is a .25 inch HP pre-formatted streaming magnetic tape cartridge in UNIX tar format. It is also available on a 4mm magnetic tape in UNIX tar format. The standard distribution medium for the Sun version of DAMT is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. DAMT was developed in 1992.
Performance of a Heterogeneous Grid Partitioner for N-body Applications
NASA Technical Reports Server (NTRS)
Harvey, Daniel J.; Das, Sajal K.; Biswas, Rupak
2003-01-01
An important characteristic of distributed grids is that they allow geographically separated multicomputers to be tied together in a transparent virtual environment to solve large-scale computational problems. However, many of these applications require effective runtime load balancing for the resulting solutions to be viable. Recently, we developed a latency tolerant partitioner, called MinEX, specifically for use in distributed grid environments. This paper compares the performance of MinEX to that of METIS, a popular multilevel family of partitioners, using simulated heterogeneous grid configurations. A solver for the classical N-body problem is implemented to provide a framework for the comparisons. Experimental results show that MinEX provides superior quality partitions while being competitive to METIS in speed of execution.
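The heterogeneous-grid partitioning problem above can be illustrated with a minimal greedy scheme: each task (heaviest first) goes to the node that would finish it earliest given that node's relative speed. This sketches the general idea only; it is not the MinEX or METIS algorithm.

```python
# Greedy mapping of weighted tasks onto nodes of unequal speed.

def greedy_partition(task_weights, node_speeds):
    finish = [0.0] * len(node_speeds)          # projected finish time per node
    assignment = []                            # node index per task, heaviest first
    for w in sorted(task_weights, reverse=True):
        node = min(range(len(node_speeds)),
                   key=lambda n: finish[n] + w / node_speeds[n])
        finish[node] += w / node_speeds[node]
        assignment.append(node)
    return assignment, finish

# node 0 is twice as fast as node 1
assignment, finish = greedy_partition([4, 3, 2, 1], [2.0, 1.0])
```

A latency-tolerant partitioner like MinEX would additionally weigh communication cost between nodes, which this sketch ignores.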
Chemuturi, Radhika; Amirabdollahian, Farshid; Dautenhahn, Kerstin
2013-09-28
Rehabilitation robotics is progressing towards developing robots that can be used as advanced tools to augment the role of a therapist. These robots are capable of not only offering more frequent and more accessible therapies but also providing new insights into treatment effectiveness based on their ability to measure interaction parameters. A requirement for having more advanced therapies is to identify how robots can 'adapt' to each individual's needs at different stages of recovery. Hence, our research focused on developing an adaptive interface for the GENTLE/A rehabilitation system. The interface was based on a lead-lag performance model utilising the interaction between the human and the robot. The goal of the present study was to test the adaptability of the GENTLE/A system to the performance of the user. Point-to-point movements were executed using the HapticMaster (HM) robotic arm, the main component of the GENTLE/A rehabilitation system. The points were displayed as balls on the screen and some of the points also had a real object, providing a test-bed for the human-robot interaction (HRI) experiment. The HM was operated in various modes to test the adaptability of the GENTLE/A system based on the leading/lagging performance of the user. Thirty-two healthy participants took part in the experiment comprising a training phase followed by the actual-performance phase. The leading or lagging role of the participant could be used successfully to adjust the duration required by that participant to execute point-to-point movements, in various modes of robot operation and under various conditions. The adaptability of the GENTLE/A system was clearly evident from the durations recorded. The regression results showed that the participants required lower execution times with the help from a real object when compared to just a virtual object.
The 'reaching away' movements were longer to execute when compared to the 'returning towards' movements, irrespective of the influence of gravity on the direction of the movement. The GENTLE/A system was able to adapt so that the duration required to execute point-to-point movement was according to the leading or lagging performance of the user with respect to the robot. This adaptability could be useful in clinical settings when stroke subjects interact with the system, and could also serve as an assessment parameter across various interaction sessions. As the system adapts to user input, and as the task becomes easier through practice, the robot would auto-tune for more demanding and challenging interactions. The improvement in performance of the participants in an embedded environment when compared to a virtual environment also shows promise for clinical applicability, to be tested in due time. Studying the physiology of the upper arm to understand the muscle groups involved, and their influence on the various movements executed during this study, forms a key part of our future work.
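A lead-lag adaptation rule of the kind described above can be sketched in a few lines: if the user leads the robot's reference trajectory, shorten the allotted movement duration (a harder task); if the user lags, lengthen it. The gain and duration bounds are illustrative assumptions, not GENTLE/A's actual parameters.

```python
# Toy lead-lag adaptation of point-to-point movement duration.

def adapt_duration(duration_s, lead_lag_s, gain=0.5, lo=1.0, hi=10.0):
    """lead_lag_s > 0: user ahead of the robot; < 0: user behind it."""
    new = duration_s - gain * lead_lag_s
    return max(lo, min(hi, new))          # clamp to safe duration bounds
```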
Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.
Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel
2014-01-01
With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
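One heuristic in the family described above can be sketched as: for a workflow of given total work, pick the cheapest VM type that still meets the deadline. The VM types, speedups, and prices below are made-up illustrative values, not the paper's four algorithms.

```python
# Deadline-constrained, cost-minimizing VM selection for a workflow.

VM_TYPES = {            # name: (relative speedup, dollars per hour)
    "small":  (1.0, 0.10),
    "medium": (2.0, 0.25),
    "large":  (4.0, 0.80),
}

def pick_vm(work_hours, deadline_hours):
    """Cheapest VM type whose runtime for the workflow meets the deadline."""
    feasible = {name: (s, p) for name, (s, p) in VM_TYPES.items()
                if work_hours / s <= deadline_hours}
    if not feasible:
        return None     # no single VM type can meet the deadline
    return min(feasible,
               key=lambda n: (work_hours / feasible[n][0]) * feasible[n][1])
```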
Support for Taverna workflows in the VPH-Share cloud platform.
Kasztelnik, Marek; Coto, Ernesto; Bubak, Marian; Malawski, Maciej; Nowakowski, Piotr; Arenas, Juan; Saglimbeni, Alfredo; Testi, Debora; Frangi, Alejandro F
2017-07-01
To address the increasing need for collaborative endeavours within the Virtual Physiological Human (VPH) community, the VPH-Share collaborative cloud platform allows researchers to expose and share sequences of complex biomedical processing tasks in the form of computational workflows. The Taverna Workflow System is a very popular tool for orchestrating complex biomedical & bioinformatics processing tasks in the VPH community. This paper describes the VPH-Share components that support the building and execution of Taverna workflows, and explains how they interact with other VPH-Share components to improve the capabilities of the VPH-Share platform. Taverna workflow support is delivered by the Atmosphere cloud management platform and the VPH-Share Taverna plugin. These components are explained in detail, along with the two main procedures that were developed to enable this seamless integration: workflow composition and execution. 1) Seamless integration of VPH-Share with other components and systems. 2) Extended range of different tools for workflows. 3) Successful integration of scientific workflows from other VPH projects. 4) Execution speed improvement for medical applications. The presented workflow integration provides VPH-Share users with a wide range of possibilities to compose and execute workflows, such as desktop or online composition, online batch execution, multithreading, remote execution, etc. The specific advantages of each supported tool are presented, as are the roles of Atmosphere and the VPH-Share plugin within the VPH-Share project. The combination of the VPH-Share plugin and Atmosphere endows the VPH-Share infrastructure with far more flexible, powerful and usable capabilities for the VPH-Share community. As both components can continue to evolve and improve independently, further improvements are still to be developed and described. Copyright © 2017 Elsevier B.V. All rights reserved.
Measurement Capabilities of the DOE ARM Aerial Facility
NASA Astrophysics Data System (ADS)
Schmid, B.; Tomlinson, J. M.; Hubbe, J.; Comstock, J. M.; Kluzek, C. D.; Chand, D.; Pekour, M. S.
2012-12-01
The Department of Energy Atmospheric Radiation Measurement (ARM) Program is a climate research user facility operating stationary ground sites in three important climatic regimes that provide long-term measurements of climate-relevant properties. ARM also operates mobile ground- and ship-based facilities to conduct shorter field campaigns (6-12 months) to investigate understudied climate regimes around the globe. Finally, airborne observations by ARM's Aerial Facility (AAF) enhance the surface-based ARM measurements by providing high-resolution in situ measurements for process understanding, retrieval algorithm development, and model evaluation that are not possible using ground-based techniques. AAF started out in 2007 as a "virtual hangar" with no dedicated aircraft and only a small number of instruments owned by ARM. In this mode, AAF successfully carried out several missions, contracting with organizations and investigators who provided their research aircraft and instrumentation. In 2009, the Battelle-owned G-1 aircraft was included in the ARM facility. The G-1 is a large twin-turboprop aircraft, capable of measurements up to altitudes of 7.5 km and a range of 2,800 kilometers. Furthermore, the American Recovery and Reinvestment Act of 2009 provided funding for the procurement of seventeen new instruments to be used aboard the G-1 and other AAF virtual-hangar aircraft. AAF now executes missions in the virtual- and real-hangar modes, producing freely available datasets for studying aerosol, cloud, and radiative processes in the atmosphere. AAF is also heavily engaged in the maturation and testing of newly developed airborne sensors to help foster the next generation of airborne instruments. In the presentation we will showcase science applications based on measurements from recent field campaigns such as CARES, CALWATER and TCAP.
GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.
Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A
2016-01-01
In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet-lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time-consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems is proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay data sets. The quality of results produced by our tool on the GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.
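As a minimal CPU-side sketch of the idea (not GPURFSCREEN's CUDA implementation), the following trains a toy random forest of one-bit decision stumps on synthetic fingerprints and uses the ensemble vote to rank a screening library. The synthetic data and the stump-based "trees" are illustrative stand-ins for real bioassay descriptors and full decision trees.

```python
# Toy random-forest virtual screening: stumps as trees, ensemble vote as score.
import numpy as np

def fit_forest(X, y, n_trees=50, seed=0):
    """Each 'tree' is a one-bit stump trained on a bootstrap sample, choosing
    its split from a random sqrt(d)-sized feature subset (the forest's two
    sources of randomness)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    stumps = []
    for _ in range(n_trees):
        boot = rng.integers(0, n, n)                          # bootstrap rows
        feats = rng.choice(d, max(1, int(np.sqrt(d))), replace=False)
        Xb, yb = X[boot], y[boot]
        best_acc, best = -1.0, None
        for f in feats:
            for flip in (0, 1):          # rule: predict "active" iff bit f == flip
                acc = np.mean((Xb[:, f] == flip) == (yb == 1))
                if acc > best_acc:
                    best_acc, best = acc, (f, flip)
        stumps.append(best)
    return stumps

def screen(stumps, X):
    """Score = fraction of stumps voting 'active'; used to rank a library."""
    votes = np.stack([X[:, f] == flip for f, flip in stumps])
    return votes.mean(axis=0)

# Synthetic fingerprints: bit 0 marks activity in this toy data set.
rng = np.random.default_rng(1)
actives = rng.integers(0, 2, (100, 16)); actives[:, 0] = 1
decoys  = rng.integers(0, 2, (100, 16)); decoys[:, 0] = 0
X = np.vstack([actives, decoys])
y = np.array([1] * 100 + [0] * 100)
forest = fit_forest(X, y)
print(screen(forest, np.array([[1] + [0] * 15, [0] * 16])))
```

The GPU version parallelizes exactly the embarrassingly parallel parts: per-tree training and per-molecule scoring.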
Surface matching for correlation of virtual models: Theory and application
NASA Technical Reports Server (NTRS)
Caracciolo, Roberto; Fanton, Francesco; Gasparetto, Alessandro
1994-01-01
Virtual reality can enable a robot user to generate and test off-line, in a virtual environment, a sequence of operations to be executed by the robot in an assembly cell. Virtual models of objects are to be correlated to the real entities they represent by means of a suitable transformation. A solution to the correlation problem, which is basically a problem of 3-dimensional adjustment, has been found by exploiting surface matching theory. An iterative algorithm has been developed, which matches the geometric surface representing the shape of the virtual model of an object with a set of points measured on the surface in the real world. A peculiar feature of the algorithm is that it works even if there is no one-to-one correspondence between the measured points and those representing the surface model. Furthermore, the problem of avoiding convergence to local minima is solved by defining a starting point that ensures convergence to the global minimum. The developed algorithm has been tested by simulation. Finally, this paper proposes a specific application, i.e., correlating a robot cell equipped for biomedical use with its virtual representation.
78 FR 28295 - SES Positions That Were Career Reserved During CY 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-14
... Management. of Agriculture Virtual University. Office of Advocacy Director, Office of and Outreach. Advocacy.... Executive Associate, Laboratory Services, Office of Public Health Science. Assistant Administrator, Office... Laboratory (Madison). Director, Southern Research Station (Asheville). Director, Pacific Southwest Forest and...
Shared control of a medical robot with haptic guidance.
Xiong, Linfei; Chng, Chin Boon; Chui, Chee Kong; Yu, Peiwu; Li, Yao
2017-01-01
Tele-operation of robotic surgery reduces the radiation exposure during interventional radiological operations. However, endoscope vision without force feedback on the surgical tool increases the difficulty of precise manipulation and the risk of tissue damage. The shared control of vision and force provides a novel approach to enhanced control with haptic guidance, which could lead to subtle dexterity and better maneuverability during MIS surgery. The paper provides an innovative shared control method for a robotic minimally invasive surgery system, in which vision and haptic feedback are incorporated to provide guidance cues to the clinician during surgery. The incremental potential field (IPF) method is utilized to generate a guidance path based on the anatomy of tissue and surgical tool interaction. Haptic guidance is provided at the master end to assist the clinician during tele-operated surgical robotic tasks. The approach has been validated with path-following and virtual tumor-targeting experiments. The experimental results demonstrate that, compared with vision-only guidance, the shared control with vision and haptics improved the accuracy and efficiency of surgical robotic manipulation, reducing the tool-position error distance and execution time. The validation experiments demonstrate that the shared control approach could help the surgical robot system provide stable assistance and precise performance to execute the designated surgical task. The methodology could also be implemented with other surgical robots with different surgical tools and applications.
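The core of potential-field haptic guidance can be sketched as a spring-like force pulling the tool tip toward the nearest point of a planned path. This is an assumption-laden illustration: the paper's incremental potential field (IPF) method is more elaborate, and the straight-line path, gain `k`, and force cap below are illustrative, not values from the paper.

```python
# Illustrative potential-field guidance force at the master device.
import numpy as np

def guidance_force(tip, path, k=2.0, f_max=5.0):
    """Force proportional to the tip's offset from the closest path point,
    saturated at f_max for safety."""
    nearest = path[np.argmin(((path - tip) ** 2).sum(axis=1))]
    f = k * (nearest - tip)
    n = np.linalg.norm(f)
    return f if n <= f_max else f * (f_max / n)

# Planned path: a straight line along x (stand-in for an IPF-generated path).
path = np.stack([np.linspace(0.0, 1.0, 50), np.zeros(50), np.zeros(50)], axis=1)
print(guidance_force(np.array([0.5, 0.1, 0.0]), path))
```

Rendering this force on the master handle nudges the clinician back toward the path while leaving them free to override it, which is the essence of shared control.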
''Virtual Welding,'' a new aid for teaching Manufacturing Process Engineering
NASA Astrophysics Data System (ADS)
Portela, José M.; Huerta, María M.; Pastor, Andrés; Álvarez, Miguel; Sánchez-Carrilero, Manuel
2009-11-01
Overcrowding in the classroom is a serious problem in universities, particularly in specialties that require a certain type of teaching practice. These practices often require expenditure on consumables and a space large enough to hold the necessary materials and the materials that have already been used. Apart from the budget, another problem concerns the attention paid to each student. The use of simulation systems in the early learning stages of the welding technique can prove very beneficial, thanks to the error-detection functions installed in the system, which provide the student with feedback during the execution of the practice session, and to the significant savings in both consumables and energy.
LAVA web-based remote simulation: enhancements for education and technology innovation
NASA Astrophysics Data System (ADS)
Lee, Sang Il; Ng, Ka Chun; Orimoto, Takashi; Pittenger, Jason; Horie, Toshi; Adam, Konstantinos; Cheng, Mosong; Croffie, Ebo H.; Deng, Yunfei; Gennari, Frank E.; Pistor, Thomas V.; Robins, Garth; Williamson, Mike V.; Wu, Bo; Yuan, Lei; Neureuther, Andrew R.
2001-09-01
The Lithography Analysis using Virtual Access (LAVA) web site at http://cuervo.eecs.berkeley.edu/Volcano/ has been enhanced with new optical and deposition applets, graphical infrastructure and linkage to parallel execution on networks of workstations. More than ten new graphical user interface applets have been designed to support education, illustrate novel concepts from research, and explore usage of parallel machines. These applets have been improved through feedback and classroom use. Over the last year, LAVA provided industry and other academic communities 1,300 sessions and 700 rigorous simulations per month among the SPLAT, SAMPLE2D, SAMPLE3D, TEMPEST, STORM, and BEBS simulators.
Lo Priore, Corrado; Castelnuovo, Gianluca; Liccione, Diego; Liccione, Davide
2003-06-01
The paper discusses the use of immersive virtual reality (IVR) systems for the cognitive rehabilitation of dysexecutive syndrome, usually caused by prefrontal brain injuries. With respect to classical paper-and-pencil (P&P) and flat-screen computer rehabilitative tools, IVR systems might prove capable of evoking a more intense and compelling sense of presence, thanks to the highly naturalistic subject-environment interaction allowed. Within a constructivist framework applied to holistic rehabilitation, we suggest that this difference might enhance the ecological validity of cognitive training, partly overcoming the implicit limits of a lab setting, which seem to affect non-immersive procedures especially when applied to dysexecutive symptoms. We tested presence in a pilot study applied to a new VR-based rehabilitation tool for executive functions, V-Store; it allows patients to explore a virtual environment where they solve six series of tasks, ordered by complexity and designed to stimulate executive functions, programming, categorical abstraction, short-term memory and attention. We compared the sense of presence experienced by unskilled normal subjects, randomly assigned to immersive or non-immersive (flat screen) sessions of V-Store, through four different indexes: a self-report questionnaire, a psychophysiological measure (GSR, skin conductance), a neuropsychological measure (an incidental recall memory test related to auditory information coming from the "real" environment) and a count of breaks in presence (BIPs). Preliminary results show in the immersive group a significantly higher GSR response during tasks; neuropsychological data (fewer recalled elements from "reality") and fewer BIPs show a congruent but as yet non-significant advantage for the immersive condition; no differences were evident from the self-report questionnaire. 
A larger experimental group is currently under examination to evaluate significance of these data, which also might prove interesting with respect to the question of objective-subjective measures of presence.
Exergaming and older adult cognition: a cluster randomized clinical trial.
Anderson-Hanley, Cay; Arciero, Paul J; Brickman, Adam M; Nimon, Joseph P; Okuma, Naoko; Westen, Sarah C; Merz, Molly E; Pence, Brandt D; Woods, Jeffrey A; Kramer, Arthur F; Zimmerman, Earl A
2012-02-01
Dementia cases may reach 100 million by 2050. Interventions are sought to curb or prevent cognitive decline. Exercise yields cognitive benefits, but few older adults exercise. Virtual reality-enhanced exercise or "exergames" may elicit greater participation. To test the following hypotheses: (1) stationary cycling with virtual reality tours ("cybercycle") will enhance executive function and clinical status more than traditional exercise; (2) exercise effort will explain improvement; and (3) brain-derived neurotrophic growth factor (BDNF) will increase. Multi-site cluster randomized clinical trial (RCT) of the impact of 3 months of cybercycling versus traditional exercise, on cognitive function in older adults. Data were collected in 2008-2010; analyses were conducted in 2010-2011. 102 older adults from eight retirement communities enrolled; 79 were randomized and 63 completed. A recumbent stationary ergometer was utilized; virtual reality tours and competitors were enabled on the cybercycle. Executive function (Color Trails Difference, Stroop C, Digits Backward); clinical status (mild cognitive impairment; MCI); exercise effort/fitness; and plasma BDNF. Intent-to-treat analyses, controlling for age, education, and cluster randomization, revealed a significant group X time interaction for composite executive function (p=0.002). Cybercycling yielded a medium effect over traditional exercise (d=0.50). Cybercyclists had a 23% relative risk reduction in clinical progression to MCI. Exercise effort and fitness were comparable, suggesting another underlying mechanism. A significant group X time interaction for BDNF (p=0.05) indicated enhanced neuroplasticity among cybercyclists. Cybercycling older adults achieved better cognitive function than traditional exercisers, for the same effort, suggesting that simultaneous cognitive and physical exercise has greater potential for preventing cognitive decline. This study is registered at Clinicaltrials.gov NCT01167400. 
Copyright © 2012 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
A Trusted Portable Computing Device
NASA Astrophysics Data System (ADS)
Ming-wei, Fang; Jun-jun, Wu; Peng-fei, Yu; Xin-fang, Zhang
A trusted portable computing device and its security mechanism were presented to solve the security issues in mobile office, such as attacks by viruses and Trojan horses and the loss or theft of storage devices. It used a smart card to build a trusted portable security base, virtualization to create a secure virtual execution environment, a two-factor authentication mechanism to identify legitimate users, and dynamic encryption to protect data privacy. The security environment described in this paper is characterized by portability, security and reliability. It can meet the security requirements of mobile office.
Chen, Yong-Quan; Hsieh, Shulan
2018-01-01
The aim of this study was to investigate if individuals with frequent internet gaming (IG) experience exhibited better or worse multitasking ability compared with those with infrequent IG experience. The individuals' multitasking abilities were measured using virtual environment multitasks, such as Edinburgh Virtual Errands Test (EVET), and conventional laboratory multitasks, such as the dual task and task switching. Seventy-two young healthy college students participated in this study. They were split into two groups based on the time spent on playing online games, as evaluated using the Internet Use Questionnaire. Each participant performed EVET, dual-task, and task-switching paradigms on a computer. The current results showed that the frequent IG group performed better on EVET compared with the infrequent IG group, but their performance on the dual-task and task-switching paradigms did not differ significantly. The results suggest that the frequent IG group exhibited better multitasking efficacy if measured using a more ecologically valid task, but not when measured using a conventional laboratory multitasking task. The differences in terms of the subcomponents of executive function measured by these task paradigms were discussed. The current results show the importance of the task effect while evaluating frequent internet gamers' multitasking ability.
Sampson, Michael; Shau, Yio-Wha; King, Marcus James
2012-01-01
Stroke is a leading cause of disability with many survivors having upper limb (UL) hemiparesis. UL rehabilitation using bilateral exercise enhances outcomes and the Bilateral Upper Limb Trainer (BUiLT) was developed to provide symmetrical, bilateral arm exercise in a 'forced' and self-assistive manner, incorporating virtual reality (VR) to provide direction and task specificity to users as well as action observation-execution and greater motivation to exercise. The BUiLT + VR system was trialled on five post-stroke participants with UL hemiparesis: one sub-acute and four chronic. The intervention was supplied for 45 min, 4 days/week for 6 weeks. The Fugl-Meyer Upper Extremity score (FMA-UE) was used as the primary outcome measure. Secondary outcome measures used were UL isometric strength and the Intrinsic Motivation Inventory (IMI) questionnaire. The BUiLT + VR therapy increased FMA-UE scores from 1 to 5 and overall strength in the shoulder and elbow. Motivation at the end of intervention was positive. Therapy using the BUiLT + VR system is reliable, can be administered safely and has a positive trend of benefit as measured by the FMA-UE, isometric strength testing and IMI questionnaire.
Sakai, Hiromi; Nagano, Akinori; Seki, Keiko; Okahashi, Sayaka; Kojima, Maki; Luo, Zhiwei
2018-07-01
We developed a virtual reality test to assess the cognitive function of Japanese people in a near-daily-life environment, namely, a virtual shopping test (VST). In this test, participants were asked to execute shopping tasks using touch-panel operations in a "virtual shopping mall." We examined differences in VST performance among healthy participants of different ages and correlations between VST and screening tests, such as the Mini-Mental State Examination (MMSE) and Everyday Memory Checklist (EMC). We included 285 healthy participants between 20 and 86 years of age in seven age groups. Each VST index tended to decrease with advancing age, and the differences among age groups were significant. Most VST indices had a significantly negative correlation with MMSE and a significantly positive correlation with EMC. VST may be useful for assessing general cognitive decline; effects of age must be considered for proper interpretation of VST scores.
Classification of Movement and Inhibition Using a Hybrid BCI.
Chmura, Jennifer; Rosing, Joshua; Collazos, Steven; Goodwin, Shikha J
2017-01-01
Brain-computer interfaces (BCIs) are an emerging technology capable of turning brain electrical activity into commands for an external device. Motor imagery (MI)-when a person imagines a motion without executing it-is widely employed in BCI devices for motor control because of the endogenous origin of its neural control mechanisms, and the similarity in brain activation to actual movements. Challenges with translating an MI-BCI into a practical device used outside laboratories include the extensive training required, often due to poor user engagement and visual feedback response delays; poor user flexibility/freedom to time the execution/inhibition of their movements, and to control the movement type (right arm vs. left leg) and characteristics (reaching vs. grabbing); and high false positive rates of motion control. Solutions to improve sensorimotor activation and user performance of MI-BCIs have been explored. Virtual reality (VR) motor-execution tasks have replaced simpler visual feedback (smiling faces, arrows) and have solved this problem to an extent. Hybrid BCIs (hBCIs) implementing an additional control signal to MI have improved user control capabilities to a limited extent. These hBCIs either fail to allow the patients to gain asynchronous control of their movements, or have a high false positive rate. We propose an immersive VR environment which provides visual feedback that is both engaging and immediate, but also uniquely engages a different cognitive process in the patient that generates event-related potentials (ERPs). These ERPs provide a key executive function for the users to execute/inhibit movements. Additionally, we propose signal processing strategies and machine learning algorithms to move BCIs toward developing long-term signal stability in patients with distinctive brain signals and capabilities to control motor signals. 
The hBCI itself and the VR environment we propose would help to move BCI technology outside laboratory environments for motor rehabilitation in hospitals, and potentially for controlling a prosthetic.
An Extended Proof-Carrying Code Framework for Security Enforcement
NASA Astrophysics Data System (ADS)
Pirzadeh, Heidar; Dubé, Danny; Hamou-Lhadj, Abdelwahab
The rapid growth of the Internet has resulted in increased attention to security to protect users from being victims of security threats. In this paper, we focus on security mechanisms that are based on Proof-Carrying Code (PCC) techniques. In a PCC system, a code producer sends code along with its safety proof to the consumer. The consumer executes the code only if the proof is valid. Although PCC has been shown to be a useful security framework, it suffers from the sheer size of typical proofs: proofs of even small programs can be considerably large. In this paper, we propose an extended PCC framework (EPCC) in which, instead of the proof, a proof generator for the program in question is transmitted. This framework enables the execution of the proof generator and the recovery of the proof on the consumer's side in a secure manner, using a newly created virtual machine called the VEP (Virtual Machine for Extended PCC).
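The producer/consumer trust split can be conveyed with a toy example. This is emphatically not a real proof system: PCC proofs are formal derivations in a logic, and EPCC ships a proof generator rather than the proof itself. Here the "proof" is just a claimed bound on array indices that the consumer independently re-checks against the received code before executing it.

```python
# Toy PCC trust model: certificate shipped with code, re-checked by consumer.
def produce(program):
    """Producer: ship the code together with a safety certificate."""
    proof = {"max_index": max(i for _, i in program)}
    return program, proof

def consume(program, proof, buf_len):
    """Consumer: re-derive the claim (so tampered code is caught), check it
    against the local safety policy, and only then execute."""
    if proof["max_index"] != max(i for _, i in program):
        raise ValueError("proof does not match code")
    if proof["max_index"] >= buf_len:
        raise ValueError("code violates the memory-safety policy")
    buf = [0] * buf_len
    for op, i in program:
        if op == "inc":
            buf[i] += 1
    return buf

prog, proof = produce([("inc", 0), ("inc", 2), ("inc", 2)])
print(consume(prog, proof, buf_len=4))
```

The EPCC twist would replace the shipped `proof` dictionary with a function the consumer runs inside the VEP to regenerate it, trading proof size for verified generator execution.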
Intra-operative 3D imaging system for robot-assisted fracture manipulation.
Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S
2015-01-01
Reduction is a crucial step in the treatment of broken bones. Achieving precise anatomical alignment of bone fragments is essential for a good, fast healing process. Percutaneous techniques are associated with faster recovery times and lower infection risk. However, deducing the desired reduction position intra-operatively is quite challenging with the currently available technology. The 2D nature of this technology (i.e. the image intensifier) does not provide enough information to the surgeon regarding the fracture alignment and rotation, which is actually a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system is able to receive and segment CT scan data of the fracture, to generate the 3D models of the bone fragments, and to display them on a GUI. A commercial optical tracker was included in the system to track the actual pose of the bone fragments in the physical space, and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reduction accordingly. The system is evaluated here through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71° (rotational RMSE).
The standard-based open workflow system in GeoBrain (Invited)
NASA Astrophysics Data System (ADS)
Di, L.; Yu, G.; Zhao, P.; Deng, M.
2013-12-01
GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geo-processing functions for advanced product generation. GeoBrain models complex geoprocessing at two levels, the conceptual and the concrete. At the conceptual level, the workflows exist in the form of data and service types defined by ontologies. The workflows at the conceptual level are called geo-processing models and are cataloged in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in the Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflow for product generation. A provenance capturing service has been implemented to generate the ISO 19115-compliant complete product provenance metadata before and after the workflow execution. The generation of provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119, transparent, translucent, and opaque, are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offerings of the products after a proper peer review of the models is conducted. 
Automated workflow composition based on ontologies and artificial intelligence technology has also been demonstrated successfully. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.
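The two-level (conceptual vs. concrete) idea above can be sketched as follows: a conceptual workflow names abstract service types, instantiation binds each type to a concrete service when a virtual product is requested, and execution keeps a provenance log per step. The service names and toy registry below are illustrative assumptions, not GeoBrain's actual ontology-driven catalog or its BPEL encoding.

```python
# Sketch: conceptual workflow of service *types*, instantiated to concrete services.
CONCEPTUAL_NDVI = ["IngestImagery", "CloudMask", "ComputeNDVI"]  # a virtual product type

registry = {  # concrete services advertised for each service type (toy)
    "IngestImagery": lambda req: {"scene": req["scene"]},
    "CloudMask":     lambda d: {**d, "masked": True},
    "ComputeNDVI":   lambda d: {**d, "product": "NDVI"},
}

def instantiate(conceptual, registry):
    """Conceptual -> concrete: fail early if any service type is unbound."""
    missing = [s for s in conceptual if s not in registry]
    if missing:
        raise LookupError(f"no concrete service for types: {missing}")
    return [(s, registry[s]) for s in conceptual]

def execute(concrete, request):
    """Run the concrete workflow, keeping a provenance log of executed steps."""
    data, provenance = request, []
    for name, fn in concrete:
        data = fn(data)
        provenance.append(name)
    return data, provenance

result, prov = execute(instantiate(CONCEPTUAL_NDVI, registry),
                       {"scene": "L8_042033"})
print(result["product"], prov)
```

The early failure in `instantiate` mirrors why pre-execution provenance is useful: unusable products are rejected before any expensive processing runs.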
A cyber infrastructure for the SKA Telescope Manager
NASA Astrophysics Data System (ADS)
Barbosa, Domingos; Barraca, João P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul
2016-07-01
The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting the SKA Operations and Observation Management, carrying out system diagnosis and collecting Monitoring and Control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software defined networking, power, storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is focused primarily on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required due to performance, security, availability, or other requirements.
NASA Astrophysics Data System (ADS)
Filgueira, R.; Ferreira da Silva, R.; Deelman, E.; Atkinson, M.
2016-12-01
We present the Data-Intensive workflows as a Service (DIaaS) model for enabling easy data-intensive workflow composition and deployment on clouds using containers. The backbone of the DIaaS model is Asterism, an integrated solution for running data-intensive stream-based applications on heterogeneous systems, which combines the benefits of the dispel4py and Pegasus workflow systems. The stream-based executions of an Asterism workflow are managed by dispel4py, while the data movement between different e-Infrastructures and the coordination of the application execution are automatically managed by Pegasus. DIaaS combines the Asterism framework with Docker containers to provide an integrated, complete, easy-to-use, portable approach to run data-intensive workflows on distributed platforms. Three containers make up the DIaaS model: a Pegasus node, an MPI cluster, and an Apache Storm cluster. Container images are described as Dockerfiles (available online at http://github.com/dispel4py/pegasus_dispel4py), linked to Docker Hub for providing continuous integration (automated image builds), and image storage and sharing. In this model, all required software (workflow systems and execution engines) for running scientific applications is packed into the containers, which significantly reduces the effort (and possible human errors) required by scientists or VRE administrators to build such systems. The most common use of DIaaS will be to act as a backend of VREs or Scientific Gateways to run data-intensive applications, deploying cloud resources upon request. We have demonstrated the feasibility of DIaaS using the data-intensive seismic ambient noise cross-correlation application (Figure 1). The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The application is submitted via Pegasus (Container1), and Phase1 and Phase2 are executed in the MPI (Container2) and Storm (Container3) clusters respectively.
Although both phases could be executed within the same environment, this setup demonstrates the flexibility of DIaaS to run applications across e-Infrastructures. In summary, DIaaS delivers specialized software to execute data-intensive applications in a scalable, efficient, and robust manner, reducing engineering time and computational cost.
Developing science gateways for drug discovery in a grid environment.
Pérez-Sánchez, Horacio; Rezaei, Vahid; Mezhuyev, Vitaliy; Man, Duhu; Peña-García, Jorge; den-Haan, Helena; Gesing, Sandra
2016-01-01
Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly, we face the growing challenge of providing the life sciences research community with a convenient tool for high-throughput virtual screening on distributed computing resources. To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service, applicable for large-scale parallel screening and reusable in the context of scientific workflows. Our implementation is based on Pipeline Pilot and the Simple Object Access Protocol and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.
2011-09-01
[Garbled table-of-contents fragment; recoverable headings include "Designing for Vigilance", "Landmarks and Spatial Navigation", "Designing and Executing Change Detection Scenario Training", "Scenario Instructions for Editors", and "How to Design a Change Detection Scenario".]
Faria, Ana Lúcia; Andrade, Andreia; Soares, Luísa; I Badia, Sergi Bermúdez
2016-11-02
Stroke is one of the most common causes of acquired disability, leaving numerous adults with cognitive and motor impairments, and affecting patients' capability to live independently. There is substantial evidence on the benefits of post-stroke cognitive rehabilitation, but its implementation is generally limited by the use of paper-and-pencil methods, insufficient personalization, and suboptimal intensity. Virtual reality tools have shown potential for improving cognitive rehabilitation by supporting carefully personalized, ecologically valid tasks through accessible technologies. Notwithstanding important progress in VR-based cognitive rehabilitation systems, especially with simulations of Activities of Daily Living (ADLs), there is still a need for more clinical trials to validate them. In this work we present a one-month randomized controlled trial with 18 stroke inpatients and outpatients from two rehabilitation units: 9 performing a VR-based intervention and 9 performing conventional rehabilitation. The VR-based intervention involved a virtual simulation of a city - Reh@City - where memory, attention, visuo-spatial abilities and executive functions tasks are integrated in the performance of several daily routines. The intervention progressed through levels of difficulty using a method of fading cues. There was a pre- and post-intervention assessment in both groups with the Addenbrooke Cognitive Examination (primary outcome) and the Trail Making Test A and B, Picture Arrangement from WAIS III and Stroke Impact Scale 3.0 (secondary outcomes). A within-groups analysis revealed significant improvements in global cognitive functioning, attention, memory, visuo-spatial abilities, executive functions, emotion and overall recovery in the VR group. The control group only improved in self-reported memory and social participation.
A between-groups analysis showed significantly greater improvements in global cognitive functioning, attention and executive functions when comparing VR to conventional therapy. Our results suggest that cognitive rehabilitation through Reh@City, an ecologically valid VR system for the training of ADLs, has more impact than conventional methods. This trial was not registered because it is a small-sample study that evaluates the clinical validity of a prototype virtual reality system.
The virtual mission approach: Empowering earth and space science missions
NASA Astrophysics Data System (ADS)
Hansen, Elaine
1993-08-01
Future Earth and Space Science missions will address increasingly broad and complex scientific issues. To accomplish this task, we will need to acquire and coordinate data sets from a number of different instruments, to make coordinated observations of a given phenomenon, and to coordinate the operation of the many individual instruments making these observations. These instruments will need to be used together as a single "Virtual Mission." This coordinated approach is complicated in that these scientific instruments will generally be on different platforms, in different orbits, from different control centers, at different institutions, and report to different user groups. Before this Virtual Mission approach can be implemented, techniques need to be developed to enable separate instruments to work together harmoniously, to execute observing sequences in a synchronized manner, and to be managed by the Virtual Mission authority during times of these coordinated activities. Enabling technologies include object-oriented design approaches, extended operations management concepts and distributed computing techniques. Once these technologies are developed and the Virtual Mission concept is available, we believe the concept will provide NASA's Science Program with a new, "go-as-you-pay," flexible, and resilient way of accomplishing its science observing program. The concept will foster the use of smaller and lower cost satellites. It will enable the fleet of scientific satellites to evolve in directions that best meet prevailing science needs. It will empower scientists by enabling them to mix and match various combinations of in-space, ground, and suborbital instruments - combinations which can be called up quickly in response to new events or discoveries. And, it will enable small groups such as universities, Space Grant colleges, and small businesses to participate significantly in the program by developing small components of this evolving scientific fleet.
Establishing a group of endpoints in a parallel computer
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong
2016-02-02
A parallel computer executes a number of tasks; each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.
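The key idea in the claim is that a user selects endpoints at the level of the virtual representation rather than by naming individual endpoints. As a rough illustration only (this is not the patented implementation; the data structure and specification names below are invented), the representation can be modeled as a mapping from tasks to endpoints, against which a set-level specification such as "first endpoint per task" is resolved:

```python
# Hypothetical virtual representation: task -> ordered endpoint ids.
virtual_rep = {
    "task0": ["ep0", "ep1"],
    "task1": ["ep2", "ep3"],
    "task2": ["ep4", "ep5"],
}

def define_group(rep, spec):
    """Resolve a set-level specification against the virtual
    representation; the user never names a particular endpoint."""
    if spec == "first-per-task":
        return [eps[0] for eps in rep.values()]
    if spec == "all":
        return [ep for eps in rep.values() for ep in eps]
    raise ValueError(f"unknown specification: {spec}")

group = define_group(virtual_rep, "first-per-task")
```

Here `group` resolves to one endpoint per task, suitable as the communicator-like group over which collective operations would then run.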
Exploiting virtual synchrony in distributed systems
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Joseph, Thomas A.
1987-01-01
Applications of a virtually synchronous environment are described for distributed programming, which underlies a collection of distributed programming tools in the ISIS2 system. A virtually synchronous environment allows processes to be structured into process groups, and makes events like broadcasts to the group as an entity, group membership changes, and even migration of an activity from one place to another appear to occur instantaneously, in other words, synchronously. A major advantage to this approach is that many aspects of a distributed application can be treated independently without compromising correctness. Moreover, user code that is designed as if the system were synchronous can often be executed concurrently. It is argued that this approach to building distributed and fault tolerant software is more straightforward, more flexible, and more likely to yield correct solutions than alternative approaches.
Virtual HR: The Impact of Information Technology on the Human Resource Professional.
ERIC Educational Resources Information Center
Gardner, Sharyn D.; Lepak, David P.; Bartol, Kathyrn M.
2003-01-01
Responses from 357 complete pairs of human resources executives and professionals from the same company showed that information technology has increased autonomy, the responsiveness of their information dissemination, and networking with other professionals; they spend more time in technology support activities. Organizational climate moderated…
Speaking Personally--With Paul Avon
ERIC Educational Resources Information Center
Martin, D'Arcy
2012-01-01
This article presents an interview with Paul Avon, the former executive director of the Canadian Virtual College Consortium. Avon has spent over fifteen years in the distance learning (DL) field managing the production and delivery of online learning at TVOntario, Humber College, the Sri Lankan National Online Distance Education Service, and the…
Tactile feedback for relief of deafferentation pain using virtual reality system: a pilot study.
Sano, Yuko; Wake, Naoki; Ichinose, Akimichi; Osumi, Michihiro; Oya, Reishi; Sumitani, Masahiko; Kumagaya, Shin-Ichiro; Kuniyoshi, Yasuo
2016-06-28
Previous studies have tried to relieve deafferentation pain (DP) by using virtual reality rehabilitation systems. However, the effectiveness of multimodal sensory feedback was not validated. The objective of this study is to relieve DP by neurorehabilitation using a virtual reality system with multimodal sensory feedback and to validate the efficacy of tactile feedback on immediate pain reduction. We have developed a virtual reality rehabilitation system with multimodal sensory feedback and applied it to seven patients with DP caused by brachial plexus avulsion or arm amputation. The patients executed a reaching task using the virtual phantom limb manipulated by their real intact limb. The reaching task was conducted under two conditions: one with tactile feedback on the intact hand and one without. The pain intensity was evaluated through a questionnaire. We found that the task with the tactile feedback reduced DP more (41.8 ± 19.8 %) than the task without the tactile feedback (28.2 ± 29.5 %), which was supported by a Wilcoxon signed-rank test result (p < 0.05). Overall, our findings indicate that the tactile feedback improves the immediate pain intensity through rehabilitation using our virtual reality system.
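The comparison above rests on a Wilcoxon signed-rank test over seven paired observations. A stdlib-only sketch of the test statistic follows; the per-patient reduction percentages are invented stand-ins for the study's raw data, and only the W statistic is computed (the p-value would come from a table or a statistics library):

```python
def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank statistic W: the smaller of the
    positive- and negative-difference rank sums. Zero differences are
    dropped; tied absolute differences receive average ranks."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical pain reduction (%) per patient, with vs. without tactile feedback
with_fb = [42, 60, 35, 55, 20, 48, 33]
without_fb = [28, 40, 30, 50, 25, 30, 22]
w = wilcoxon_signed_rank(with_fb, without_fb)
```

A small W relative to the total rank sum indicates that nearly all patients improved more in one condition, which is the pattern the study reports.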
Novel Virtual Environment for Alternative Treatment of Children with Cerebral Palsy
de Oliveira, Juliana M.; Fernandes, Rafael Carneiro G.; Pinto, Cristtiano S.; Pinheiro, Plácido R.; Ribeiro, Sidarta
2016-01-01
Cerebral palsy is a severe condition usually caused by decreased brain oxygenation during pregnancy, at birth or soon after birth. Conventional treatments for cerebral palsy are often tiresome and expensive, leading patients to quit treatment. In this paper, we describe a virtual environment for patients to engage in a playful therapeutic game for neuropsychomotor rehabilitation, based on the experience of the occupational therapy program of the Nucleus for Integrated Medical Assistance (NAMI) at the University of Fortaleza, Brazil. Integration between patient and virtual environment occurs through the hand motion sensor “Leap Motion,” plus the electroencephalographic sensor “MindWave,” responsible for measuring attention levels during task execution. To evaluate the virtual environment, eight clinical experts on cerebral palsy completed a questionnaire regarding the potential of the experimental virtual environment to promote cognitive and motor rehabilitation, as well as the potential of the treatment to enhance risks and/or negatively influence the patient's development. Based on the very positive appraisal of the experts, we propose that the experimental virtual environment is a promising alternative tool for the rehabilitation of children with cerebral palsy. PMID:27403154
High Resolution Integrated Hohlraum-Capsule Simulations for Virtual NIF Ignition Campaign
NASA Astrophysics Data System (ADS)
Jones, O. S.; Marinak, M. M.; Cerjan, C. J.; Clark, D. S.; Edwards, M. J.; Haan, S. W.; Langer, S. H.; Salmonson, J. D.
2009-11-01
We have undertaken a virtual campaign to assess the viability of the sequence of NIF experiments planned for 2010 that will experimentally tune the shock timing, symmetry, and ablator thickness of a cryogenic ignition capsule prior to the first ignition attempt. The virtual campaign consists of two teams. The "red team" creates realistic simulated diagnostic data for a given experiment from the output of a detailed radiation hydrodynamics calculation that has physics models that have been altered in a way that is consistent with probable physics uncertainties. The "blue team" executes a series of virtual experiments and interprets the simulated diagnostic data from those virtual experiments. To support this effort we have developed a capability to do very high spatial resolution integrated hohlraum-capsule simulations using the Hydra code. Surface perturbations for all ablator layer surfaces and the DT ice layer are calculated explicitly through mode 30. The effects of the fill tube, cracks in the ice layer, and defects in the ablator are included in models extracted from higher resolution calculations. Very high wave number mix is included through a mix model. We will show results from these calculations in the context of the ongoing virtual campaign.
Incorporating Brokers within Collaboration Environments
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; de Torcy, A.
2013-12-01
A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. 
Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assemble the required input files, automate the execution of the workflow, automatically track the provenance of the workflow, and share the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo
2014-04-15
Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii and its peripheral nunchuks-balance board, head mounted displays and joystick allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes in virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients' brain, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies.
Flying Cassini with Virtual Operations Teams
NASA Technical Reports Server (NTRS)
Dodd, Suzanne; Gustavson, Robert
1998-01-01
The Cassini Program's challenge is to fly a large, complex mission with a reduced operations budget. A consequence of the reduced budget is elimination of the large, centrally located group traditionally used for uplink operations. Instead, responsibility for completing parts of the uplink function is distributed throughout the Program. A critical strategy employed to handle this challenge is the use of Virtual Uplink Operations Teams. A Virtual Team is comprised of a group of people with the necessary mix of engineering and science expertise who come together for the purpose of building a specific uplink product. These people are drawn from throughout the Cassini Program and participate across a large geographical area (from Germany to the West coast of the USA), covering ten time zones. The participants will often split their time between participating in the Virtual Team and accomplishing their core responsibilities, requiring significant planning and time management. When the particular uplink product task is complete, the Virtual Team disbands and the members turn back to their home organization element for future work assignments. This time-sharing of employees is used on Cassini to build mission planning products, via the Mission Planning Virtual Team, and sequencing products and monitoring of the sequence execution, via the Sequence Virtual Team. This challenging, multitasking approach allows efficient use of personnel in a resource constrained environment.
Aubin, Ginette; Béliveau, Marie-France; Klinger, Evelyne
2018-07-01
People with schizophrenia often have functional limitations that affect their daily activities due to executive function deficits. One way to assess these deficits is through the use of virtual reality programmes that reproduce real-life instrumental activities of daily living (IADLs). One such programme is the Virtual Action Planning-Supermarket (VAP-S). This exploratory study aimed to examine the ecological validity of this programme, specifically, how task performance in both virtual and natural environments compares. Case studies were used and involved five participants with schizophrenia, who were familiar with grocery shopping. They were assessed during both the VAP-S shopping task and a real-life grocery shopping task using an observational assessment tool, the Perceive, Recall, Plan and Perform (PRPP) System of Task Analysis. The results show that when difficulties were present in the virtual task, difficulties were also observed in the real-life task. For some participants, greater difficulties were observed in the virtual task. These difficulties could be explained by the presence of perceptual deficits and problems remembering the required sequenced actions in the virtual task. In conclusion, performance on the VAP-S by these five participants was generally comparable to the performance in a natural environment.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, with no JNI bridging code being demanded.
Collective network for computer structures
Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M
2014-01-07
A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.
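The core operation the collective network accelerates is a global reduction performed as partial results flow up a tree of interconnected nodes. As an illustrative sketch only (not the patented hardware logic; the tree shape and node values are invented), the pattern can be modeled as each node combining its own contribution with the reduced values of its children:

```python
from functools import reduce
from operator import add

# Hypothetical binary tree of processing nodes: node id -> child ids.
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
values = {n: n + 1 for n in tree}  # each node contributes a local value

def collective_reduce(node, op=add):
    """Combine this node's value with the reduced values of its
    children, mimicking a reduction flowing up the network tree."""
    partials = [collective_reduce(child, op) for child in tree[node]]
    return reduce(op, partials, values[node])

total = collective_reduce(0)        # global sum at the root
peak = collective_reduce(0, max)    # the same tree supports other reductions
```

Because every router combines its subtree before forwarding, the root sees the global result after a number of steps proportional to the tree depth rather than the node count, which is where the low-latency claim comes from.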
Collective network for computer structures
Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY
2011-08-16
A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.
Boutiques: a flexible framework to integrate command-line applications in computing platforms.
Glatard, Tristan; Kiar, Gregory; Aumentado-Armstrong, Tristan; Beck, Natacha; Bellec, Pierre; Bernard, Rémi; Bonnet, Axel; Brown, Shawn T; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Das, Samir; Ferreira da Silva, Rafael; Flandin, Guillaume; Girard, Pascal; Gorgolewski, Krzysztof J; Guttmann, Charles R G; Hayot-Sasson, Valérie; Quirion, Pierre-Olivier; Rioux, Pierre; Rousseau, Marc-Étienne; Evans, Alan C
2018-05-01
We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science.
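Boutiques describes each containerized application with a JSON descriptor whose command-line template is filled in from declared inputs. The sketch below builds a minimal descriptor of that general shape in Python; the tool name, input identifiers, and file names are hypothetical, and the exact field set should be checked against the published Boutiques schema.

```python
import json

# Minimal Boutiques-style descriptor (illustrative; field names follow
# the Boutiques schema, but this tool and its inputs are made up).
descriptor = {
    "name": "example_bet",
    "description": "Brain extraction (hypothetical example)",
    "tool-version": "1.0",
    "schema-version": "0.5",
    "command-line": "bet [INPUT] [OUTPUT]",
    "inputs": [
        {"id": "input", "name": "Input file", "type": "File",
         "value-key": "[INPUT]"},
        {"id": "output", "name": "Output name", "type": "String",
         "value-key": "[OUTPUT]"},
    ],
    "output-files": [
        {"id": "brain", "name": "Extracted brain",
         "path-template": "[OUTPUT].nii.gz"},
    ],
}

json.dumps(descriptor)  # the descriptor is plain, serializable JSON

def render(desc, values):
    """Substitute concrete values into the command-line template."""
    cmd = desc["command-line"]
    for inp in desc["inputs"]:
        cmd = cmd.replace(inp["value-key"], str(values[inp["id"]]))
    return cmd

print(render(descriptor, {"input": "t1.nii.gz", "output": "t1_brain"}))
# → bet t1.nii.gz t1_brain
```

The value-key substitution is what lets one descriptor drive validation, execution, and publishing across platforms without platform-specific glue.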
Deferred compensation for tax-exempt entities.
Rich, C; Jenkins, G E
1993-10-01
Many executives in tax-exempt organizations, including healthcare executives, find their tax-advantaged savings opportunities dramatically reduced today compared to previous years. The benefit of employer-sponsored, "qualified" retirement and savings programs has been severely limited by ever-increasing tax restrictions on such plans when they are offered by tax-exempt organizations. And the opportunity for tax-sheltered personal investments has virtually disappeared. One of the last remaining opportunities for tax-advantaged savings in tax-exempt organizations is an employer-sponsored, non-qualified, deferred compensation plan, an option that appears increasingly attractive in light of the recently enacted increased personal tax rates.
Oliveira, Camila R; Lopes Filho, Brandel José P; Sugarman, Michael A; Esteves, Cristiane S; Lima, Margarida Maria B M P; Moret-Tatay, Carmen; Irigaray, Tatiana Q; Argimon, Irani Iracema L
2016-12-13
Cognitive assessment with virtual reality (VR) may have superior ecological validity for older adults compared to traditional pencil-and-paper cognitive assessment. However, few studies have reported the development of VR tasks. The aim of this study was to present the development, feasibility, content validity, and preliminary evidence of construct validity of an ecological task of cognitive assessment for older adults in VR (ECO-VR). The tasks were prepared based on theoretical and clinical backgrounds. We had 29 non-expert judges identify virtual visual stimuli and three-dimensional scenarios, and five expert judges assisted with content analysis and developing instructions. Finally, six older persons participated in three pilot studies and thirty older persons participated in the preliminary study to identify construct validity evidence. Data were analyzed by descriptive statistics and partial correlation. Target stimuli and three-dimensional scenarios were judged adequate, and the content analysis demonstrated that ECO-VR evaluates temporo-spatial orientation, memory, language, and executive functioning. We made significant changes to the instructions after the pilot studies to increase comprehensibility and reduce the completion time. The total score of ECO-VR was positively correlated mainly with performance in executive function (r = .172, p < .05) and memory tests (r = .488, p ≤ .01). The ECO-VR demonstrated feasibility for cognitive assessment in older adults, as well as content and construct validity evidence.
IO Management Controller for Time and Space Partitioning Architectures
NASA Astrophysics Data System (ADS)
Lachaize, Jerome; Deredempt, Marie-Helene; Galizzi, Julien
2015-09-01
The Integrated Modular Avionics (IMA) concept has been industrialized in the aeronautical domain to enable the independent qualification of application software from different suppliers on the same generic computer, this computer being a single terminal in a deterministic network. The concept makes it possible to distribute the applications efficiently and transparently across the network and to size accurately the hardware equipment embedded on the aircraft, through the configuration of the virtual computers and the virtual network. This concept has been studied for the space domain and requirements have been issued [D04],[D05]. Experiments in the space domain have been carried out, at the computer level, through ESA and CNES initiatives [D02][D03]. One possible IMA implementation may use Time and Space Partitioning (TSP) technology. Studies on Time and Space Partitioning [D02] for controlling access to resources such as CPU and memories, and studies on hardware/software interface standardization [D01], showed that for space-domain technologies where I/O components (or IPs) do not offer advanced features such as buffering, descriptors or virtualization, the CPU performance overhead is mainly due to shared interface management in the execution platform and to the high frequency of I/O accesses, the latter leading to a large number of context switches. This paper presents a solution to reduce this execution overhead with an open, modular and configurable controller.
Patient Autonomy for the Management of Chronic Conditions: A Two-Component Re-conceptualization
Naik, Aanand D.; Dyer, Carmel B.; Kunik, Mark E.; McCullough, Laurence B.
2010-01-01
The clinical application of the concept of patient autonomy has centered on the ability to deliberate and make treatment decisions (decisional autonomy) to the virtual exclusion of the capacity to execute the treatment plan (executive autonomy). However, the one-component concept of autonomy is problematic in the context of multiple chronic conditions. Adherence to complex treatments commonly breaks down when patients have functional, educational, and cognitive barriers that impair their capacity to plan, sequence, and carry out tasks associated with chronic care. The purpose of this article is to call for a two-component re-conceptualization of autonomy and to argue that the clinical assessment of capacity for patients with chronic conditions should be expanded to include both autonomous decision making and autonomous execution of the agreed-upon treatment plan. We explain how the concept of autonomy should be expanded to include both decisional and executive autonomy, describe the biopsychosocial correlates of the two-component concept of autonomy, and recommend diagnostic and treatment strategies to support patients with deficits in executive autonomy. PMID:19180389
NASA Astrophysics Data System (ADS)
Zeng, Qingtian; Liu, Cong; Duan, Hua
2016-09-01
Correctness of an emergency response process specification is critical to emergency mission success. Therefore, errors in the specification should be detected and corrected at build-time. In this paper, we propose a resource conflict detection approach and removal strategy for emergency response processes constrained by resources and time. In this kind of emergency response process, there are two timing functions representing the minimum and maximum execution time for each activity, respectively, and many activities require resources to be executed. Based on the RT_ERP_Net model, the earliest start time of each activity and the ideal execution time of the process can be obtained. To detect and remove the resource conflicts in the process, conflict detection algorithms and a priority-activity-first resolution strategy are given. In this way, the real execution time of each activity is obtained and a conflict-free RT_ERP_Net is constructed by adding virtual activities. Experiments show that the proposed resolution strategy can shorten the execution time of the whole process to a great degree.
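The core idea of conflict removal with a priority-activity-first strategy can be sketched generically: when concurrent activities demand more of a resource than is available, the lower-priority activity is delayed until a competitor finishes. This is an illustration in the spirit of the paper, not its RT_ERP_Net algorithm; the activity data, fields, and capacity map are hypothetical.

```python
# Generic sketch of resource-conflict detection with a
# priority-activity-first resolution; not the paper's actual algorithm.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Activity:
    name: str
    start: int               # earliest start time
    duration: int            # use the maximum execution time, to be safe
    resource: Optional[str]  # single resource needed, or None
    priority: int            # higher value = scheduled first

def detect_and_resolve(activities, capacity):
    """Delay lower-priority activities until concurrent usage of every
    resource stays within its capacity; return the final start times."""
    acts = sorted(activities, key=lambda a: (-a.priority, a.start))
    running = []  # (end_time, resource) pairs of already placed activities
    starts = {}
    for a in acts:
        t = a.start
        if a.resource is not None:
            while True:
                holders = [end for end, r in running
                           if r == a.resource and end > t]
                if len(holders) < capacity.get(a.resource, 1):
                    break
                t = min(holders)  # conflict: wait for a competitor to finish
        starts[a.name] = t
        running.append((t + a.duration, a.resource))
    return starts
```

For two activities competing for one ambulance, e.g. `Activity("triage", 0, 3, "ambulance", 2)` and `Activity("transport", 1, 2, "ambulance", 1)` with `{"ambulance": 1}`, the lower-priority activity is pushed back to time 3, the analogue of inserting a virtual delay activity.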
Application-oriented offloading in heterogeneous networks for mobile cloud computing
NASA Astrophysics Data System (ADS)
Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.
2018-04-01
Nowadays, Internet applications have become so complicated that a mobile device needs more computing resources to achieve shorter execution times, yet it is restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite-resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between mobile device and cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines that match the resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
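The trade-off such algorithms optimize can be shown with a back-of-the-envelope break-even check: offloading pays off when transfer time plus remote compute time beats local compute time. This is a simplified illustration, not the MOTM/METC algorithms themselves, and all parameter values are hypothetical.

```python
# Break-even offloading decision: offload when link transfer plus
# remote execution is faster than running locally. Illustrative only.

def local_time(cycles, device_speed):
    """Seconds to run the task on the mobile device."""
    return cycles / device_speed

def offload_time(cycles, cloud_speed, data_bytes, bandwidth):
    """Seconds to ship the input over the link and run it remotely."""
    return data_bytes / bandwidth + cycles / cloud_speed

def should_offload(cycles, device_speed, cloud_speed, data_bytes, bandwidth):
    return offload_time(cycles, cloud_speed, data_bytes, bandwidth) \
           < local_time(cycles, device_speed)

# Example: 5e9 cycles, 1 GHz device vs 10 GHz-equivalent VM,
# 2 MB of input over a 1 MB/s link: 2 s + 0.5 s < 5 s.
print(should_offload(5e9, 1e9, 10e9, 2e6, 1e6))  # → True
```

Note how the decision flips with the data-to-compute ratio: the same task with 10 MB of input over the same link (10.5 s total) is better run locally, which is why per-application-category link selection matters.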
VO-KOREL: A Fourier Disentangling Service of the Virtual Observatory
NASA Astrophysics Data System (ADS)
Škoda, Petr; Hadrava, Petr; Fuchs, Jan
2012-04-01
VO-KOREL is a web service exploiting the technology of the Virtual Observatory to provide astronomers with an intuitive graphical front-end and a distributed computing back-end running the most recent version of the Fourier disentangling code KOREL. The system integrates the ideas of the e-shop basket, conserving the privacy of every user by transfer encryption and access authentication, with the features of a laboratory notebook, allowing easy housekeeping of both input parameters and final results, and it explores the newly emerging technology of cloud computing. While the web-based front-end allows the user to submit data and parameter files, edit parameters, manage a job list, resubmit or cancel running jobs and, above all, watch the text and graphical results of the disentangling process, the main part of the back-end is a simple job-queue submission system executing multiple instances of the FORTRAN code KOREL in parallel. This may easily be extended for GRID-based deployment on massively parallel computing clusters. A short introduction to the underlying technologies is given, briefly mentioning the advantages as well as the bottlenecks of the design used.
Software platform virtualization in chemistry research and university teaching
2009-01-01
Background Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide. PMID:20150997
Software platform virtualization in chemistry research and university teaching.
Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver
2009-11-16
Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.
CLIPS++: Embedding CLIPS into C++
NASA Technical Reports Server (NTRS)
Obermeyer, Lance; Miranker, Daniel P.
1994-01-01
This paper describes a set of C++ extensions to the CLIPS language and their embodiment in CLIPS++. These extensions and the implementation approach of CLIPS++ provide a new level of embeddability with C and C++. These extensions are a C++ include statement and a defcontainer construct: (include (c++-header-file.h)) and (defcontainer (c++-type)). The include construct allows C++ functions to be embedded in both the LHS and RHS of CLIPS rules. The header file in an include construct is the same header file the programmer uses for his/her own C++ code, independent of CLIPS. The defcontainer construct allows the inference engine to treat C++ class instances as CLIPS deftemplate facts. Consequently, existing C++ class libraries may be transparently imported into CLIPS. These C++ types may use advanced features like inheritance, virtual functions, and templates. The implementation has been tested with several class libraries, including Rogue Wave Software's Tools.h++, GNU's libg++, and USL's C++ Standard Components. The execution speed of CLIPS++ has been determined to be 5 to 700 times the execution speed of CLIPS 6.0 (10 to 20X typical).
Virtual Integrated Planning and Execution Resource System (VIPERS): The High Ground of 2025
1996-04-01
earth to one meter will allow modeling of enemy actions, to a degree only dreamed of before. For example, before starting an air campaign, an...that is facilitated by the system. Interaction may take the form of the written word, voice, video conferencing, or mental telepathy. Control speaks
ERIC Educational Resources Information Center
Shen, Hao-Yu; Shen, Bo; Hardacre, Christopher
2013-01-01
A systematic approach to develop the teaching of instrumental analytical chemistry is discussed, as well as a conceptual framework for organizing and executing lectures and a laboratory course. Three main components are used in this course: theoretical knowledge developed in the classroom, simulations via a virtual laboratory, and practical…
ERIC Educational Resources Information Center
Belz, Julie A.; Muller-Hartmann, Andreas
2003-01-01
Examines how social, cultural, and institutional affordances and constraints in a telecollaborative foreign language learning partnership shape the agency of online teachers. Details how various aspects of school and schooling impact the negotiation, execution and management of a German-American virtual course from the perspectives of the…
Virtual Simulation Capability for Deployable Force Protection Analysis (VSCDFP) FY 15 Plan
2014-07-30
Unmanned Aircraft Systems (SUAS) outfitted with a baseline two-axis steerable "Infini-spin" electro-optic/infrared (EO/IR) sensor payload. The current...Payload (EPRP) enhanced sensor system to the Puma SUAS will be beneficial for Soldiers executing RCP mission sets. • Develop the RCP EPRP Concept of
Affordable Online Maths Tuition: Evaluation Report and Executive Summary
ERIC Educational Resources Information Center
Torgerson, Carole; Ainsworth, Hannah; Buckley, Hannah; Hampden-Thompson, Gillen; Hewitt, Catherine; Humphry, Deborah; Jefferson, Laura; Mitchell, Natasha; Torgerson, David
2016-01-01
"Affordable Online Maths Tuition" is a one-to-one tutoring programme where pupils receive maths tuition over the internet from trained maths graduates in India and Sri Lanka. It is delivered by the organisation Third Space Learning (TSL). Tutors and pupils communicate using video calling and a secure virtual classroom. Before each…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William Eugene
These slides describe different strategies for installing Python software. Although I am a big fan of Python software development, robust strategies for software installation remain a challenge. This talk describes several different installation scenarios. The Good: the user has administrative privileges - Installing on Windows with an installer executable, Installing with a Linux application utility, Installing a Python package from the PyPI repository, and Installing a Python package from source. The Bad: the user does not have administrative privileges - Using a virtual environment to isolate package installations, and Using an installer executable on Windows with a virtual environment. The Ugly: the user needs to install an extension package from source - Installing a Python extension package from source, and PyCoinInstall - Managing builds for Python extension packages. The last item, PyCoinInstall, describes a utility being developed for the COIN-OR software, which is used within the operations research community. COIN-OR includes a variety of Python and C++ software packages, and this script uses a simple plug-in system to support the management of package builds and installation.
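The "Bad" scenario, isolating installs without administrative privileges, can be reproduced with the standard-library venv module. The sketch below creates a throwaway environment and locates its private interpreter; the directory names are illustrative.

```python
# Create an isolated virtual environment with the stdlib venv module --
# the isolation strategy recommended when the user lacks admin rights.
# Paths here are illustrative temporaries.
import os
import sys
import tempfile
import venv

env_dir = os.path.join(tempfile.mkdtemp(), "myenv")
venv.create(env_dir, with_pip=False)  # with_pip=True also bootstraps pip

# The environment gets its own python executable; packages installed
# with its pip land under env_dir, not in the system site-packages.
bindir = "Scripts" if sys.platform == "win32" else "bin"
exe = "python.exe" if sys.platform == "win32" else "python"
interpreter = os.path.join(env_dir, bindir, exe)
print(os.path.exists(interpreter))  # → True
```

Running `interpreter -m pip install <package>` (after creating the environment with `with_pip=True`) then installs into the environment only, which is exactly the per-user isolation the slides describe.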
Lalonde, Gabrielle; Henry, Mylène; Drouin-Germain, Anne; Nolin, Pierre; Beauchamp, Miriam H
2013-09-30
Paper-pencil type tests are traditionally used in the assessment of executive functions (EF); however, concerns have been raised as to whether these represent actual functioning in everyday life. Virtual reality (VR) environments offer a novel alternative for the assessment of cognitive function and therefore have the potential to enhance the evaluation of EF by presenting individuals with stimuli that come closer to reproducing everyday situations. The aims of this study were to (1) establish which traditional paper-pencil EF tests from the Delis-Kaplan Executive Function System (D-KEFS) are associated with performance on a VR-Stroop task and (2) compare D-KEFS tests and the VR-Stroop task in their ability to predict everyday EF and behavior, as measured by the Behavioral Rating Inventory of Executive Function (BRIEF) and the Child Behavior Checklist (CBCL). Thirty-eight typically developing adolescents aged between 13 and 17 years completed the ClinicaVR: Classroom-Stroop, and five D-KEFS subtests (Trail Making, Tower, Twenty Questions, Verbal Fluency and Color-Word Interference). Their parents completed the BRIEF and CBCL questionnaires. The results indicate that performance on the VR-Stroop task correlates with both traditional forms of EF assessment (D-KEFS, BRIEF). In particular, performance on the VR-Stroop task was closely associated with performance on a paper-pencil inhibition task. Furthermore, VR-Stroop performance more accurately reflected everyday behavioral EF than paper-pencil tasks. VR appears to offer an ecological perspective on everyday functioning and could be seen as complementary to traditional tests in the assessment of complex cognitive abilities. Copyright © 2013 Elsevier B.V. All rights reserved.
Thomas, Thaddeus P.; Anderson, Donald D.; Willis, Andrew R.; Liu, Pengcheng; Marsh, J. Lawrence; Brown, Thomas D.
2010-01-01
Background Highly comminuted intra-articular fractures are complex and difficult injuries to treat. Once emergent care is rendered, the definitive treatment objective is to restore the original anatomy while minimizing surgically induced trauma. Operations that use limited or percutaneous approaches help preserve tissue vitality, but reduced visibility makes reconstruction more difficult. A pre-operative plan of how comminuted fragments would best be re-positioned to restore anatomy helps in executing a successful reduction. Methods In this study, methods for virtually reconstructing a tibial plafond fracture were developed and applied to clinical cases. Building upon previous benchtop work, novel image analysis techniques and puzzle solving algorithms were developed for clinical application. Specialty image analysis tools were used to segment the fracture fragment geometries from CT data. The original anatomy was then restored by matching fragment native (periosteal and subchondral) bone surfaces to an intact template, generated from the uninjured contralateral limb. Findings Virtual reconstructions obtained for ten tibial plafond fracture cases had average alignment errors of 0.39 (0.5 standard deviation) mm. In addition to precise reduction planning, 3D puzzle solutions can help identify articular deformities and bone loss. Interpretation The results from this study indicate that 3D puzzle solving provides a powerful new tool for planning the surgical reconstruction of comminuted articular fractures. PMID:21215501
Digital watermarking for secure and adaptive teleconferencing
NASA Astrophysics Data System (ADS)
Vorbrueggen, Jan C.; Thorwirth, Niels
2002-04-01
The EC-sponsored project ANDROID aims to develop a management system for secure active networks. Active network means allowing the network's customers to execute code (Java-based so-called proxylets) on parts of the network infrastructure. Secure means that the network operator nonetheless retains full control over the network and its resources, and that proxylets use ANDROID-developed facilities to provide secure applications. Management is based on policies and allows autonomous, distributed decisions and actions to be taken. Proxylets interface with the system via policies; among actions they can take is controlling execution of other proxylets or redirection of network traffic. Secure teleconferencing is used as the application to demonstrate the approach's advantages. A way to control a teleconference's data streams is to use digital watermarking of the video, audio and/or shared-whiteboard streams, providing an imperceptible and inseparable side channel that delivers information from originating or intermediate stations to downstream stations. Depending on the information carried by the watermark, these stations can take many different actions. Examples are forwarding decisions based on security classifications (possibly time-varying) at security boundaries, set-up and tear-down of virtual private networks, intelligent and adaptive transcoding, recorder or playback control (e.g., speaking off the record), copyright protection, and sender authentication.
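The "imperceptible and inseparable side channel" idea can be illustrated with a toy least-significant-bit (LSB) watermark: control information rides in the low bits of the media samples, where downstream stations can read it while viewers cannot see it. Real video watermarking is far more robust than this; the frame buffer and payload below are hypothetical stand-ins.

```python
# Toy LSB watermark: embed a byte string into the low bits of a frame
# buffer and recover it downstream. Illustrative only -- production
# watermarks must survive transcoding, which plain LSB does not.

def embed(frame: bytearray, payload: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(frame), "frame too small for payload"
    out = bytearray(frame)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite the LSB only
    return out

def extract(frame: bytearray, n_bytes: int) -> bytes:
    bits = [frame[i] & 1 for i in range(n_bytes * 8)]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8))
                 for b in range(n_bytes))

frame = bytearray(range(64))            # stand-in for pixel data
marked = embed(frame, b"VPN:on")        # hypothetical control message
print(extract(marked, 6))               # → b'VPN:on'
```

Each sample changes by at most one intensity level, so the mark is imperceptible, yet an intermediate station could act on the recovered message, e.g. setting up a virtual private network as in the scenarios above.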
NASA Astrophysics Data System (ADS)
Johnson, Tony; Metcalfe, Jason; Brewster, Benjamin; Manteuffel, Christopher; Jaswa, Matthew; Tierney, Terrance
2010-04-01
The proliferation of intelligent systems in today's military demands increased focus on the optimization of human-robot interactions. Traditional studies in this domain involve large-scale field tests that require humans to operate semiautomated systems under varying conditions within military-relevant scenarios. However, provided that adequate constraints are employed, modeling and simulation can be a cost-effective alternative and supplement. The current presentation discusses a simulation effort that was executed in parallel with a field test with Soldiers operating military vehicles in an environment that represented key elements of the true operational context. In this study, "constructive" human operators were designed to represent average Soldiers executing supervisory control over an intelligent ground system. The constructive Soldiers were simulated performing the same tasks as those performed by real Soldiers during a directly analogous field test. Exercising the models in a high-fidelity virtual environment provided predictive results that represented actual performance in certain aspects, such as situational awareness, but diverged in others. These findings largely reflected the quality of modeling assumptions used to design behaviors and the quality of information available on which to articulate principles of operation. Ultimately, predictive analyses partially supported expectations, with deficiencies explicable via Soldier surveys, experimenter observations, and previously-identified knowledge gaps.
Heyworth, Leonie; Clark, Justice; Marcello, Thomas B; Paquin, Allison M; Stewart, Max; Archambeault, Cliona; Simon, Steven R
2013-12-02
Virtual (non-face-to-face) medication reconciliation strategies may reduce adverse drug events (ADEs) among vulnerable ambulatory patients. Understanding provider perspectives on the use of technology for medication reconciliation can inform the design of patient-centered solutions to improve ambulatory medication safety. The aim of the study was to describe primary care providers' experiences of ambulatory medication reconciliation and secure messaging (secure email between patients and providers), and to elicit perceptions of a virtual medication reconciliation system using secure messaging (SM). This was a qualitative study using semi-structured interviews. From January 2012 to May 2012, we conducted structured observations of primary care clinical activities and interviewed 15 primary care providers within a Veterans Affairs Healthcare System in Boston, Massachusetts (USA). We carried out content analysis informed by the grounded theory. Of the 15 participating providers, 12 were female and 11 saw 10 or fewer patients in a typical workday. Experiences and perceptions elicited from providers during in-depth interviews were organized into 12 overarching themes: 4 themes for experiences with medication reconciliation, 3 themes for perceptions on how to improve ambulatory medication reconciliation, and 5 themes for experiences with SM. Providers generally recognized medication reconciliation as a valuable component of primary care delivery and all agreed that medication reconciliation following hospital discharge is a key priority. Most providers favored delegating the responsibility for medication reconciliation to another member of the staff, such as a nurse or a pharmacist. The 4 themes related to ambulatory medication reconciliation were (1) the approach to complex patients, (2) the effectiveness of medication reconciliation in preventing ADEs, (3) challenges to completing medication reconciliation, and (4) medication reconciliation during transitions of care. 
Specifically, providers emphasized the importance of medication reconciliation at the post-hospital visit. Providers indicated that assistance from a caregiver (eg, a family member) for medication reconciliation was helpful for complex or elderly patients and that patients' social or cognitive factors often made medication reconciliation challenging. Regarding providers' use of SM, about half reported using SM frequently, but all felt that it improved their clinical workflow and nearly all providers were enthusiastic about a virtual medication reconciliation system, such as one using SM. All providers thought that such a system could reduce ADEs. Although providers recognize the importance and value of ambulatory medication reconciliation, various factors make it difficult to execute this task effectively, particularly among complex or elderly patients and patients with complicated social circumstances. Many providers favor enlisting the support of pharmacists or nurses to perform medication reconciliation in the outpatient setting. In general, providers are enthusiastic about the prospect of using secure messaging for medication reconciliation, particularly during transitions of care, and believe a system of virtual medication reconciliation could reduce ADEs.
Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.
2012-12-01
The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG[27] portal where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy to be used by the community. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.
VPipe: Virtual Pipelining for Scheduling of DAG Stream Query Plans
NASA Astrophysics Data System (ADS)
Wang, Song; Gupta, Chetan; Mehta, Abhay
There are data streams all around us that can be harnessed for tremendous business and personal advantage. For an enterprise-level stream processing system such as CHAOS [1] (Continuous, Heterogeneous Analytic Over Streams), handling of complex query plans with resource constraints is challenging. While several scheduling strategies exist for stream processing, efficient scheduling of complex DAG query plans is still largely unsolved. In this paper, we propose a novel execution scheme for scheduling complex directed acyclic graph (DAG) query plans with meta-data enriched stream tuples. Our solution, called Virtual Pipelined Chain (or VPipe Chain for short), effectively extends the "Chain" pipelining scheduling approach to complex DAG query plans.
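The underlying idea of pipelining a DAG plan can be sketched as decomposing the operator graph into linear chains, each of which can then be scheduled as a pipeline. This greedy decomposition is a simplified illustration, not the VPipe Chain algorithm itself, and the operator names are hypothetical.

```python
# Greedy decomposition of a DAG of query operators into linear chains;
# a simplified illustration of chain-based pipelining, not the paper's
# VPipe scheduler. The DAG is given as adjacency lists.
from collections import deque

def chains(dag):
    """Split a DAG into linear chains by following a path while the
    current node has exactly one unvisited successor."""
    indeg = {n: 0 for n in dag}
    for succs in dag.values():
        for s in succs:
            indeg[s] += 1
    roots = deque(n for n, d in indeg.items() if d == 0)
    seen, result = set(), []
    while roots:
        n = roots.popleft()
        if n in seen:
            continue
        chain = []
        while n is not None and n not in seen:
            seen.add(n)
            chain.append(n)
            nxt = [s for s in dag[n] if s not in seen]
            if len(nxt) == 1:
                n = nxt[0]          # extend the pipeline
            else:
                roots.extend(nxt)   # branch/sink: successors start new chains
                n = None
        result.append(chain)
    return result

plan = {"scan": ["filter"], "filter": ["join"],
        "scan2": ["join"], "join": ["agg"], "agg": []}
print(chains(plan))  # → [['scan', 'filter', 'join', 'agg'], ['scan2']]
```

Tuples then flow through each chain without inter-operator queuing; only chain boundaries (here, the second input of the join) need buffering, which is the saving that chain-style scheduling targets.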
Diers, Martin; Kamping, Sandra; Kirsch, Pinar; Rance, Mariela; Bekrater-Bodmann, Robin; Foell, Jens; Trojan, Joerg; Fuchs, Xaver; Bach, Felix; Maaß, Heiko; Cakmak, Hüseyin; Flor, Herta
2015-01-12
Extended viewing of movements of one's intact limb in a mirror as well as motor imagery have been shown to decrease pain in persons with phantom limb pain or complex regional pain syndrome and to increase the movement ability in hemiparesis following stroke. In addition, mirrored movements differentially activate sensorimotor cortex in amputees with and without phantom limb pain. However, using a so-called mirror box has technical limitations, some of which can be overcome by virtual reality applications. We developed a virtual reality mirror box application and evaluated its comparability to a classical mirror box setup. We applied both paradigms to 20 healthy controls and analyzed vividness and authenticity of the illusion as well as brain activation patterns. In both conditions, subjects reported similar intensities for the sensation that movements of the virtual left hand felt as if they were executed by their own left hand. We found activation in the primary sensorimotor cortex contralateral to the actual movement, with stronger activation for the virtual reality 'mirror box' compared to the classical mirror box condition, as well as activation in the primary sensorimotor cortex contralateral to the mirrored/virtual movement. We conclude that a virtual reality application of the mirror box is viable and that it might be useful for future research. Copyright © 2014 Elsevier B.V. All rights reserved.
Tarnanas, Ioannis; Schlee, Winfried; Tsolaki, Magda; Müri, René; Mosimann, Urs; Nef, Tobias
2013-08-06
Dementia is a multifaceted disorder that impairs cognitive functions, such as memory, language, and executive functions necessary to plan, organize, and prioritize tasks required for goal-directed behaviors. In most cases, individuals with dementia experience difficulties interacting with physical and social environments. The purpose of this study was to establish ecological validity and initial construct validity of a fire evacuation Virtual Reality Day-Out Task (VR-DOT) environment based on performance profiles as a screening tool for early dementia. The objectives were (1) to examine the relationships among the performances of 3 groups of participants in the VR-DOT and traditional neuropsychological tests employed to assess executive functions, and (2) to compare the performance of participants with mild Alzheimer's-type dementia (AD) to those with amnestic single-domain mild cognitive impairment (MCI) and healthy controls in the VR-DOT and traditional neuropsychological tests used to assess executive functions. We hypothesized that the 2 cognitively impaired groups would have distinct performance profiles and show significantly impaired independent functioning in activities of daily living (ADL) compared to the healthy controls. The study population included 3 groups: 72 healthy control elderly participants, 65 amnestic MCI participants, and 68 mild AD participants. A natural user interface framework based on a fire evacuation VR-DOT environment was used for assessing physical and cognitive abilities of seniors over 3 years. VR-DOT focuses on the subtle errors and patterns in performing everyday activities and has the advantage of not depending on a subjective rating of an individual person. We further assessed functional capacity with neuropsychological tests (including measures of attention, memory, working memory, executive functions, language, and depression). 
We also evaluated performance in finger tapping, grip strength, stride length, gait speed, and chair stands separately and while performing VR-DOTs in order to correlate performance in these measures with VR-DOTs because performance while navigating a virtual environment is a valid and reliable indicator of cognitive decline in elderly persons. The mild AD group was more impaired than the amnestic MCI group, and both were more impaired than healthy controls. The novel VR-DOT functional index correlated strongly with standard cognitive and functional measurements, such as mini-mental state examination (MMSE; rho=0.26, P=.01) and Bristol Activities of Daily Living (ADL) scale scores (rho=0.32, P=.001). Functional impairment is a defining characteristic of predementia and is partly dependent on the degree of cognitive impairment. The novel virtual reality measures of functional ability seem more sensitive to functional impairment than qualitative measures in predementia, thus accurately differentiating from healthy controls. We conclude that VR-DOT is an effective tool for discriminating predementia and mild AD from controls by detecting differences in terms of errors, omissions, and perseverations while measuring ADL functional ability.
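The correlations reported above (e.g., rho=0.32 with Bristol ADL scale scores) are Spearman rank correlations, which are Pearson correlations computed on the ranks of the data. A minimal stdlib-only sketch of the statistic (not the authors' analysis code) follows; the data in the test are made up.

```python
def _ranks(xs):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Rank correlation is the usual choice for ordinal clinical scores such as MMSE and ADL scale totals, since it does not assume a linear relationship.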
Anderson-Hanley, Cay; Arciero, Paul J; Westen, Sarah C; Nimon, Joseph; Zimmerman, Earl
2012-07-01
This quasi-experimental exploratory study investigated neuropsychological effects of exercise among older adults with diabetes mellitus (DM) compared with adults without diabetes (non-DM), and it examined the feasibility of using a stationary bike exergame as a form of exercise for older adults with and without diabetes. It is a secondary analysis that uses a small dataset from a larger randomized clinical trial (RCT) called the Cybercycle Study, which compared cognitive and physiological effects of traditional stationary cycling versus cybercycling. In the RCT and the secondary analysis, older adults living in eight independent living retirement facilities in the state of New York were enrolled in the study and assigned to exercise five times per week for 45 min per session (two times per week was considered acceptable for retention in the study) by using a stationary bicycle over the course of 3 months. They were randomly assigned to use either a standard stationary bicycle or a "cybercycle" with a video screen that displayed virtual terrains, virtual tours, and racing games with virtual competitors. For this secondary analysis, participants in the RCT who had type 2 DM (n = 10) were compared with age-matched non-DM exercisers (n = 10). The relationship between exercise and executive function (i.e., Color Trails 2, Digit Span Backwards, and Stroop C tests) was examined for DM and non-DM patients. Older adults with and without diabetes were able to use cybercycles successfully and complete the study, so the feasibility of this form of exercise for this population was supported. However, in contrast with the larger RCT, this small subset did not demonstrate statistically significant differences in executive function between the participants who used cybercycles and those who used stationary bikes with no games or virtual content on a video screen. 
Therefore, the study combined the two groups and called them "exercisers" and compared cognitive outcomes for DM versus non-DM patients. As predicted, exercisers with DM exhibited significant gains in executive function as measured by the Color Trails 2 test, controlling for age and education, while non-DM exercisers did not significantly gain in this measure [group × time interaction, F(1,16) = 9.75; p = .007]. These preliminary results support the growing literature that finds that exercise may improve cognition among older adults with DM. Additional research is needed to clarify why certain aspects of executive function might be differentially affected. The current findings may encourage physicians to prescribe exercise for diabetes management and may help motivate DM patients' compliance for engaging in physical activity. © 2012 Diabetes Technology Society.
Anderson-Hanley, Cay; Arciero, Paul J.; Westen, Sarah C.; Nimon, Joseph; Zimmerman, Earl
2012-01-01
Objective This quasi-experimental exploratory study investigated neuropsychological effects of exercise among older adults with diabetes mellitus (DM) compared with adults without diabetes (non-DM), and it examined the feasibility of using a stationary bike exergame as a form of exercise for older adults with and without diabetes. It is a secondary analysis that uses a small dataset from a larger randomized clinical trial (RCT) called the Cybercycle Study, which compared cognitive and physiological effects of traditional stationary cycling versus cybercycling. Methods In the RCT and the secondary analysis, older adults living in eight independent living retirement facilities in the state of New York were enrolled in the study and assigned to exercise five times per week for 45 min per session (two times per week was considered acceptable for retention in the study) by using a stationary bicycle over the course of 3 months. They were randomly assigned to use either a standard stationary bicycle or a “cybercycle” with a video screen that displayed virtual terrains, virtual tours, and racing games with virtual competitors. For this secondary analysis, participants in the RCT who had type 2 DM (n = 10) were compared with age-matched non-DM exercisers (n = 10). The relationship between exercise and executive function (i.e., Color Trails 2, Digit Span Backwards, and Stroop C tests) was examined for DM and non-DM patients. Results Older adults with and without diabetes were able to use cybercycles successfully and complete the study, so the feasibility of this form of exercise for this population was supported. However, in contrast with the larger RCT, this small subset did not demonstrate statistically significant differences in executive function between the participants who used cybercycles and those who used stationary bikes with no games or virtual content on a video screen. 
Therefore, the study combined the two groups and called them “exercisers” and compared cognitive outcomes for DM versus non-DM patients. As predicted, exercisers with DM exhibited significant gains in executive function as measured by the Color Trails 2 test, controlling for age and education, while non-DM exercisers did not significantly gain in this measure [group × time interaction, F(1,16) = 9.75; p = .007]. Conclusions These preliminary results support the growing literature that finds that exercise may improve cognition among older adults with DM. Additional research is needed to clarify why certain aspects of executive function might be differentially affected. The current findings may encourage physicians to prescribe exercise for diabetes management and may help motivate DM patients’ compliance for engaging in physical activity. PMID:22920811
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based Cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices on the Cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous research, while our algorithm shows 2.5 times faster execution time compared to a CPU-only detection algorithm.
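The K-NN beat classification step can be sketched in plain Python as below. This is a serial illustration, not the paper's CUDA implementation, and the two-element feature vectors in the test (e.g., an RR-interval-like and a QRS-shape-like feature) are hypothetical.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify one beat's feature vector by majority vote of its
    k nearest training beats under squared Euclidean distance.

    `train` is a list of (feature_vector, label) pairs, e.g. with
    labels "N" (normal) and "V" (ventricular) beat classes.
    """
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(feats, query)), label)
        for feats, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

The distance computations for each query beat are independent, which is what makes the algorithm amenable to the CUDA parallelization the paper describes.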
Boutiques: a flexible framework to integrate command-line applications in computing platforms
Glatard, Tristan; Kiar, Gregory; Aumentado-Armstrong, Tristan; Beck, Natacha; Bellec, Pierre; Bernard, Rémi; Bonnet, Axel; Brown, Shawn T; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Das, Samir; Ferreira da Silva, Rafael; Flandin, Guillaume; Girard, Pascal; Gorgolewski, Krzysztof J; Guttmann, Charles R G; Hayot-Sasson, Valérie; Quirion, Pierre-Olivier; Rioux, Pierre; Rousseau, Marc-Étienne; Evans, Alan C
2018-01-01
We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science. PMID:29718199
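To give a flavor of the descriptor-driven approach, here is a minimal sketch in the spirit of Boutiques' JSON language, written as a Python dict, together with a naive command-line renderer. The exact schema keys should be checked against the Boutiques specification; the tool name, template, and renderer here are hypothetical illustrations, not part of Boutiques itself.

```python
# A hypothetical minimal descriptor: a command-line template plus
# inputs whose "value-key" placeholders appear in the template.
descriptor = {
    "name": "example_tool",
    "tool-version": "1.0.0",
    "command-line": "smooth [INPUT_FILE] --fwhm [FWHM]",
    "inputs": [
        {"id": "input_file", "type": "File", "value-key": "[INPUT_FILE]"},
        {"id": "fwhm", "type": "Number", "value-key": "[FWHM]"},
    ],
}

def render_command(desc, values):
    """Substitute concrete input values into the command-line template."""
    cmd = desc["command-line"]
    for inp in desc["inputs"]:
        cmd = cmd.replace(inp["value-key"], str(values[inp["id"]]))
    return cmd
```

A platform holding such a descriptor can construct the concrete invocation without knowing anything tool-specific, which is the integration property the abstract describes.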
Architecture for the Next Generation System Management Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallard, Jerome; Lebre, I Adrien; Morin, Christine
2011-01-01
To get more results or greater accuracy, computational scientists execute their applications on distributed computing platforms such as Clusters, Grids and Clouds. These platforms are different in terms of hardware and software resources as well as locality: some span across multiple sites and multiple administrative domains whereas others are limited to a single site/domain. As a consequence, in order to scale their applications up, the scientists have to manage technical details for each target platform. From our point of view, this complexity should be hidden from the scientists who, in most cases, would prefer to focus on their research rather than spending time dealing with platform configuration concerns. In this article, we advocate for a system management framework that aims to automatically set up the whole run-time environment according to the application's needs. The main difference with regard to usual approaches is that they generally only focus on the software layer whereas we address both the hardware and the software expectations through a unique system. For each application, scientists describe their requirements through the definition of a Virtual Platform (VP) and a Virtual System Environment (VSE). Relying on the VP/VSE definitions, the framework is in charge of: (i) the configuration of the physical infrastructure to satisfy the VP requirements, (ii) the setup of the VP, and (iii) the customization of the execution environment (VSE) upon the former VP. We propose a new formalism that the system can rely upon to successfully perform each of these three steps without burdening the user with the specifics of the configuration for the physical resources and system management tools. This formalism leverages Goldberg's theory for recursive virtual machines by introducing new concepts based on system virtualization (identity, partitioning, aggregation) and emulation (simple, abstraction). 
This enables the definition of complex VP/VSE configurations without making assumptions about the hardware and the software resources. For each requirement, the system executes the corresponding operation with the appropriate management tool. As a proof of concept, we implemented a first prototype that currently interacts with several system management tools (e.g., OSCAR, the Grid 5000 toolkit, and XtreemOS) and that can be easily extended to integrate new resource brokers or cloud systems such as Nimbus, OpenNebula or Eucalyptus.
Carrieri, Marika; Petracca, Andrea; Lancia, Stefania; Basso Moro, Sara; Brigadoi, Sabrina; Spezialetti, Matteo; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive vascular-based functional neuroimaging technology that can assess, simultaneously from multiple cortical areas, concentration changes in oxygenated-deoxygenated hemoglobin at the level of the cortical microcirculation blood vessels. fNIRS, with its high degree of ecological validity and its very limited requirement of physical constraints to subjects, could represent a valid tool for monitoring cortical responses in the research field of neuroergonomics. In virtual reality (VR) real situations can be replicated with greater control than those obtainable in the real world. Therefore, VR is the ideal setting where studies about neuroergonomics applications can be performed. The aim of the present study was to investigate, by a 20-channel fNIRS system, the dorsolateral/ventrolateral prefrontal cortex (DLPFC/VLPFC) in subjects while performing a demanding VR hand-controlled task (HCT). Considering the complexity of the HCT, its execution should require the attentional resources allocation and the integration of different executive functions. The HCT simulates the interaction with a real, remotely-driven, system operating in a critical environment. The hand movements were captured by a high spatial and temporal resolution 3-dimensional (3D) hand-sensing device, the LEAP motion controller, a gesture-based control interface that could be used in VR for tele-operated applications. Fifteen University students were asked to guide, with their right hand/forearm, a virtual ball (VB) over a virtual route (VROU) reproducing a 42 m narrow road including some critical points. The subjects tried to travel as long as possible without making VB fall. The distance traveled by the guided VB was 70.2 ± 37.2 m. The less skilled subjects failed several times in guiding the VB over the VROU. Nevertheless, a bilateral VLPFC activation, in response to the HCT execution, was observed in all the subjects. 
No correlation was found between the distance traveled by the guided VB and the corresponding cortical activation. These results confirm the suitability of fNIRS technology to objectively evaluate cortical hemodynamic changes occurring in VR environments. Future studies could give a contribution to a better understanding of the cognitive mechanisms underlying human performance either in expert or non-expert operators during the simulation of different demanding/fatiguing activities. PMID:26909033
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed with low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, and without requiring JNI bridging code. PMID:25110745
Challenges to the development of complex virtual reality surgical simulations.
Seymour, N E; Røtnes, J S
2006-11-01
Virtual reality simulation in surgical training has become more widely used and intensely investigated in an effort to develop safer, more efficient, measurable training processes. The development of virtual reality simulation of surgical procedures has begun, but well-described technical obstacles must be overcome to permit varied training in a clinically realistic computer-generated environment. These challenges include development of realistic surgical interfaces and physical objects within the computer-generated environment, modeling of realistic interactions between objects, rendering of the surgical field, and development of signal processing for complex events associated with surgery. Of these, the realistic modeling of tissue objects that are fully responsive to surgical manipulations is the most challenging. Threats to early success include relatively limited resources for development and procurement, as well as smaller potential for return on investment than in other simulation industries that face similar problems. Despite these difficulties, steady progress continues to be made in these areas. If executed properly, virtual reality offers inherent advantages over other training systems in creating a realistic surgical environment and facilitating measurement of surgeon performance. Once developed, complex new virtual reality training devices must be validated for their usefulness in formative training and assessment of skill to be established.
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Axdahl, Erik L.; Cabell, Karen F.
2014-01-01
With the increasing costs of physics experiments and the simultaneous increase in availability and maturity of computational tools, it is not surprising that computational fluid dynamics (CFD) is playing an increasingly important role, not only in post-test investigations, but also in the early stages of experimental planning. This paper describes a CFD-based effort executed in close collaboration between computational fluid dynamicists and experimentalists to develop a virtual experiment during the early planning stages of the Enhanced Injection and Mixing project at NASA Langley Research Center. This project aims to investigate supersonic combustion ramjet (scramjet) fuel injection and mixing physics, improve the understanding of underlying physical processes, and develop enhancement strategies and functional relationships relevant to flight Mach numbers greater than 8. The purpose of the virtual experiment was to provide flow field data to aid in the design of the experimental apparatus and the in-stream rake probes, to verify the nonintrusive measurements based on NO-PLIF, and to perform pre-test analysis of quantities obtainable from the experiment and CFD. The approach also allowed the joint team to develop common data processing and analysis tools, and to test research ideas. The virtual experiment consisted of a series of Reynolds-averaged simulations (RAS). These simulations included the facility nozzle, the experimental apparatus with a baseline strut injector, and the test cabin. Pure helium and helium-air mixtures were used to determine the efficacy of different inert gases to model hydrogen injection. The results of the simulations were analyzed by computing mixing efficiency, total pressure recovery, and stream thrust potential. As the experimental effort progresses, the simulation results will be compared with the experimental data to calibrate the modeling constants present in the CFD and validate simulation fidelity. 
CFD will also be used to investigate different injector concepts, improve understanding of the flow structure and flow physics, and develop functional relationships. Both RAS and large eddy simulations (LES) are planned for post-test analysis of the experimental data.
Artistic creativity, style and brain disorders.
Bogousslavsky, Julien
2005-01-01
The production of novel, motivated or useful material defines creativity, which appears to be one of the higher, specific, human brain functions. While creativity can express itself in virtually any domain, art might particularly well illustrate how creativity may be modulated by the normal or pathological brain. Evidence emphasizes global brain functioning in artistic creativity and output, but critical steps which link perception processing to execution of a work, such as extraction-abstraction, as well as major developments of non-esthetic values attached to art also underline complex activation and inhibition processes mainly localized in the frontal lobe. Neurological diseases in artists provide a unique opportunity to study brain-creativity relationships, in particular through the stylistic changes which may develop after brain lesion. (c) 2005 S. Karger AG, Basel
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-12
... Deafness and Other Communication Disorders Special Emphasis Panel, LRP's. Date: May 3, 2010. Time: 1 p.m..., 6120 Executive Blvd., Rockville, MD 20852. (Virtual Meeting.) Contact Person: Sheo Singh, PhD...'s Clinical Trial. Date: May 7, 2010. Time: 4 p.m. to 5 p.m. Agenda: To review and evaluate grant...
Executive dysfunction in schizophrenia and its association with mentalizing abilities.
Gavilán, José M; García-Albea, José E
2015-01-01
Patients with schizophrenia have been found to be impaired in important aspects of their basic and social cognition. Our aim in this study is to explore the relationship between executive function (EF) and theory of mind (ToM) deficiencies in patients who suffer from the illness. Twenty-two Spanish-speaking inpatients and 22 healthy controls matched in age, sex, education, language dominance, and premorbid IQ were assessed in EF and ToM abilities. The former were assessed using 10 tasks that covered 5 cognitive dimensions and the latter using 3 different tasks. Correlation analyses were used to explore the level of association between executive and mentalizing abilities. A series of discriminant function analyses were carried out to examine the relative contribution of each executive and mentalizing task to discriminating between patients and controls. Patients showed impairments in both executive and ToM abilities. The correlation analyses showed a virtual absence of association between EF and ToM abilities within the group of patients, and an almost opposite pattern within the healthy group. ToM performance was more accurate than executive performance at discriminating patients from controls. Although EF and ToM deficits appear together in schizophrenia, they seem to belong to different and relatively independent cognitive domains. Copyright © 2013 SEP y SEPB. Published by Elsevier España. All rights reserved.
Air-condition Control System of Weaving Workshop Based on LabVIEW
NASA Astrophysics Data System (ADS)
Song, Jian
A LabVIEW-based air-conditioning measurement and control system is proposed to effectively control the environmental targets in the weaving workshop. Built on virtual instrument technology using the LabVIEW development platform from NI, the system is composed of an upper PC, central control nodes based on the CC2530, sensor nodes, sensor modules, and executive devices. A fuzzy control algorithm is employed to achieve accurate control of temperature and humidity. A user-friendly man-machine interaction interface is designed with virtual instrument technology at the core of the software. Experiments show that the measurement and control system runs stably and reliably and meets the functional requirements for controlling the weaving workshop.
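A fuzzy control step of the kind described can be sketched as follows: fuzzify the error with triangular membership functions, fire a small rule table, and defuzzify with a weighted average. The breakpoints, rule levels, and function names below are illustrative assumptions, not values from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_actuator_level(temp_error):
    """Map temperature error (setpoint - measured, in deg C) to an
    actuator command in [0, 100] via Mamdani-style inference with
    singleton outputs and weighted-average defuzzification."""
    # fuzzify: negative / zero / positive error
    mu = {
        "neg": tri(temp_error, -10.0, -5.0, 0.0),
        "zero": tri(temp_error, -5.0, 0.0, 5.0),
        "pos": tri(temp_error, 0.0, 5.0, 10.0),
    }
    # rule table: each fired rule pushes toward a singleton output level
    levels = {"neg": 10.0, "zero": 50.0, "pos": 90.0}
    den = sum(mu.values())
    if den == 0:
        return levels["zero"]  # out-of-range error: hold the neutral level
    return sum(mu[k] * levels[k] for k in mu) / den
```

In a real controller the same pattern would run for humidity as well, with the crisp output driving the executive devices each sampling period.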
Interactive visuo-motor therapy system for stroke rehabilitation.
Eng, Kynan; Siekierka, Ewa; Pyk, Pawel; Chevrier, Edith; Hauser, Yves; Cameirao, Monica; Holper, Lisa; Hägni, Karin; Zimmerli, Lukas; Duff, Armin; Schuster, Corina; Bassetti, Claudio; Verschure, Paul; Kiper, Daniel
2007-09-01
We present a virtual reality (VR)-based motor neurorehabilitation system for stroke patients with upper limb paresis. It is based on two hypotheses: (1) observed actions correlated with self-generated or intended actions engage cortical motor observation, planning and execution areas ("mirror neurons"); (2) activation in damaged parts of motor cortex can be enhanced by viewing mirrored movements of non-paretic limbs. We postulate that our approach, applied during the acute post-stroke phase, facilitates motor re-learning and improves functional recovery. The patient controls a first-person view of virtual arms in tasks varying from simple (hitting objects) to complex (grasping and moving objects). The therapist adjusts weighting factors in the non-paretic limb to move the paretic virtual limb, thereby stimulating the mirror neuron system and optimizing patient motivation through graded task success. We present the system's neuroscientific background, technical details and preliminary results.
Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Stocker, John C.; Golomb, Andrew M.
2011-01-01
Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
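The two-part model above (a demand distribution feeding resource-constrained servers) can be illustrated with a minimal discrete event simulation using only the stdlib `heapq`. This is a toy single-queue sketch, not the authors' model; the arrival times and service-time function in the test are made up.

```python
import heapq

def simulate(n_servers, arrivals, service_time):
    """Single-queue, multi-server discrete event simulation.

    `arrivals` is a sorted list of request arrival times and
    `service_time` a zero-argument function returning one service
    duration (this is where a demand distribution plugs in).
    Returns the mean request latency (finish time - arrival time).
    """
    free_at = [0.0] * n_servers          # when each server next frees up
    heapq.heapify(free_at)
    latencies = []
    for t in arrivals:
        earliest = heapq.heappop(free_at)  # next server to become available
        start = max(t, earliest)           # request waits if all servers busy
        finish = start + service_time()
        latencies.append(finish - t)
        heapq.heappush(free_at, finish)
    return sum(latencies) / len(latencies)
```

Sweeping `n_servers` (the resource constraint) against different demand distributions reproduces, in miniature, the kind of provisioning trade-off analysis the abstract describes.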
Scientific Services on the Cloud
NASA Astrophysics Data System (ADS)
Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong
Scientific computing was one of the first-ever applications of parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos Roadrunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals: the hardware is provided, maintained, and administered by a third party; software abstraction and virtualization provide reliability and fault tolerance; and graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.
Hung, Chun-Chi; Li, Yuan-Ta; Chou, Yu-Ching; Chen, Jia-En; Wu, Chia-Chun; Shen, Hsain-Chung; Yeh, Tsu-Te
2018-05-03
Treating pelvic fractures remains a challenging task for orthopaedic surgeons. We aimed to evaluate the feasibility, accuracy, and effectiveness of three-dimensional (3D) printing technology and computer-assisted virtual surgery for pre-operative planning in anterior ring fractures of the pelvis. We hypothesized that using 3D printed models would reduce operation time and significantly improve the surgical outcomes of pelvic fracture repair. We retrospectively reviewed the records of 30 patients with pelvic fractures treated by anterior pelvic fixation with locking plates (14 patients, conventional locking plate fixation; 16 patients, pre-operative virtual simulation with 3D printing-assisted, pre-contoured locking plate fixation). We compared operative time, instrumentation time, blood loss, and post-surgical residual displacements, as evaluated on X-ray films, between the groups. Statistical analyses evaluated significant differences between the groups for each of these variables. The patients treated with the virtual simulation and 3D printing-assisted technique had significantly shorter internal fixation times, shorter surgery duration, and less blood loss (- 57 minutes, - 70 minutes, and - 274 ml, respectively; P < 0.05) than patients in the conventional surgery group. However, the post-operative radiological results were similar between groups (P > 0.05). The complication rate was lower in the 3D printing group (1/16 patients) than in the conventional surgery group (3/14 patients). The 3D simulation and printing technique is an effective and reliable method for treating anterior pelvic ring fractures. With precise pre-operative planning and accurate execution of the procedures, this time-saving approach can provide a more personalized treatment plan, allowing for safer orthopaedic surgery.
NASA Astrophysics Data System (ADS)
Bibi, T.; Azahari Razak, K.; Rahman, A. Abdul; Latif, A.
2017-10-01
Landslides are an inescapable natural disaster, resulting in massive social, environmental, and economic impacts all over the world. The tropical, mountainous landscape throughout Malaysia, especially in the eastern peninsula (Borneo), is highly susceptible to landslides because of heavy rainfall and tectonic disturbances. The purpose of landslide hazard mapping is to identify hazardous regions for the execution of mitigation plans that can reduce the loss of life and property from future landslide incidences. Currently, Malaysian research bodies, e.g. academic institutions and government agencies, are trying to develop a landslide hazard and risk database for susceptible areas to support prevention, mitigation, and evacuation planning. However, there is a lack of attention to landslide inventory mapping as an elementary input to landslide susceptibility, hazard, and risk mapping. Developing techniques based on remote sensing technologies (satellite, terrestrial, and airborne) are promising means to accelerate the production of landslide maps, shrinking the time and resources essential for their compilation and orderly updates. The aim of the study is to provide a better understanding of the use of virtual mapping of landslides with the help of LiDAR technology. The focus of the study is spatio-temporal detection and virtual mapping of landslide inventory via visualization and interpretation of very high-resolution (VHR) data in the forested terrain of the Mesilau river, Kundasang. To cope with the challenges of virtual inventory mapping in forested terrain, high-resolution LiDAR derivatives are used. This study indicates that airborne LiDAR technology can be an effective tool for mapping landslide inventories in complex climatic and geological conditions, and a quick way of mapping regional hazards in the tropics.
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost-effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at the INFN-Naples ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of two main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration, and automated restart in case of hypervisor failures.
Short-term memory, executive control, and children's route learning.
Purser, Harry R M; Farran, Emily K; Courbois, Yannick; Lemahieu, Axelle; Mellier, Daniel; Sockeel, Pascal; Blades, Mark
2012-10-01
The aim of this study was to investigate route-learning ability in 67 children aged 5 to 11 years and to relate route-learning performance to the components of Baddeley's model of working memory. Children carried out tasks that included measures of verbal and visuospatial short-term memory and executive control, as well as measures of verbal and visuospatial long-term memory; the route-learning task was conducted using a maze in a virtual environment. In contrast to previous research, correlations were found between route-learning performance and both visuospatial and verbal memory tasks (the Corsi task, short-term pattern span, digit span, and visuospatial long-term memory). However, further analyses indicated that these relationships were mediated by executive control demands that were common to the tasks, with long-term memory explaining additional unique variance in route learning. Copyright © 2012 Elsevier Inc. All rights reserved.
de Castro, Marcus Vasconcelos; Bissaco, Márcia Aparecida Silva; Panccioni, Bruno Marques; Rodrigues, Silvia Cristina Martini; Domingues, Andreia Miranda
2014-01-01
In this study, we show the effectiveness of a virtual environment comprising 18 computer games that cover mathematics topics in a playful setting and that can be run on the Internet, with the possibility of player interaction through chat. An arithmetic pre-test contained in the Scholastic Performance Test (SPT) was administered to 300 children between 7 and 10 years old, including 162 males and 138 females, in the second grade of primary school. Twenty-six children whose scores showed a low level of mathematical knowledge were chosen and randomly divided into control (CG) and experimental (EG) groups. The EG used the virtual environment, and the CG received reinforcement using traditional teaching methods. Both groups then took a post-test in which the SPT was given again. A statistical analysis of the results using Student's t-test showed a significant learning improvement for the EG and no improvement for the CG (p≤0.05). The virtual environment allows students to integrate thought, feeling, and action, thus motivating children to learn and contributing to their intellectual development.
The future of medical education is no longer blood and guts, it is bits and bytes.
Gorman, P J; Meier, A H; Rawn, C; Krummel, T M
2000-11-01
In the United States, medical care consumes approximately $1.2 trillion annually (14% of the gross domestic product) and involves 250,000 physicians, almost 1 million nurses, and countless other providers. While the Information Age has changed virtually every other facet of our lives, the education of these healthcare professionals, both present and future, is largely mired in the 100-year-old apprenticeship model best exemplified by the phrase "see one, do one, teach one." Continuing medical education is even less advanced. While the half-life of medical information is less than 5 years, the average physician practices for 30 years and the average nurse for 40 years. Moreover, as medical care has become increasingly complex, medical error has become a substantial problem. The current convulsive climate in academic health centers provides an opportunity to rethink the way medical education is delivered across a continuum of professional lifetimes. If this is well executed, it will truly make medical education better, safer, and cheaper, and provide real benefits to patient care, with instantaneous access to learning modules. At the Center for Advanced Technology in Surgery at Stanford we envision this future: within the next 10 years we will select, train, credential, remediate, and recredential physicians and surgeons using simulation, virtual reality, and Web-based electronic learning. Future physicians will be able to rehearse an operation on a projectable, palpable hologram derived from patient-specific data, and deliver the data set of that operation with robotic assistance the next day.
Virtual healthcare delivery: defined, modeled, and predictive barriers to implementation identified.
Harrop, V M
2001-01-01
Provider organizations lack: 1. a definition of "virtual" healthcare delivery relative to the products, services, and processes offered by dot.coms, web-compact disk healthcare content providers, telemedicine, and telecommunications companies, and 2. a model for integrating real and virtual healthcare delivery. This paper defines virtual healthcare delivery as asynchronous, outsourced, and anonymous, then proposes a 2x2 Real-Virtual Healthcare Delivery model focused on real and virtual patients and real and virtual provider organizations. Using this model, provider organizations can systematically deconstruct healthcare delivery in the real world and reconstruct appropriate pieces in the virtual world. Observed barriers to virtual healthcare delivery are: resistance to telecommunication integrated delivery networks and outsourcing; confusion over virtual infrastructure requirements for telemedicine and full-service web portals, and the impact of integrated delivery networks and outsourcing on extant cultural norms and revenue generating practices. To remain competitive provider organizations must integrate real and virtual healthcare delivery. PMID:11825189
Model-Based Experimental Development of Passive Compliant Robot Legs from Fiberglass Composites
Lin, Shang-Chang; Hu, Chia-Jui; Lin, Pei-Chun
2015-01-01
We report on a methodology for developing compliant, half-circular, composite robot legs with designable stiffness. First, force-displacement experiments are executed on flat cantilever composites made of one or more fiberglass cloths. By mapping the cantilever mechanics to a virtual spring model, the equivalent elastic moduli of the composites can be derived. Next, by using the model that links the curved-beam mechanics back to the virtual spring, the resultant stiffness of the composite in a half-circular shape can be estimated without going through intensive experimental tryouts. The overall methodology has been experimentally validated, and the fabricated composites were used on a hexapod robot to perform walking and leaping behaviors. PMID:27065748
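The cantilever-to-virtual-spring mapping is not spelled out in the abstract. Under classical Euler-Bernoulli small-deflection theory, an end-loaded cantilever deflects by delta = F L^3 / (3 E I), so a force-displacement test directly yields an equivalent modulus, and the equivalent spring stiffness is k = F/delta. A sketch under that textbook assumption (the authors' actual model, particularly for the curved beam, may differ), with hypothetical dimensions:

```python
def second_moment_rect(width, thickness):
    """Second moment of area of a rectangular cross-section (m^4)."""
    return width * thickness**3 / 12.0

def equivalent_modulus(force, deflection, length, I):
    """Back out E from an end-loaded cantilever test: delta = F L^3 / (3 E I)."""
    return force * length**3 / (3.0 * deflection * I)

def spring_stiffness(modulus, length, I):
    """Equivalent virtual-spring stiffness k = 3 E I / L^3 (N/m)."""
    return 3.0 * modulus * I / length**3

# Hypothetical test: 30 mm x 2 mm fiberglass strip, 100 mm long, 1 N tip load,
# measured tip deflection 0.8333 mm.
I = second_moment_rect(0.030, 0.002)            # 2e-11 m^4
E = equivalent_modulus(1.0, 8.333e-4, 0.100, I)  # ~20 GPa, typical of fiberglass
k = spring_stiffness(E, 0.100, I)                # algebraically equals F/delta
```

Stacking n cloths multiplies the thickness, and hence the stiffness, by roughly n^3 through the I term, which is one way a "designable stiffness" falls out of this model.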
LoPresti, Melissa; Daniels, Bradley; Buchanan, Edward P; Monson, Laura; Lam, Sandi
2017-04-01
Repeat surgery for restenosis after initial nonsyndromic craniosynostosis intervention is sometimes needed. Calvarial vault reconstruction through a healed surgical bed adds a level of intraoperative complexity and may benefit from preoperative and intraoperative definitions of biometric and aesthetic norms. Computer-assisted design and manufacturing using 3D imaging allows the precise formulation of operative plans in anticipation of surgical intervention. 3D printing turns virtual plans into anatomical replicas, templates, or customized implants by using a variety of materials. The authors present a technical note illustrating the use of this technology: a repeat calvarial vault reconstruction that was planned and executed using computer-assisted design and 3D printed intraoperative guides.
Design-of-Experiments Approach to Improving Inferior Vena Cava Filter Retrieval Rates.
Makary, Mina S; Shah, Summit H; Warhadpande, Shantanu; Vargas, Ivan G; Sarbinoff, James; Dowell, Joshua D
2017-01-01
The association of retrievable inferior vena cava filters (IVCFs) with adverse events has led to increased interest in prompt retrieval, particularly in younger patients given the progressive nature of these complications over time. This study takes a design-of-experiments (DOE) approach to investigate methods to best improve filter retrieval rates, with a particular focus on younger (<60 years) patients. A DOE approach was executed in which combinations of variables were tested to best improve retrieval rates. The impact of a virtual IVCF clinic, primary care physician (PCP) letters, and discharge instructions was investigated. The decision for filter retrieval in group 1 was determined solely by the referring physician. Group 2 included those patients prospectively followed in an IVCF virtual clinic in which filter retrieval was coordinated by the interventional radiologist when clinically appropriate. In group 3, in addition to being followed through the IVCF clinic, each patient's PCP was faxed a follow-up letter, and information regarding IVCF retrieval was added to the patient's discharge instructions. A total of 10 IVCFs (8.4%) were retrieved among 119 retrievable IVCFs placed in group 1. Implementation of the IVCF clinic in group 2 significantly improved the retrieval rate to 25.3% (23 of 91 retrievable IVCFs placed, P < .05). The addition of discharge instructions and PCP letters to the virtual clinic (group 3) resulted in a retrieval rate of 33.3% (17 of 51). The retrieval rates demonstrated more pronounced improvement when examining only younger patients, with retrieval rates of 11.3% (7 of 62), 29.5% (13 of 44, P < .05), and 45.2% (14 of 31) for groups 1, 2, and 3, respectively. DOE methodology is not routinely executed in health care, but it is an effective approach to evaluating clinical practice behavior and patient quality measures. 
In this study, implementation of the combination of a virtual clinic, PCP letters, and discharge instructions improved retrieval rates compared with a virtual clinic alone. Quality improvement strategies such as these that augment patient and referring physician knowledge on interventional radiologic procedures may ultimately improve patient safety and personalized care. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Reducing acquisition risk through integrated systems of systems engineering
NASA Astrophysics Data System (ADS)
Gross, Andrew; Hobson, Brian; Bouwens, Christina
2016-05-01
In the fall of 2015, the Joint Staff J7 (JS J7) sponsored the Bold Quest (BQ) 15.2 event and conducted planning and coordination to combine this event into a joint event with the Army Warfighting Assessment (AWA) 16.1 sponsored by the U.S. Army. This multipurpose event combined a Joint/Coalition exercise (JS J7) with components of testing, training, and experimentation required by the Army. In support of Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)) System of Systems Engineering and Integration (SoSE&I), Always On-On Demand (AO-OD) used a system of systems (SoS) engineering approach to develop a live, virtual, constructive distributed environment (LVC-DE) to support risk mitigation utilizing this complex and challenging exercise environment for a system preparing to enter limited user test (LUT). AO-OD executed a requirements-based SoS engineering process starting with user needs and objectives from Army Integrated Air and Missile Defense (AIAMD), Patriot units, Coalition Intelligence, Surveillance and Reconnaissance (CISR), Focused End State 4 (FES4) Mission Command (MC) Interoperability with Unified Action Partners (UAP), and Mission Partner Environment (MPE) Integration and Training, Tactics and Procedures (TTP) assessment. The SoS engineering process decomposed the common operational, analytical, and technical requirements, while utilizing the Institute of Electrical and Electronics Engineers (IEEE) Distributed Simulation Engineering and Execution Process (DSEEP) to provide structured accountability for the integration and execution of the AO-OD LVC-DE. As a result of this process implementation, AO-OD successfully planned for, prepared, and executed a distributed simulation support environment that responsively satisfied user needs and objectives, demonstrating the viability of an LVC-DE environment to support multiple user objectives and support risk mitigation activities for systems in the acquisition process.
2017-12-21
deterrence approach is capable of exercising deterrence with virtual, psychological, moral, and physical aspects in an integrated way, thus lever... Julie Ryan. THE FIFTH DOMAIN: Integrating cyber and electronic warfare capabilities increases the commander's situational awareness. (U.S... cyber organizations and the interagency; and integrating cyber requirements into operational planning and execution. It will take continued
GED® Collapse: Ohio Needs Launch Pads, Not Barricades. Executive Summary
ERIC Educational Resources Information Center
Halbert, Hannah
2016-01-01
The number of people attempting and passing the GED has plummeted. The Ohio economy is tough on low-wage workers with limited formal education. Without a high school diploma, it is virtually impossible to get a family-supporting job. But the GED has become a barricade, blocking Ohio workers from career goals, instead of a launching pad. Employers…
NASA Technical Reports Server (NTRS)
Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John
2005-01-01
Virtual Machine Language (VML) is a mission-independent, reusable software system for programming spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence-execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times on the order of seconds.
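The abstract does not show VML's actual syntax, so the following Python toy (every name and command in it is invented) only sketches the central idea it describes: named, parameterized blocks that are registered once, reused across sequences, and expand into concrete spacecraft commands from calculated values at execution time.

```python
# Toy sketch of VML-style named blocks; names and commands are invented.
blocks = {}

def block(fn):
    """Register a named, parameterized block, as a VML file would hold it."""
    blocks[fn.__name__] = fn
    return fn

@block
def warm_up_heater(zone, setpoint_c):
    # Commands are created on the fly from calculated values.
    return [("HEATER_ON", zone), ("SET_POINT", zone, setpoint_c + 5)]

def run_sequence(calls):
    """Interpret a 'sequence': a list of (block_name, args) pairs."""
    issued = []
    for name, args in calls:
        issued.extend(blocks[name](*args))
    return issued

cmds = run_sequence([("warm_up_heater", ("ZONE_A", 20))])
print(cmds)  # [('HEATER_ON', 'ZONE_A'), ('SET_POINT', 'ZONE_A', 25)]
```

The uplink-size saving the abstract mentions follows from this shape: a sequence only names blocks and arguments, while the command steps live in blocks already on board.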
Fault Mitigation Schemes for Future Spaceflight Multicore Processors
NASA Technical Reports Server (NTRS)
Alexander, James W.; Clement, Bradley J.; Gostelow, Kim P.; Lai, John Y.
2012-01-01
Future planetary exploration missions demand significant advances in on-board computing capabilities over current avionics architectures based on a single-core processing element. The state-of-the-art multi-core processor provides much promise in meeting such challenges while introducing new fault tolerance problems when applied to space missions. Software-based schemes are being presented in this paper that can achieve system-level fault mitigation beyond that provided by radiation-hard-by-design (RHBD). For mission and time critical applications such as the Terrain Relative Navigation (TRN) for planetary or small body navigation, and landing, a range of fault tolerance methods can be adapted by the application. The software methods being investigated include Error Correction Code (ECC) for data packet routing between cores, virtual network routing, Triple Modular Redundancy (TMR), and Algorithm-Based Fault Tolerance (ABFT). A robust fault tolerance framework that provides fail-operational behavior under hard real-time constraints and graceful degradation will be demonstrated using TRN executing on a commercial Tilera(R) processor with simulated fault injections.
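Of the software schemes listed, triple modular redundancy is the simplest to show concretely. A minimal software-TMR sketch follows; the fault model (a single bit flip in one replica) and all names are illustrative, and a real deployment would run the replicas on separate cores so one radiation-induced fault corrupts at most one result.

```python
from collections import Counter

def tmr(replicas):
    """Majority-vote over three independently computed results."""
    results = [f() for f in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        # More than one replica disagreed: fault cannot be masked.
        raise RuntimeError("no majority among TMR replicas")
    return value

# One replica suffers a simulated single-event upset; voting masks it.
good = lambda: 42
flipped = lambda: 42 ^ (1 << 3)   # bit 3 flipped -> 34
print(tmr([good, good, flipped]))  # 42
```

Voting masks any single-replica fault at the cost of roughly 3x compute, which is why such schemes are reserved for mission- and time-critical functions like the TRN application named above.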
A distributed version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.; Curlett, Brian P.
1993-01-01
Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computing environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
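PVM itself is long superseded, but the granularity consideration mentioned above, batching independent cases so that per-message overhead is amortized, can be sketched with Python's standard library standing in for PVM's spawn/send/receive. The engine analysis here is a placeholder function, and threads stand in for the separate machines a real Distributed NEPP run would use.

```python
from concurrent.futures import ThreadPoolExecutor

def run_case(case):
    """Placeholder for one engine-performance case (a real run is costly)."""
    return case, case * case

def run_distributed(cases, workers=4, chunk=2):
    """Farm independent cases out to workers in chunks.

    Chunk size is the granularity knob: bigger chunks mean fewer
    messages, but coarser load balancing across unequal machines.
    """
    chunks = [cases[i:i + chunk] for i in range(0, len(cases), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        per_chunk = pool.map(lambda c: [run_case(x) for x in c], chunks)
    return [r for batch in per_chunk for r in batch]

results = run_distributed(list(range(6)))
print(results)  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
```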
On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers
NASA Astrophysics Data System (ADS)
Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.
2017-10-01
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side to virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Tucker, Deanne (Technical Monitor)
1994-01-01
We present views and analysis of the execution of several PVM codes for Computational Fluid Dynamics on a network of Sparcstations, including (a) NAS Parallel Benchmarks CG and MG (White, Alund and Sunderam 1993); (b) a multi-partitioning algorithm for NAS Parallel Benchmark SP (Wijngaart 1993); and (c) an overset grid flowsolver (Smith 1993). These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We describe the architecture, operation and application of AIMS. The AIMS toolkit contains (a) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (b) Monitor, a library of run-time trace-collection routines; (c) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (d) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (a) the impact of long message latencies; (b) the impact of multiprogramming overheads and associated load imbalance; (c) cache and virtual-memory effects; and (d) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (a) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (b) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
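The skew-compensation idea above is described only in words; a toy version of both steps (all field names invented, and a constant offset assumed, matching the zero-drift case) first subtracts the calibrated parent-child skew from receive timestamps, then repairs any remaining "messages going backwards in time" by forcing each receive to follow its send.

```python
def compensate(events, child_offset, epsilon=1e-6):
    """events: (kind, send_ts, recv_ts) records of messages a child received.

    Subtract the calibrated constant skew from the child's clock readings,
    then enforce causality: a receive can never precede its send.
    """
    fixed = []
    for kind, send_ts, recv_ts in events:
        recv_ts -= child_offset            # calibration step (zero drift)
        if recv_ts < send_ts:              # physically impossible ordering
            recv_ts = send_ts + epsilon    # minimal causal repair
        fixed.append((kind, send_ts, recv_ts))
    return fixed

trace = [("msg", 10.0, 12.5), ("msg", 11.0, 12.0)]
print(compensate(trace, child_offset=2.0))
# [('msg', 10.0, 10.5), ('msg', 11.0, 11.000001)]
```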
A task scheduler framework for self-powered wireless sensors.
Nordman, Mikael M
2003-10-01
The cost and inconvenience of cabling is a factor limiting the widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making the development of wireless sensors feasible. However, for these sensors energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy source can be considered to last a lifetime, saving the user from concerns related to energy management. The problem, however, is the unpredictable and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is reliably handled without violating the priority scheme assigned to it.
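As a rough sketch of the scheduling policy described above (the task set, energy costs, and harvest profile are invented, and the paper's framework of environment parameters, virtual queues, and transition conditions is reduced here to a priority queue with a deferred list), the rule is: run tasks strictly by priority, but only once stored energy covers the cost.

```python
import heapq

def schedule(tasks, harvest):
    """tasks: (priority, name, energy_cost); lower number runs first.
    harvest: energy arriving at each time step (unpredictable in reality).
    Returns the order in which tasks actually executed."""
    queue = list(tasks)
    heapq.heapify(queue)                   # virtual priority queue
    stored, executed, deferred = 0.0, [], []
    for energy_in in harvest:
        stored += energy_in
        for t in deferred:                 # retry tasks starved of energy
            heapq.heappush(queue, t)
        deferred.clear()
        while queue:
            prio, name, cost = heapq.heappop(queue)
            if cost <= stored:             # enough energy: execute
                stored -= cost
                executed.append(name)
            else:                          # starved: defer, keep priority
                deferred.append((prio, name, cost))
                break
        if not queue and not deferred:
            break
    return executed

tasks = [(1, "report_alarm", 3.0), (2, "sample", 1.0), (3, "log", 1.0)]
print(schedule(tasks, harvest=[2.0, 2.0, 2.0]))  # ['report_alarm', 'sample', 'log']
```

Note that the expensive high-priority task blocks cheaper, lower-priority ones until energy accumulates, which is exactly the "without violating the priority scheme" property the paper emphasizes.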
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Junghyun; Jo, Gangwon; Jung, Jaehoon
Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with the illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.
Partnership For Edge Physics Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Manish
In this effort, we will extend our prior work as part of CPES (i.e., DART and DataSpaces) to support in-situ tight coupling between application codes that exploits data locality and core-level parallelism to maximize on-chip data exchange and reuse. This will be accomplished by mapping coupled simulations so that the data exchanges are more localized within the nodes. Coupled simulation workflows can more effectively utilize the resources available on emerging HEC platforms if they can be mapped and executed to exploit data locality as well as the communication patterns between application components. Scheduling and running such workflows requires an extended framework that should provide a unified hybrid abstraction to enable coordination and data sharing across computation tasks that run on heterogeneous multi-core-based systems, and develop a data-locality-based dynamic task scheduling approach to increase on-chip or intra-node data exchanges and in-situ execution. This effort will extend our prior work as part of CPES (i.e., DART and DataSpaces), which provided a simple virtual shared-space abstraction hosted at the staging nodes, to support application coordination, data sharing and active data processing services. Moreover, it will transparently manage the low-level operations associated with the inter-application data exchange, such as data redistributions, and will enable running coupled simulation workflows on multi-core computing platforms.
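The virtual shared-space abstraction mentioned here (put/get on named, versioned data regions, with consumers blocking until a producer publishes) can be illustrated with a minimal in-memory sketch. DataSpaces itself is a distributed C library hosted on staging nodes; the single-process class below only conveys the coordination semantics and is entirely illustrative.

```python
import threading

class SharedSpace:
    """Minimal sketch of a DataSpaces-style virtual shared space:
    coupled components coordinate by put/get on (name, version) keys."""
    def __init__(self):
        self._store = {}
        self._cv = threading.Condition()

    def put(self, name, version, data):
        with self._cv:
            self._store[(name, version)] = data
            self._cv.notify_all()  # wake consumers waiting on this key

    def get(self, name, version, timeout=None):
        # blocks until a producer has published (name, version)
        with self._cv:
            while (name, version) not in self._store:
                if not self._cv.wait(timeout):
                    raise TimeoutError(f"{name}@{version} never published")
            return self._store[(name, version)]
```

A coupled workflow maps onto this as one component calling `put("turbulence", step, field)` while another blocks in `get("turbulence", step)`, decoupling the producer's and consumer's execution schedules.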
Brown, J B; Nakatsui, Masahiko; Okuno, Yasushi
2014-12-01
The cost of pharmaceutical R&D has risen enormously, both worldwide and in Japan. However, Japan faces a particularly difficult situation in that its population is aging rapidly, and the cost of pharmaceutical R&D affects not only the industry but the entire medical system as well. To attempt to reduce costs, the newly launched K supercomputer is available for big data drug discovery and structural simulation-based drug discovery. We have implemented both primary (direct) and secondary (infrastructure, data processing) methods for the two types of drug discovery, custom-tailored to make maximal use of the 88,128 compute nodes/CPUs of K, and evaluated the implementations. We present two types of results. In the first, we executed the virtual screening of nearly 19 billion compound-protein interactions, and calculated the accuracy of predictions against publicly available experimental data. In the second investigation, we implemented a very computationally intensive binding free energy algorithm, and found that our computed binding free energies agreed well with another type of publicly available experimental data. The common feature of both result types is the scale at which computations were executed. The frameworks presented in this article provide perspectives and applications that, while tuned to the computing resources available in Japan, are equally applicable to any equivalent large-scale infrastructure provided elsewhere. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
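Screening on the order of 19 billion compound-protein pairs across tens of thousands of nodes rests on partitioning the work so that each rank owns a contiguous, near-even slice. The helper below is a generic sketch of that partitioning step (the 88,128-node figure comes from the abstract; the function itself is not the authors' code).

```python
def partition(n_items, n_ranks, rank):
    """Contiguous, near-even slice of [0, n_items) owned by `rank`.
    The first (n_items % n_ranks) ranks each take one extra item."""
    base, extra = divmod(n_items, n_ranks)
    start = rank * base + min(rank, extra)
    size = base + (1 if rank < extra else 0)
    return range(start, start + size)
```

Each compute node would then evaluate only the interaction indices in `partition(total_pairs, n_nodes, my_rank)`, with no index appearing on two nodes and no index skipped.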
Proof of Concept of Home IoT Connected Vehicles
Kim, Younsun; Oh, Hyunggoy; Kang, Sungho
2017-01-01
The way in which we interact with our cars is changing, driven by the increased use of mobile devices, cloud-based services, and advanced automotive technology. In particular, the requirements and market demand for the Internet of Things (IoT) device-connected vehicles will continuously increase. In addition, the advances in cloud computing and IoT have provided a promising opportunity for developing vehicular software and services in the automotive domain. In this paper, we introduce the concept of a home IoT connected vehicle with a voice-based virtual personal assistant comprising a vehicle agent and a home agent. The proposed concept is evaluated by implementing a smartphone linked with home IoT devices that are connected to an infotainment system for the vehicle, a smartphone-based natural language interface input device, and cloud-based home IoT devices for the home. The home-to-vehicle connected service scenarios that aim to reduce the inconvenience due to simple and repetitive tasks by improving the urban mobility efficiency in IoT environments are substantiated by analyzing real vehicle testing and lifestyle research. Remarkable benefits are derived by condensing repetitive routine tasks into a single task executed by one command and by executing essential tasks automatically, without any request. However, it should be used with authorized permission, applied without any error at the right time, and applied under limited conditions to sense the inhabitants' intention correctly and to gain the required trust regarding the remote execution of tasks. PMID:28587246
1001 Ways to run AutoDock Vina for virtual screening
NASA Astrophysics Data System (ADS)
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
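Two of the points above (an extra level of parallelization on a multi-core machine, and explicitly capturing the random seed) can be sketched as a small driver that launches many single-CPU Vina runs in parallel. The command builder uses Vina's documented `--receptor`, `--ligand`, `--out`, `--seed`, `--exhaustiveness` and `--cpu` options; the file names, worker count, and `screen` helper are illustrative, not the authors' actual scripts.

```python
import subprocess
from multiprocessing import Pool

def vina_command(receptor, ligand, out, seed, exhaustiveness=8):
    """Fixing --seed explicitly is necessary (though, as the paper notes,
    not sufficient) for reproducibility on heterogeneous workers."""
    return ["vina", "--receptor", receptor, "--ligand", ligand,
            "--out", out, "--seed", str(seed),
            "--exhaustiveness", str(exhaustiveness), "--cpu", "1"]

def dock(args):
    # one single-CPU Vina process per pool worker
    return subprocess.run(vina_command(*args),
                          capture_output=True, text=True).returncode

def screen(receptor, ligands, workers=4, seed=2016):
    jobs = [(receptor, lig, lig.replace(".pdbqt", "_out.pdbqt"), seed)
            for lig in ligands]
    with Pool(workers) as pool:      # second level of parallelization:
        return pool.map(dock, jobs)  # N concurrent docks on one machine
```

Pinning each Vina instance to one CPU (`--cpu 1`) while the pool runs several instances is the "additional level of parallelization" that raises throughput on a multi-core machine.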
Ellaway, Rachel H; Poulton, Terry; Jivram, Trupti
2015-01-01
In 2009, St George's University of London (SGUL) replaced their paper-based problem-based learning (PBL) cases with virtual patients for intermediate-level undergraduate students. This involved the development of Decision-Problem-Based Learning (D-PBL), a variation on progressive-release PBL that uses virtual patients instead of paper cases, and focuses on patient management decisions and their consequences. Using a case study method, this paper describes four years of developing and running D-PBL at SGUL from individual activities up to the ways in which D-PBL functioned as an educational system. A number of broad issues were identified: the importance of debates and decision-making in making D-PBL activities engaging and rewarding; the complexities of managing small group dynamics; the time taken to complete D-PBL activities; the changing role of the facilitator; and the erosion of the D-PBL process over time. A key point in understanding this work is the construction and execution of the D-PBL activity, as much of the value of this approach arises from the actions and interactions of students, their facilitators and the virtual patients rather than from the design of the virtual patients alone. At a systems level D-PBL needs to be periodically refreshed to retain its effectiveness.
Leveraging simulation to evaluate system performance in presence of fixed pattern noise
NASA Astrophysics Data System (ADS)
Teaney, Brian P.
2017-05-01
The development of image simulation techniques which map the effects of a notional, modeled sensor system onto an existing image can be used to evaluate the image quality of camera systems prior to the development of prototype systems. In addition, image simulation or 'virtual prototyping' can be utilized to reduce the time and expense associated with conducting extensive field trials. In this paper we examine the development of a perception study designed to assess the performance of the NVESD imager performance metrics as a function of fixed pattern noise. This paper discusses the development of the model theory and the implementation and execution of the perception study. In addition, other applications of the image simulation component, including the evaluation of limiting resolution and other test targets, are presented.
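The defining property of fixed pattern noise, as opposed to temporal noise, is that the same spatial pattern recurs in every frame. A minimal sketch of injecting such a pattern into an existing image (the standard per-pixel offset/gain model; the sigma values and function name are illustrative, not NVESD's model) is:

```python
import numpy as np

def add_fixed_pattern_noise(image, sigma_dsnu=2.0, sigma_prnu=0.01, seed=0):
    """Apply a per-pixel offset (DSNU) and gain (PRNU) pattern.
    Seeding the generator makes the pattern identical for every frame,
    which is what distinguishes fixed pattern noise from temporal noise."""
    rng = np.random.default_rng(seed)
    dsnu = rng.normal(0.0, sigma_dsnu, image.shape)   # dark-signal non-uniformity
    prnu = rng.normal(1.0, sigma_prnu, image.shape)   # photo-response non-uniformity
    return image * prnu + dsnu
```

Sweeping `sigma_dsnu`/`sigma_prnu` over a set of source images is one way to generate the stimulus levels a perception study like the one described would present to observers.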
DOE Office of Scientific and Technical Information (OSTI.GOV)
Exercise environment for Introduction to Cyber Technologies class. This software is essentially a collection of short scripts, configuration files, and small executables that form the exercise component of the Sandia Cyber Technologies Academy's Introduction to Cyber Technologies class. It builds upon other open-source technologies, such as Debian Linux and minimega, to provide comprehensive Linux and networking exercises that make learning these topics exciting and fun. Sample exercises: a pre-built set of home directories the student must navigate through to learn about privilege escalation, the creation of a virtual network playground designed to teach the student about the resiliency of the Internet, and a two-hour Capture the Flag challenge for the final lesson. There are approximately thirty (30) exercises included for the students to complete as part of the course.
Mirelman, Anat; Rochester, Lynn; Reelick, Miriam; Nieuwhof, Freek; Pelosin, Elisa; Abbruzzese, Giovanni; Dockx, Kim; Nieuwboer, Alice; Hausdorff, Jeffrey M
2013-02-06
Recent work has demonstrated that fall risk can be attributed to cognitive as well as motor deficits. Indeed, everyday walking in complex environments utilizes executive function, dual tasking, planning and scanning, all while walking forward. Pilot studies suggest that a multi-modal intervention that combines treadmill training to target motor function and a virtual reality obstacle course to address the cognitive components of fall risk may be used to successfully address the motor-cognitive interactions that are fundamental for fall risk reduction. The proposed randomized controlled trial will evaluate the effects of treadmill training augmented with virtual reality on fall risk. Three hundred older adults with a history of falls will be recruited to participate in this study. This will include older adults (n=100), patients with mild cognitive impairment (n=100), and patients with Parkinson's disease (n=100). These three sub-groups will be recruited in order to evaluate the effects of the intervention in people with a range of motor and cognitive deficits. Subjects will be randomly assigned to the intervention group (treadmill training with virtual reality) or to the active-control group (treadmill training without virtual reality). Each person will participate in a training program set in an outpatient setting 3 times per week for 6 weeks. Assessments will take place before, after, and 1 month and 6 months after the completion of the training. A falls calendar will be kept by each participant for 6 months after completing the training to assess fall incidence (i.e., the number of falls, multiple falls and falls rate). In addition, we will measure gait under usual and dual task conditions, balance, community mobility, health related quality of life, user satisfaction and cognitive function. 
This randomized controlled trial will demonstrate the extent to which an intervention that combines treadmill training augmented by virtual reality reduces fall risk, improves mobility and enhances cognitive function in a diverse group of older adults. In addition, the comparison to an active control group that undergoes treadmill training without virtual reality will provide evidence as to the added value of addressing motor cognitive interactions as an integrated unit. (NIH)-NCT01732653.
Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.
Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A
2014-01-01
Multiple software programs are available for designing and running large scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
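The core design idea, compiling each model into a self-contained executable whose parameters are all exposed as command-line inputs, means one tool-agnostic runner can drive any model. A minimal sketch of such a runner follows; the executable name and parameter names are hypothetical, and ViSP's actual interface is not public in this abstract.

```python
import subprocess

def model_command(executable, params):
    """Map a parameter dict onto command-line inputs. Because every model
    parameter is an input flag, the runner needs no knowledge of the
    modeling tool that produced the executable."""
    cmd = [executable]
    for name, value in sorted(params.items()):  # sorted for reproducible command lines
        cmd += [f"--{name}", str(value)]
    return cmd

def run_simulation(executable, params):
    # each virtual patient / scenario is just a different params dict
    return subprocess.run(model_command(executable, params),
                          capture_output=True, text=True)
```

A large-scale study then reduces to enumerating parameter dicts (one per virtual patient or dose arm) and dispatching `run_simulation` calls across a cluster or cloud back-end.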
2012-05-17
theories work together to explain learning in aviation: behavioral learning theory, cognitive learning theory, constructivism, experiential ... solve problems, and make decisions. Experiential learning theory incorporates both behavioral and cognitive theories. This theory harnesses the ... "Evaluation of the Effectiveness of Flight School XXI," 7. David A. Kolb, Experiential Learning: Experience as the Source of
ERIC Educational Resources Information Center
Bresciano, Cora
2012-01-01
In a virtual space between nations, children can come together to learn about and appreciate each other's culture--and their own. As the Co-Executive Director of Blue Planet Writers' Room, a nonprofit writing center in West Palm Beach, Florida, the author was one of the creators of an international collaboration through which they taught…
2015-06-01
version of the Bear operating system. The full system is depicted in Figure 3 and is composed of a minimalist micro-kernel with an associated ... which are intended to support a general virtual machine execution environment, this minimalist hypervisor is designed to support only the operations ... The use of a minimalist hypervisor in the Bear system opened the door to the discovery of zero-day exploits. The approach leverages the hypervisor's
Clark, Justice; Marcello, Thomas B; Paquin, Allison M; Stewart, Max; Archambeault, Cliona; Simon, Steven R
2013-01-01
Background Virtual (non-face-to-face) medication reconciliation strategies may reduce adverse drug events (ADEs) among vulnerable ambulatory patients. Understanding provider perspectives on the use of technology for medication reconciliation can inform the design of patient-centered solutions to improve ambulatory medication safety. Objective The aim of the study was to describe primary care providers’ experiences of ambulatory medication reconciliation and secure messaging (secure email between patients and providers), and to elicit perceptions of a virtual medication reconciliation system using secure messaging (SM). Methods This was a qualitative study using semi-structured interviews. From January 2012 to May 2012, we conducted structured observations of primary care clinical activities and interviewed 15 primary care providers within a Veterans Affairs Healthcare System in Boston, Massachusetts (USA). We carried out content analysis informed by grounded theory. Results Of the 15 participating providers, 12 were female and 11 saw 10 or fewer patients in a typical workday. Experiences and perceptions elicited from providers during in-depth interviews were organized into 12 overarching themes: 4 themes for experiences with medication reconciliation, 3 themes for perceptions on how to improve ambulatory medication reconciliation, and 5 themes for experiences with SM. Providers generally recognized medication reconciliation as a valuable component of primary care delivery and all agreed that medication reconciliation following hospital discharge is a key priority. Most providers favored delegating the responsibility for medication reconciliation to another member of the staff, such as a nurse or a pharmacist. 
The 4 themes related to ambulatory medication reconciliation were (1) the approach to complex patients, (2) the effectiveness of medication reconciliation in preventing ADEs, (3) challenges to completing medication reconciliation, and (4) medication reconciliation during transitions of care. Specifically, providers emphasized the importance of medication reconciliation at the post-hospital visit. Providers indicated that assistance from a caregiver (eg, a family member) for medication reconciliation was helpful for complex or elderly patients and that patients’ social or cognitive factors often made medication reconciliation challenging. Regarding providers’ use of SM, about half reported using SM frequently, but all felt that it improved their clinical workflow and nearly all providers were enthusiastic about a virtual medication reconciliation system, such as one using SM. All providers thought that such a system could reduce ADEs. Conclusions Although providers recognize the importance and value of ambulatory medication reconciliation, various factors make it difficult to execute this task effectively, particularly among complex or elderly patients and patients with complicated social circumstances. Many providers favor enlisting the support of pharmacists or nurses to perform medication reconciliation in the outpatient setting. In general, providers are enthusiastic about the prospect of using secure messaging for medication reconciliation, particularly during transitions of care, and believe a system of virtual medication reconciliation could reduce ADEs. PMID:24297865
Virtual alternative to the oral examination for emergency medicine residents.
McGrath, Jillian; Kman, Nicholas; Danforth, Douglas; Bahner, David P; Khandelwal, Sorabh; Martin, Daniel R; Nagel, Rollin; Verbeck, Nicole; Way, David P; Nelson, Richard
2015-03-01
The oral examination is a traditional method for assessing the developing physician's medical knowledge, clinical reasoning and interpersonal skills. The typical oral examination is a face-to-face encounter in which examiners quiz examinees on how they would confront a patient case. The advantage of the oral exam is that the examiner can adapt questions to the examinee's response. The disadvantage is the potential for examiner bias and intimidation. Computer-based virtual simulation technology has been widely used in the gaming industry. We wondered whether virtual simulation could serve as a practical format for delivery of an oral examination. For this project, we compared the attitudes and performance of emergency medicine (EM) residents who took our traditional oral exam to those who took the exam using virtual simulation. EM residents (n=35) were randomized to a traditional oral examination format (n=17) or a simulated virtual examination format (n=18) conducted within an immersive learning environment, Second Life (SL). Proctors scored residents using the American Board of Emergency Medicine oral examination assessment instruments, which included execution of critical actions and ratings on eight competency categories (1-8 scale). Study participants were also surveyed about their oral examination experience. We observed no differences between virtual and traditional groups on critical action scores or scores on eight competency categories. However, we noted moderate effect sizes favoring the Second Life group on the clinical competence score. Examinees from both groups thought that their assessment was realistic, fair, objective, and efficient. Examinees from the virtual group reported a preference for the virtual format and felt that the format was less intimidating. The virtual simulated oral examination was shown to be a feasible alternative to the traditional oral examination format for assessing EM residents. 
Virtual environments for oral examinations should continue to be explored, particularly since they offer an inexpensive, more comfortable, yet equally rigorous alternative.
Multimodal correlation and intraoperative matching of virtual models in neurosurgery
NASA Technical Reports Server (NTRS)
Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo
1994-01-01
The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools, and the correlation of the patient's virtual models with the patient himself are examples, taken from the biomedical field, of a single underlying problem: determining the relationship that links representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, the surface matching method causes the patient minimum discomfort, and the resulting errors are compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.
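At the heart of any surface-matching registration is the rigid transform that links the two reference frames. For matched point pairs, the least-squares rotation and translation have a closed form (the Kabsch/SVD solution); the sketch below shows that core step only, not the paper's full surface-matching pipeline, which must also establish correspondences between unmatched surfaces.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid registration (Kabsch): given matched point
    rows, return R, t such that dst ≈ src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In a full surface-matching method this solver sits inside an iteration that alternates between pairing each measured skull point (e.g. from the laser sensor) with its closest point on the CT/MR surface and re-solving for `R, t`.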
Monitoring Contract Enforcement within Virtual Organizations
NASA Astrophysics Data System (ADS)
Squicciarini, Anna; Paci, Federica
Virtual Organizations (VOs) represent a new collaboration paradigm in which the participating entities pool resources, services, and information to achieve a common goal. VOs are often created on demand and dynamically evolve over time: an organization identifies a business opportunity and creates a VO to meet it. In this paper we develop a system for monitoring the sharing of resources in a VO. Sharing rules are defined by a particular, common type of contract in which VO members agree to make available some amount of a specified resource over a given time period. The main component of the system is a monitoring tool for policy enforcement, called the Security Controller (SC). VO members’ interactions are monitored in a decentralized manner, in that each member has one associated SC which intercepts all exchanged messages. We show that having SCs in VOs prevents serious security breaches and guarantees the correct functioning of VOs without degrading the execution time of members’ interactions. We base our discussion on application scenarios and illustrate the SC prototype, along with some performance evaluation.
[Application of virtual instrumentation technique in toxicological studies].
Moczko, Jerzy A
2005-01-01
Research investigations frequently require a direct connection of measuring equipment to the computer. The virtual instrumentation technique considerably facilitates programming of sophisticated acquisition-and-analysis procedures. In the standard approach these two steps are performed sequentially with separate software tools: the acquired data are transferred via the export/import procedures of one program to another, which executes the next step of the analysis. This procedure is cumbersome, time consuming and a potential source of errors. In 1987 National Instruments Corporation introduced the LabVIEW language, based on the concept of graphical programming. Contrary to conventional textual languages, it allows the researcher to concentrate on the problem being solved and to set aside syntactical rules. Programs developed in LabVIEW are called virtual instruments (VI) and are portable among different computer platforms such as PCs, Macintoshes, Sun SPARCstations, Concurrent PowerMAX stations and HP PA-RISC workstations. This flexibility ensures that programs prepared for one particular platform are also appropriate for another. In the presented paper the basic principles of connecting research equipment to computer systems are described.
Hardware accuracy counters for application precision and quality feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani
Methods, devices, and systems for capturing the accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
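As a rough software analogy of what such a counter tracks (the patent describes hardware; the class, widths, and names below are illustrative assumptions), one can tally, for each executed non-negative value, the smallest standard datatype width able to represent it:

```python
from collections import Counter

def min_width(value: int) -> int:
    """Smallest of the standard widths (8/16/32/64 bits) that can
    represent a non-negative operand value."""
    for width in (8, 16, 32, 64):
        if value < (1 << width):
            return width
    raise OverflowError(value)

class AccuracyCounters:
    """Software stand-in for per-width hardware counters: each observed
    operand value bumps the counter for its minimal datatype width."""
    def __init__(self) -> None:
        self.counts = Counter()

    def observe(self, value: int) -> None:
        self.counts[min_width(value)] += 1
```

A runtime could inspect such counters to decide, for example, that a loop only ever produces 8-bit values and gate the wider datapath lanes accordingly.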
Oliveira, Jorge; Gamito, Pedro; Alghazzawi, Daniyal M; Fardoun, Habib M; Rosa, Pedro J; Sousa, Tatiana; Picareli, Luís Felipe; Morais, Diogo; Lopes, Paulo
2017-08-14
This investigation sought to understand whether performance in naturalistic virtual reality tasks for cognitive assessment relates to the cognitive domains that are supposed to be measured. The Shoe Closet Test (SCT) was developed based on a simple visual search task involving attention skills, in which participants have to match each pair of shoes with the colors of the compartments in a virtual shoe closet. The interaction within the virtual environment was made using the Microsoft Kinect. The measures consisted of concurrent paper-and-pencil neurocognitive tests for global cognitive functioning, executive functions, attention, psychomotor ability, and the outcomes of the SCT. The results showed that the SCT correlated with global cognitive performance as measured with the Montreal Cognitive Assessment (MoCA). The SCT explained one third of the total variance of this test and revealed good sensitivity and specificity in discriminating scores below one standard deviation in this screening tool. These findings suggest that performance of such functional tasks involves a broad range of cognitive processes that are associated with global cognitive functioning and that may be difficult to isolate through paper-and-pencil neurocognitive tests.
de Castro, Marcus Vasconcelos; Bissaco, Márcia Aparecida Silva; Panccioni, Bruno Marques; Rodrigues, Silvia Cristina Martini; Domingues, Andreia Miranda
2014-01-01
In this study, we show the effectiveness of a virtual environment comprising 18 computer games that cover mathematics topics in a playful setting and that can be executed on the Internet, with the possibility of player interaction through chat. An arithmetic pre-test contained in the Scholastic Performance Test (SPT) was administered to 300 children between 7 and 10 years old (162 males and 138 females) in the second grade of primary school. Twenty-six children whose scores showed a low level of mathematical knowledge were chosen and randomly divided into control (CG) and experimental (EG) groups. The EG participated in the virtual environment, whereas the CG received reinforcement using traditional teaching methods. Both groups then took a post-test in which the SPT was given again. A statistical analysis of the results using Student's t-test showed a significant learning improvement for the EG and no improvement for the CG (p≤0.05). The virtual environment allows the students to integrate thought, feeling and action, thus motivating the children to learn and contributing to their intellectual development. PMID:25068511
ERIC Educational Resources Information Center
National Educational Service, Bloomington, IN.
On June 8, 1992, the presidents of the nation's two largest teachers unions joined the directors and presidents of virtually every educational organization, as well as political leaders and executives from Ford, General Motors, and Chrysler in an effort to redesign U.S. schools using the quality principles of W. Edwards Deming. Panelists spent the…
Recommended Practices for Interactive Video Portability
1990-10-01
Commands are passed via an ASCII or binary application interface to the Virtual Device Interface (VDI) Management Software. VDI Management, in turn, executes the commands by calling appropriate low-level services and passes responses back to the application via the application interface.
Russian Defense Legislation and Russian Democracy,
1995-08-17
…system denoting a President who is virtually unencumbered by the division and separation of powers and by a system of checks and balances… separation of powers and is himself able to rule by decree. This trend to concentrate power in the President and in an unresponsive executive branch… enhanced activity of the President is legally sanctioned along with the concept of rule by decree, a renunciation of the separation of powers, exemption from…
Marine light attack helicopter close air support trainer for situation awareness
2017-06-01
…environmental elements outside the aircraft. The initial environment elements included in the trainer are those relating directly to CAS execution… ambient environmental elements. These elements were limited to the few items required to create a virtual environment. The terrain is simulated to… In today’s dynamic combat environment, the importance of Close Air Support (CAS) has increased significantly due to a greater need to avoid…
Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model.
Liu, Fang; Velikina, Julia V; Block, Walter F; Kijowski, Richard; Samsonov, Alexey A
2017-02-01
We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed ∼ 200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure.
Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model
Velikina, Julia V.; Block, Walter F.; Kijowski, Richard; Samsonov, Alexey A.
2017-01-01
We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed ∼200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure. PMID:28113746
David, Nicole; Skoruppa, Stefan; Gulberti, Alessandro
2016-01-01
The sense of agency describes the ability to experience oneself as the agent of one's own actions. Previous studies of the sense of agency manipulated the predicted sensory feedback related either to movement execution or to the movement’s outcome, for example by delaying the movement of a virtual hand or the onset of a tone that resulted from a button press. Such temporal sensorimotor discrepancies reduce the sense of agency. It remains unclear whether movement-related feedback is processed differently than outcome-related feedback in terms of agency experience, especially if these types of feedback differ with respect to sensory modality. We employed a mixed-reality setup, in which participants tracked their finger movements by means of a virtual hand. They performed a single tap, which elicited a sound. The temporal contingency between the participants’ finger movements and (i) the movement of the virtual hand or (ii) the expected auditory outcome was systematically varied. In a visual control experiment, the tap elicited a visual outcome. For each feedback type and participant, changes in the sense of agency were quantified using a forced-choice paradigm and the Method of Constant Stimuli. Participants were more sensitive to delays of outcome than to delays of movement execution. This effect was very similar for visual or auditory outcome delays. Our results indicate different contributions of movement- versus outcome-related sensory feedback to the sense of agency, irrespective of the modality of the outcome. We propose that this differential sensitivity reflects the behavioral importance of assessing authorship of the outcome of an action. PMID:27536948
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine
2014-08-01
We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute intensive, numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, giving the potential of making development, upgrade and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
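The benchmarked kernel is a matrix-vector multiply: actuator commands are obtained by multiplying a precomputed control matrix with the wavefront-sensor measurement vector. A minimal NumPy sketch of such a benchmark (illustrative only; the paper runs recompiled C/C++ on the Xeon Phi, and the problem sizes and names here are hypothetical) records per-iteration times so frame-to-frame jitter, not just the mean, can be inspected:

```python
import time
import numpy as np

def reconstruct(control_matrix, slopes):
    """One AO control step: actuator commands = M @ s."""
    return control_matrix @ slopes

def benchmark_mvm(n_acts=5000, n_slopes=10000, iters=20):
    """Time repeated MVMs on random single-precision data and return the
    list of per-iteration execution times (for jitter analysis)."""
    rng = np.random.default_rng(0)
    M = rng.standard_normal((n_acts, n_slopes)).astype(np.float32)
    s = rng.standard_normal(n_slopes).astype(np.float32)
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        reconstruct(M, s)
        times.append(time.perf_counter() - t0)
    return times
```

On an accelerator, the same kernel would be dispatched to offloaded or natively compiled code; the appeal of the Xeon Phi reported here is that such C/C++ host code recompiles for it virtually unchanged.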
Cell-Division Behavior in a Heterogeneous Swarm Environment.
Erskine, Adam; Herrmann, J Michael
2015-01-01
We present a system of virtual particles that interact using simple kinetic rules. It is known that heterogeneous mixtures of particles can produce particularly interesting behaviors. Here we present a two-species three-dimensional swarm in which a behavior emerges that resembles cell division. We show that the dividing behavior exists across a narrow but finite band of parameters and for a wide range of population sizes. When executed in a two-dimensional environment the swarm's characteristics and dynamism manifest differently. In further experiments we show that repeated divisions can occur if the system is extended by a biased equilibrium process to control the split of populations. We propose that this repeated division behavior provides a simple model for cell-division mechanisms and is of interest for the formation of morphological structure and to swarm robotics.
Monteiro-Junior, Renato Sobral; da Silva Figueiredo, Luiz Felipe; Maciel-Pinheiro, Paulo de Tarso; Abud, Erick Lohan Rodrigues; Braga, Ana Elisa Mendes Montalvão; Barca, Maria Lage; Engedal, Knut; Nascimento, Osvaldo José M; Deslandes, Andrea Camaz; Laks, Jerson
2017-06-01
Improvements in balance, gait and cognition are some of the benefits of exergames, yet few studies have investigated the cognitive effects of exergames in institutionalized older persons. To assess the acute effect of a single session of exergames on cognition of institutionalized older persons, nineteen institutionalized older persons were randomly allocated to a Wii group (WG, n = 10, 86 ± 7 years, two males) or a control group (CG, n = 9, 86 ± 5 years, one male). The WG performed six exercises with virtual reality, whereas the CG performed six exercises without virtual reality. The verbal fluency test (VFT), digit span forward and digit span backward were used to evaluate semantic memory/executive function, short-term memory and working memory, respectively, before and after the session, and Δ post- to pre-session (absolute) and Δ % (relative) values were calculated. Parametric (independent t-test) and nonparametric (Mann-Whitney test) statistics and effect sizes were applied to test for efficacy. The VFT change was statistically significant within the WG (-3.07, df = 9, p = 0.013). We found no statistically significant differences between the two groups (p > 0.05). The between-group effect size of Δ % (median = 21 %) was moderate for the WG (0.63). Our data show moderate improvement of semantic memory/executive function due to the exergames session. It is possible that cognitive brain areas are activated during exergames, increasing clinical response. A single session of exergames showed no significant improvement in short-term memory, working memory and semantic memory/executive function. The effect size for verbal fluency was promising, and future studies on this issue should be developed. RBR-6rytw2.
Virtual reality for intelligent and interactive operating, training, and visualization systems
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Schluse, Michael
2000-10-01
Virtual reality methods allow a new and intuitive way of communication between man and machine. The basic idea of virtual reality (VR) is the generation of artificial, computer-simulated worlds which the user can not only look at but also actively interact with, using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the virtual reality system and, by means of new and intelligent control software, projected onto the automation components, such as robots, which then perform the actions necessary to execute the user's task in reality. In this operation mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual reality methods are thus ideally suited as universal man-machine interfaces for the control and supervision of a large class of automation components, and for interactive training and visualization systems. The virtual reality system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL and GETEX, the realization of a comprehensive development tool for the International Space Station and, last but not least, the realistic simulation of fire extinguishing, forest machines and excavators, which are presented in the final paper in addition to the key ideas of this virtual reality system.
The Virtual Reference Librarian's Handbook.
ERIC Educational Resources Information Center
Lipow, Anne Grodzins
This book is a practical guide to librarians and their administrators who are thinking about or in the early stages of providing virtual reference service. Part 1, "The Decision to Go Virtual," provides a context for thinking about virtual reference, including the benefits and problems, getting in the virtual frame of mind, and shopping…
Clinical perspective: creating an effective practice peer review process-a primer.
Gandhi, Manisha; Louis, Frances S; Wilson, Shae H; Clark, Steven L
2017-03-01
Peer review serves as an important adjunct to other hospital quality and safety programs. Despite its importance, the available literature contains virtually no guidance regarding the structure and function of effective peer review committees. This Clinical Perspective provides a summary of the purposes, structure, and functioning of effective peer review committees. We also discuss important legal considerations that are a necessary component of such processes. This discussion includes useful templates for case selection and review. Proper committee structure, membership, work flow, and leadership as well as close cooperation with the hospital medical executive committee and legal representatives are essential to any effective peer review process. A thoughtful, fair, systematic, and organized approach to creating a peer review process will lead to confidence in the committee by providers, hospital leadership, and patients. If properly constructed, such committees may also assist in monitoring and enforcing compliance with departmental protocols, thus reducing harm and promoting high-quality practice. Copyright © 2016 Elsevier Inc. All rights reserved.
Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.
List, Markus
2017-06-10
Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer; consequently, containers simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach with the example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.
An intact action-perception coupling depends on the integrity of the cerebellum.
Christensen, Andrea; Giese, Martin A; Sultan, Fahad; Mueller, Oliver M; Goericke, Sophia L; Ilg, Winfried; Timmann, Dagmar
2014-05-07
It is widely accepted that action and perception in humans functionally interact on multiple levels. Moreover, areas originally suggested to be predominantly motor-related, such as the cerebellum, are also involved in action observation. However, as yet, few studies have provided unequivocal evidence that the cerebellum is involved in action-perception coupling (APC), specifically in the integration of motor and multisensory information for perception. We addressed this question by studying patients with focal cerebellar lesions in a virtual-reality paradigm measuring the effect of action execution on action perception, presenting self-generated movements as point lights. We measured the visual sensitivity to the point-light stimuli based on signal detection theory. Compared with healthy controls, cerebellar patients showed no beneficial influence of action execution on perception, indicating deficits in APC. Applying lesion symptom mapping, we identified distinct areas in the dentate nucleus and the lateral cerebellum of both hemispheres that are causally involved in APC. Lesions of the right ventral dentate, the ipsilateral motor representations (lobules V/VI) and, most interestingly, the contralateral posterior cerebellum (lobule VII) impede the benefits of motor execution on perception. We conclude that the cerebellum establishes time-dependent multisensory representations on different levels, relevant for motor control as well as supporting action perception. Ipsilateral cerebellar motor representations are thought to support the somatosensory state estimate of ongoing movements, whereas the ventral dentate and the contralateral posterior cerebellum likely support sensorimotor integration in the cerebellar-parietal loops. Both the correct somatosensory and the multisensory state representations are vital for an intact APC.
NASA Technical Reports Server (NTRS)
Hanold, Gregg T.; Hanold, David T.
2010-01-01
This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament(Trademark) 2004 (UT2004) game engine to provide the simulation environment, in which the routes taken by the human player can be compared with those of a synthetic agent (BOT) executing either the A-star algorithm or the new Route Generation Algorithm. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to proceed continuously throughout game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge and on stored dynamic BOT, player and environmental parameters returned by the query. The partial data query permits the dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm allows the BOT to mimic more accurately the routes taken by a human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.
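The baseline against which the new algorithm is compared, A-star search, can be sketched in its generic grid form as follows (a textbook version in Python, not the paper's implementation):

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid (0 = free, 1 = blocked).
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_heap = [(heuristic(start), start)]   # entries are (f-score, cell)
    g = {start: 0}                            # best known cost-so-far
    came_from = {start: None}
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:                      # reconstruct path by backtracking
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g[node] + 1
                if new_g < g.get(nbr, float("inf")):
                    g[nbr] = new_g
                    came_from[nbr] = node
                    heapq.heappush(open_heap, (new_g + heuristic(nbr), nbr))
    return None
```

Unlike this complete-knowledge version, the paper's Route Generation Algorithm replans from partial terrain knowledge as new spatial query results arrive during game play.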
Leinen, Philipp; Green, Matthew F B; Esat, Taner; Wagner, Christian; Tautz, F Stefan; Temirov, Ruslan
2015-01-01
Controlled manipulation of single molecules is an important step towards the fabrication of single molecule devices and nanoscale molecular machines. Currently, scanning probe microscopy (SPM) is the only technique that facilitates direct imaging and manipulations of nanometer-sized molecular compounds on surfaces. The technique of hand-controlled manipulation (HCM) introduced recently in Beilstein J. Nanotechnol. 2014, 5, 1926-1932 simplifies the identification of successful manipulation protocols in situations when the interaction pattern of the manipulated molecule with its environment is not fully known. Here we present a further technical development that substantially improves the effectiveness of HCM. By adding Oculus Rift virtual reality goggles to our HCM set-up we provide the experimentalist with 3D visual feedback that displays the currently executed trajectory and the position of the SPM tip during manipulation in real time, while simultaneously plotting the experimentally measured frequency shift (Δf) of the non-contact atomic force microscope (NC-AFM) tuning fork sensor as well as the magnitude of the electric current (I) flowing between the tip and the surface. The advantages of the set-up are demonstrated by applying it to the model problem of the extraction of an individual PTCDA molecule from its hydrogen-bonded monolayer grown on Ag(111) surface.
Network Hardware Virtualization for Application Provisioning in Core Networks
Gumaste, Ashwin; Das, Tamal; Khandwala, Kandarp; ...
2017-02-03
Service providers and vendors are moving toward a network-virtualized core, whereby multiple applications are treated on their own merit in programmable hardware. Such a network has the advantage of being customized for user requirements and allows provisioning of next-generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications onto technology that is supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. Finally, a simulation study validates our NV-induced model.
Network Hardware Virtualization for Application Provisioning in Core Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gumaste, Ashwin; Das, Tamal; Khandwala, Kandarp
Service providers and vendors are moving toward a network-virtualized core, whereby multiple applications are treated on their own merit in programmable hardware. Such a network has the advantage of being customized for user requirements and allows provisioning of next-generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications onto technology that is supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. Finally, a simulation study validates our NV-induced model.
Generating Contextual Descriptions of Virtual Reality (VR) Spaces
NASA Astrophysics Data System (ADS)
Olson, D. M.; Zaman, C. H.; Sutherland, A.
2017-12-01
Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.
In Silico Simulation of a Clinical Trial Concerning Tumour Response to Radiotherapy
NASA Astrophysics Data System (ADS)
Dionysiou, Dimitra D.; Stamatakos, Georgios S.; Athanaileas, Theodoras E.; Merrychtas, Andreas; Kaklamani, Dimitra; Varvarigou, Theodora; Uzunoglu, Nikolaos
2008-11-01
The aim of this paper is to demonstrate how multilevel tumour growth and response to therapeutic treatment models can be used in order to simulate clinical trials, with the long-term intention of both better designing clinical studies and understanding their outcome based on basic biological science. For this purpose, an already developed computer simulation model of glioblastoma multiforme response to radiotherapy has been used and a clinical study concerning glioblastoma multiforme response to radiotherapy has been simulated. In order to facilitate the simulation of such virtual trials, a toolkit enabling the user-friendly execution of the simulations on grid infrastructures has been designed and developed. The results of the conducted virtual trial are in agreement with the outcome of the real clinical study.
Effects of virtual reality training on mobility in individuals with Parkinson's disease.
Melo, G; Kleiner, A F R; Lopes, J; Zen, G Z D; Marson, N; Santos, T; Dumont, A; Galli, M; Oliveira, C
2018-06-19
The aim of the present study was to evaluate the effects of gait training with virtual reality (VR) on mobility in patients with Parkinson's disease (PD). Thirty-seven individuals with PD were allocated to three groups (control = 12, VR = 12 and treadmill = 13), each undergoing 12 twenty-minute training sessions. Evaluations involved the Timed Up and Go (TUG) test before the intervention, after one session, after all 12 sessions and 30 days after the end of the intervention. The groups that underwent VR and treadmill training took less time to execute the TUG test than the control group. Individuals with PD who underwent VR and treadmill gait training presented mobility improvements in comparison to traditional physiotherapeutic training. Copyright © 2018. Published by Elsevier B.V.
Cox, Daniel J; Brown, Timothy; Ross, Veerle; Moncrief, Matthew; Schmitt, Rose; Gaffney, Gary; Reeve, Ron
2017-08-01
Investigate how novice drivers with autism spectrum disorder (ASD) differ from experienced drivers and whether virtual reality driving simulation training (VRDST) improves ASD driving performance. 51 novice ASD drivers (mean age 17.96 years, 78% male) were randomized to routine training (RT) or one of three types of VRDST (8-12 sessions). All participants followed DMV behind-the-wheel training guidelines for earning a driver's license. Participants were assessed pre- and post-training for driving-specific executive function (EF) abilities and tactical driving skills. ASD drivers showed worse baseline EF and driving skills than experienced drivers. At post-assessment, VRDST significantly improved driving and EF performance over RT. This study demonstrated feasibility and potential efficacy of VRDST for novice ASD drivers.
CV-Muzar - The Virtual Community Environment that Uses Multiagent Systems for Formation of Groups
NASA Astrophysics Data System (ADS)
de Marchi, Ana Carolina Bertoletti; Moraes, Márcia Cristina
The purpose of this chapter is to present two agent societies responsible for group formation (sub-communities) in CV-Muzar (Augusto Ruschi Zoobotanical Museum Virtual Community of the University of Passo Fundo). These societies are integrated to execute a data-mining classification process. The first is a static society that preprocesses data, investigating the information about groups in the CV-Muzar. The second is a dynamic society that performs the classification itself, analyzing the existing groups and looking for participants with common subjects in order to constitute a sub-community. The formation of sub-communities is a new functionality within the CV-Muzar, intended to bring participants together along two dimensions: interest similarity and knowledge complementarity.
NASA Astrophysics Data System (ADS)
Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry
2006-12-01
We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.
[Virtual surgical education: experience with medicine and surgery students].
Bonavina, Luigi; Mozzi, Enrico; Peracchia, Alberto
2003-01-01
The use of virtual reality simulation is currently being proposed within programs of postgraduate surgical education. The simple tasks that make up an operative procedure can be repeatedly performed until satisfactory execution is achieved, and the errors can be corrected by means of objective assessment. The aim of this study was to evaluate the applicability and the results of structured practice with the LapSim laparoscopic simulator used by undergraduate medical students. A significant reduction in operative time and errors was noted in several tasks (navigation, clipping, etc.). Although the transfer of technical skills to the operating room environment remains to be demonstrated, our research shows that this type of teaching is applicable to undergraduate medical students and in future may become a useful tool for selecting individuals for surgical residency programs.
NASA Technical Reports Server (NTRS)
Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.
1994-01-01
Expanding man's presence in space requires capable, dexterous robots that can be controlled from the Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications. These delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control architecture. Object-Based Task-Level Control (OBTLC) removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control in which the human operator is able to specify high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operations, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.
NASA Astrophysics Data System (ADS)
Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati
2012-01-01
Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the paradigm of virtualization; using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional cost and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage the VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.
PiCO QL: A software library for runtime interactive queries on program data
NASA Astrophysics Data System (ADS)
Fragkoulis, Marios; Spinellis, Diomidis; Louridas, Panos
PiCO QL is an open-source C/C++ software library whose scientific scope is real-time interactive analysis of in-memory data through SQL queries. It exposes a relational view of a system's or application's data structures, which is queryable through SQL. While the application or system is executing, users can input queries through a web-based interface or issue web service requests. Queries execute on the live data structures through the respective relational views. PiCO QL is a good candidate for ad-hoc data analysis in applications and for diagnostics in systems settings. Applications of PiCO QL include the Linux kernel, the Valgrind instrumentation framework, a GIS application, a virtual real-time observatory of stellar objects, and a source code analyser.
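The idea of querying a program's in-memory structures relationally can be sketched in Python with the standard sqlite3 module. This is a snapshot-based analogy, not PiCO QL's actual API (PiCO QL maps C/C++ structures to relational views without this copy step); the structure and column names here are invented for illustration.

```python
import sqlite3

# Hypothetical live application state: a list of task records.
tasks = [
    {"id": 1, "name": "render", "cpu_ms": 12},
    {"id": 2, "name": "physics", "cpu_ms": 48},
    {"id": 3, "name": "audio", "cpu_ms": 5},
]

def query_live(sql):
    """Snapshot the live structure into an in-memory table and run SQL on it."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE tasks (id INTEGER, name TEXT, cpu_ms INTEGER)")
    con.executemany("INSERT INTO tasks VALUES (?, ?, ?)",
                    [(t["id"], t["name"], t["cpu_ms"]) for t in tasks])
    rows = con.execute(sql).fetchall()
    con.close()
    return rows

# An ad-hoc diagnostic query over the relational view of the live data.
hot = query_live("SELECT name FROM tasks WHERE cpu_ms > 10 ORDER BY cpu_ms DESC")
print(hot)  # [('physics',), ('render',)]
```

The benefit PiCO QL adds over this sketch is that queries run against the live structures themselves, so no snapshotting or schema duplication is needed in the instrumented program.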
Latency and User Performance in Virtual Environments and Augmented Reality
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2009-01-01
System rendering latency has been recognized by senior researchers, such as Professor Frederick Brooks of UNC (Turing Award 1999), as a major factor limiting the realism and utility of head-referenced display systems. Latency has been shown to reduce the user's sense of immersion within a virtual environment, disturb user interaction with virtual objects, and contribute to motion sickness during some simulation tasks. Latency, however, is not just an issue for external display systems, since finite nerve conduction rates and variation in transduction times in the human body's sensors also pose problems for latency management within the nervous system. Some of the phenomena arising from the brain's handling of sensory asynchrony due to latency will be discussed as a prelude to consideration of the effects of latency in interactive displays. The causes and consequences of the erroneous movement that appears in displays due to latency will be illustrated with examples of the user performance impact provided by several experiments. These experiments will review the generality of user sensitivity to latency when users judge either object or environment stability. Hardware and signal-processing countermeasures will also be discussed. In particular, the tuning of a simple extrapolative predictive filter not using a dynamic movement model will be presented. Results show that it is possible to adjust this filter so that the appearance of some latencies may be hidden without the introduction of perceptual artifacts such as overshoot. Several examples of the effects on user performance will be illustrated by three-dimensional tracking and tracing tasks executed in virtual environments. These experiments demonstrate classic phenomena known from work on manual control and show the need for very responsive systems if they are intended to support precise manipulation.
The practical benefits of removing interfering latencies from interactive systems will be emphasized with some classic final examples from surgical telerobotics, and human-computer interaction.
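The "simple extrapolative predictive filter not using a dynamic movement model" mentioned above can be sketched as first-order extrapolation with a tunable gain. The function and parameter names, and the damping gain, are assumptions for illustration only, not the filter actually used in the cited experiments.

```python
def predict(samples, latency, gain=1.0):
    """Extrapolate the displayed position 'latency' seconds ahead from the
    last two (time, position) samples -- no dynamic movement model.
    A gain below 1 damps the extrapolation, trading residual lag for less
    overshoot on abrupt reversals (the artifact tuning must avoid)."""
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + gain * velocity * latency

# Head moving at a steady 1 m/s, sampled every 10 ms, compensating 50 ms lag:
trace = [(0.00, 0.00), (0.01, 0.01)]
print(round(predict(trace, latency=0.05), 3))  # 0.06
```

For constant-velocity motion the prediction is exact; tuning matters precisely when the motion reverses, which is where overshoot would otherwise appear.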
A virtual shopping test for realistic assessment of cognitive function
2013-01-01
Background Cognitive dysfunction caused by brain injury often prevents a patient from achieving a healthy and high quality of life. By now, each cognitive function is assessed precisely by neuropsychological tests. However, it is also important to provide an overall assessment of the patients’ ability in their everyday life. We have developed a Virtual Shopping Test (VST) using virtual reality technology. The objective of this study was to clarify 1) the significance of VST by comparing VST with other conventional tests, 2) the applicability of VST to brain-damaged patients, and 3) the performance of VST in relation to age differences. Methods The participants included 10 patients with brain damage, 10 age-matched healthy subjects for controls, 10 old healthy subjects, and 10 young healthy subjects. VST and neuropsychological tests/questionnaires about attention, memory and executive function were conducted on the patients, while VST and the Mini-Mental State Examination (MMSE) were conducted on the controls and healthy subjects. Within the VST, the participants were asked to buy four items in the virtual shopping mall quickly in a rational way. The score for evaluation included the number of items bought correctly, the number of times to refer to hints, the number of movements between shops, and the total time spent to complete the shopping. Results Some variables on VST correlated with the scores of conventional assessment about attention and everyday memory. The mean number of times referring to hints and the mean number of movements were significantly larger for the patients with brain damage, and the mean total time was significantly longer for the patients than for the controls. In addition, the mean total time was significantly longer for the old than for the young. Conclusions The results suggest that VST is able to evaluate the ability of attention and everyday memory in patients with brain damage. The time of VST is increased by age. PMID:23777412
A virtual shopping test for realistic assessment of cognitive function.
Okahashi, Sayaka; Seki, Keiko; Nagano, Akinori; Luo, Zhiwei; Kojima, Maki; Futaki, Toshiko
2013-06-18
Scardovelli, Terigi Augusto; Frère, Annie France
2015-01-01
Many children with motor impairments cannot participate in the games and play activities that contribute to their development. Among current commercial computer games there are few software options and sufficiently flexible access devices to meet the needs of this group of children. In this study, a peripheral access device and a 3D computerized game that do not require the actions of dragging, clicking, or activating various keys at the same time were developed. The peripheral access device consists of a webcam and a supervisory system that processes the images. This method provides a field of action that can be adjusted to various types of motor impairments. To analyze the sensitivity of the commands, a virtual course was developed using the scenario of a path of straight lines and curves. A volunteer with good ability in virtual games performed a short training with the virtual course and, after 15 min of training, obtained similar results with a standard keyboard and the adapted peripheral device. A 3D game set in the Amazon forest was developed using the Blender 3D tool. This free software was used to model the characters and scenarios. To evaluate the usability of the 3D game, the game was tested by 20 volunteers without motor impairments (group A) and 13 volunteers with severe motor limitations of the upper limbs (group B). All the volunteers (groups A and B) could easily execute all the actions of the game using the adapted peripheral device. The majority positively evaluated the questions of usability and expressed their satisfaction. The computerized game coupled to the adapted device will offer the option of leisure and learning to people with severe motor impairments who previously lacked this possibility. It also provides equal access to this activity for all users. Copyright © 2014. Published by Elsevier Ireland Ltd.
Durisko, Corrine; McCue, Michael; Doyle, Patrick J.; Dickey, Michael Walsh
2016-01-01
Abstract Background: Neuropsychological testing is a central aspect of stroke research because it provides critical information about the cognitive-behavioral status of stroke survivors, as well as the diagnosis and treatment of stroke-related disorders. Standard neuropsychological methods rely upon face-to-face interactions between a patient and researcher, which creates geographic and logistical barriers that impede research progress and treatment advances. Introduction: To overcome these barriers, we created a flexible and integrated system for the remote acquisition of neuropsychological data (RAND). The system we developed has a secure architecture that permits collaborative videoconferencing. The system supports shared audiovisual feeds that can provide continuous virtual interaction between a participant and researcher throughout a testing session. Shared presentation and computing controls can be used to deliver auditory and visual test items adapted from standard face-to-face materials or execute computer-based assessments. Spoken and manual responses can be acquired, and the components of the session can be recorded for offline data analysis. Materials and Methods: To evaluate its feasibility, our RAND system was used to administer a speech-language test battery to 16 stroke survivors with a variety of communication, sensory, and motor impairments. The sessions were initiated virtually without prior face-to-face instruction in the RAND technology or test battery. Results: Neuropsychological data were successfully acquired from all participants, including those with limited technology experience, and those with a communication, sensory, or motor impairment. Furthermore, participants indicated a high level of satisfaction with the RAND system and the remote assessment that it permits. 
Conclusions: The results indicate the feasibility of using the RAND system for virtual home-based neuropsychological assessment without prior face-to-face contact between a participant and researcher. Because our RAND system architecture uses off-the-shelf technology and software, it can be duplicated without specialized expertise or equipment. In sum, our RAND system offers a readily available and promising alternative to face-to-face neuropsychological assessment in stroke research. PMID:27214198
Electronic-projecting Moire method applying CBR-technology
NASA Astrophysics Data System (ADS)
Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.
2018-01-01
An electronic-projecting method based on the Moire effect for examining surface topology is suggested. The conditions for forming Moire fringes, and the dependence of the fringe parameters on the reference parameters of the object and virtual grids, are analyzed. The control system structure and decision-making subsystem are elaborated. The subsystem employs CBR (case-based reasoning) technology built on a case base. The approach analyzes and forms a decision for each separate local area, with subsequent formation of a common topology map.
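For the fringe-parameter dependence analyzed above, the classical first-order result for two superposed parallel line grids is that the fringe period grows as the two pitches approach each other. A minimal sketch of that standard relation (parallel grids assumed; the abstract's actual parameter analysis is more general):

```python
def moire_period(p_object, p_virtual):
    """Fringe period of two superposed parallel line grids with pitches
    p_object and p_virtual (same units): p1 * p2 / |p1 - p2|.
    The period diverges as the pitches become equal."""
    if p_object == p_virtual:
        raise ValueError("equal pitches produce no finite fringe period")
    return p_object * p_virtual / abs(p_object - p_virtual)

# A 10% pitch mismatch magnifies the pattern roughly elevenfold:
print(round(moire_period(1.0, 1.1), 2))  # 11.0
```

This magnification is what makes Moire methods sensitive to small surface-height variations.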
A Medical Area Network of Virtual Technology (MANVT)
2011-10-01
translational research projects be captured, undergo quality control and are stored/managed in such a way that they can be mined to test and generate new...translational research is that access to detailed clinical data typically requires proper informed consent and IRB approval; however, in order to design ...attributes of interest at an early stage of the process. i2b2 overcomes these challenges by enabling researchers to design and execute queries
2016-01-01
issues comes from the Fukushima Daiichi nuclear disaster (2011). The local medical health professional on staff at the U.S. embassy in Tokyo was not...distribution unlimited. EXECUTIVE SUMMARY An improvised nuclear device (IND...from the phase one analysis are as follows: • There is strong consistency in both the key decisions and underlying skills emphasized by emergency
1988-02-29
by memory copying will degrade system performance on shared-memory multiprocessors. Virtual memory (VM) remapping, as opposed to memory copying...Bershad, G.D. Giuseppe Facchetti, Kevin Fall, G. Scott Graham, Ellen Nelson, P. Venkat Rangan, Bruno Sartirana, Shin-Yuan Tzou, Raj Vaswani, and Robert...Remote Execution in NEST", IEEE Trans. on Software Eng. 13, 8 (August 1987), 905-912. 3. G. T. Almes, A. P. Black, E. Lazowska and J. Noe, "The Eden
NASA Technical Reports Server (NTRS)
1992-01-01
A major innovation of the Civil Service Reform Act of 1978 was the creation of a Senior Executive Service (SES). The purpose of the SES is both simple and bold: to attract executives of the highest quality into Federal service and to retain them by providing outstanding opportunities for career growth and reward. The SES is intended to: provide greater authority in managing executive resources; attract and retain highly competent executives, and assign them where they will effectively accomplish their missions and best use their talents; provide for systematic development of executives; hold executives accountable for individual and organizational performance; reward outstanding performers and remove poor performers; and provide for an executive merit system free of inappropriate personnel practices and arbitrary actions. This Handbook summarizes the key features of the SES at NASA. It is intended as a special welcome to new appointees and also as a general reference document. It contains an overview of SES management at NASA, including the Executive Resources Board and the Performance Review Board, which are mandated by law to carry out key SES functions. In addition, assistance is provided by a Senior Executive Committee in certain reviews and decisions and by Executive Position Managers in day-to-day administration and oversight.
Application driven interface generation for EASIE. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kao, Ya-Chen
1992-01-01
The Environment for Application Software Integration and Execution (EASIE) provides a user interface and a set of utility programs which support the rapid integration and execution of analysis programs about a central relational database. EASIE provides users with two basic modes of execution. One of them is a menu-driven execution mode, called Application-Driven Execution (ADE), which provides sufficient guidance to review data, select a menu action item, and execute an application program. The other mode of execution, called Complete Control Execution (CCE), provides an extended executive interface which allows in-depth control of the design process. Currently, the EASIE system is based on alphanumeric techniques only. It is the purpose of this project to extend the flexibility of the EASIE system in the ADE mode by implementing it in a window system. Secondly, a set of utilities will be developed to assist the experienced engineer in the generation of an ADE application.
Virtual gaming simulation of a mental health assessment: A usability study.
Verkuyl, Margaret; Romaniuk, Daria; Mastrilli, Paula
2018-05-18
Providing safe and realistic virtual simulations could be an effective way to facilitate the transition from the classroom to clinical practice. As nursing programs begin to include virtual simulations as a learning strategy, it is critical to first assess the technology for ease of use and usefulness. A virtual gaming simulation was developed, and a usability study was conducted to assess its ease of use and usefulness for students and faculty. The Technology Acceptance Model provided the framework for the study, which included expert review and testing by nursing faculty and nursing students. This study highlighted the importance of assessing ease of use and usefulness in a virtual gaming simulation and provided feedback for the development of an effective virtual gaming simulation. The study participants said the virtual gaming simulation was engaging, realistic and similar to a clinical experience. Participants found the game easy to use and useful. Testing provided the development team with ideas to improve the user interface. The usability methodology described here is a replicable approach to testing virtual experiences before a research study or before implementing them into a curriculum. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sample Analysis at Mars Instrument Simulator
NASA Technical Reports Server (NTRS)
Benna, Mehdi; Nolan, Tom
2013-01-01
The Sample Analysis at Mars Instrument Simulator (SAMSIM) is a numerical model dedicated to planning and validating operations of the Sample Analysis at Mars (SAM) instrument on the surface of Mars. The SAM instrument suite, currently operating on the Mars Science Laboratory (MSL), is an analytical laboratory designed to investigate the chemical and isotopic composition of the atmosphere and of volatiles extracted from solid samples. SAMSIM was developed using the Matlab and Simulink libraries of MathWorks Inc. to provide MSL mission planners with accurate predictions of the instrument's electrical, thermal, mechanical, and fluid responses to scripted commands. This tool is a first example of multi-purpose, full-scale numerical modeling of a flight instrument with the purpose of supplementing, or even eliminating entirely, the need for a hardware engineering model during instrument development and operation. SAMSIM simulates the complex interactions that occur between the instrument Command and Data Handling unit (C&DH) and all subsystems during the execution of experiment sequences. A typical SAM experiment takes many hours to complete and involves hundreds of components. During the simulation, the electrical, mechanical, thermal, and gas dynamics states of each hardware component are accurately modeled and propagated within the simulation environment at faster than real time. This allows the simulation, in just a few minutes, of experiment sequences that take many hours to execute on the real instrument. The SAMSIM model is divided into five distinct but interacting modules: software, mechanical, thermal, gas flow, and electrical. The software module simulates the instrument C&DH by executing a customized version of the instrument flight software in a Matlab environment. The inputs and outputs of this synthetic C&DH are mapped to virtual sensors and command lines that mimic, in their structure and connectivity, the layout of the instrument harnesses.
This module executes, and thus validates, complex command scripts prior to their up-linking to the SAM instrument. As an output, this module generates synthetic data and message logs at a rate that is similar to the actual instrument.
Virtual manufacturing work cell for engineering
NASA Astrophysics Data System (ADS)
Watanabe, Hideo; Ohashi, Kazushi; Takahashi, Nobuyuki; Kato, Kiyotaka; Fujita, Satoru
1997-12-01
The life cycles of products have been getting shorter. To keep up with this rapid turnover, manufacturing systems must be changed frequently as well. Engineering a manufacturing system involves several tasks, such as process planning, layout design, programming, and final testing using actual machines. This development of manufacturing systems takes a long time and is expensive. To aid this engineering process, we have developed the virtual manufacturing workcell (VMW). This paper describes the concept of the VMW and a design method, computer-aided manufacturing engineering using the VMW (CAME-VMW), covering the engineering tasks above. The VMW holds all design data and reproduces the behavior of equipment and devices using a simulator. The simulator has both logical and physical functionality: the former simulates sequence control, and the latter simulates motion control and shape movement in 3D space. The simulator can execute the same control software written for the actual machines, so behavior can be verified precisely before the manufacturing workcell is constructed. The VMW creates an engineering workspace for several engineers and offers debugging tools such as virtual equipment and virtual controllers. We applied the VMW to the development of a transfer workcell for a vaporization machine in an actual manufacturing system producing plasma display panels (PDPs) and confirmed its effectiveness.
Parallel-distributed mobile robot simulator
NASA Astrophysics Data System (ADS)
Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo
1996-06-01
The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation be time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.
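The compare-and-improve loop described above (simulate, observe the real outcome, adjust the virtual environment toward it) can be caricatured as a simple calibration step. The update rule, rate, and quantities below are illustrative assumptions, not the simulator's actual learning mechanism.

```python
def refine(sim_param, simulated, observed, rate=0.5):
    """One learning-and-growth step: nudge a simulator parameter so the
    virtual outcome tracks the real-world outcome more closely."""
    error = observed - simulated
    return sim_param + rate * error

# E.g. the simulator predicts the robot travels 0.9 m per command while the
# real robot moves 1.0 m; repeated interaction shrinks the mismatch:
p = 0.9
for _ in range(5):
    p = refine(p, simulated=p, observed=1.0)
print(round(p, 3))  # 0.997
```

Each real-world interaction supplies one observation, so the virtual space converges toward faithfulness as the robot operates.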
Cyber-Physical System Security With Deceptive Virtual Hosts for Industrial Control Networks
Vollmer, Todd; Manic, Milos
2014-05-01
A challenge facing industrial control network administrators is protecting the typically large number of connected assets for which they are responsible. These cyber devices may be tightly coupled with the physical processes they control, and human-induced failures risk dire real-world consequences. Dynamic virtual honeypots are effective tools for observing and attracting network intruder activity. This paper presents a design and implementation for self-configuring honeypots that passively examine control system network traffic and actively adapt to the observed environment. In contrast to prior work in the field, six tools were analyzed for suitability of network entity information gathering. Ettercap, an established network security tool not commonly used in this capacity, outperformed the other tools and was chosen for implementation. Utilizing Ettercap XML output, a novel four-step algorithm was developed for autonomous creation and update of a Honeyd configuration. This algorithm was tested on an existing small campus grid and sensor network by execution of a collaborative usage scenario. Automatically created virtual hosts were deployed in concert with an anomaly behavior (AB) system in an attack scenario. Virtual hosts were automatically configured with unique emulated network stack behaviors for 92% of the targeted devices. The AB system alerted on 100% of the monitored emulated devices.
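The gist of the approach, turning scanned host facts into a Honeyd-style configuration, can be sketched as follows. This is illustrative only: the XML element and attribute names below are invented for the example and are not Ettercap's actual output schema, and the emitted directives only approximate Honeyd's configuration language.

```python
# Sketch: generate Honeyd-style host templates from a (hypothetical)
# XML summary of scanned network entities.
import xml.etree.ElementTree as ET

SCAN_XML = """
<hosts>
  <host ip="10.0.0.5" os="Linux 2.4.x"/>
  <host ip="10.0.0.7" os="Microsoft Windows XP"/>
</hosts>
"""

def honeyd_config(xml_text):
    """Emit one create/personality/bind triple per discovered host."""
    lines = []
    for i, host in enumerate(ET.fromstring(xml_text).findall("host")):
        name, ip, os_name = f"host{i}", host.get("ip"), host.get("os")
        lines += [f"create {name}",
                  f'set {name} personality "{os_name}"',
                  f"bind {ip} {name}"]
    return "\n".join(lines)

config = honeyd_config(SCAN_XML)
```

The real four-step algorithm also updates an existing configuration incrementally as traffic is observed, rather than regenerating it from scratch.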
Taillade, Mathieu; N'Kaoua, Bernard; Sauzéon, Hélène
2016-01-01
The present study investigated the effect of aging on direct navigation measures and self-reported ones according to the real-virtual test manipulation. Navigation (wayfinding tasks) and spatial memory (paper-pencil tasks) performances, obtained either in real-world or in virtual-laboratory test conditions, were compared between young (n = 32) and older (n = 32) adults who had self-rated their everyday navigation behavior (SBSOD scale). Real age-related differences were observed in navigation tasks as well as in paper-pencil tasks, which investigated spatial learning relative to the distinction between survey-route knowledge. The manipulation of test conditions (real vs. virtual) did not change these age-related differences, which are mostly explained by age-related decline in both spatial abilities and executive functioning (measured with neuropsychological tests). In contrast, elderly adults did not differ from young adults in their self-reporting relative to everyday navigation, suggesting some underestimation of navigation difficulties by elderly adults. Also, spatial abilities in young participants had a mediating effect on the relations between actual and self-reported navigation performance, but not for older participants. So, it is assumed that the older adults carried out the navigation task with fewer available spatial abilities compared to young adults, resulting in inaccurate self-estimates. PMID:26834666
Flexible workflow sharing and execution services for e-scientists
NASA Astrophysics Data System (ADS)
Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely
2013-04-01
The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data- and/or compute-intensive applications on Distributed Computing Infrastructures (DCIs) recently became standard tools in e-science. At the same time, the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows is still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation, with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides, based on the platform, to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: a database where workflows and metadata about workflows can be stored; the database is a central repository for discovering and sharing workflows within and among communities. 2. SHIWA Portal: a web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: a desktop environment that provides access capabilities similar to those of the SHIWA Portal, but runs on the user's desktop/laptop instead of a portal server. 4. Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal; other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. Via third-party workflow engines, the Portal supports the most widely used academic workflow engines and can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses them to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.
ERIC Educational Resources Information Center
Argan, Mehpare Tokay; Argan, Metin; Suher, Idil K.
2011-01-01
As in all other areas, virtual communities are making their presence felt in healthcare. Virtual communities play an important role in healthcare in terms of gathering information on healthcare, sharing personal interests, and providing social support. Virtual communities provide a way for a group of peers to communicate with each other.…
Dagenais, Emmanuelle; Rouleau, Isabelle; Tremblay, Alexandra; Demers, Mélanie; Roger, Élaine; Jobin, Céline; Duquette, Pierre
2016-01-01
Patients diagnosed with multiple sclerosis (MS) often report prospective memory (PM) deficits. Although PM is important for daily functioning, it is not formally assessed in clinical practice. The aim of this study was to examine the role of executive functions in MS patients' PM revealed by the effect of strength of cue-action association on PM performance. Thirty-nine MS patients were compared to 18 healthy controls matched for age, gender, and education on a PM task modulating the strength of association between the cue and the intended action. Deficits in MS patients affecting both prospective and retrospective components of PM were confirmed using 2 × 2 × 2 mixed analyses of variance (ANOVAs). Among patients, multiple regression analyses revealed that the impairment was modulated by the efficiency of executive functions, whereas retrospective memory seemed to have little impact on PM performance, contrary to expectation. More specifically, results of 2 × 2 × 2 mixed-model analyses of covariance (ANCOVAs) showed that low-executive patients had more difficulty detecting and, especially, retrieving the appropriate action when the cue and the action were unrelated, whereas high-executive patients' performance seemed to be virtually unaffected by the cue-action association. Using an objective measure, these findings confirm the presence of PM deficits in MS. They also suggest that such deficits depend on executive functioning and can be reduced when automatic PM processes are engaged through semantic cue-action association. They underscore the importance of assessing PM in clinical settings through a cognitive evaluation and offer an interesting avenue for rehabilitation.
Fusion interfaces for tactical environments: An application of virtual reality technology
NASA Technical Reports Server (NTRS)
Haas, Michael W.
1994-01-01
The term fusion interface is defined as a class of interface that integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory, virtually augmented synthetic environment. A new facility dedicated to exploratory development of fusion interface concepts has been developed within the Human Engineering Division of the Armstrong Laboratory. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, and localized auditory presentations, drive haptic displays on the stick and rudder pedals, and execute weapons models, aerodynamic models, and threat models.
Virtual reality-based prospective memory training program for people with acquired brain injury.
Yip, Ben C B; Man, David W K
2013-01-01
People with acquired brain injury (ABI) may display cognitive impairments leading to long-term disabilities, including prospective memory (PM) failure. Prospective memory is remembering to execute an intended action in the future. PM problems are a challenge to an ABI patient's successful community reintegration. While retrospective memory (RM) has been extensively studied, treatment programs for prospective memory are rarely reported. The development of a treatment program for PM, which is considered timely, can be cost-effective and appropriate to the patient's environment. A 12-session virtual reality (VR)-based cognitive rehabilitation program was developed using everyday PM activities as training content. Thirty-seven subjects were recruited to participate in a pretest-posttest control experimental study to evaluate its treatment effectiveness. Results suggest significantly better changes in both VR-based and real-life PM outcome measures, as well as in related cognitive attributes such as frontal lobe functions and semantic fluency. VR-based training may be well accepted by ABI patients, as encouraging improvement has been shown. Large-scale studies of the virtual reality-based prospective memory (VRPM) training program are indicated.
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by an embedded system whose computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests, without temporal flexibility, that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
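The strict-priority selection described above can be pictured as a greedy admission pass over the goal set. The sketch below is a simplified batch version under invented names (goal labels, priorities, and a single scalar resource), not the actual incremental VML flight code, which re-selects efficiently as goals change:

```python
# Sketch: strict-priority goal selection from an oversubscribed goal set.
# Goals are considered in priority order and admitted only if they still
# fit, so a lower-priority goal can never displace a higher-priority one.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: int   # higher number = higher priority (assumed convention)
    usage: float    # resource units the goal would consume

def select_goals(goals, capacity):
    """Return the admitted subset under a single shared resource limit."""
    selected, used = [], 0.0
    for g in sorted(goals, key=lambda g: g.priority, reverse=True):
        if used + g.usage <= capacity:
            selected.append(g)
            used += g.usage
    return selected

goals = [Goal("downlink", 3, 5.0),
         Goal("image-A", 2, 4.0),
         Goal("image-B", 1, 4.0)]
chosen = select_goals(goals, capacity=9.0)
```

Here "downlink" and "image-A" fit within the capacity of 9.0, so "image-B" is rejected; adding a new high-priority goal simply means re-running the selection over the updated set.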
Working Group Reports and Presentations: Virtual Worlds and Virtual Exploration
NASA Technical Reports Server (NTRS)
LAmoreaux, Claudia
2006-01-01
Scientists and engineers are continually developing innovative methods to capitalize on recent developments in computational power. Virtual worlds and virtual exploration present a new toolset for project design, implementation, and resolution. Replication of the physical world in the virtual domain provides stimulating displays to augment current data analysis techniques and to encourage public participation. In addition, the virtual domain provides stakeholders with a low cost, low risk design and test environment. The following document defines a virtual world and virtual exploration, categorizes the chief motivations for virtual exploration, elaborates upon specific objectives, identifies roadblocks and enablers for realizing the benefits, and highlights the more immediate areas of implementation (i.e. the action items). While the document attempts a comprehensive evaluation of virtual worlds and virtual exploration, the innovative nature of the opportunities presented precludes completeness. The authors strongly encourage readers to derive additional means of utilizing the virtual exploration toolset.
A Future Outlook for Women Executives.
1988-04-01
job satisfaction, to name a few. ... After the women's liberation movement of the 1960s, however, women entered all realms of the business ... competing in a man's world was equally distorted (2:339). Women still held traditional jobs while the positions in the professional and business worlds ... changing role of women, including the influx of women into the workplace" (16:130). Virtually all new jobs by the year 2000 will be in the service ...
CloudMC: a cloud computing application for Monte Carlo simulation.
Miras, H; Jiménez, R; Miras, C; Gomà, C
2013-04-21
This work presents CloudMC, a cloud computing application, developed in Windows Azure® (the Microsoft® cloud platform), for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based: the simulations just need to be of the form input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed for different instance (virtual machine) sizes and different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in runtime with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with an increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability, and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
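The Amdahl's-law behavior reported above can be checked numerically. A short sketch using only the figures quoted in the abstract (30 h on one instance, 48.6 min on 64 instances):

```python
# Amdahl's law: speedup S(n) = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the workload and n the number of instances.

def amdahl_speedup(p, n):
    """Predicted speedup on n instances for parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law to estimate p from an observed speedup."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

# Figures reported in the abstract: 30 h of CPU on one instance,
# 48.6 min on 64 instances, i.e. a speedup of about 37x.
observed = (30 * 60) / 48.6          # ~37.0
p = parallel_fraction(observed, 64)  # ~0.99: almost fully parallelizable
```

The implied parallel fraction of roughly 99% is consistent with the abstract's observation of a small but growing non-parallelizable component.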
Computational Study of Scenarios Regarding Explosion Risk Mitigation
NASA Astrophysics Data System (ADS)
Vlasin, Nicolae-Ioan; Mihai Pasculescu, Vlad; Florea, Gheorghe-Daniel; Cornel Suvar, Marius
2016-10-01
Exploration to discover new deposits of natural gas, upgraded techniques to exploit these resources, and new ways to convert the heat capacity of these gases into industrially usable energy are research areas of great interest around the globe. All activities involving the handling of natural gas (exploitation, transport, combustion), however, are subject to the same type of risk: the risk of explosion. Physical experiments carried out to determine ways to reduce this risk can be extremely costly, requiring suitable premises, equipment and apparatus, manpower, and time and, not least, presenting a risk of personnel injury. With the above in mind, this paper deals with the possibility of studying gas explosion scenarios in the virtual domain, exemplified by a computer simulation of a stoichiometric air-methane explosion (methane being the main component of natural gas). The advantages of the computer-aided approach include the possibility of using complex virtual geometries of any form as the domain of the phenomenon, the reuse of the same geometry for any number of initial parameter settings, the total elimination of the risk of personnel injury, reduced execution time, etc. Although computer simulations consume hardware resources and require personnel specialized in CFD (Computational Fluid Dynamics) techniques, the costs and risks associated with these methods are greatly diminished, while at the same time offering a major benefit in terms of execution time.
Safe teleoperation based on flexible intraoperative planning for robot-assisted laser microsurgery.
Mattos, Leonardo S; Caldwell, Darwin G
2012-01-01
This paper describes a new intraoperative planning system created to improve precision and safety in teleoperated laser microsurgeries. It addresses major safety issues related to real-time control of a surgical laser during teleoperated procedures, namely those related to the reliability and robustness of the telecommunication channels. Here, a safe solution is presented, consisting of a new planning system architecture that maintains the flexibility and benefits of real-time teleoperation and keeps the surgeon in control of all surgical actions. The developed system is based on our virtual scalpel system for robot-assisted laser microsurgery and allows the intuitive use of a stylus to create surgical plans directly over live video of the surgical field. In this case, surgical plans are defined as graphic objects overlaid on the live video, which can be easily modified or replaced as needed, and which are transmitted to the main surgical system controller for subsequent safe execution. In the process of improving safety, this new planning system also resulted in improved laser aiming precision and improved capability for higher-quality laser procedures, both due to the new surgical plan execution module, which allows very fast and precise laser aiming control. Experimental results presented herein show that, in addition to the safety improvements, the new planning system resulted in a 48% improvement in laser aiming precision when compared to the previous virtual scalpel system.
Improvement of force health protection through preventive medicine oversight of contractor support.
Mower, Scott A
2009-01-01
Unprecedented numbers of contractors are used throughout the Iraq theater of operations to alleviate military manpower shortages. At virtually every major forward operating base, US-based contractors perform the preponderance of essential life support services. At more remote sites, local national contractors are increasingly relied upon to maintain chemical latrines, remove trash, deliver bulk water, and execute other janitorial functions. Vigorous oversight of contractor performance is essential to ensure services are delivered according to specified standards. Poor oversight can increase the risk of criminal activities, permit substandard performance, elevate disease and nonbattle injury rates, degrade morale, and diminish Soldier readiness. As the principal force health protection proponents in the Department of Defense, preventive medicine units must be tightly integrated into the oversight processes. This article defines the force health protection implications associated with service contracts and provides recommendations for strengthening preventive medicine's oversight role.
Sensory Agreement Guides Kinetic Energy Optimization of Arm Movements during Object Manipulation.
Farshchiansadegh, Ali; Melendez-Calderon, Alejandro; Ranganathan, Rajiv; Murphey, Todd D; Mussa-Ivaldi, Ferdinando A
2016-04-01
The laws of physics establish the energetic efficiency of our movements. In some cases, like locomotion, the mechanics of the body dominate in determining the energetically optimal course of action. In other tasks, such as manipulation, energetic costs depend critically upon the variable properties of objects in the environment. Can the brain identify and follow energy-optimal motions when these motions require moving along unfamiliar trajectories? What feedback information is required for such optimal behavior to occur? To answer these questions, we asked participants to move their dominant hand between different positions while holding a virtual mechanical system with complex dynamics (a planar double pendulum). In this task, trajectories of minimum kinetic energy were along curvilinear paths. Our findings demonstrate that participants were capable of finding the energy-optimal paths, but only when provided with veridical visual and haptic information pertaining to the object; without it, the trajectories were executed along rectilinear paths.
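For reference, the minimum-kinetic-energy criterion in such a task operates on the standard kinetic energy of an idealized planar double pendulum with point masses. A small sketch (the parameter values here are invented, not taken from the experiment):

```python
# Kinetic energy of an idealized planar double pendulum with point masses:
#   KE = 1/2 (m1 + m2) l1^2 w1^2 + 1/2 m2 l2^2 w2^2
#        + m2 l1 l2 w1 w2 cos(t1 - t2)
# where t1, t2 are joint angles and w1, w2 angular velocities.
import math

def kinetic_energy(m1, m2, l1, l2, t1, t2, w1, w2):
    """KE (J) for masses (kg), link lengths (m), angles (rad), rates (rad/s)."""
    return (0.5 * (m1 + m2) * l1**2 * w1**2
            + 0.5 * m2 * l2**2 * w2**2
            + m2 * l1 * l2 * w1 * w2 * math.cos(t1 - t2))

# With aligned links, moving both joints in phase costs more energy than
# moving them in opposition, because of the coupling term:
ke_in  = kinetic_energy(1.0, 1.0, 0.3, 0.3, 0.0, 0.0, 2.0,  2.0)
ke_out = kinetic_energy(1.0, 1.0, 0.3, 0.3, 0.0, 0.0, 2.0, -2.0)
```

The configuration-dependent coupling term is what makes the energy-optimal hand paths curvilinear rather than straight.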
Smart training environment for power electronics
NASA Astrophysics Data System (ADS)
Hinov, Nikolay; Hranov, Tsveti
2017-12-01
The idea of this paper is to present a successful symbiosis of products from two leading firms in electronics: National Instruments and Texas Instruments. The developed test bench is composed of hardware for data acquisition and control (sbRIO), working with the LabVIEW environment, and the novel Power Management Lab Kit (PMLK) educational boards. The manipulation of these high-tech boards becomes more accessible to a broader range of students, including undergraduates in schools, through the use of LabVIEW virtual instruments (VIs), which assist trainees in manipulating the kits: for example, if an incompatible working configuration is set, the VI pops up a message describing the result of its execution. Moreover, the VI provides guidance for choosing the right setup based on the student's active decisions, and measurements can be taken with the VI without the need for external hardware.
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
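As a rough illustration of the load-balancing half of this objective only (this is not the MinEX algorithm itself, which additionally minimizes data movement and runtime communication), a greedy least-loaded assignment of weighted tasks to processors can be sketched as:

```python
# Sketch: longest-processing-time greedy assignment. Each task goes to the
# currently least-loaded processor, taking tasks in decreasing weight order.
import heapq

def balance(task_weights, n_procs):
    """Return the sorted per-processor loads after greedy assignment."""
    heap = [(0.0, p) for p in range(n_procs)]   # (load, processor id)
    heapq.heapify(heap)
    for w in sorted(task_weights, reverse=True):
        load, p = heapq.heappop(heap)           # least-loaded processor
        heapq.heappush(heap, (load + w, p))
    return sorted(load for load, _ in heap)

loads = balance([7, 5, 4, 3, 1], n_procs=2)
```

A partitioner like MinEX must additionally weigh the cost of moving a task's data against the load-balance gain, which is what makes the IPG setting, with slow wide-area links, harder than this toy case.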
NASA Astrophysics Data System (ADS)
Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael
2013-04-01
The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow. This improved workflow will include pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure should be free of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models: 1) Ad-hoc data analysis scripts: These scripts are simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high power infrastructure. Examples of this type of script are spatial statistical analysis scripts (R-based scripts), mostly computed on raw meteorological and/or soil moisture data. These provide processed output in the form of a grid, a plot, or a kml. 2) Complex models: A more intense data analysis pipeline centered (initially) around the physical process model, Alpine3D, and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D specific libraries has been created and made available through the SMSCG infrastructure. 
The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance, execute location-based subroutines at each grid point and store the results back into the central repository for post-processing. An optional extension of this infrastructure will be to provide a 'ring buffer'-type database infrastructure, such that model results (e.g. test runs made to check parameter dependency or for development) can be visualised and downloaded after completion without submitting them to a permanent storage infrastructure. Data organization Data collected from sensors are archived and classified in distributed sites connected with an open-source software middleware, GSN. Publicly available data are available through common web services and via a cloud storage server (based on Swift). Collocation of the data and processing in the cloud would eventually eliminate data transfer requirements. Execution control logic Execution of the data analysis pipelines (for both the R-based analysis and the Alpine3D simulations) has been implemented using the GC3Pie framework developed by UZH. (https://code.google.com/p/gc3pie/). This allows large-scale, fault-tolerant execution of the pipelines to be described in terms of software appliances. GC3Pie also allows supervision of the execution of large campaigns of appliances as a single simulation. This poster will present the fundamental architectural components of the data analysis pipelines together with initial experimental results.
Interfacing HTCondor-CE with OpenStack
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; Hover, J.
2017-10-01
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.
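The re-routing decision at the heart of this system can be caricatured in a few lines. The field names and platform labels below are invented for the example; the real system inspects full job requirements at the HTCondor-CE and launches OpenStack VMs dynamically:

```python
# Sketch: route a job to the static batch system if some static node's
# platform satisfies it, otherwise to a dynamically launched cloud VM.
def route(job, static_platforms):
    """Return 'batch' if a static node can run the job, else 'cloud-vm'."""
    return "batch" if job["platform"] in static_platforms else "cloud-vm"

static_platforms = {"SL6-x86_64", "CentOS7-x86_64"}
target = route({"platform": "Win2016-x86_64"}, static_platforms)  # 'cloud-vm'
```

The transparency claimed in the abstract comes from making this decision inside the Compute Element, so the submitting user sees a single uniform interface either way.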
Public storage for the Open Science Grid
NASA Astrophysics Data System (ADS)
Levshina, T.; Guru, A.
2014-06-01
The Open Science Grid infrastructure does not provide efficient means to manage the public storage offered by participating sites. A Virtual Organization (VO) that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators, and VO support personnel is required to allocate or rescind storage space. One of the main requirements for the Public Storage implementation is that it should use the SRM or GridFTP protocols to access the Storage Elements provided by the OSG sites and not put any additional burden on sites: by policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, then storing the output data produced by the job on a worker node for subsequent download to the local institution. When the amount of data is significant, the only means to temporarily store it is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System (iRODS) developed at RENCI as a front-end service to the OSG SEs. The current architecture, state of deployment, and performance test results will be discussed. We will also provide examples of current usage of the system by beta users.
A Systematic Review of Virtual Reality Simulators for Robot-assisted Surgery.
Moglia, Andrea; Ferrari, Vincenzo; Morelli, Luca; Ferrari, Mauro; Mosca, Franco; Cuschieri, Alfred
2016-06-01
No single large published randomized controlled trial (RCT) has confirmed the efficacy of virtual simulators in the acquisition of skills to the standard required for safe clinical robotic surgery. This remains the main obstacle to the adoption of these virtual simulators in surgical residency curricula. To evaluate the level of evidence in published studies on the efficacy of training on virtual simulators for robotic surgery, in April 2015 a literature search was conducted on PubMed, Web of Science, Scopus, the Cochrane Library, the Clinical Trials Database (US) and the Meta Register of Controlled Trials. All publications were scrutinized for relevance to the review and assessed for level of evidence using the classification developed by the Oxford Centre for Evidence-Based Medicine. The publications included in the review consisted of one RCT and 28 cohort studies on validity, and seven RCTs and two cohort studies on skills transfer from virtual simulators to robot-assisted surgery. Simulators were rated good for realism (face validity) and for usefulness as a training tool (content validity). However, the included studies used various simulation training methodologies, limiting the assessment of construct validity. The review confirms the absence of any consensus on which tasks and metrics are the most effective for the da Vinci Skills Simulator and dV-Trainer, the most widely investigated systems. Although there is consensus for the RoSS simulator, this is based on only two studies on construct validity involving four exercises. One study on initial evaluation of an augmented reality module for partial nephrectomy using the dV-Trainer reported high correlation (r=0.8) between in vivo porcine nephrectomy and a virtual renorrhaphy task according to the overall Global Evaluative Assessment of Robotic Skills (GEARS) score.
In one RCT on skills transfer, the experimental group outperformed the control group, with a significant difference in overall GEARS score (p=0.012) during performance of urethrovesical anastomosis on an inanimate model. Only one study included assessment of a surgical procedure on real patients: subjects trained on a virtual simulator outperformed the control group following traditional training. However, besides the small numbers, this study was not randomized. There is an urgent need for a large, well-designed, preferably multicenter RCT to study the efficacy of virtual simulation for the acquisition of competence in, and safe execution of, clinical robot-assisted surgery. We reviewed the literature on virtual simulators for robot-assisted surgery. Validity studies used various simulation training methodologies. It is not clear which exercises and metrics are the most effective in distinguishing different levels of experience on the da Vinci robot. There is no reported evidence of skills transfer from simulation to clinical surgery on real patients. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Get the right mix of bricks & clicks.
Gulati, R; Garino, J
2000-01-01
The bright line that once distinguished the dot-com from the incumbent is rapidly fading. Success in the new economy will go to those who can execute clicks-and-mortar strategies that bridge the physical and virtual worlds. But how executives forge such strategies is under considerable debate. Despite the obvious benefits that integration offers--cross-promotion, shared information, purchasing leverage, distribution economies, and the like--many executives now assume that Internet businesses have to be separate to thrive. They believe that the very nature of traditional business--its protectiveness of current customers, its fear of cannibalization, its general myopia--will smother any Internet initiative. Authors Ranjay Gulati and Jason Garino contend that executives don't have to make an either-or choice when it comes to their clicks-and-mortar strategies. The question isn't, "Should we develop our Internet channel in-house or launch a spin-off?" but rather, "What degree of integration makes sense for our company?" To determine the best level of integration for their companies, executives should examine four business dimensions: brand, management, operations, and equity. Drawing on the experiences of three established retailers--Office Depot, KB Toys, and Rite Aid--the authors show the spectrum of strategies available and discuss the trade-offs involved in each choice. By thinking carefully about which aspects of a business to integrate and which to keep distinct, companies can tailor their clicks-and-mortar strategy to their own particular market and competitive situation, dramatically increasing their odds of e-business success.
The agent-based spatial information semantic grid
NASA Astrophysics Data System (ADS)
Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren
2006-10-01
After analyzing the characteristics of multi-agent systems and geographic ontology, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and its architecture is presented. ASISG is composed of multi-agents and geographic ontology. The multi-agent system comprises User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, a Task Execution Agent and a Monitor Agent. The architecture of ASISG has three layers: the fabric layer, the grid management layer and the application layer. The fabric layer, composed of the Data Access Agent, Resource Agent and Geo-Agent, encapsulates the data of spatial information systems and exposes a conceptual interface to the grid management layer. The grid management layer, composed of the General Ontology Agent, Task Execution Agent, Monitor Agent and Data Analysis Agent, uses a hybrid method to manage all resources registered with the General Ontology Agent, which is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: dissemination pushes resource descriptions from Local Ontology Agents to the General Ontology Agent, while discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a specific domain and describes the semantic information of a local GIS. The Local Ontology Agents can be filtered to construct a virtual organization that provides a global schema. The virtual organization lightens the burden on users because they need not search for information site by site manually. The application layer, composed of the User Agent, Geo-Agent and Task Execution Agent, provides a corresponding interface to a domain user.
The functions that ASISG should provide are: 1) Integration of different spatial information systems on the semantic level; the grid management layer establishes a virtual environment that seamlessly integrates all GIS nodes. 2) When the resource management system searches data across different spatial information systems, it translates between the meanings of different Local Ontology Agents rather than accessing data directly, so search and query operate on the semantic level. 3) The data access procedure is transparent to users: they can access information from a remote site as if it were a local disk, because the General Ontology Agent automatically links data through the Data Agents that connect ontology concepts to GIS data. 4) The capability of processing massive spatial data: storing, accessing and managing spatial data from TB to PB scale; efficiently analyzing and processing spatial data to produce models, information and knowledge; and providing 3D and multimedia visualization services. 5) The capability of high-performance computing on spatial information: solving spatial problems with high precision, high quality and at large scale, and processing spatial information in real time or on time, with high speed and high efficiency. 6) The capability of sharing spatial resources: distributed heterogeneous spatial information resources are shared, integrated and inter-operated on the semantic level, so as to make the best use of spatial information resources such as computing resources, storage devices, spatial data (integrated from GIS, RS and GPS), spatial applications and services, and GIS platforms. 7) The capability of integrating legacy GIS systems: ASISG can not only be used to construct new advanced spatial application systems, but can also integrate legacy GIS systems, preserving extensibility and inheritance and protecting users' investment. 8) The capability of collaboration.
Large-scale spatial information applications and services always involve different departments in different geographic places, so remote and uniform services are needed. 9) The capability of supporting integration of heterogeneous systems: large-scale spatial information systems are always synthetic applications, so ASISG should provide interoperation and consistency by adopting open and applied technology standards. 10) The capability of adapting to dynamic change: business requirements, application patterns, management strategies and IT products change endlessly in any department, so ASISG should be self-adaptive. Two examples are provided in this paper; they show in detail how to design a semantic grid based on multi-agent systems and ontology. In conclusion, a semantic grid for spatial information systems can improve the integration and interoperability of the spatial information grid.
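The hybrid push/pull management described above (Local Ontology Agents pushing resource descriptions to the General Ontology Agent, and agents pulling matching resources back out) can be illustrated with a toy registry; the class, method and concept names here are invented for illustration, not taken from the paper.

```python
class OntologyRegistry:
    """A toy General Ontology Agent: Local Ontology Agents push resource
    descriptions into it (dissemination), and consumers pull matching
    resources out of it (discovery)."""

    def __init__(self):
        self.resources = {}  # concept name -> list of provider sites

    def disseminate(self, concept, site):
        # Push: a Local Ontology Agent registers what its site can provide.
        self.resources.setdefault(concept, []).append(site)

    def discover(self, concept):
        # Pull: look up every site offering a given ontology concept.
        return self.resources.get(concept, [])
```

In the full architecture the lookup would be mediated by ontology matching rather than exact string keys; the point of the sketch is only the push/pull division of labor.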
Tielman, Myrthe L.; Neerincx, Mark A.; van Meggelen, Marieke; Franken, Ingmar; Brinkman, Willem-Paul
2017-01-01
BACKGROUND AND OBJECTIVE: With the rise of autonomous e-mental health applications, virtual agents can play a major role in improving trustworthiness, therapy outcome and adherence. In these applications, it is important that patients adhere in the sense that they perform the tasks, but also that they adhere to the specific recommendations on how to do them well. One important construct in improving adherence is psychoeducation: information on the why and how of therapeutic interventions. In an e-mental health context, this can be delivered in two different ways: verbally, by a (virtual) embodied conversational agent, or via text on the screen. The aim of this research is to study which presentation mode is preferable for improving adherence. METHODS: This study takes the approach of evaluating a specific part of a therapy, namely psychoeducation. This was done in a non-clinical sample, to first test the general constructs of the human-computer interaction. We performed an experimental study on the effect of the presentation mode of psychoeducation on adherence. In this study, we took into account the moderating effects of attitude towards the virtual agent and recollection of the information. Within the paradigm of expressive writing, we asked participants (n = 46) to pick one of their worst memories to describe in a digital diary after receiving verbal or textual psychoeducation. RESULTS AND CONCLUSION: We found that both the attitude towards the virtual agent and how well the psychoeducation was recollected were positively related to adherence in the form of task execution. Moreover, after controlling for attitude to the agent and recollection, presentation of psychoeducation via text resulted in higher adherence than verbal presentation by the virtual agent did. PMID:28800346
An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud
NASA Astrophysics Data System (ADS)
Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.
2017-08-01
Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, and such problems can be approached with heuristic algorithms. In this paper, Ant Colony Optimization based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple-cloud-provider environment; the response time of each cloud provider is also monitored periodically, so as to minimize the delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time and the number of migrations.
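A minimal sketch of ACO-based VM placement in the spirit of this abstract, minimizing hosting cost under CPU-capacity constraints. The cost model (cost per CPU unit per host), the parameter values and the pheromone scheme are assumptions for illustration, not the authors' exact formulation.

```python
import random

def aco_vm_placement(vm_cpu, host_cap, host_cost, n_ants=20, n_iter=50,
                     alpha=1.0, beta=2.0, rho=0.5, seed=42):
    """Assign each VM to a host via a simple Ant Colony Optimization loop.

    vm_cpu: CPU demand of each VM; host_cap: CPU capacity of each host;
    host_cost: cost per CPU unit on each host. Returns (plan, cost) where
    plan[v] is the host index chosen for VM v.
    """
    rng = random.Random(seed)
    n_vms, n_hosts = len(vm_cpu), len(host_cap)
    # pheromone[v][h]: learned desirability of placing VM v on host h
    tau = [[1.0] * n_hosts for _ in range(n_vms)]
    # static heuristic: cheaper hosts are more attractive
    eta = [1.0 / c for c in host_cost]
    best_plan, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ant in range(n_ants):
            free = list(host_cap)
            plan, cost, feasible = [], 0.0, True
            for v in range(n_vms):
                feas = [h for h in range(n_hosts) if free[h] >= vm_cpu[v]]
                if not feas:
                    feasible = False
                    break
                w = [(tau[v][h] ** alpha) * (eta[h] ** beta) for h in feas]
                h = rng.choices(feas, weights=w)[0]
                free[h] -= vm_cpu[v]
                plan.append(h)
                cost += host_cost[h] * vm_cpu[v]
            if feasible and cost < best_cost:
                best_plan, best_cost = plan, cost
        # evaporate pheromone, then reinforce the best plan found so far
        for v in range(n_vms):
            for h in range(n_hosts):
                tau[v][h] *= (1 - rho)
        if best_plan:
            for v, h in enumerate(best_plan):
                tau[v][h] += 1.0 / best_cost
    return best_plan, best_cost
```

On a small instance (three VMs, three hosts with increasing cost) the loop converges to the cheapest feasible packing; a greedy baseline would simply fill the cheapest host first without the pheromone feedback.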
Towards Cloud-based Asynchronous Elasticity for Iterative HPC Applications
NASA Astrophysics Data System (ADS)
da Rosa Righi, Rodrigo; Facco Rodrigues, Vinicius; André da Costa, Cristiano; Kreutz, Diego; Heiss, Hans-Ulrich
2015-10-01
Elasticity is one of the key features of cloud computing. It allows applications to dynamically scale computing and storage resources, avoiding over- and under-provisioning. In high performance computing (HPC), initiatives are normally modeled to handle bag-of-tasks or key-value applications through a load balancer and a loosely-coupled set of virtual machine (VM) instances. In the joint field of Message Passing Interface (MPI) and tightly-coupled HPC applications, addressing cloud elasticity typically requires rewriting source code, prior knowledge of the application, and/or stop-reconfigure-and-go approaches. Besides, there are problems related to how to profit from this new feature in the HPC scope, since in MPI 2.0 applications the programmers need to handle communicators by themselves, and the sudden consolidation of a VM, together with a process, can compromise the entire execution. To address these issues, we propose a PaaS-based elasticity model, named AutoElastic. It acts as a middleware that allows iterative HPC applications to take advantage of dynamic resource provisioning in cloud infrastructures without any major modification. AutoElastic provides a new concept denoted here as asynchronous elasticity: a framework that allows applications to either increase or decrease their computing resources without blocking the current execution. The feasibility of AutoElastic is demonstrated through a prototype that runs a CPU-bound numerical integration application on top of the OpenNebula middleware. The results showed a saving of about 3 minutes on each scale-out operation, emphasizing the contribution of the new concept in contexts where seconds are precious.
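The decision rule at the heart of such an elasticity controller is often a simple threshold test over an observed load window. A sketch follows; the thresholds are illustrative, and the point of an AutoElastic-style design is that this logic runs in the middleware, not inside the MPI application, so the computation is never blocked while VMs are provisioned or consolidated.

```python
def elasticity_decision(cpu_loads, upper=0.8, lower=0.3):
    """Moving-average threshold rule for scaling a VM pool.

    cpu_loads: recent CPU utilization samples in [0, 1] from the workers.
    Returns 'scale-out', 'scale-in', or 'steady'. The controller process
    acts on this decision asynchronously, while the application keeps running.
    """
    avg = sum(cpu_loads) / len(cpu_loads)
    if avg > upper:
        return "scale-out"   # sustained high load: add a VM
    if avg < lower:
        return "scale-in"    # sustained low load: consolidate a VM
    return "steady"
```

Averaging over a window rather than reacting to single samples avoids oscillating between scale-out and scale-in on noisy load.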
WORM - WINDOWED OBSERVATION OF RELATIVE MOTION
NASA Technical Reports Server (NTRS)
Bauer, F.
1994-01-01
The Windowed Observation of Relative Motion, WORM, program is primarily intended for the generation of simple X-Y plots from data created by other programs. It allows the user to label, zoom, and change the scale of various plots. Three-dimensional contour and line plots are provided, although with more limited capabilities. The input data can be in binary or ASCII format, although all data must be in the same format. A great deal of control over the details of the plot is provided, such as gridding, size of tick marks, colors, log/semilog capability, time tagging, and multiple and phase plane plots. Many color and monochrome graphics terminals and hard copy printer/plotters are supported. The WORM executive commands, menu selections and macro files can be used to develop plots and tabular data, query the WORM Help library, retrieve data from input files, and invoke VAX DCL commands. WORM generated plots are displayed on local graphics terminals and can be copied using standard hard copy capabilities. Some of the graphics features of WORM include: zooming and dezooming various portions of the plot; plot documentation including curve labeling and function listing; multiple curves on the same plot; windowing of multiple plots and insets of the same plot; displaying a specific point on a curve; and spinning the curve left, right, up, and down. WORM is written in PASCAL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.7 with a virtual memory requirement of approximately 392K of 8 bit bytes. It uses the QPLOT device-independent graphics library included with WORM. It was developed in 1988.
Narco-Crime in Mexico: Indication of State Failure or Symptoms of an Emerging Democracy
2010-05-21
Estudios sobre la Inseguridad A.C. (IESCI), or The Citizen's Institute for Insecurity, states in Olson's article, "In terms of security, we are like those...losses have virtually rendered 30 Center for Latin American and Border Studies, "The Mexican Military's Role in Crime Ridden Border Areas," (Las Cruces...executive, these criminal organizations have no interest in national or federal level governance inclusive of the spectrum of essential services required
2016-04-01
IND Response Decision-Making: Models for Government–Industry Collaboration for the Development of Game-Based Training Tools R.M. Seater C.E. Rose...Models for Government–Industry Collaboration for the Development of Game-Based Training Tools C.E. Rose A.S. Norige Group 44 R.M. Seater K.C...Report 1208 Lexington Massachusetts EXECUTIVE SUMMARY Game-based training tools, sometimes called “serious
Bring It On, Complexity! Present and Future of Self-Organising Middle-Out Abstraction
NASA Astrophysics Data System (ADS)
Mammen, Sebastian Von; Steghöfer, Jan-Philipp
The following sections are included: * The Great Complexity Challenge * Self-Organising Middle-Out Abstraction * Optimising Graphics, Physics and Artificial Intelligence * Emergence and Hierarchies in a Natural System * The Technical Concept of SOMO * Observation of interactions * Interaction pattern recognition and behavioural abstraction * Creating and adjusting hierarchies * Confidence measures * Execution model * Learning SOMO: parameters, knowledge propagation, and procreation * Current Implementations * Awareness Beyond Virtuality * Integration and emergence * Model inference * SOMO net * SOMO after me * The Future of SOMO
The Challenge of New and Emerging Information Operations
1999-06-01
Information Dominance Center (IDC) are addressing the operational and technological needs. The IDC serves as a model for the DoD and a proposed virtual hearing room for Congress. As the IDC and its supporting technologies mature, individuals will be able to freely enter, navigate, plan, and execute operations within Perceptual and Knowledge Landscapes. This capability begins the transition from Information Dominance to Knowledge Dominance. The IDC is instantiating such entities as smart rooms, avatars, square pixel displays, polymorphic views, and
2000-07-01
acceptance is not as simple a matter as it may first appear. Several points must be kept in mind. (1) Risk is a fundamental reality. (2) Risk...1) Proper preparation of an SSPP requires coming to grips with the hard realities of program execution. It involves the examination and...Interfaces. (32:48) Since the conduct of a system safety program will eventually touch on virtually every other element of a system development program, a
Open Innovation at NASA: A New Business Model for Advancing Human Health and Performance Innovations
NASA Technical Reports Server (NTRS)
Davis, Jeffrey R.; Richard, Elizabeth E.; Keeton, Kathryn E.
2014-01-01
This paper describes a new business model for advancing NASA human health and performance innovations and demonstrates how open innovation shaped its development. A 45 percent research and technology development budget reduction drove formulation of a strategic plan grounded in collaboration. We describe the strategy execution, including adoption and results of open innovation initiatives, the challenges of cultural change, and the development of virtual centers and a knowledge management tool to educate and engage the workforce and promote cultural change.
High-Fidelity Modeling of Computer Network Worms
2004-06-22
plots the propagation of the TCP-based worm. This execution is among the largest TCP worm models simulated to date at packet level. TCP vs. UDP Worm...the mapping of the virtual IP addresses to honeyd's MAC address in the proxy's ARP table. The proxy server listens for packets from both sides of...experimental setup, we used two machines (an IBM Pentium-4 ThinkPad and an IBM Pentium-III ThinkPad), running the proxy server and honeyd, respectively. The Code Red II worm
Virtual hydrology observatory: an immersive visualization of hydrology modeling
NASA Astrophysics Data System (ADS)
Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas
2009-02-01
The Virtual Hydrology Observatory will provide students with the ability to observe an integrated hydrology simulation through an instructional interface, using either a desktop-based or an immersive virtual reality setup. The goal of the Virtual Hydrology Observatory application is to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is built from the integrated atmospheric forecast model, Weather Research and Forecasting (WRF), and the hydrology model, Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The output from both the WRF and GSSHA models is then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data processing techniques provided by VTK include 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using the VRFlowVis and VR Juggler software toolkits. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE™-like system is used to run the Virtual Hydrology Observatory, providing students with a fully immersive experience.
Virtual Visits and Patient-Centered Care: Results of a Patient Survey and Observational Study
2017-01-01
Background Virtual visits are clinical interactions in health care that do not involve the patient and provider being in the same room at the same time. The use of virtual visits is growing rapidly in health care. Some health systems are integrating virtual visits into primary care as a complement to existing modes of care, in part reflecting a growing focus on patient-centered care. There is, however, limited empirical evidence about how patients view this new form of care and how it affects overall health system use. Objective Descriptive objectives were to assess users and providers of virtual visits, including the reasons patients give for use. The analytic objective was to assess empirically the influence of virtual visits on overall primary care use and costs, including whether virtual care is with a known or a new primary care physician. Methods The study took place in British Columbia, Canada, where virtual visits have been publicly funded since October 2012. A survey of patients who used virtual visits and an observational study of users and nonusers of virtual visits were conducted. Two comparison groups were used: (1) all other BC residents, and (2) a group matched (3:1) to the cohort. The first virtual visit was used as the intervention, and the main outcome measures were total primary care visits and costs. Results During 2013-2014, there were 7286 virtual visit encounters, involving 5441 patients and 144 physicians. Younger patients and physicians were more likely to use and provide virtual visits (P<.001), with no differences by sex. Older and sicker patients were more likely to see a known provider, whereas the lowest socioeconomic groups were the least likely (P<.001).
The survey of 399 virtual visit patients indicated that virtual visits were liked by patients, with 372 (93.2%) of respondents saying their virtual visit was of high quality and 364 (91.2%) reporting their virtual visit was “very” or “somewhat” helpful to resolve their health issue. Segmented regression analysis and the corresponding regression parameter estimates suggested virtual visits appear to have the potential to decrease primary care costs by approximately Can $4 per quarter (Can –$3.79, P=.12), but that benefit is most associated with seeing a known provider (Can –$8.68, P<.001). Conclusions Virtual visits may be one means of making the health system more patient-centered, but careful attention needs to be paid to how these services are integrated into existing health care delivery systems. PMID:28550006
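The segmented regression analysis mentioned in the results can be sketched as an ordinary least-squares fit with level-change and trend-change terms at the intervention point (the first virtual visit). This is the generic interrupted-time-series formulation, not the authors' exact model specification.

```python
import numpy as np

def segmented_fit(t, y, t0):
    """Interrupted-time-series fit: pre-intervention level and trend, plus a
    level change and a trend change after the intervention time t0.

    Model: y = b0 + b1*t + b2*post + b3*(t - t0)*post, where post = 1{t >= t0}.
    Returns [intercept, pre-trend, level change, trend change].
    """
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

In the study's terms, the level-change coefficient on quarterly cost plays the role of the reported per-quarter cost effect of a first virtual visit.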
Flow-Centric, Back-in-Time Debugging
NASA Astrophysics Data System (ADS)
Lienhard, Adrian; Fierz, Julien; Nierstrasz, Oscar
Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. Quite often, however, errors occur due to code that has executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even back-in-time debuggers do not help answer the question, “Where did this object come from?” The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control flow views and object flows. Compass enables a developer to effectively navigate from an object contributing to an error back in time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.
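The object-flow idea (recording enough provenance to answer "Where did this object come from?") can be caricatured in a few lines. A real Object-Flow VM captures these events transparently at the virtual-machine level; this sketch requires explicit calls, and every name in it is invented for illustration.

```python
class FlowTracker:
    """Records, per tracked object, the chain of code locations that created
    or passed it along: a (much simplified) analogue of object-flow tracking."""

    def __init__(self):
        self.flow = {}  # id(obj) -> list of (event, code location)

    def record(self, obj, event, location):
        # Append one flow event, e.g. ('created', 'parse_input:12').
        self.flow.setdefault(id(obj), []).append((event, location))
        return obj

    def origin(self, obj):
        """Answer 'where did this object come from?': the first recorded event."""
        return self.flow.get(id(obj), [(None, "<untracked>")])[0]
```

A debugger like Compass then lets the developer walk this per-object history backwards, alongside the conventional stack-oriented view.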
(abstract) An Ada Language Modular Telerobot Task Execution System
NASA Technical Reports Server (NTRS)
Backes, Paul; Long, Mark; Steele, Robert
1993-01-01
A telerobotic task execution system is described which has been developed for space flight applications. The Modular Telerobot Task Execution System (MOTES) provides the remote site task execution capability in a local-remote telerobotic system. The system provides supervised autonomous control, shared control, and teleoperation for a redundant manipulator. The system is capable of nominal task execution as well as monitoring and reflex motion.
Marintcheva, Boriana
2017-05-01
Virtual Virus is a semester-long interdisciplinary project offered as part of an upper-level elective course in virology. Students are challenged to apply key concepts from multiple biological sub-disciplines to 'synthesize' a plausible virtual virus. The project is executed as a scaffolded series of hands-on sessions and mini-projects that are integrated into a continuous story leading to a mock conference presentation and a comprehensive report modeled on article publication. It complements classroom instruction, helping students meet the overarching learning targets traditionally associated with undergraduate virology courses, such as viral structure and function, modes of viral propagation, the flow of genetic information, and virus/host interactions on the cellular and organismal level. Formal instructor feedback and informal peer feedback were used as tools to prompt reflection and guide revisions of the final report. Student learning gains and attitudes toward the approach were studied by evaluating project work products and an end-of-semester survey. Outcome analysis demonstrated that students exit the course with an elaborated conceptual understanding of viruses and ownership of their work. The project can be viewed as an approach to model the process of scientific discovery in fast-forward mode by combining active learning, creativity and problem solving to assemble and communicate a virtual virus story. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
JunoCam Outreach: Lessons Learned from Juno's Earth Flyby
NASA Astrophysics Data System (ADS)
Hansen, C. J.; Caplinger, M. A.; Ravine, M. A.
2014-12-01
The JunoCam visible imager is on the Juno spacecraft explicitly to include the public in the operation of a spacecraft instrument at Jupiter. Amateur astronomers will provide images in 2015 and 2016, as the spacecraft approaches Jupiter, to be used for planning purposes, and also during the mission to provide context for JunoCam's high-resolution pictures. Targeted imaging of specific features would enhance science value, but the dynamic nature of the jovian atmosphere makes this almost completely dependent on ground-based observations. The public will be involved in the decision of which images to acquire on each perijove pass. Partnership with the amateur image-processing community will be essential for processing images during the Juno mission. This piece of the virtual team plan was successfully carried out as Juno executed its Earth flyby gravity assist in 2013. Although we will have a professional ops team at Malin Space Science Systems, the tiny size of the team overall means that public participation is not just an extra: it is essential to our success.
Resilient workflows for computational mechanics platforms
NASA Astrophysics Data System (ADS)
Nguyên, Toàn; Trifan, Laurentiu; Désidéri, Jean-Antoine
2010-06-01
Workflow management systems have recently been the focus of much interest, research and deployment for scientific applications worldwide [26, 27]. Their ability to abstract applications by wrapping application codes has also stressed the usefulness of such systems for multidiscipline applications [23, 24]. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, their high-level modeling, monitoring and execution functionalities help give production teams seamless and effective facilities [25, 31, 33]. Software integration infrastructures based on programming paradigms such as Python, Matlab and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes [22, 24]. Also, high-performance computing based on multi-core multi-cluster infrastructures opens new opportunities for more accurate, more extensive and more robust multi-discipline simulations in the decades to come [28]. This supports the goal of full flight-dynamics simulation for 3D aircraft models within the next decade, opening the way to virtual flight tests and certification of aircraft in the future [23, 24, 29].
An overview of the DII-HEP OpenStack based CMS data analysis
NASA Astrophysics Data System (ADS)
Osmani, L.; Tarkoma, S.; Eerola, P.; Komu, M.; Kortelainen, M. J.; Kraemer, O.; Lindén, T.; Toor, S.; White, J.
2015-05-01
An OpenStack-based private cloud with the Cluster File System has been built and used for both CMS analysis and Monte Carlo simulation jobs in the Datacenter Indirection Infrastructure for Secure High Energy Physics (DII-HEP) project. On the cloud we run the ARC middleware, which allows running CMS applications without changes on the job submission side. Our test results indicate that the adopted approach provides a scalable and resilient solution for managing resources without compromising performance and high availability. To manage the virtual machines (VMs) dynamically in an elastic fashion, we are testing the EMI authorization service (Argus) and the Execution Environment Service (Argus-EES); an OpenStack plugin has been developed for Argus-EES. The Host Identity Protocol (HIP) has been designed for mobile networks and provides a secure method for IP multihoming. HIP separates the end-point identifier and locator roles of the IP address, which increases network availability for applications. Our solution leverages HIP for traffic management. This presentation gives an update on the status of the work and the lessons learned in creating an OpenStack-based cloud for HEP.
Using the iPlant collaborative discovery environment.
Oliver, Shannon L; Lenards, Andrew J; Barthelson, Roger A; Merchant, Nirav; McKay, Sheldon J
2013-06-01
The iPlant Collaborative is an academic consortium whose mission is to develop an informatics and social infrastructure to address the "grand challenges" in plant biology. Its cyberinfrastructure supports the computational needs of the research community and facilitates solving major challenges in plant science. The Discovery Environment provides a powerful and rich graphical interface to the iPlant Collaborative cyberinfrastructure by creating an accessible virtual workbench that enables all levels of expertise, ranging from students to traditional biology researchers and computational experts, to explore, analyze, and share their data. By providing access to iPlant's robust data-management system and high-performance computing resources, the Discovery Environment also creates a unified space in which researchers can access scalable tools. Researchers can use available Applications (Apps) to execute analyses on their data, as well as customize or integrate their own tools to better meet the specific needs of their research. These Apps can also be used in workflows that automate more complicated analyses. This module describes how to use the main features of the Discovery Environment, using bioinformatics workflows for high-throughput sequence data as examples. © 2013 by John Wiley & Sons, Inc.
Distributed decision support for the 21st century mission space
NASA Astrophysics Data System (ADS)
McQuay, William K.
2002-07-01
The past decade has produced significant changes in the conduct of military operations: increased humanitarian missions, asymmetric warfare, the reliance on coalitions and allies, stringent rules of engagement, concern about casualties, and the need for sustained air operations. Future mission commanders will need to assimilate a tremendous amount of information, make quick-response decisions, and quantify the effects of those decisions in the face of uncertainty. Integral to this process is creating situational assessment-understanding the mission space, simulation to analyze alternative futures, current capabilities, planning assessments, course-of-action assessments, and a common operational picture-keeping everyone on the same sheet of paper. Decision support tools in a distributed collaborative environment offer the capability of decomposing these complex multitask processes and distributing them over a dynamic set of execution assets. Decision support technologies can semi-automate activities, such as planning an operation, that have a reasonably well-defined process and provide machine-level interfaces to refine the myriad of information that is not currently fused. The marriage of information and simulation technologies provides the mission commander with a collaborative virtual environment for planning and decision support.
Environment Study of AGNs at z = 0.3 to 3.0 Using the Japanese Virtual Observatory
NASA Astrophysics Data System (ADS)
Shirasaki, Y.; Ohishi, M.; Mizumoto, Y.; Takata, T.; Tanaka, M.; Yasuda, N.
2010-12-01
We present a science use case of the Virtual Observatory, carried out to examine the environment of AGNs up to a redshift of 3.0. We used the Japanese Virtual Observatory (JVO) to obtain Subaru Suprime-Cam images around known AGNs. According to the hierarchical galaxy-formation model, AGNs are expected to be found in environments of higher galaxy density than typical galaxies. Current observations, however, indicate that AGNs do not reside in particularly high-density environments. We investigated ~1000 AGNs, a sample about ten times larger than those of other studies covering redshifts above 0.6, and found a significant excess of galaxies around AGNs at redshifts of 0.3 to 1.8. Had this work been done in the classical manner (raw data retrieved from the archive interactively through a form-based web interface, then reduced on a low-performance computer), it might have taken several years to finish. Because the Virtual Observatory system is accessible through a standard interface, it is easy to query and retrieve data automatically. We constructed a pipeline that retrieves the data and calculates the galaxy number density around a given coordinate. This procedure was executed in parallel on ~10 quad-core PCs and took only one day to obtain the final result. Our result implies that the Virtual Observatory can be a powerful tool for astronomical research based on large amounts of data.
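The density calculation at the core of such a pipeline can be sketched as follows (a minimal illustration, not the actual JVO pipeline; the coordinates, search radius, and flat-sky small-angle approximation are all simplifying assumptions):

```python
# Sketch: for each AGN coordinate, count catalogued galaxies within a
# search radius and convert to a surface density, fanning the per-AGN
# work out over multiple worker processes.
from multiprocessing import Pool
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Approximate angular separation in degrees (flat-sky, small angles)."""
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec)

def galaxy_density(agn, galaxies, radius_deg=0.05):
    """Surface density of galaxies within radius_deg of the AGN position."""
    ra0, dec0 = agn
    n = sum(1 for ra, dec in galaxies
            if angular_sep(ra0, dec0, ra, dec) < radius_deg)
    area = math.pi * radius_deg ** 2     # search area in square degrees
    return n / area                      # galaxies per square degree

if __name__ == "__main__":
    agns = [(150.1, 2.2), (150.3, 2.4)]  # toy AGN positions (RA, Dec)
    galaxies = [(150.11, 2.21), (150.12, 2.19), (150.31, 2.41)]
    with Pool() as pool:
        densities = pool.starmap(galaxy_density,
                                 [(a, galaxies) for a in agns])
    print(densities)
```

In the real pipeline each worker would also fetch and reduce the relevant Suprime-Cam cutout before counting; here the catalogue is supplied directly to keep the sketch self-contained.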
Coil motion effects in watt balances: a theoretical check
NASA Astrophysics Data System (ADS)
Li, Shisong; Schlamminger, Stephan; Haddad, Darine; Seifert, Frank; Chao, Leon; Pratt, Jon R.
2016-04-01
A watt balance is a precision apparatus for the measurement of the Planck constant that has been proposed as a primary method for realizing the unit of mass in a revised International System of Units. In contrast to an ampere balance, which was historically used to realize the unit of current in terms of the kilogram, the watt balance relates electrical and mechanical units through a virtual power measurement and has far greater precision. However, because the virtual power measurement requires the execution of a prescribed motion of a coil in a fixed magnetic field, systematic errors introduced by horizontal and rotational deviations of the coil from its prescribed path will compromise the accuracy. We model these potential errors using an analysis that accounts for the fringing field in the magnet, creating a framework for assessing the impact of this class of errors on the uncertainty of watt balance results.
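The virtual power equality at the heart of the method can be summarized with the standard watt-balance relations (a textbook sketch, not taken from the paper itself): in weighing mode the weight of the mass is balanced by the electromagnetic force on the current-carrying coil, and in velocity mode the same coil moving through the field induces a voltage.

```latex
\text{weighing mode: } \; m g = (BL)\, I, \qquad
\text{velocity mode: } \; U = (BL)\, v
\quad\Longrightarrow\quad
U I = m g v
```

Eliminating the geometric factor $BL$ equates electrical power $UI$ with mechanical power $mgv$; the two are never dissipated simultaneously, which is why the power is called virtual. Horizontal and rotational deviations of the coil from the prescribed vertical trajectory perturb the effective $(BL)$ factors in the two modes and thus break this equality, which is the class of errors the paper's fringing-field analysis quantifies.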
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center are working on a multi-year Collaborative Research and Development Agreement. The first year produced the knowledge needed to provision and manage a federation of virtual machines through Cloud management systems. In this second year, we expanded the work on provisioning and federation, increasing both the scale and the diversity of solutions, and began building on-demand services on the established fabric, introducing the Platform-as-a-Service paradigm to assist with the execution of scientific workflows. We have enabled the scientific workflows of stakeholders to run on multiple cloud resources at a scale of 1,000 concurrent machines. The demonstrations have been in the areas of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) On-demand Services for Scientific Workflows.
Toward a Virtual Laboratory to Assess Biodiversity from Data Produced by an Underwater Microscope
NASA Astrophysics Data System (ADS)
Beaulieu, S.; Ball, M.; Futrelle, J.; Sosik, H. M.
2016-12-01
Real-time data from sensors deployed in the ocean are increasingly available online for broad use by scientists, educators, and the public. Such data have previously been limited to physical parameters, but data for biological parameters are becoming more prevalent with the development of new submersible instruments. Imaging FlowCytobot (IFCB), for example, automatically and rapidly acquires images of microscopic algae (phytoplankton) at the base of the food web in marine ecosystems. These images and products from image processing and automated classification are accessible via web services from an IFCB dashboard. However, until now, to process these data further into results representing the biodiversity of the phytoplankton required a complex workflow that could only be executed by scientists involved in the instrument development. Also, because these data have been collected near continuously for a decade, a number of "big data" challenges arise in attempting to implement and reproduce the workflow. Our research is geared toward the development of a virtual laboratory to enable other scientists and educators, as new users of data from this underwater microscope, to generate biodiversity data products. Our solution involves an electronic notebook (Jupyter Notebook) that can be re-purposed by users with some Python programming experience. However, when we scaled the virtual laboratory to accommodate a 2-month example time series (thousands of binned files each representing thousands of images), we needed to expand the execution environment to include batch processing outside of the notebook. We will share how we packaged these tools to share with other scientists to perform their own biodiversity assessment from data available on an IFCB dashboard. 
Additional outcomes of software development in this project include a prototype for time-series visualizations to be generated in near-real-time and recommendations for new products accessible via web services from the IFCB dashboard.
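As an illustration of the kind of biodiversity product such a notebook can compute, here is a minimal sketch (hypothetical data; the real workflow pulls automated classification results from an IFCB dashboard web service) of the Shannon diversity index for one time bin:

```python
# Compute the Shannon diversity index H' from per-taxon image counts
# in a single sample bin of classified phytoplankton images.
import math
from collections import Counter

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with count > 0."""
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in counts.values() if c > 0)

# Toy classifier output for one bin: taxon -> number of images.
bin_counts = Counter({"diatom": 120, "dinoflagellate": 45, "ciliate": 10})
print(round(shannon_index(bin_counts), 3))
```

Applied across thousands of bins, a calculation like this is what motivates the batch-processing step outside the notebook described above.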
Modeling Constellation Virtual Missions Using the Vdot(Trademark) Process Management Tool
NASA Technical Reports Server (NTRS)
Hardy, Roger; ONeil, Daniel; Sturken, Ian; Nix, Michael; Yanez, Damian
2011-01-01
The authors have identified a software tool suite that will support NASA's Virtual Mission (VM) effort. This is accomplished by transforming a spreadsheet database of mission events, task inputs and outputs, timelines, and organizations into process visualization tools and a Vdot process management model that includes embedded analysis software as well as requirements and information related to data manipulation and transfer. This paper describes the progress to date, the application of the Virtual Mission not only to Constellation but to other architectures, and the pertinence to other aerospace applications. Vdot's intuitive visual interface brings VMs to life by turning static, paper-based processes into active, electronic processes that can be deployed, executed, managed, verified, and continuously improved. A VM can be executed in a computer-based, human-in-the-loop, real-time format, under the direction and control of the NASA VM Manager. Engineers in the various disciplines will not have to be Vdot-proficient; rather, they can fill out online, Excel-type databases with the mission information discussed above. The authors' tool suite converts this database into several process visualization tools for review and into Microsoft Project, which can be imported directly into Vdot. Many tools can be embedded directly into Vdot, and when the necessary data or information is received from a preceding task, the analysis can be initiated automatically. Other NASA analysis tools are too complex for this process, but Vdot automatically notifies the tool user that the data has been received and analysis can begin. The VM can be simulated end-to-end using the authors' tool suite.
The planned approach for the Vdot-based process simulation is to generate the process model from a database. Other advantages of this semi-automated approach are that participants can be geographically remote and that, after the process models are refined via human-in-the-loop simulation, the system can evolve into a process management server for the actual process.
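The database-to-process-model transformation described above can be sketched in miniature (hypothetical row format and task names; the real tool suite targets Microsoft Project and Vdot rather than a plain topological sort):

```python
# Turn spreadsheet-style task rows (task, inputs, outputs) into a
# dependency graph, then derive a valid execution order from it.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

rows = [  # toy rows; real data would come from the Excel-type database
    {"task": "TrajectoryAnalysis", "inputs": ["MissionEvents"], "outputs": ["Trajectory"]},
    {"task": "LoadsAnalysis",      "inputs": ["Trajectory"],    "outputs": ["Loads"]},
    {"task": "DefineEvents",       "inputs": [],                "outputs": ["MissionEvents"]},
]

# Map each output artifact to the task that produces it, then express
# each task's dependencies as the producers of its inputs.
producer = {out: r["task"] for r in rows for out in r["outputs"]}
deps = {r["task"]: {producer[i] for i in r["inputs"] if i in producer}
        for r in rows}

order = list(TopologicalSorter(deps).static_order())
print(order)
```

A process-management tool adds much more on top (timelines, organizations, embedded analyses), but the input/output dependency graph is the structural core that makes automatic task triggering possible.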
Ring-based ultrasonic virtual point detector with applications to photoacoustic tomography
NASA Astrophysics Data System (ADS)
Yang, Xinmai; Li, Meng-Lin; Wang, Lihong V.
2007-06-01
An ultrasonic virtual point detector is constructed using the center of a ring transducer. The virtual point detector provides ideal omnidirectional detection free of any aperture effect. Compared with a real point detector, the virtual one has lower thermal noise and can be scanned with its center inside a physically inaccessible medium. When applied to photoacoustic tomography, the virtual point detector provides both high spatial resolution and high signal-to-noise ratio. It can also be potentially applied to other ultrasound-related technologies.
Simulators and virtual reality in surgical education.
Chou, Betty; Handa, Victoria L
2006-06-01
This article explores the pros and cons of virtual reality simulators, their ability to train and assess surgical skills, and their potential future applications. Computer-based virtual reality simulators and more conventional box trainers are compared and contrasted. The virtual reality simulator provides objective assessment of surgical skills and immediate feedback to further enhance training. With this ability to provide standardized, unbiased assessment of surgical skills, the virtual reality trainer has the potential to be a tool for selecting, instructing, certifying, and recertifying gynecologists.
Time and Space Partition Platform for Safe and Secure Flight Software
NASA Astrophysics Data System (ADS)
Esquinas, Angel; Zamorano, Juan; de la Puente, Juan A.; Masmano, Miguel; Crespo, Alfons
2012-08-01
A number of research and development activities are exploring Time and Space Partitioning (TSP) as a way to implement safe and secure flight software. This approach allows different real-time applications with different levels of criticality to execute on the same computer board. To do so, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition, enabling Ada applications to share the computer board with other applications.
Automated Vectorization of Decision-Based Algorithms
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Virtually all existing vectorization algorithms are designed to analyze only the numeric properties of an algorithm and distribute those elements across multiple processors. This work advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that allows them to be executed automatically in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so that it decomposes naturally across parallel architectures.
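The rewriting idea can be illustrated with a toy example (this is not the reported system, merely a sketch of the decomposition it describes): a compound decision is split into disjunctive Boolean components, each of which can be evaluated independently and in parallel, with the final decision being their OR.

```python
# A compound decision rewritten as a disjunction of self-contained
# Boolean relations, each evaluated on its own worker thread.
from concurrent.futures import ThreadPoolExecutor

# Original decision:  (a > 0 and b < 10) or (a < -5 and c == 1)
components = [
    lambda a, b, c: a > 0 and b < 10,   # first disjunct
    lambda a, b, c: a < -5 and c == 1,  # second disjunct
]

def decide(a, b, c):
    """Evaluate every disjunct in parallel; the decision is their OR."""
    with ThreadPoolExecutor() as ex:
        results = list(ex.map(lambda f: f(a, b, c), components))
    return any(results)

print(decide(3, 4, 0))    # True via the first disjunct
print(decide(-6, 99, 1))  # True via the second disjunct
print(decide(0, 0, 0))    # False
```

A real vectorizer performs this decomposition on source statements rather than hand-written lambdas, but the disjunctive structure that enables parallel evaluation is the same.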
Health care globalization: a need for virtual leadership.
Holland, J Brian; Malvey, Donna; Fottler, Myron D
2009-01-01
As health care organizations expand and move into global markets, they face many leadership challenges, including the difficulty of leading individuals who are geographically dispersed. This article provides global managers with guidelines for leading and motivating individuals or teams from a distance while overcoming the typical challenges that "virtual leaders" and "virtual teams" face: employee isolation, confusion, language barriers, cultural differences, and technological breakdowns. Fortunately, technological advances in communications have provided various methods to accommodate geographically dispersed or "global virtual teams." Health care leaders now have the ability to lead global teams from afar by becoming "virtual leaders" with a responsibility to lead a "virtual team." Three models of globalization presented and discussed are outsourcing of health care services, medical tourism, and telerobotics. These models require global managers to lead virtually, and a positive relationship between the virtual leader and the virtual team member is vital in the success of global health care organizations.
Simulating hemispatial neglect with virtual reality.
Baheux, Kenji; Yoshizawa, Makoto; Yoshida, Yasuko
2007-07-19
Hemispatial neglect is a cognitive disorder defined as a lack of attention to stimuli contralateral to the brain lesion. Assessment is traditionally done with basic pencil-and-paper tests, and rehabilitation programs are generally not well adapted. We propose a virtual reality system featuring an eye-tracking device for a better characterization of the neglect, which will lead to new rehabilitation techniques. This paper presents a comparison of the eye-gaze patterns of healthy subjects, patients, and healthy simulated patients on a virtual line-bisection test. The task was also executed under a reduced-visual-field condition, in the hope that fewer stimuli would limit the neglect. We found that patients and healthy simulated patients had similar eye-gaze patterns. However, while the reduced-visual-field condition had no effect on the healthy simulated patients, it actually had a negative impact on the patients. We discuss the reasons for these differences and how they relate to the limitations of the neglect simulation. We argue that, with some improvements, the technique could be used to determine the potential of new rehabilitation techniques and also help rehabilitation staff or the patient's relatives better understand the neglect condition.
[Memory assessment by means of virtual reality: its present and future].
Diaz-Orueta, Unai; Climent, Gema; Cardas-Ibanez, Jaione; Alonso, Laura; Olmo-Osa, Juan; Tirapu-Ustarroz, Javier
2016-01-16
The human memory is a complex cognitive system whose close relationship with executive functions implies that, on many occasions, a mnemonic deficit involves difficulties operating with correctly stored contents. Traditional memory tests, focused more on information storage than on its processing, may be poorly sensitive both to subjects' daily-life functioning and to changes produced by rehabilitation programs. In memory assessment, there is plenty of evidence of the need to improve testing with instruments that offer higher ecological validity, with information presented in various sensory modalities and produced simultaneously. Virtual reality reproduces three-dimensional environments with which the patient interacts dynamically, with a sense of immersion similar to presence in and exposure to a real environment, and in which the presentation of stimuli, distractors, and other variables can be systematically controlled. The current review examines the trajectory of virtual-reality-based neuropsychological assessment of memory, touring existing tests designed to assess learning and prospective, episodic, and spatial memory, as well as the most recent attempts to perform a comprehensive evaluation of all memory components.
Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie
2018-01-01
Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After a software familiarisation, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance, and provided the greatest benefit to patients with severe executive dysfunction or severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.
The electronic-commerce-oriented virtual merchandise model
NASA Astrophysics Data System (ADS)
Fang, Xiaocui; Lu, Dongming
2004-03-01
Electronic commerce has become a major trend in commercial activity. Provided with a virtual reality interface, electronic commerce gains better expressive capacity and means of interaction. In most applications of virtual reality technology in e-commerce, however, the 3D model describes only the appearance of the merchandise; it carries almost no commercial or interaction information. This results in a disconnect between the virtual model and the commerce information. We therefore present the Electronic Commerce oriented Virtual Merchandise Model (ECVMM), which combines a model with the commerce information, interaction information, and figure information of virtual merchandise. With this richer information, ECVMM provides better support for obtaining and communicating information in electronic commerce.
Virtual Visits and Patient-Centered Care: Results of a Patient Survey and Observational Study.
McGrail, Kimberlyn Marie; Ahuja, Megan Alyssa; Leaver, Chad Andrew
2017-05-26
Virtual visits are clinical interactions in health care that do not involve the patient and provider being in the same room at the same time. Their use is growing rapidly in health care. Some health systems are integrating virtual visits into primary care as a complement to existing modes of care, in part reflecting a growing focus on patient-centered care. There is, however, limited empirical evidence about how patients view this new form of care and how it affects overall health system use. The descriptive objectives were to characterize the users and providers of virtual visits, including the reasons patients give for use. The analytic objective was to assess empirically the influence of virtual visits on overall primary care use and costs, including whether virtual care is with a known or a new primary care physician. The study took place in British Columbia, Canada, where virtual visits have been publicly funded since October 2012. A survey of patients who used virtual visits and an observational study of users and nonusers of virtual visits were conducted. Two comparison groups were used: (1) all other BC residents, and (2) a group matched (3:1) to the cohort. The first virtual visit was treated as the intervention, and the main outcome measures were total primary care visits and costs. During 2013-2014, there were 7286 virtual visit encounters, involving 5441 patients and 144 physicians. Younger patients and physicians were more likely to use and provide virtual visits (P<.001), with no differences by sex. Older and sicker patients were more likely to see a known provider, whereas the lowest socioeconomic groups were the least likely (P<.001). The survey of 399 virtual visit patients indicated that virtual visits were well liked: 372 (93.2%) of respondents said their virtual visit was of high quality, and 364 (91.2%) reported that their virtual visit was "very" or "somewhat" helpful in resolving their health issue.
Segmented regression analysis and the corresponding parameter estimates suggested that virtual visits have the potential to decrease primary care costs by approximately Can $4 per quarter (Can -$3.79, P=.12), but that benefit is most associated with seeing a known provider (Can -$8.68, P<.001). Virtual visits may be one means of making the health system more patient-centered, but careful attention needs to be paid to how these services are integrated into existing health care delivery systems. ©Kimberlyn Marie McGrail, Megan Alyssa Ahuja, Chad Andrew Leaver. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.05.2017.
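The segmented (interrupted time-series) regression design behind the analytic objective can be sketched on synthetic data (all numbers below are made up for illustration; the real analysis used patient-level BC administrative data):

```python
# Interrupted time-series regression: quarterly cost modelled with a
# baseline trend plus a level change and a slope change after the
# intervention (here, the first virtual visit at quarter 6).
import numpy as np

rng = np.random.default_rng(0)
quarters = np.arange(12)                    # 12 quarters of follow-up
post = (quarters >= 6).astype(float)        # 1 after the intervention
time_since = np.where(post > 0, quarters - 6, 0.0)

# Synthetic truth: flat $60/quarter baseline, then -$4/quarter afterwards.
cost = 60 + 0.0 * quarters - 4.0 * time_since + rng.normal(0, 1, 12)

# Design matrix: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones(12), quarters, post, time_since])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(f"estimated post-visit slope change: {beta[3]:.2f} $/quarter")
```

The coefficient on the post-intervention slope term plays the role of the "Can $ per quarter" estimate quoted above; in the study it was estimated separately for visits with known versus new providers.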
Self-assembled software and method of overriding software execution
Bouchard, Ann M.; Osbourn, Gordon C.
2013-01-08
A computer-implemented software self-assembled system and method for providing an external override and monitoring capability to dynamically self-assembling software containing machines that self-assemble execution sequences and data structures. The method provides an external override machine that can be introduced into a system of self-assembling machines while the machines are executing such that the functionality of the executing software can be changed or paused without stopping the code execution and modifying the existing code. Additionally, a monitoring machine can be introduced without stopping code execution that can monitor specified code execution functions by designated machines and communicate the status to an output device.
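One simple way to realize the override-without-stopping idea, sketched here purely as an illustration (this is not the patented mechanism), is to dispatch each execution step through a mutable slot that an externally introduced override can rebind, and to publish status to a queue that a monitor can read, all while the code keeps running:

```python
# A running "machine" whose behaviour can be overridden and monitored
# mid-execution without stopping it or modifying its code.
import threading
import time
import queue

class Machine:
    def __init__(self):
        self.step = lambda x: x + 1      # current behaviour (overridable slot)
        self.monitor = queue.Queue()     # monitors read status updates here
        self.value = 0
        self.running = True

    def run(self):
        while self.running:
            self.value = self.step(self.value)  # dispatch through the slot
            self.monitor.put(self.value)        # publish status for monitors
            time.sleep(0.01)

m = Machine()
t = threading.Thread(target=m.run)
t.start()
time.sleep(0.05)
m.step = lambda x: x * 2   # externally override behaviour mid-execution
time.sleep(0.05)
m.running = False          # pause/stop request, also without code changes
t.join()
print(m.value > 0)
```

The self-assembling machines in the patent are far more general than a single dispatch slot, but the essential property is the same: behaviour and observation points can be swapped in while execution continues.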
ViRPET--combination of virtual reality and PET brain imaging
Majewski, Stanislaw; Brefczynski-Lewis, Julie
2017-05-23
Various methods, systems and apparatus are provided for brain imaging during virtual reality stimulation. In one example, among others, a system for virtual ambulatory environment brain imaging includes a mobile brain imager configured to obtain positron emission tomography (PET) scans of a subject in motion, and a virtual reality (VR) system configured to provide one or more stimuli to the subject during the PET scans. In another example, a method for virtual ambulatory environment brain imaging includes providing stimulation to a subject through a virtual reality (VR) system; and obtaining a positron emission tomography (PET) scan of the subject while moving in response to the stimulation from the VR system. The mobile brain imager can be positioned on the subject with an array of imaging photodetector modules distributed about the head of the subject.
The virtual laboratory: a new on-line resource for the history of psychology.
Schmidgen, Henning; Evans, Rand B
2003-05-01
The authors provide a description of the Virtual Laboratory at Department III of the Max Planck Institute for the History of Science in Berlin. The Virtual Laboratory currently provides Internet links to rooms that present texts, instruments, model organisms, research sites, and biographies. Existing links provide access to a library of journals, handbooks, monographs, and trade catalogues; research institutes and laboratories; biographies and bibliographic essays; and essays by contemporary researchers. Historians of psychology are encouraged to submit photographic material and essays to the Virtual Laboratory.
An Investigation of Communication in Virtual High Schools
ERIC Educational Resources Information Center
Belair, Marley
2012-01-01
Virtual schooling is an increasing trend for secondary education. Research of the communication practices in virtual schools has provided a myriad of suggestions for virtual school policies. The purpose of this qualitative study was to investigate the activities and processes involved in the daily rituals of virtual school teachers and learners…
77 FR 23512 - Advisory Committee on Apprenticeship; virtual meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-19
...; virtual meeting AGENCY: Employment and Training Administration (ETA), Labor. ACTION: Notice of a virtual....S.C. APP. 1), notice is hereby given to announce a open virtual meeting of the Advisory Committee on... the public. A virtual meeting of the ACA provides cost savings and a greater degree of public...
Dynamic Extension of a Virtualized Cluster by using Cloud Resources
NASA Astrophysics Data System (ADS)
Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter
2012-12-01
The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.
NASA Astrophysics Data System (ADS)
Vilotte, J. P.; Atkinson, M.; Spinuso, A.; Rietbrock, A.; Michelini, A.; Igel, H.; Frank, A.; Carpené, M.; Schwichtenberg, H.; Casarotti, E.; Filgueira, R.; Garth, T.; Germünd, A.; Klampanos, I.; Krause, A.; Krischer, L.; Leong, S. H.; Magnoni, F.; Matser, J.; Moguilny, G.
2015-12-01
Seismology addresses both fundamental problems in understanding the Earth's internal wave sources and structures and societal applications such as earthquake and tsunami hazard assessment and risk mitigation; it puts a premium on open data accessible through the Federated Digital Seismological Networks. The VERCE project, "Virtual Earthquake and seismology Research Community e-science environment in Europe", has initiated a virtual research environment supporting complex orchestrated workflows that combine state-of-the-art wave-simulation codes and data-analysis tools on distributed computing and data infrastructures (DCIs), along with multiple sources of observational data and new capabilities for combining simulation results with observations. The VERCE Science Gateway provides a view of all the available resources, supporting collaboration with shared data and methods, with data access controls. The mapping to DCIs handles identity management, authority controls, transformations between representations and controls, and access to resources. The framework for computational science that provides simulation codes such as SPECFEM3D democratizes their use by fetching data from multiple sources, managing Earth models and meshes, distilling them into input data, and capturing results with metadata. The dispel4py data-intensive framework allows data-analysis applications to be developed in Python with the ObsPy library and executed on different DCIs. A set of tools allows coupling with seismology and external data services. Provenance-driven tools validate results and show relationships between data to facilitate method improvement. Lessons learned from VERCE training lead us to conclude that solid-Earth scientists could make significant progress by using the VERCE e-science environment. VERCE has already contributed to the European Plate Observation System (EPOS), and is part of the EPOS implementation phase.
Its cross-disciplinary capabilities are being extended for the EPOS implementation phase.
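The streaming composition style that frameworks like dispel4py support can be illustrated with plain Python generators (the actual dispel4py API differs; the trace format and processing steps below are toy assumptions):

```python
# A miniature dataflow pipeline: processing elements are chained so that
# traces stream through read -> detrend -> peak-amplitude stages.
def read_traces(n):
    """Source stage: emit n toy 'trace' records."""
    for i in range(n):
        yield {"id": i, "samples": [0.0, 1.0, -1.0, 0.5]}

def detrend(traces):
    """Remove the mean from each trace's samples."""
    for tr in traces:
        mean = sum(tr["samples"]) / len(tr["samples"])
        tr["samples"] = [s - mean for s in tr["samples"]]
        yield tr

def peak_amplitude(traces):
    """Sink stage: reduce each trace to (id, peak absolute amplitude)."""
    for tr in traces:
        yield tr["id"], max(abs(s) for s in tr["samples"])

pipeline = peak_amplitude(detrend(read_traces(3)))
print(list(pipeline))
```

In dispel4py each stage would be a processing element that the framework can map onto different DCIs (MPI, Storm, sequential) without changing the composition; the generator chain above captures only the dataflow idea.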
Fully Three-Dimensional Virtual-Reality System
NASA Technical Reports Server (NTRS)
Beckman, Brian C.
1994-01-01
Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. The system, the virtual space pod, is a testbed for control and navigation schemes. Unlike most virtual-reality systems, the virtual space pod would not depend on a ground plane for orientation, since a ground plane hinders free flight in three dimensions. The space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for the virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.
Torak, L.J.
1993-01-01
A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady- or unsteady-state, two-dimensional or axisymmetric ground-water flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by adding or modifying subroutines that perform specific computational tasks, and the modular program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desktop personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into complete coefficients just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular program structure are given in tables that cross-reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible.
Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.
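The packed-storage scheme described above, in which parts of symmetric coefficient matrices are kept as single-subscripted (1-D) variables and assembled just prior to solution, can be illustrated with a small sketch. The index mapping and function names here are illustrative, not MODFE's actual Fortran layout:

```python
# Sketch of storing the lower triangle of a symmetric coefficient matrix
# in a single-subscripted (1-D) array, as MODFE-style codes do to exploit
# symmetry and save storage. Layout and names are illustrative.

def tri_index(i, j):
    """Map (i, j) of a symmetric matrix to a 1-D index (row-major lower triangle)."""
    if i < j:
        i, j = j, i
    return i * (i + 1) // 2 + j

def assemble(n, element_contribs):
    """Accumulate element (i, j, value) contributions into the packed array."""
    a = [0.0] * (n * (n + 1) // 2)
    for i, j, v in element_contribs:
        a[tri_index(i, j)] += v
    return a

def full_matrix(n, a):
    """Expand the packed array to a full symmetric matrix just prior to solution."""
    return [[a[tri_index(i, j)] for j in range(n)] for i in range(n)]

# Two-node example: diagonal terms plus one symmetric off-diagonal term.
packed = assemble(2, [(0, 0, 2.0), (1, 0, 1.0), (1, 1, 3.0)])
matrix = full_matrix(2, packed)
# packed == [2.0, 1.0, 3.0]; matrix == [[2.0, 1.0], [1.0, 3.0]]
```

An n-by-n symmetric matrix needs only n(n+1)/2 stored entries this way, which is the kind of saving that made MODFE practical on desktop machines of its era.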
Establishing a virtual learning environment: a nursing experience.
Wood, Anya; McPhee, Carolyn
2011-11-01
The use of virtual worlds has exploded in popularity, but getting started may not be easy. In this article, the authors, members of the corporate nursing education team at University Health Network, outline their experience with incorporating virtual technology into their learning environment. Over a period of several months, a virtual hospital, including two nursing units, was created in Second Life®, allowing more than 500 nurses to role-play in a safe environment without the fear of making a mistake. This experience has provided valuable insight into the best ways to develop and learn in a virtual environment. The authors discuss the challenges of installing and building the Second Life® platform and provide guidelines for preparing users and suggestions for crafting educational activities. This article provides a starting point for organizations planning to incorporate virtual worlds into their learning environment. Copyright 2011, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.
2016-02-01
Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected physical and virtual experiences has the potential to promote connections among ideas. This paper explores the effect of augmenting a virtual lab with physical controls on high school chemistry students' understanding of gas laws. We compared students using the augmented virtual lab to students using a similar sensor-based physical lab with teacher-led discussions. Results demonstrate that students in the augmented virtual lab condition made significant gains from pretest to posttest and outperformed traditional students on some but not all concepts. Results provide insight into incorporating mixed-reality technologies into authentic classroom settings.
Modulation of visually evoked movement responses in moving virtual environments.
Reed-Jones, Rebecca J; Vallis, Lori Ann
2009-01-01
Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.
NASA Astrophysics Data System (ADS)
Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.
2009-12-01
Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40-samples-per-second seismic and state-of-health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment.
Shared filesystem architectures using PxFS and QFS were found to be incompatible with our software architecture, so sharing of data between systems is accomplished via traditional NFS. Linux was found to be limited in terms of deployment flexibility and consistency between versions. Despite the experimentation with various technologies, our current virtualized architecture is stable, with an average daily real-time data return rate of 92.34% over the entire lifetime of the project to date.
An Overview of Evaluative Instrumentation for Virtual High Schools
ERIC Educational Resources Information Center
Black, Erik W.; Ferdig, Richard E.; DiPietro, Meredith
2008-01-01
With an increasing prevalence of virtual high school programs in the United States, a better understanding of evaluative tools available for distance educators and administrators is needed. These evaluative tools would provide opportunities for assessment and a determination of success within virtual schools. This article seeks to provide an…
Efficacy of a Virtual Teaching Assistant in an Open Laboratory Environment for Electric Circuits
ERIC Educational Resources Information Center
Saleheen, Firdous; Wang, Zicong; Picone, Joseph; Butz, Brian P.; Won, Chang-Hee
2018-01-01
In order to provide an on-demand, open electrical engineering laboratory, we developed an innovative software-based Virtual Open Laboratory Teaching Assistant (VOLTA). This web-based virtual assistant provides laboratory instructions, equipment usage videos, circuit simulation assistance, and hardware implementation diagnostics. VOLTA allows…
Virtual Reality and Its Potential Application in Education and Training.
ERIC Educational Resources Information Center
Milheim, William D.
1995-01-01
An overview is provided of current trends in virtual reality research and development, including discussion of hardware, types of virtual reality, and potential problems with virtual reality. Implications for education and training are explored. (Author/JKP)
Event-Driven Technology to Generate Relevant Collections of Near-Realtime Data
NASA Astrophysics Data System (ADS)
Graves, S. J.; Keiser, K.; Nair, U. S.; Beck, J. M.; Ebersole, S.
2017-12-01
Getting the right data when it is needed continues to be a challenge for researchers and decision makers. Event-Driven Data Delivery (ED3), funded by the NASA Applied Science program, is a technology that allows researchers and decision makers to pre-plan what data, information, and processes they need to have collected or executed in response to future events. The Information Technology and Systems Center at the University of Alabama in Huntsville (UAH) has developed the ED3 framework in collaboration with atmospheric scientists at UAH, scientists at the Geological Survey of Alabama, and other federal, state, and local stakeholders to meet data-preparedness needs for research, decisions, and situational awareness. The ED3 framework exposes an API for adding loosely coupled, distributed event handlers and data processes. This approach makes it easy to add new events and data processes, so the system can scale to support virtually any type of event or data process. Using ED3's underlying services, applications have been developed that monitor for alerts of registered event types and automatically trigger subscriptions that match new events, providing users with a living "album" of results that can continue to be curated as more information about an event becomes available. This capability allows users to improve their capacity for the collection, creation, and use of data and real-time processes (data access, model execution, product generation, sensor tasking, social media filtering, etc.) in response to disaster and other events by preparing in advance for the data and information needs of future events. This presentation will provide an update on ED3 developments and deployments, and will further explain the applicability of near-realtime data to hazards research, response, and situational awareness.
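The pre-planned subscription mechanism described above can be sketched minimally; all class, method, and event names below are hypothetical, not the actual ED3 API:

```python
# Hypothetical sketch of ED3-style event-driven data delivery: users register
# subscriptions for future event types, and each matching event triggers the
# associated data processes, accumulating results into an "album".

class EventBroker:
    def __init__(self):
        self.subscriptions = []   # (event_type, handler) pairs
        self.albums = {}          # event id -> accumulated results ("album")

    def subscribe(self, event_type, handler):
        """Pre-plan a data process to run when a matching event occurs."""
        self.subscriptions.append((event_type, handler))

    def publish(self, event_id, event_type, payload):
        """Alert of a new event: run every matching handler, curate results."""
        album = self.albums.setdefault(event_id, [])
        for etype, handler in self.subscriptions:
            if etype == event_type:
                album.append(handler(payload))
        return album

broker = EventBroker()
broker.subscribe("wildfire", lambda e: f"satellite imagery for {e['area']}")
broker.subscribe("wildfire", lambda e: f"smoke model run for {e['area']}")
album = broker.publish("ev-1", "wildfire", {"area": "Alabama"})
# album holds one result per matching pre-planned process
```

In the real framework the handlers would be loosely coupled, distributed processes (model runs, product generation, sensor tasking) rather than in-process callbacks, but the subscription/trigger/album flow is the same idea.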
PRAIS: Distributed, real-time knowledge-based systems made easy
NASA Technical Reports Server (NTRS)
Goldstein, David G.
1990-01-01
This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.
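The virtual-blackboard architecture can be sketched in a few lines; this is only an illustration of the message-passing idea, since PRAIS itself distributes real CLIPS engines over a heterogeneous network:

```python
# Library-free sketch of a virtual blackboard: several rule engines share
# facts by message passing through a common blackboard, so a fact asserted
# in one engine becomes visible to all the others.

class Blackboard:
    def __init__(self):
        self.engines = []

    def attach(self, engine):
        self.engines.append(engine)
        engine.blackboard = self

    def broadcast(self, fact, sender):
        """Deliver an asserted fact to every other attached engine."""
        for engine in self.engines:
            if engine is not sender:
                engine.receive(fact)

class RuleEngine:
    """Stand-in for one CLIPS copy: holds a local fact base."""
    def __init__(self, name):
        self.name = name
        self.facts = set()
        self.blackboard = None

    def assert_fact(self, fact):
        self.facts.add(fact)
        self.blackboard.broadcast(fact, sender=self)

    def receive(self, fact):
        self.facts.add(fact)

bb = Blackboard()
a, b = RuleEngine("a"), RuleEngine("b")
bb.attach(a)
bb.attach(b)
a.assert_fact(("sensor", "overheat"))
# the fact asserted in engine a is now also in engine b's fact base
```

The point of the architecture is that rule firings stay local to each engine while the fact traffic, here a direct method call, rides on ordinary message-passing protocols across processors.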
A Real-Time Executive for Multiple-Computer Clusters.
1984-12-01
in a real-time environment is tantamount to speed and efficiency. By effectively co-locating real-time sensors and related processing modules, real... of which there are two kinds: multicast group address - virtually any number of node groups can be assigned a group address so they are all able... interface_loopback, internal_loopback, clear_loopback, go_offline, go_online, onboard_diagnostic...
A LabVIEW based template for user created experiment automation.
Kim, D J; Fisk, Z
2012-12-01
We have developed an expandable software template to automate user-created experiments. The LabVIEW-based template is easily modified to combine user-created measurements, controls, and data logging with virtually any type of laboratory equipment. We use reentrant sequential selection to implement sequence scripting, making it possible to wrap a long series of user-created experiments and execute them in sequence. Details of the software structure and application examples for a scanning probe microscope and for automated transport experiments using custom-built laboratory electronics and a cryostat are described.
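The sequence-scripting idea is easiest to see outside LabVIEW's graphical dataflow; as a Python analogy (the step names and parameters are invented for illustration):

```python
# Python analogy of a sequence script: a list of (step_name, params) entries
# naming user-created measurement/control steps, executed in order by a
# runner that logs results. Step names here are illustrative.

def set_temperature(params, log):
    log.append(("set_temperature", params["kelvin"]))

def measure_resistance(params, log):
    log.append(("measure_resistance", params["current_mA"]))

STEPS = {
    "set_temperature": set_temperature,
    "measure_resistance": measure_resistance,
}

def run_sequence(script, log=None):
    """Execute a sequence script, returning the accumulated data log."""
    log = [] if log is None else log
    for name, params in script:
        STEPS[name](params, log)
    return log

script = [
    ("set_temperature", {"kelvin": 4.2}),
    ("measure_resistance", {"current_mA": 1.0}),
]
log = run_sequence(script)
# log == [("set_temperature", 4.2), ("measure_resistance", 1.0)]
```

A user extends the system by registering a new step function, which mirrors how the LabVIEW template lets user-created measurements and controls be added without restructuring the whole program.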
Urgent Virtual Machine Eviction with Enlightened Post-Copy
2015-12-01
memory is in use, almost all of which is by Memcached. MySQL: The VMs run MySQL 5.6, and the clients execute OLTPBenchmark [3] using the Twitter... workload with a scale factor of 960. The VMs are each allocated 16 cores and 30 GB of memory, and MySQL is configured with a 16 GB buffer pool in memory. The... operation mix for 5 minutes as a warm-up. At the time of migration, MySQL uses approximately 17 GB of memory, and almost all of the 30 GB memory is
NASA Astrophysics Data System (ADS)
Basso Moro, Sara; Carrieri, Marika; Avola, Danilo; Brigadoi, Sabrina; Lancia, Stefania; Petracca, Andrea; Spezialetti, Matteo; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina
2016-06-01
Objective. In the last few years, interest in applying virtual reality systems to neurorehabilitation has been increasing. Their compatibility with neuroimaging techniques, such as functional near-infrared spectroscopy (fNIRS), allows for the investigation of brain reorganization with multimodal stimulation and real-time control of the changes occurring in brain activity. The present study was aimed at testing a novel semi-immersive visuo-motor task (VMT) with features suitable for adoption in the neurorehabilitation of upper-limb motor function. Approach. A virtual environment was simulated through a three-dimensional hand-sensing device (the LEAP Motion Controller), and the concomitant VMT-related prefrontal cortex (PFC) response was monitored non-invasively by fNIRS. For the VMT, performed at three different levels of difficulty, it was hypothesized that the PFC would be activated, with a greater level of activation expected in the ventrolateral PFC (VLPFC), given its involvement in motor action planning and in allocating attentional resources to generate goals from current contexts. Twenty-one subjects were asked to move their right hand/forearm to guide a virtual sphere over a virtual path. A twenty-channel fNIRS system was employed to measure changes in PFC oxygenated and deoxygenated hemoglobin (O2Hb and HHb, respectively). Main results. A VLPFC O2Hb increase and a concomitant HHb decrease were observed during VMT performance, without any difference in relation to task difficulty. Significance. The present study revealed a particular involvement of the VLPFC in the execution of the novel semi-immersive VMT, which is adoptable in the neurorehabilitation field.
Virtual Sensors in a Web 2.0 Digital Watershed
NASA Astrophysics Data System (ADS)
Liu, Y.; Hill, D. J.; Marini, L.; Kooper, R.; Rodriguez, A.; Myers, J. D.
2008-12-01
The lack of rainfall data in many watersheds is one of the major barriers to modeling and studying many environmental and hydrological processes and to supporting decision making; there are simply not enough rain gages on the ground. To overcome this data-scarcity issue, a Web 2.0 digital watershed has been developed at NCSA (National Center for Supercomputing Applications), where users can point and click on a web-based Google Maps interface and create new precipitation virtual sensors at any location within the coverage region of a NEXRAD station. A set of scientific workflows performs spatial, temporal, and thematic transformations on the near-real-time NEXRAD Level II data. Such workflows can be triggered by users' actions and generate either rainfall-rate or rainfall-accumulation streaming data at a user-specified time interval. We will discuss some underlying components of this digital watershed, which consists of a semantic content management middleware, a semantically enhanced streaming-data toolkit, virtual-sensor management functionality, and a RESTful (REpresentational State Transfer) web service that can trigger workflow execution. This loosely coupled architecture presents a generic framework for constructing a Web 2.0-style digital watershed. An implementation of this architecture for the Upper Illinois River Basin will be presented. We will also discuss the implications of the virtual-sensor concept for the broader environmental observatory community and how the concept will help us move towards a participatory digital watershed.
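A precipitation virtual sensor of the kind described above can be sketched as follows; the grid sampling, units, and class name are stand-ins for the real NEXRAD Level II transformations:

```python
# Illustrative sketch of a precipitation "virtual sensor": given a gridded
# radar-derived rainfall-rate field, it reports the rate at a user-chosen
# cell at each time step and keeps a running accumulation. The grid and
# rate-to-depth conversion are stand-ins for real NEXRAD processing.

class VirtualRainSensor:
    def __init__(self, lat_idx, lon_idx, interval_s):
        self.lat_idx = lat_idx
        self.lon_idx = lon_idx
        self.interval_s = interval_s      # user-specified reporting interval
        self.accumulation_mm = 0.0

    def observe(self, rate_grid_mm_per_h):
        """Sample the rate at this sensor's cell and update accumulation."""
        rate = rate_grid_mm_per_h[self.lat_idx][self.lon_idx]
        self.accumulation_mm += rate * self.interval_s / 3600.0
        return rate

sensor = VirtualRainSensor(lat_idx=1, lon_idx=0, interval_s=1800)
grid = [[0.0, 2.0], [4.0, 8.0]]   # mm/h over a 2x2 region for one interval
rate = sensor.observe(grid)
# 4.0 mm/h sustained for half an hour adds 2.0 mm of accumulation
```

In the actual system the point-and-click interface would create such a sensor at an arbitrary location, and the workflow engine would stream it a new grid at each interval.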
Rose, Nathan S.; Rendell, Peter G.; Hering, Alexandra; Kliegel, Matthias; Bidelman, Gavin M.; Craik, Fergus I. M.
2015-01-01
Prospective memory (PM) – the ability to remember and successfully execute our intentions and planned activities – is critical for functional independence and declines with age, yet few studies have attempted to train PM in older adults. We developed a PM training program using the Virtual Week computer game. Trained participants played the game in 12, 1-h sessions over 1 month. Measures of neuropsychological functions, lab-based PM, event-related potentials (ERPs) during performance on a lab-based PM task, instrumental activities of daily living, and real-world PM were assessed before and after training. Performance was compared to both no-contact and active (music training) control groups. PM on the Virtual Week game dramatically improved following training relative to controls, suggesting PM plasticity is preserved in older adults. Relative to control participants, training did not produce reliable transfer to laboratory-based tasks, but was associated with a reduction of an ERP component (sustained negativity over occipito-parietal cortex) associated with processing PM cues, indicative of more automatic PM retrieval. Most importantly, training produced far transfer to real-world outcomes including improvements in performance on real-world PM and activities of daily living. Real-world gains were not observed in either control group. Our findings demonstrate that short-term training with the Virtual Week game produces cognitive and neural plasticity that may result in real-world benefits to supporting functional independence in older adulthood. PMID:26578936
Richwine, M P; McGowan, J J
2001-01-01
The Shared Hospital Electronic Library of Southern Indiana (SHELSI) research project was designed to determine whether access to a virtual health sciences library and training in its use would support medical decision making in rural southern Indiana and achieve the same level of impact seen by targeted information services provided by health sciences librarians in urban hospitals. Based on the results of a needs assessment, a virtual medical library was created; various levels of training were provided. Virtual library users were asked to complete a Likert-type survey, which included questions on intent of use and impact of use. At the conclusion of the project period, structured interviews were conducted. Impact of the virtual health sciences library showed a strong correlation with the impact of information provided by health sciences librarians. Both interventions resulted in avoidance of adverse health events. Data collected from the structured interviews confirmed the perceived value of the virtual library. While librarians continue to hold a strong position in supporting information access for health care providers, their roles in the information age must begin to move away from providing information toward selecting and organizing knowledge resources and instruction in their use.
78 FR 45948 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-30
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting... Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as indicated below... provide advice on coordinated national-level wildland fire policy and to provide leadership, direction...
78 FR 65698 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting... Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as indicated below... provide advice on coordinated national-level wildland fire policy and to provide leadership, direction...
78 FR 15033 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-08
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting... of the Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as... provide advice on coordinated national-level wildland fire policy and to provide leadership, direction...
78 FR 45949 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-30
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting... Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as indicated below... provide advice on coordinated national-level wildland fire policy and to provide leadership, direction...
76 FR 22130 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-20
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting... Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as indicated below... provide advice on coordinated national-level wildland fire policy and to provide leadership, direction...
A cognitive approach to classifying perceived behaviors
NASA Astrophysics Data System (ADS)
Benjamin, Dale Paul; Lyons, Damian
2010-04-01
This paper describes our work on integrating distributed, concurrent control in a cognitive architecture, and using it to classify perceived behaviors. We are implementing the Robot Schemas (RS) language in Soar. RS is a CSP-type programming language for robotics that controls a hierarchy of concurrently executing schemas. The behavior of every RS schema is defined using port automata. This provides precision to the semantics and also a constructive means of reasoning about the behavior and meaning of schemas. Our implementation uses Soar operators to build, instantiate and connect port automata as needed. Our approach is to use comprehension through generation (similar to NLSoar) to search for ways to construct port automata that model perceived behaviors. The generality of RS permits us to model dynamic, concurrent behaviors. A virtual world (Ogre) is used to test the accuracy of these automata. Soar's chunking mechanism is used to generalize and save these automata. In this way, the robot learns to recognize new behaviors.
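A port automaton of the kind RS uses to define schema behavior can be sketched minimally: states plus transitions labeled by input/output port events. The example states and port events below are invented for illustration, not taken from RS:

```python
# Minimal sketch of a port automaton: a state machine whose transitions
# are labeled by communications on input/output ports. Example states and
# events are illustrative, not from the RS language itself.

class PortAutomaton:
    def __init__(self, start, transitions):
        # transitions: dict mapping (state, port_event) -> next state
        self.state = start
        self.transitions = transitions

    def step(self, port_event):
        """Advance on a port event; reject events with no transition."""
        key = (self.state, port_event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {port_event!r} in {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# A toy "approach object" schema: sense a target, move until contact.
approach = PortAutomaton("idle", {
    ("idle", "in:target_seen"): "moving",
    ("moving", "out:velocity"): "moving",
    ("moving", "in:contact"): "done",
})
approach.step("in:target_seen")
approach.step("out:velocity")
final = approach.step("in:contact")
# final == "done"
```

Because behavior is defined by which port events are accepted in which states, automata like this can be compared against observed event sequences, which is the basis for classifying perceived behaviors described above.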
NASA Astrophysics Data System (ADS)
Berdychowski, Piotr P.; Zabolotny, Wojciech M.
2010-09-01
The main goal of the C to VHDL compiler project is to make the FPGA platform more accessible to scientists and software developers. The FPGA platform offers the unique ability to configure the hardware to implement virtually any dedicated architecture, and modern devices provide a sufficient number of hardware resources to implement parallel execution platforms with complex processing units. All this makes the FPGA platform very attractive to those looking for an efficient, heterogeneous computing environment. The current industry standard for developing digital systems on the FPGA platform is based on HDLs. Although very effective and expressive in the hands of hardware development specialists, these languages require specific knowledge and experience that are out of reach for most scientists and software programmers. The C to VHDL compiler project attempts to remedy this by creating an application that derives an initial VHDL description of a digital system (for further compilation and synthesis) from a purely algorithmic description in the C programming language. The idea itself is not new, and the C to VHDL compiler combines the best approaches from existing solutions developed over many previous years with some new, unique improvements.
CrocoBLAST: Running BLAST efficiently in the age of next-generation sequencing.
Tristão Ramos, Ravi José; de Azevedo Martins, Allan Cézar; da Silva Delgado, Gabrielle; Ionescu, Crina-Maria; Ürményi, Turán Peter; Silva, Rosane; Koca, Jaroslav
2017-11-15
CrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; queueing, pausing, and resuming BLAST+ calculations without information loss. CrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine. jkoca@ceitec.cz. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
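The abstract does not describe CrocoBLAST's internals; a common way such wrappers speed up BLAST+ on a single machine is to split the query FASTA into chunks and run one BLAST+ process per core, so the following sketch shows only that (assumed) chunking step, not CrocoBLAST's actual implementation:

```python
# Sketch of the input-splitting step commonly used to parallelize BLAST+
# on one machine: partition the query FASTA into record chunks that can
# each be fed to a separate blastn/blastp process. This is an assumed
# mechanism for illustration, not CrocoBLAST's documented internals.

def split_fasta(text, n_chunks):
    """Split FASTA text into n_chunks lists of records, round-robin."""
    records, current = [], []
    for line in text.splitlines():
        if line.startswith(">"):
            if current:
                records.append("\n".join(current))
            current = [line]
        elif line.strip():
            current.append(line)
    if current:
        records.append("\n".join(current))
    chunks = [[] for _ in range(n_chunks)]
    for i, rec in enumerate(records):
        chunks[i % n_chunks].append(rec)   # balance records across chunks
    return chunks

fasta = ">q1\nACGT\n>q2\nGGCC\n>q3\nTTAA\n"
chunks = split_fasta(fasta, 2)
# chunks[0] holds q1 and q3; chunks[1] holds q2
```

Because BLAST+ searches each query independently, per-chunk results can simply be concatenated, which is consistent with the abstract's claim that results are identical to plain BLAST+.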
Stochastic approach to error estimation for image-guided robotic systems.
Haidegger, Tamas; Győri, Sándor; Benyo, Balazs; Benyó, Zoltán
2010-01-01
Image-guided surgical systems and surgical robots are primarily developed to provide patient safety through increased precision and minimal invasiveness. Moreover, robotic devices should allow for refined treatments that are not possible by other means. It is crucial to determine the accuracy of a system in order to define the expected overall task execution error. A major step toward this aim is to quantitatively analyze the effect of registration and tracking: a series of multiplications of erroneous homogeneous transformations. First, the currently used models and algorithms are introduced along with their limitations, and a new, probability-distribution-based method is described. The new approach has several advantages, as demonstrated in our simulations. Primarily, it determines the full 6-degree-of-freedom accuracy of the point of interest, allowing for more accurate use of advanced application-oriented concepts, such as Virtual Fixtures. In addition, it becomes feasible to consider different surgical scenarios with varying weighting factors.
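The central theme above, error propagation through a chain of erroneous homogeneous transformations, can be illustrated with a small Monte Carlo sketch in 2-D. The noise magnitudes are invented, and plain sampling is used here rather than the authors' analytic probability-distribution method:

```python
# Monte Carlo illustration of error propagation through multiplied noisy
# homogeneous transforms (2-D for brevity): compose a "registration" and a
# "tracking" transform with small random errors many times and measure the
# spread of a target point. Noise magnitudes are invented for illustration.

import math
import random

def transform(theta, tx, ty):
    """3x3 homogeneous 2-D rigid transform (rotation theta, translation)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

random.seed(0)
target = (10.0, 0.0)   # point of interest, 10 units from the frame origin
samples = []
for _ in range(2000):
    # registration then tracking, each with small angular/translational noise
    reg = transform(random.gauss(0.0, 0.01), random.gauss(0.0, 0.1), 0.0)
    trk = transform(random.gauss(0.0, 0.01), 0.0, random.gauss(0.0, 0.1))
    samples.append(apply(matmul(trk, reg), target))

mean_x = sum(x for x, _ in samples) / len(samples)
rms_err = math.sqrt(sum((x - target[0]) ** 2 + (y - target[1]) ** 2
                        for x, y in samples) / len(samples))
```

The sketch makes the paper's motivating point visible: small angular errors are amplified by the lever arm to the target, so the distribution of the end-point error, not just a worst-case bound, is what matters for concepts like Virtual Fixtures.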
Virtual Schools Report 2016: Directory and Performance Review
ERIC Educational Resources Information Center
Miron, Gary; Gulosino, Charisse
2016-01-01
This 2016 report is the fourth in an annual series of National Education Policy Center (NEPC) reports on the fast-growing U.S virtual school sector. This year's report provides a comprehensive directory of the nation's full-time virtual and blended learning school providers. It also pulls together and assesses the available evidence on the…
ERIC Educational Resources Information Center
Abdal-Haqq, Ismat, Ed.
This book is designed to provide practical information about planning and operating virtual, or online, schools. It discusses and illustrates promising practices and successful models and approaches; provides planning resources for implementation; presents costs and benefits of launching virtual schools; offers preventive strategies that help…
Pre-Service Teachers Designing Virtual World Learning Environments
ERIC Educational Resources Information Center
Jacka, Lisa; Booth, Kate
2012-01-01
Integrating Information Technology Communications in the classroom has been an important part of pre-service teacher education for over a decade. The advent of virtual worlds provides the pre-service teacher with an opportunity to study teaching and learning in a highly immersive 3D computer-based environment. Virtual worlds also provide a place…
Designing virtual science labs for the Islamic Academy of Delaware
NASA Astrophysics Data System (ADS)
AlZahrani, Nada Saeed
Science education is a basic part of the curriculum in modern-day classrooms. Instructional approaches to science education can take many forms, but hands-on application of theory via science laboratory activities is common. Not all schools have the resources to provide the laboratory environment necessary for hands-on application of science theory; some settings rely on technology to provide a virtual laboratory experience instead. The Islamic Academy of Delaware (IAD), a typical community-based organization, was formed to support and meet the essential needs of the Muslim community of Delaware. IAD provides science education as part of its overall curriculum but cannot provide laboratory activities as part of the science program. Virtual science labs may be a successful model for students at IAD. This study was conducted to investigate the potential of implementing virtual science labs at IAD and to develop an implementation plan for integrating the virtual labs. The literature has shown that the lab experience is a valuable part of the science curriculum (NBPTS, 2013; Wolf, 2010; National Research Council, 1997, 2012). The National Research Council (2012) stressed the inclusion of laboratory investigations in the science curriculum. The literature also supports the use of virtual labs as an effective substitute for classroom labs (Babateen, 2011; National Science Teachers Association, 2008). Pyatt and Simms (2011) found evidence that virtual labs were as good as, if not better than, physical lab experiences in some respects. Although not identical in experience to a live lab, the virtual lab has been shown to provide the student with an effective laboratory experience in situations where the live lab is not possible. The results of the IAD teacher interviews indicate that the teachers are well prepared for, and supportive of, the implementation of virtual labs to improve the science education curriculum.
The investigator believes that with the support of the literature and the readiness of the IAD administration and teachers, a recommendation to implement virtual labs into the curriculum can be made.
An Audio Architecture Integrating Sound and Live Voice for Virtual Environments
2002-09-01
implementation of a virtual environment. As real world training locations become scarce and training budgets are trimmed, training system developers ...look more and more towards virtual environments as the answer. Virtual environments provide training system developers with several key benefits
Performance comparison analysis library communication cluster system using merge sort
NASA Astrophysics Data System (ADS)
Wulandari, D. A. R.; Ramadhan, M. E.
2018-04-01
Computing began with a single processor; to increase computing speed, multi-processor systems were introduced. This second paradigm is known as parallel computing, of which the cluster is one example. A cluster must have a communication protocol for processing; one such protocol is the Message Passing Interface (MPI), which has several library implementations, among them OpenMPI and MPICH2. The performance of a cluster machine depends on how well the performance characteristics of the communication library suit the characteristics of the problem, so this study aims to compare the performance of these libraries in handling a parallel computing process. The case studies in this research are MPICH2 and OpenMPI, which execute a sorting problem, solved with the merge sort method, to gauge the performance of the cluster system. The research method is to implement OpenMPI and MPICH2 on a Linux-based cluster of five virtual computers and then analyze the performance of the system under different test scenarios using three parameters: execution time, speedup, and efficiency. The results of this study show that with each increase in data size, OpenMPI and MPICH2 exhibit average speedup and efficiency that tend to increase but that decrease at large data sizes; an increased data size does not necessarily increase speedup and efficiency, only execution time, as for example at a data size of 100,000. OpenMPI has a greater execution time than MPICH2; for example, at a data size of 1,000 the average execution time is 0.009721 with MPICH2 and 0.003895 with OpenMPI. OpenMPI can customize communication needs.
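The benchmark workload and the two derived metrics the study reports can be sketched briefly. The following is an illustrative sketch, not the authors' code; the timing values in the usage lines are invented for demonstration.

```python
# Merge sort: the benchmark workload used to exercise the cluster, plus the
# two performance metrics reported in the study (speedup and efficiency).

def merge_sort(data):
    """Recursively sort a list with the merge sort method."""
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])
    right = merge_sort(data[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Efficiency E = S / p, for p processors."""
    return speedup(t_serial, t_parallel) / n_procs

print(merge_sort([5, 2, 9, 1]))   # [1, 2, 5, 9]
print(speedup(10.0, 2.5))         # 4.0
print(efficiency(10.0, 2.5, 5))   # 0.8
```

In the study itself the workload is distributed across the cluster with MPI calls; the metrics above are then computed from the measured serial and parallel execution times.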
Ortiz-Catalan, Max; Guðmundsdóttir, Rannveig A; Kristoffersen, Morten B; Zepeda-Echavarria, Alejandra; Caine-Winterberger, Kerstin; Kulbacka-Ortiz, Katarzyna; Widehammar, Cathrine; Eriksson, Karin; Stockselius, Anita; Ragnö, Christina; Pihlar, Zdenka; Burger, Helena; Hermansson, Liselotte
2016-12-10
Phantom limb pain is a debilitating condition for which no effective treatment has been found. We hypothesised that re-engagement of central and peripheral circuitry involved in motor execution could reduce phantom limb pain via competitive plasticity and reversal of cortical reorganisation. Patients with upper limb amputation and known chronic intractable phantom limb pain were recruited at three clinics in Sweden and one in Slovenia. Patients received 12 sessions of phantom motor execution using machine learning, augmented and virtual reality, and serious gaming. Changes in intensity, frequency, duration, quality, and intrusion of phantom limb pain were assessed by the use of the numeric rating scale, the pain rating index, the weighted pain distribution scale, and a study-specific frequency scale before each session and at follow-up interviews 1, 3, and 6 months after the last session. Changes in medication and prostheses were also monitored. Results are reported using descriptive statistics and analysed by non-parametric tests. The trial is registered at ClinicalTrials.gov, number NCT02281539. Between Sept 15, 2014, and April 10, 2015, 14 patients with intractable chronic phantom limb pain, for whom conventional treatments failed, were enrolled. After 12 sessions, patients showed statistically and clinically significant improvements in all metrics of phantom limb pain. Phantom limb pain decreased from pre-treatment to the last treatment session by 47% (SD 39; absolute mean change 1·0 [0·8]; p=0·001) for weighted pain distribution, 32% (38; absolute mean change 1·6 [1·8]; p=0·007) for the numeric rating scale, and 51% (33; absolute mean change 9·6 [8·1]; p=0·0001) for the pain rating index. The numeric rating scale score for intrusion of phantom limb pain in activities of daily living and sleep was reduced by 43% (SD 37; absolute mean change 2·4 [2·3]; p=0·004) and 61% (39; absolute mean change 2·3 [1·8]; p=0·001), respectively. 
Two of four patients who were on medication reduced their intake by 81% (absolute reduction 1300 mg, gabapentin) and 33% (absolute reduction 75 mg, pregabalin). Improvements remained 6 months after the last treatment. Our findings suggest potential value in motor execution of the phantom limb as a treatment for phantom limb pain. Promotion of phantom motor execution aided by machine learning, augmented and virtual reality, and gaming is a non-invasive, non-pharmacological, and engaging treatment with no identified side-effects at present. Promobilia Foundation, VINNOVA, Jimmy Dahlstens Fond, PicoSolve, and Innovationskontor Väst. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen
2002-02-01
In 2004, the European COLUMBUS Module is to be attached to the International Space Station. On the way to the successful planning, deployment and operation of the module, computer generated and animated models are being used to optimize performance. Under contract of the German Space Agency DLR, it has become IRF's task to develop a Projective Virtual Reality system: a virtual world built after the planned layout of the COLUMBUS module that lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise the possibility of distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Through the capability to share the virtual world, cooperative operations can be practiced easily, and trainers and trainees can work together more effectively in the shared virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online in order to interact realistically with the science-reference model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs of the science reference model hardware; the measured data is then in turn fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data through the use of metaphors and augmentations of the virtual world may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Currently, finishing touches are being put to the system. In November 2001 the virtual world is expected to be operational, so that besides the design and the key ideas, first experimental results can be presented.
78 FR 59949 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting.... Department of the Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as... provide advice on coordinated national-level wildland fire policy and to provide leadership, direction...
78 FR 15032 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-08
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting... Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as indicated below... is to provide advice on coordinated national-level wildland fire policy and to provide leadership...
A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft
NASA Technical Reports Server (NTRS)
Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.
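The procedural-executive constructs named above (parallel activity, locks, synchronization) can be illustrated in miniature. This is a hedged sketch, not the NMRA code; the task names and shared resource are invented.

```python
# Hypothetical illustration of procedural-executive constructs: several
# activities run in parallel, but access to a shared device is serialized
# by a lock so concurrent tasks cannot issue conflicting commands.
import threading

device_lock = threading.Lock()   # guards a hypothetical shared device
log = []                         # record of which tasks acted on the device

def task(name):
    # Each concurrent activity must hold the device lock while acting,
    # which serializes conflicting actions without blocking the schedule.
    with device_lock:
        log.append(name)

threads = [threading.Thread(target=task, args=(f"task-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # synchronization point: wait for all parallel activity

print(sorted(log))   # every task ran exactly once
```

The real executive layers hierarchical task decomposition and recovery merging on top of primitives like these; the sketch only shows the lock-and-join skeleton.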
A Human Machine Interface for EVA
NASA Astrophysics Data System (ADS)
Hartmann, L.
EVA astronauts work in a challenging environment that includes a high rate of muscle fatigue, haptic and proprioception impairment, lack of dexterity and interaction with robotic equipment. Currently they are heavily dependent on support from on-board crew and ground station staff for information and robotics operation. They are limited to the operation of simple controls on the suit exterior and external robot controls that are difficult to operate because of the heavy gloves that are part of the EVA suit. A wearable human machine interface (HMI) inside the suit provides a powerful alternative for robot teleoperation, procedure checklist access, generic equipment operation via virtual control panels and general information retrieval and presentation. The HMI proposed here includes speech input and output, a simple 6 degree of freedom (dof) pointing device and a heads-up display (HUD). The essential characteristic of this interface is that it offers an alternative to the standard keyboard and mouse interface of a desktop computer. The astronaut's speech is used as input to command mode changes, execute arbitrary computer commands and generate text. The HMI can also respond with speech in order to confirm selections, provide status and feedback and present text output. A candidate 6 dof pointing device is Measurand's Shapetape, a flexible "tape" substrate to which is attached an optic fiber with embedded sensors. Measurement of the modulation of the light passing through the fiber can be used to compute the shape of the tape and, in particular, the position and orientation of the end of the Shapetape. It can be used to provide any kind of 3D geometric information including robot teleoperation control. The HUD can overlay graphical information onto the astronaut's visual field including robot joint torques, end effector configuration, procedure checklists and virtual control panels.
With suitable tracking information about the position and orientation of the EVA suit, the overlaid graphical information can be registered with the external world. For example, information about an object can be positioned on or beside the object. This wearable HMI supports many applications during EVA including robot teleoperation, procedure checklist usage, operation of virtual control panels and general information or documentation retrieval and presentation. Whether the robot end effector is a mobile platform for the EVA astronaut or is an assistant to the astronaut in an assembly or repair task, the astronaut can control the robot via a direct manipulation interface. Embedded in the suit or the astronaut's clothing, Shapetape can measure the user's arm/hand position and orientation, which can be directly mapped into the workspace coordinate system of the robot. Motion of the user's hand can generate corresponding motion of the robot end effector in order to reposition the EVA platform or to manipulate objects in the robot's grasp. Speech input can be used to execute commands and mode changes without the astronaut having to withdraw from the teleoperation task. Speech output from the system can provide feedback without affecting the user's visual attention. The procedure checklist guiding the astronaut's detailed activities can be presented on the HUD and manipulated (e.g., move, scale, annotate, mark tasks as done, consult prerequisite tasks) by spoken command. Virtual control panels for suit equipment, equipment being repaired or arbitrary equipment on the space station can be displayed on the HUD and can be operated by speech commands or by hand gestures. For example, an antenna being repaired could be pointed under the control of the EVA astronaut. Additionally, arbitrary computer activities such as information retrieval and presentation can be carried out using similar interface techniques.
Considering the risks, expense and physical challenges of EVA work, it is appropriate that EVA astronauts have considerable support from station crew and ground station staff. Reducing their dependence on such personnel may under many circumstances, however, improve performance and reduce risk. For example, the EVA astronaut is likely to have the best viewpoint at a robotic worksite. Direct access to the procedure checklist can help provide temporal context and continuity throughout an EVA. Access to station facilities through an HMI such as the one described here could be invaluable during an emergency or in a situation in which a fault occurs. The full paper will describe the HMI operation and applications in the EVA context in more detail and will describe current laboratory prototyping activities.
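The direct-manipulation mapping described in this record, from the user's hand pose into the robot's workspace frame, reduces at its simplest to a coordinate transform. The sketch below is purely illustrative; the scale and offset values are invented, and a real system would use a full rigid-body transform with orientation.

```python
# Hedged sketch: map a 3D hand position (as a Shapetape-style sensor might
# report it) into robot workspace coordinates. A uniform scale plus a fixed
# offset stands in for the full calibration a real teleoperation system needs.

def hand_to_robot(hand_pos, scale=2.0, offset=(1.0, 0.0, 0.5)):
    """Map a 3D hand position into robot workspace coordinates."""
    return tuple(scale * h + o for h, o in zip(hand_pos, offset))

print(hand_to_robot((0.5, 0.25, 0.25)))   # (2.0, 0.5, 1.0)
```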
Knowledge-Driven Design of Virtual Patient Simulations
ERIC Educational Resources Information Center
Vergara, Victor; Caudell, Thomas; Goldsmith, Timothy; Panaiotis; Alverson, Dale
2009-01-01
Virtual worlds provide unique opportunities for instructors to promote, study, and evaluate student learning and comprehension. In this article, Victor Vergara, Thomas Caudell, Timothy Goldsmith, Panaiotis, and Dale Alverson explore the advantages of using virtual reality environments to create simulations for medical students. Virtual simulations…
An Intelligent Virtual Human System For Providing Healthcare Information And Support
2011-01-01
for clinical purposes. Shifts in the social and scientific landscape have now set the stage for the next major movement in Clinical Virtual Reality ...College; dMadigan Army Medical Center Army Abstract. Over the last 15 years, a virtual revolution has taken place in the use of Virtual Reality ... Virtual Reality with the “birth” of intelligent virtual humans. Seminal research and development has appeared in the creation of highly interactive
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-16
... companion manual to provide agency-wide guidance for executing compliance with Executive Order 11988... procedures and guidance in accordance with specific sections of Executive Order 11988 and Executive Order.... ADDRESSES: Written comments should be sent to Emily Johannes, Senior Environmental Technical Advisor, NOAA...
77 FR 35420 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-13
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting... Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as indicated below... purpose of the WFEC is to provide advice on coordinated national-level wildland fire policy and to provide...
77 FR 18851 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-28
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting... Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as indicated below... purpose of the WFEC is to provide advice on coordinated national-level wildland fire policy and to provide...
76 FR 79205 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-21
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting... Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as indicated below... purpose of the WFEC is to provide advice on coordinated national-level wildland fire policy and to provide...
Feys, Peter; Coninx, Karin; Kerkhofs, Lore; De Weyer, Tom; Truyens, Veronik; Maris, Anneleen; Lamers, Ilse
2015-07-23
Despite the functional impact of upper limb dysfunction in multiple sclerosis (MS), the effects of intensive exercise programs, and specifically of robot-supported training, have rarely been investigated in persons with advanced MS. To investigate the effects of additional robot-supported upper limb training in persons with MS compared to conventional treatment only. Seventeen persons with MS (pwMS) (median Expanded Disability Status Scale of 8, range 3.5-8.5) were included in a pilot RCT comparing the effects of additional robot-supported training to conventional treatment only. The additional training consisted of three weekly sessions of 30 min interacting with the HapticMaster robot within an individualised virtual learning environment (I-TRAVLE). Clinical measures at the body function (hand grip strength, Motricity Index, Fugl-Meyer) and activity (Action Research Arm Test, Motor Activity Log) levels were administered before and after an intervention period of 8 weeks. The intervention group was also evaluated on robot-mediated movement tasks in three dimensions, providing active range of motion, movement duration and speed, and hand-path ratio as an indication of movement efficiency in the spatial domain. Non-parametric statistics were applied. PwMS commented favourably on the robot-supported virtual learning environment and reported functional training effects in daily life. Movement tasks in three dimensions, measured with the robot, were performed in less time and, for the transporting and reaching movement tasks, more efficiently. There were, however, no significant changes on any clinical measure in either the intervention or the control group, although observational analyses of the included cases indicated large improvements on the Fugl-Meyer in persons with more marked upper limb dysfunction. Robot-supported training led to more efficient movement execution, which was, however, at group level not reflected by significant changes on standard clinical tests.
Persons with more marked upper limb dysfunction may benefit most from additional robot-supported training, but larger studies are needed. This trial is registered within the registry Clinical Trials GOV ( NCT02257606 ).
Werbach, Kevin
2005-09-01
Internet telephony, or VoIP, is rapidly replacing the conventional kind. This year, for the first time, U.S. companies bought more new Internet-phone connections than standard lines. The major driver behind this change is cost. But VoIP isn't just a new technology for making old-fashioned calls cheaper, says consultant Kevin Werbach. It is fundamentally changing how companies use voice communications. What makes VoIP so powerful is that it turns voice into digital data packets that can be stored, copied, combined with other data, and distributed to virtually any device that connects to the Internet. And it makes it simple to provide all the functionality of a corporate phone (call features, directories, security) to anyone anywhere there's broadband access. That fosters new kinds of businesses such as virtual call centers, where widely dispersed agents work at all hours from their homes. The most successful early adopters, says Werbach, will focus more on achieving business objectives than on saving money. They will also consider how to push VoIP capabilities out to the extended organization, making use of everyone as a resource. Deployment may be incremental, but companies should be thinking about where VoIP could take them. Executives should ask what they could do if, on demand, they could bring all their employees, customers, suppliers, and partners together in a virtual room, with shared access to every modern communications and computing channel. They should take a fresh look at their business processes to find points at which richer and more customizable communications could eliminate bottlenecks and enhance quality. The important dividing line won't be between those who deploy VoIP and those who don't, or even between early adopters and laggards. It will be between those who see VoIP as just a new way to do the same old things and those who use it to rethink their entire businesses.
Analyzing huge pathology images with open source software
2013-01-01
Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer’s memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster.
Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272 PMID:23829479
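The "mosaic" idea described above, dividing a huge image into small tiles, with or without overlap, so that each piece fits in RAM, can be sketched as a tiling computation. This is an illustration of the concept only; the tile and overlap sizes are arbitrary example values, not NDPITools defaults.

```python
# Compute the tile rectangles needed to cover a huge image with fixed-size
# tiles that overlap by a given number of pixels. Edge tiles are clipped to
# the image boundary.

def mosaic_tiles(width, height, tile, overlap=0):
    """Return (x, y, w, h) rectangles covering a width x height image."""
    step = tile - overlap
    tiles = []
    for y in range(0, height, step):
        for x in range(0, width, step):
            tiles.append((x, y, min(tile, width - x), min(tile, height - y)))
    return tiles

tiles = mosaic_tiles(1000, 800, tile=512, overlap=64)
print(len(tiles))     # 6
print(tiles[0])       # (0, 0, 512, 512)
print(tiles[-1])      # (896, 448, 104, 352)
```

Each rectangle can then be read and processed independently, which is what lets an average laptop work through a multi-gigabyte slide.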
Virtual reality simulators and training in laparoscopic surgery.
Yiannakopoulou, Eugenia; Nikiteas, Nikolaos; Perrea, Despina; Tsigris, Christos
2015-01-01
Virtual reality simulators provide basic skills training without supervision in a controlled environment, free of the pressure of operating on patients. Skills obtained through virtual reality simulation training can be transferred to the operating room. However, relative evidence is limited, with data available only for basic surgical skills and for laparoscopic cholecystectomy. No data exist on the effect of virtual reality simulation on performance of advanced surgical procedures. Evidence suggests that performance on virtual reality simulators reliably distinguishes experienced from novice surgeons. Limited available data suggest that an independent approach to virtual reality simulation training is not different from a proctored approach. The effect of virtual reality simulator training on the acquisition of basic surgical skills does not seem to differ from that of physical simulators. Limited data exist on the effect of virtual reality simulation training on the acquisition of visual-spatial perception and stress-coping skills. Undoubtedly, virtual reality simulation training provides an alternative means of improving performance in laparoscopic surgery. However, future research efforts should focus on the effect of virtual reality simulation on performance in the context of advanced surgical procedures, on standardization of training, on the possible synergistic effect of virtual reality simulation training combined with mental training, and on personalized training. Copyright © 2014 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Gulliver, Amelia; Chan, Jade KY; Bennett, Kylie; Griffiths, Kathleen M
2015-01-01
Background Help seeking for mental health problems among university students is low, and Internet-based interventions such as virtual clinics have the potential to provide private, streamlined, and high quality care to this vulnerable group. Objective The objective of this study was to conduct focus groups with university students to obtain input on potential functions and features of a university-specific virtual clinic for mental health. Methods Participants were 19 undergraduate students from an Australian university between 19 and 24 years of age. Focus group discussion was structured by questions that addressed the following topics: (1) the utility and acceptability of a virtual mental health clinic for students, and (2) potential features of a virtual mental health clinic. Results Participants viewed the concept of a virtual clinic for university students favorably, despite expressing concerns about privacy of personal information. Participants expressed a desire to connect with professionals through the virtual clinic, for the clinic to provide information tailored to issues faced by students, and for the clinic to enable peer-to-peer interaction. Conclusions Overall, results of the study suggest the potential for virtual clinics to play a positive role in providing students with access to mental health support. PMID:26543908
HydroShare: A Platform for Collaborative Data and Model Sharing in Hydrology
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.
2017-12-01
HydroShare is an online collaboration system for sharing hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: Share your data and models with colleagues; Manage who has access to the content that you share; Share, access, visualize and manipulate a broad set of hydrologic data types and models; Use the web services application programming interface (API) to program automated and client access; Publish data and models and obtain a citable digital object identifier (DOI); Aggregate your resources into collections; Discover and access data and models published by others; Use web apps to visualize, analyze and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare, highlighting its use as a virtual environment supporting education and research. HydroShare has components that support: (1) resource storage, (2) resource exploration, and (3) web apps for actions on resources. The HydroShare data discovery, sharing and publishing functions as well as HydroShare web apps provide the capability to analyze data and execute models completely in the cloud (servers remote from the user), overcoming desktop platform limitations. The HydroShare GIS app provides a basic capability to visualize spatial data. The HydroShare JupyterHub Notebook app provides flexible and documentable execution of Python code snippets for analysis and modeling in a way that results can be shared among HydroShare users and groups to support research collaboration and education. We will discuss how these developments can be used to support different types of educational efforts in hydrology, where being completely web-based is of value in an educational setting because students can all have access to the same functionality regardless of their computers.
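The web-services API mentioned above enables programmatic client access. The sketch below only constructs a search request without sending it; the endpoint path and query parameter are assumptions based on typical REST conventions, so consult the HydroShare API documentation for the actual interface.

```python
# Hedged sketch of client access through HydroShare's web-services API.
# The "/hsapi/resource/" path and "subject" parameter are assumptions for
# illustration, not verified against the live API.
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://www.hydroshare.org/hsapi"

def list_resources_request(keyword):
    """Build (but do not send) a request searching shared resources."""
    query = urlencode({"subject": keyword})
    return Request(f"{BASE}/resource/?{query}")

req = list_resources_request("hydrology")
print(req.full_url)
# https://www.hydroshare.org/hsapi/resource/?subject=hydrology
```

A real client would pass the request to `urllib.request.urlopen` (with authentication) and parse the JSON response.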
Carolan-Rees, G; Ray, A F
2015-05-01
The aim of this study was to produce an economic cost model comparing the use of the Medaphor ScanTrainer virtual reality training simulator to achieve basic competence in obstetrics and gynaecology ultrasound against the traditional training method. A literature search and survey of expert opinion were used to identify resources used in training. An executable model was produced in Excel. The model showed a cost saving of £7114 per annum for a clinic using the ScanTrainer. The uncertainties of the model were explored and it was found to be robust. Threshold values for the key drivers of the model were identified. Using the ScanTrainer is cost saving for clinics with at least two trainees per year to train, if it would take at least six lists to train them using the traditional training method and if a traditional training list has at least two fewer patients than a standard list.
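The structure of an executable cost model like the one described can be sketched in a few lines. The figures below are entirely hypothetical placeholders; the real parameter values and thresholds are those reported in the study.

```python
# Toy cost model: the saving from freeing traditional training lists for
# income-generating patient slots. All input values are invented.

def annual_saving(trainees_per_year, lists_per_trainee,
                  patients_lost_per_list, income_per_patient):
    """Annual saving = trainees x lists x lost patient slots x income/slot."""
    return (trainees_per_year * lists_per_trainee
            * patients_lost_per_list * income_per_patient)

# Hypothetical inputs: 2 trainees/year, 6 lists each, 2 fewer patients per
# training list, £100 income per patient slot.
print(annual_saving(2, 6, 2, 100))   # 2400
```

The threshold analysis in the study amounts to varying each input of such a function until the saving falls to zero.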
New Web Server - the Java Version of Tempest - Produced
NASA Technical Reports Server (NTRS)
York, David W.; Ponyik, Joseph G.
2000-01-01
A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.
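As the article notes, a `.jar` is a zip archive of compiled `.class` byte-code files, so the packaging step can be illustrated with any zip library. The sketch below uses Python's `zipfile` for brevity; the class file names are invented, and the four-byte payload is just the `CAFEBABE` magic number that begins every real class file.

```python
# Pack placeholder ".class" files into an in-memory ".jar" (a zip archive),
# the way the article describes bundling byte code for transmission.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as jar:
    # Real class files begin with the magic bytes 0xCAFEBABE.
    jar.writestr("Tempest.class", b"\xca\xfe\xba\xbe")
    jar.writestr("Handler.class", b"\xca\xfe\xba\xbe")

# Reopen the archive and list its members.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as jar:
    print(jar.namelist())   # ['Tempest.class', 'Handler.class']
```

A production jar would also carry a `META-INF/MANIFEST.MF` entry; compression is what makes the bundle more efficient to transmit over the Internet.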
Test of 3D CT reconstructions by EM + TV algorithm from undersampled data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da
2013-05-06
Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves exposing patients to ionizing radiation. Therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation based model for CT reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections in comparison with the usual filtered back projection (FBP) technique. Thus, it could significantly reduce the overall radiation dose in CT. This work reports the results of an independent numerical simulation for cone beam CT geometry with alternative virtual phantoms. As in the original report, the 3D CT images of 128 × 128 × 128 virtual phantoms were reconstructed. It was not possible to implement phantoms with larger dimensions because of the slow code execution, even on a Core i7 CPU.
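The total-variation term that gives the EM+TV model its name can be shown in isolation. This is an illustrative sketch of the (anisotropic) TV of a small 2D image, not the reconstruction algorithm itself; in EM+TV this quantity is minimized as a regularizer alongside the expectation-maximization data term.

```python
# Anisotropic total variation: the sum of absolute differences between
# horizontally and vertically neighbouring pixels. Piecewise-constant
# images have low TV, which is why TV regularization suppresses noise
# while preserving edges.

def total_variation(img):
    """TV of a 2D image given as a list of rows of numbers."""
    tv = 0.0
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                tv += abs(img[r][c + 1] - img[r][c])   # horizontal difference
            if r + 1 < rows:
                tv += abs(img[r + 1][c] - img[r][c])   # vertical difference
    return tv

flat = [[1, 1], [1, 1]]   # constant image: zero TV
edge = [[0, 1], [0, 1]]   # one vertical edge: TV counts it once per row
print(total_variation(flat))   # 0.0
print(total_variation(edge))   # 2.0
```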
78 FR 33432 - Wildland Fire Executive Council Meeting Schedule
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-04
... DEPARTMENT OF THE INTERIOR Office of the Secretary Wildland Fire Executive Council Meeting.... Department of the Interior, Office of the Secretary, Wildland Fire Executive Council (WFEC) will meet as... purpose of the WFEC is to provide advice on coordinated national-level wildland fire policy and to provide...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-31
... submission pursuant to this rule must obtain documentary evidence of a non-Exchange trade execution no later... which entered the original submission provides documentary evidence that the original trade execution... entered on terms which differed from the reported trade execution, if provided with documentary evidence...
The Investigation of Teacher Communication Practices in Virtual High School
ERIC Educational Resources Information Center
Belair, Marley
2011-01-01
Virtual schooling is an increasing trend for secondary education. Research of the communication practices in virtual schools has provided a myriad of suggestions for virtual school policies. Although transactional distance has been investigated in relation to certain aspects of the communication process, a small-scale qualitative study has not…
Librarians without Borders? Virtual Reference Service to Unaffiliated Users
ERIC Educational Resources Information Center
Kibbee, Jo
2006-01-01
The author investigates issues faced by academic research libraries in providing virtual reference services to unaffiliated users. These libraries generally welcome visitors who use on-site collections and reference services, but are these altruistic policies feasible in a virtual environment? This paper reviews the use of virtual reference…
Can Virtual Schools Thrive in the Real World?
ERIC Educational Resources Information Center
Wang, Yinying; Decker, Janet R.
2014-01-01
Despite the relatively large number of students enrolled in Ohio's virtual schools, it is unclear how virtual schools compare to their traditional school counterparts on measures of student achievement. To provide some insight, we compared the school performance from 2007-2011 at Ohio's virtual and traditional schools. The results suggest that…
A "Virtual Spin" on the Teaching of Probability
ERIC Educational Resources Information Center
Beck, Shari A.; Huse, Vanessa E.
2007-01-01
This article, which describes integrating virtual manipulatives with the teaching of probability at the elementary level, puts a "virtual spin" on the teaching of probability to provide more opportunities for students to experience successful learning. The traditional use of concrete manipulatives is enhanced with virtual coins and spinners from…
Duffau, Hugues; Moritz-Gasser, Sylvie; Mandonnet, Emmanuel
2014-04-01
From recent findings provided by brain stimulation mapping during picture naming, we re-examine the neural basis of language. We studied structural-functional relationships by correlating the types of language disturbances generated by stimulation in awake patients, mimicking a transient virtual lesion at both cortical and subcortical levels (white matter and deep grey nuclei), with the anatomical location of the stimulation probe. We propose a hodotopical (delocalized) and dynamic model of language processing, which challenges the traditional modular and serial view. According to this model, following the visual input, the language network is organized in parallel, segregated (even if interconnected) large-scale cortico-subcortical sub-networks underlying semantic, phonological and syntactic processing. Our model offers several advantages: (i) it explains double dissociations during stimulation (comprehension versus naming disorders, semantic versus phonemic paraphasias, syntactic versus naming disturbances, plurimodal judgment versus naming disorders); (ii) it takes into account the cortical and subcortical anatomical constraints; (iii) it explains the possible recovery of aphasia following a lesion within the "classical" language areas; and (iv) it establishes links with a model of executive functions. Copyright © 2013 Elsevier Inc. All rights reserved.
Physician-executives past, present, and future.
Smallwood, K G; Wilson, C N
1992-08-01
The dramatic changes in the United States' health care system during the last decade have sparked increasing interest in physician-executives. These executives, skilled in both clinical medicine and health care management, can be found in hospitals, managed care organizations, group practices, and government institutions. This paper outlines the physician-executive's roles and the development process. The remarkable growth in the number of physician-executives is expected to continue as they demonstrate their abilities to help health care providers expand ambulatory services, facilitate provider-physician relationships and physician recruitment, and lend expertise in quality improvement and risk management issues.
VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL
NASA Technical Reports Server (NTRS)
Wall, R. J.
1994-01-01
VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts: a suite of applications programs and an executive that serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image-related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user-friendly environment.
The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image I/O, label I/O, parameter I/O, etc.) to facilitate image processing and provide the fastest I/O possible while maintaining a wide variety of capabilities. The run-time library also includes the Virtual Raster Display Interface (VRDI) which allows display oriented applications programs to be written for a variety of display devices using a set of common routines. (A display device can be any frame-buffer type device which is attached to the host computer and has memory planes for the display and manipulation of images. A display device may have any number of separate 8-bit image memory planes (IMPs), a graphics overlay plane, pseudo-color capabilities, hardware zoom and pan, and other features). The VRDI supports the following display devices: VICOM (Gould/Deanza) IP8500, RAMTEK RM-9465, ADAGE (Ikonas) IK3000 and the International Imaging Systems IVAS. VRDI's purpose is to provide a uniform operating environment not only for an application programmer, but for the user as well. The programmer is able to write programs without being concerned with the specifics of the device for which the application is intended. The VICAR Interactive Display Subsystem (VIDS) is a collection of utilities for easy interactive display and manipulation of images on a display device. VIDS has characteristics of both the executive and an application program, and offers a wide menu of image manipulation options. VIDS uses the VRDI to communicate with display devices. The first step in using VIDS to analyze and enhance an image (one simple example of VICAR's numerous capabilities) is to examine the histogram of the image. The histogram is a plot of frequency of occurrence for each pixel value (0 - 255) loaded in the image plane. 
If, for example, the histogram shows that there are no pixel values below 64 or above 192, the histogram can be "stretched" so that the value of 64 is mapped to zero and 192 is mapped to 255. The user can then use the full dynamic range of the display device to display the data and better see its contents. Another example of a VIDS procedure is the JMOVIE command, which allows the user to run animations interactively on the display device. JMOVIE uses the concept of "frames", the individual images that make up the animation to be viewed. The user loads images into the frames after the size and number of frames have been selected. VICAR's source languages are primarily FORTRAN and C, with some VAX Assembler and array processor code. The VICAR run-time library is designed to work equally easily from either FORTRAN or C. The program was implemented on a DEC VAX series computer operating under VMS 4.7. The virtual memory required is 1.5MB. Approximately 180,000 blocks of storage are needed for the saveset. VICAR (version 2.3A/3G/13H) is a copyrighted work with all copyright vested in NASA and is available by license for a period of ten (10) years to approved licensees. This program was developed in 1989.
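The histogram stretch in the VIDS example above is a simple linear remapping of pixel values. A minimal sketch in Python/NumPy, using a hypothetical five-pixel image whose values lie in [64, 192]:

```python
import numpy as np

# Hypothetical 8-bit image whose pixel values happen to lie in [64, 192],
# as in the VIDS histogram example.
img = np.array([64, 96, 128, 160, 192], dtype=np.uint8)

# Linear "stretch": map 64 -> 0 and 192 -> 255, clipping anything outside
# the range, then cast back to 8-bit (the cast truncates fractions).
lo, hi = 64, 192
stretched = np.clip((img.astype(float) - lo) * 255.0 / (hi - lo), 0, 255)
stretched = stretched.astype(np.uint8)

print(stretched.tolist())  # -> [0, 63, 127, 191, 255]
```

After the stretch, the data occupy the display's full 0-255 dynamic range, which is exactly what lets the user "better see its contents".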
[Testing system design and analysis for the execution units of anti-thrombotic device].
Li, Zhelong; Cui, Haipo; Shang, Kun; Liao, Yuehua; Zhou, Xun
2015-02-01
In an anti-thrombotic pressure circulatory device, relays and solenoid valves serve as the core execution units, so the therapeutic efficacy and patient safety of the device depend directly on their performance. A new testing system for the relays and solenoid valves used in the anti-thrombotic device has been developed, which can test the action response time and fatigue performance of each relay and solenoid valve. The testing system is built from a PC, a data acquisition card and a test platform, organized around human-computer interaction testing modules. The testing objectives are achieved by combining virtual instrument technology, high-speed data acquisition technology and careful software design. Two sets of relays and solenoid valves were tested with the system, and the results demonstrated its universality and reliability, so that these relays and solenoid valves can be used with confidence in the anti-thrombotic pressure circulatory equipment. The newly developed testing system shows good prospects for wider adoption and application.
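The action-response-time measurement described above amounts to timing the interval between the actuation command and the detected contact closure. A minimal sketch, where the `Relay` class is a purely software stand-in (with an assumed built-in delay) for the real data-acquisition hardware:

```python
import time

class Relay:
    """Software stand-in for a relay on the test platform; the real system
    would command and poll the unit through a data acquisition card."""
    def __init__(self, delay_s: float):
        self._delay = delay_s      # assumed mechanical actuation delay
        self._t_cmd = None
    def energize(self):
        self._t_cmd = time.perf_counter()
    def closed(self) -> bool:
        # Contact "closes" once the actuation delay has elapsed.
        return (time.perf_counter() - self._t_cmd) >= self._delay

def response_time(unit: Relay, timeout_s: float = 1.0) -> float:
    """Time from actuation command to detected contact closure."""
    start = time.perf_counter()
    unit.energize()
    while not unit.closed():       # poll the feedback line
        if time.perf_counter() - start > timeout_s:
            raise TimeoutError("no contact closure within timeout")
    return time.perf_counter() - start

t = response_time(Relay(0.01))     # simulate a 10 ms relay
```

A fatigue test would simply repeat this cycle many times and watch for drift or failure in the measured response time.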
Real-time interactive virtual tour on the World Wide Web (WWW)
NASA Astrophysics Data System (ADS)
Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi
2003-12-01
Web-based virtual tours have become a desirable and in-demand application, yet a challenging one because of the nature of the web's running environment: limited bandwidth and no guarantee of high computational power on the client side. In such web applications, the image-based rendering approach has attractive advantages over the traditional 3D rendering approach. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process as well as high bandwidth and computational power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos; the virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to capture the panoramic views, and they provide only a fixed-point look-around with zooming in and out rather than a "walk around", which is a very important feature for giving virtual tourists an immersive experience. The Web-based Virtual Tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to give viewers the immersive experience of walking around the virtual space using several snapshots of conventional photos.
Nature and origins of virtual environments - A bibliographical essay
NASA Technical Reports Server (NTRS)
Ellis, S. R.
1991-01-01
Virtual environments presented via head-mounted, computer-driven displays provide a new medium for communication. They may be analyzed by considering: (1) what may be meant by an environment; (2) what is meant by the process of virtualization; and (3) some aspects of human performance that constrain environmental design. Their origins are traced from previous work in vehicle simulation and multimedia research. Pointers are provided to key technical references in the dispersed archival literature that are relevant to the development and evaluation of virtual-environment interface systems.
Modeling the behavior of human body tissues on penetration
NASA Astrophysics Data System (ADS)
Conci, A.; Brazil, A. L.; Popovici, D.; Jiga, G.; Lebon, F.
2018-02-01
Several procedures in medicine (such as anesthesia, injections, biopsies and percutaneous treatments) involve a needle insertion. Such procedures operate without vision of the internal areas involved. Physicians and anesthetists rely on manual (force and tactile) feedback to guide their movements, so much of this medical practice is strongly based on manual skill. To become expert in the execution of such procedures, medical students must practice many times, but before practicing on a real patient they must be trained elsewhere, and a virtual environment using Virtual Reality (VR) or Augmented Reality (AR) is the best available solution for such training. In a virtual environment, the effectiveness of user practice is improved by adding force output through a haptic device, which enriches the manual sensations in the interactions between user and computer. Haptic devices make it possible to simulate the physical resistance of the various tissues and the force reactions to the movements of the operator's hands. Trainees can effectively "feel" the reactions to their movements and receive immediate feedback from the actions they execute in the implemented environment. However, to implement such systems, the tissue reaction to penetration and cutting must be modeled. A proper model must emulate the physical sensations of the needle acting on skin, fat, muscle, and so on, as if the procedure were really being performed on a patient: that is, as if the trainees were holding a real needle and feeling each tissue's resistance while inserting it through the body. For example, the average puncture force is 6.0 N for human skin, 2.0 N for subcutaneous fat tissue and 4.4 N for muscle; this difference in the sensation of penetrating each layer traversed by the needle makes it possible to infer the needle's position inside the body. This work presents a model of tissues before and after cutting that, with proper assumptions about material properties, can model any part of the human body. It is based on experiments and was used in an embryonic system for epidural anesthesia, with good evaluation as presented in the last section, "Preliminary Results".
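A piecewise force model built from the average puncture forces quoted above could be sketched as follows; the layer thicknesses here are illustrative assumptions, not values from the paper, and a real haptic loop would also model pre-puncture deflection and post-puncture friction:

```python
# Hypothetical piecewise tissue model for haptic needle-insertion feedback.
# Peak puncture forces follow the averages quoted above (6.0 N skin,
# 2.0 N subcutaneous fat, 4.4 N muscle); thicknesses are assumed.
LAYERS = [              # (name, thickness in mm, peak resistance in N)
    ("skin",    2.0, 6.0),
    ("fat",    10.0, 2.0),
    ("muscle", 25.0, 4.4),
]

def resistance(depth_mm: float) -> float:
    """Resistive force the haptic device should render at a given depth."""
    top = 0.0
    for _name, thickness, force in LAYERS:
        if depth_mm < top + thickness:
            return force
        top += thickness
    return 0.0          # past the modelled layers

# The abrupt force changes at layer boundaries are what let the trainee
# "feel" which tissue the needle tip is currently in.
print(resistance(1.0), resistance(5.0), resistance(20.0))
```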
Cogné, Mélanie; Violleau, Marie-Hélène; Klinger, Evelyne; Joseph, Pierre-Alain
2018-01-31
Topographical disorientation is frequent among patients after a stroke and can be well explored with virtual environments (VEs). VEs also allow for the addition of stimuli. A previous study did not find any effect of non-contextual auditory stimuli on navigational performance in the virtual action planning-supermarket (VAP-S), which simulates a medium-sized 3D supermarket; however, the perceptual or cognitive load of the sounds used was not high. We investigated how non-contextual auditory stimuli with high load affect navigational performance in the VAP-S for patients who have had a stroke, and whether this performance correlates with dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds, and names of other products that were not available in the VAP-S. The condition without auditory stimuli was the control. The Groupe de réflexion pour l'évaluation des fonctions exécutives (GREFEX) battery was used to evaluate the executive functions of patients. The study included 40 patients who had had a stroke (n=22 right-hemisphere and n=18 left-hemisphere stroke). Patients' navigational performance was decreased under the 4 conditions with non-contextual auditory stimuli (P<0.05), especially for those with dysexecutive disorders. Across the 5 conditions, the lower the performance, the more GREFEX tests were failed. Patients felt significantly more disadvantaged by the sounds from living beings, sounds from supermarket objects and names of other products than by the beeping sounds (P<0.01). Patients' verbal recall of the collected objects was significantly lower under the condition with names of other products (P<0.001). Left and right brain-damaged patients did not differ in navigational performance in the VAP-S under the 5 auditory conditions.
These non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli. Copyright © 2018 Elsevier Masson SAS. All rights reserved.