Sample records for computationally expensive task

  1. Low-Cost Terminal Alternative for Learning Center Managers. Final Report.

    ERIC Educational Resources Information Center

    Nix, C. Jerome; And Others

    This study established the feasibility of replacing high-performance and relatively expensive computer terminals with less expensive ones adequate for supporting specific tasks of the Advanced Instructional System (AIS) at Lowry AFB, Colorado. Surveys of user requirements and available devices were conducted and the results used in a system analysis.…

  2. Student and Teacher Perceptions of the Use of Multimedia Supported Predict-Observe-Explain Tasks To Probe Understanding.

    ERIC Educational Resources Information Center

    Kearney, Matthew; Treagust, David F.; Yeo, Shelley; Zadnik, Marjan G.

    2001-01-01

    Discusses student and teacher perceptions of a new development in the use of the predict-observe-explain (POE) strategy. This development involves the incorporation of POE tasks into a multimedia computer program that uses real-life, digital video clips of difficult, expensive, time consuming, or dangerous scenarios as stimuli for these tasks.…

  3. Habitual control of goal selection in humans

    PubMed Central

    Cushman, Fiery; Morris, Adam

    2015-01-01

    Humans choose actions based on both habit and planning. Habitual control is computationally frugal but adapts slowly to novel circumstances, whereas planning is computationally expensive but can adapt swiftly. Current research emphasizes the competition between habits and plans for behavioral control, yet many complex tasks instead favor their integration. We consider a hierarchical architecture that exploits the computational efficiency of habitual control to select goals while preserving the flexibility of planning to achieve those goals. We formalize this mechanism in a reinforcement learning setting, illustrate its costs and benefits, and experimentally demonstrate its spontaneous application in a sequential decision-making task. PMID:26460050
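    The architecture described here can be caricatured in a few lines: a model-free controller ranks candidate goals by cached value, and the planner is invoked only to find a path to the selected goal. The sketch below is purely illustrative; all names and structures are hypothetical, not the authors' implementation.

```python
import random

# Hypothetical illustration of habitual goal selection plus planned goal pursuit.
# Goal values are cached and updated model-free (cheap); planning is used only
# to reach the selected goal, keeping the expensive search shallow.

goal_values = {"A": 0.0, "B": 0.0}   # cached (habitual) values per goal
ALPHA = 0.1                          # learning rate

def select_goal(epsilon=0.1):
    """Habitual controller: pick the goal with the highest cached value."""
    if random.random() < epsilon:
        return random.choice(list(goal_values))
    return max(goal_values, key=goal_values.get)

def plan_to(goal, transitions, start, depth=5):
    """Planner: depth-limited breadth-first search for actions reaching the goal."""
    frontier = [(start, [])]
    for _ in range(depth):
        nxt = []
        for state, path in frontier:
            for action, succ in transitions.get(state, {}).items():
                if succ == goal:
                    return path + [action]
                nxt.append((succ, path + [action]))
        frontier = nxt
    return []

def update_goal_value(goal, reward):
    """Model-free TD-style update of the cached goal value after an episode."""
    goal_values[goal] += ALPHA * (reward - goal_values[goal])
```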

  4. An efficient temporal logic for robotic task planning

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey M.

    1989-01-01

    Computations required for temporal reasoning can be prohibitively expensive if fully general representations are used. Overly simple representations, such as a totally ordered sequence of time points, are inadequate for use in a nonlinear task planning system. A middle ground is identified that is general enough to support a capable nonlinear task planner, yet specialized enough that the system can support online task planning in real time. A Temporal Logic System (TLS) was developed during the Intelligent Task Automation (ITA) project to support robotic task planning. TLS is also used within the ITA system to support plan execution, monitoring, and exception handling.

  5. Learning Hierarchical Skills for Game Agents from Video of Human Behavior

    DTIC Science & Technology

    2009-01-01

    intelligent agents for computer games is an important aspect of game development. However, traditional methods are expensive, and the resulting agents...Constructing autonomous agents is an essential task in game development. In this paper, we outlined a system that analyzes preprocessed video footage of

  6. The effective use of virtualization for selection of data centers in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers are facilities housing networks of remote servers used to store, access, and process data. Cloud computing is a technology in which users worldwide submit tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers employ virtualization so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of a service provider.
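    The selection rule described here reduces to summing per-server virtualization energies and routing to the minimum. A toy sketch under that reading, with all data structures hypothetical:

```python
# Hypothetical sketch of the energy-based selection rule: each data center's
# energy is the sum of its servers' virtualization energies, and a submitted
# task is routed to the data center with the least total energy.

data_centers = {
    "dc-east": [12.5, 9.8, 14.1],   # per-server virtualization energy (e.g., kWh)
    "dc-west": [10.2, 11.0],
    "dc-north": [8.7, 9.1, 9.9, 10.4],
}

def total_energy(servers):
    return sum(servers)

def select_data_center(centers):
    """Route to the data center with the minimum total virtualization energy."""
    return min(centers, key=lambda name: total_energy(centers[name]))

print(select_data_center(data_centers))  # -> "dc-west" for the numbers above
```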

  7. Optimize Resources and Help Reduce Cost of Ownership with Dell[TM] Systems Management

    ERIC Educational Resources Information Center

    Technology & Learning, 2008

    2008-01-01

    Maintaining secure, convenient administration of the PC system environment can be a significant drain on resources. Deskside visits can greatly increase the cost of supporting a large number of computers. Even simple tasks, such as tracking inventory or updating software, quickly become expensive when they require physically visiting every…

  8. Cloud-based computation for accelerating vegetation mapping and change detection at regional to national scales

    Treesearch

    Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts

    2015-01-01

    Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...

  9. Multi-Scale Surface Descriptors

    PubMed Central

    Cipriano, Gregory; Phillips, George N.; Gleicher, Michael

    2010-01-01

    Local shape descriptors compactly characterize regions of a surface, and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors capture statistically the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis. PMID:19834190
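    The core fitting step can be sketched as an ordinary least-squares quadric fit, assuming the neighborhood points have already been expressed in a local frame with the surface normal along z. This is a simplified reading for illustration, not the authors' code.

```python
import numpy as np

def quadric_descriptor(points):
    """Fit z ~ a*x^2 + b*x*y + c*y^2 + d*x + e*y to neighborhood points given
    in a local frame (normal along z). The quadric coefficients mimic
    differential curvature and encode anisotropy."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs  # (a, b, c, d, e); (a, b, c) capture bending and anisotropy

# Toy usage: a cylinder-like patch bends in x but not in y.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
pts = np.column_stack([xy, 0.5 * xy[:, 0]**2])  # z = 0.5 * x^2
print(quadric_descriptor(pts).round(3))          # a ~ 0.5, others ~ 0
```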

  10. Condor-COPASI: high-throughput computing for biochemical networks

    PubMed Central

    2012-01-01

    Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945
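    The transparent splitting of a large analysis into independent parallel parts can be illustrated, in spirit only, with Python's standard library; the Condor submission machinery is omitted, and simulate_one is a hypothetical stand-in for a single COPASI run.

```python
from multiprocessing import Pool

def simulate_one(params):
    """Hypothetical stand-in for a single COPASI model simulation."""
    k1, k2 = params
    return k1 / (k1 + k2)  # placeholder result

def run_scan(param_grid, workers=4):
    """Split a parameter scan into independent parts and run them in parallel,
    mirroring how a pool of workers (e.g., a Condor pool) would execute them."""
    with Pool(workers) as pool:
        return pool.map(simulate_one, param_grid)

if __name__ == "__main__":
    grid = [(k1, k2) for k1 in (0.1, 0.5, 1.0) for k2 in (0.2, 0.8)]
    print(run_scan(grid))
```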

  11. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  12. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  13. Reactive transport modeling in the subsurface environment with OGS-IPhreeqc

    NASA Astrophysics Data System (ADS)

    He, Wenkui; Beyer, Christof; Fleckenstein, Jan; Jang, Eunseon; Kalbacher, Thomas; Naumov, Dimitri; Shao, Haibing; Wang, Wenqing; Kolditz, Olaf

    2015-04-01

    Worldwide, sustainable water resource management is becoming an increasingly challenging task due to population growth and the extensive application of fertilizer in agriculture. Moreover, climate change places further stress on both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems because of the huge computational expense involved. Such computational expense is caused not only by numerically solving the highly nonlinear initial boundary value problems of water flow in the unsaturated zone, with the rather fine spatial and temporal discretization needed for correct mass balance and numerical stability, but also by the computationally intensive task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large-scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of the advantages of both codes: OGS provides a flexible choice of different numerical approaches for simulating water flow in the vadose zone, such as the pressure-based or mixed forms of the Richards equation, whereas the IPhreeqc module simplifies data storage and its communication with OGS, which greatly facilitates the coupling and code updating. Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of the computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application to a large-scale scenario in which the environmental fate of pesticides in a complex soil-aquifer system is studied.

  14. Reactive transport modeling in variably saturated porous media with OGS-IPhreeqc

    NASA Astrophysics Data System (ADS)

    He, W.; Beyer, C.; Fleckenstein, J. H.; Jang, E.; Kalbacher, T.; Shao, H.; Wang, W.; Kolditz, O.

    2014-12-01

    Worldwide, sustainable water resource management is becoming an increasingly challenging task due to population growth and the extensive application of fertilizer in agriculture. Moreover, climate change places further stress on both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems because of the huge computational expense involved. Such computational expense is caused not only by numerically solving the highly nonlinear initial boundary value problems of water flow in the unsaturated zone, with the rather fine spatial and temporal discretization needed for correct mass balance and numerical stability, but also by the computationally intensive task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large-scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of the advantages of both codes: OGS provides a flexible choice of different numerical approaches for simulating water flow in the vadose zone, such as the pressure-based or mixed forms of the Richards equation, whereas the IPhreeqc module simplifies data storage and its communication with OGS, which greatly facilitates the coupling and code updating. Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of the computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application to a large-scale scenario in which the environmental fate of pesticides in a complex soil-aquifer system is studied.

  15. Computationally efficient target classification in multispectral image data with Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca

    2016-10-01

    Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly with errors occurring only around the border of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
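    Extending a ConvNet to multispectral input is largely a matter of widening the first convolution's input channels (3 RGB + 25 VIS-NIR = 28 here). A minimal PyTorch-style sketch with illustrative layer sizes, not the authors' network:

```python
import torch
import torch.nn as nn

class MultispectralNet(nn.Module):
    """Tiny per-pixel labeling network: 3 RGB + 25 VIS-NIR = 28 input channels.
    Layer sizes are illustrative only, not the architecture from the paper."""
    def __init__(self, in_channels=28, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)  # per-pixel logits

    def forward(self, x):
        return self.classifier(self.features(x))

x = torch.randn(1, 28, 64, 64)   # batch of fused RGB + multispectral frames
logits = MultispectralNet()(x)    # -> (1, 8, 64, 64): one score map per class
print(logits.shape)
```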

  16. Efficient computation of photonic crystal waveguide modes with dispersive material.

    PubMed

    Schmidt, Kersten; Kappeler, Roman

    2010-03-29

    The optimization of PhC waveguides is a key issue in successfully designing PhC devices. Since this design task is computationally expensive, efficient methods are in demand. The available codes for computing photonic bands are also applied to PhC waveguides; they are reliable but not very efficient, which is even more pronounced for dispersive material. We present a method based on higher-order finite elements with curved cells, which allows one to solve for the band structure while directly taking into account the dispersiveness of the materials. This is accomplished by reformulating the wave equations as a linear eigenproblem in the complex wave-vectors k. For this method, we demonstrate high efficiency for the computation of guided PhC waveguide modes through a convergence analysis.

  17. The gputools package enables GPU computing in R.

    PubMed

    Buckner, Joshua; Wilson, Justin; Seligman, Mark; Athey, Brian; Watson, Stanley; Meng, Fan

    2010-01-01

    By default, the R statistical environment does not make use of parallelism. Researchers may resort to expensive solutions such as cluster hardware for large analysis tasks. Graphics processing units (GPUs) provide an inexpensive and computationally powerful alternative. Using R and the CUDA toolkit from Nvidia, we have implemented several functions commonly used in microarray gene expression analysis for GPU-equipped computers. R users can take advantage of the better performance provided by an Nvidia GPU. The package is available from CRAN, the R project's repository of packages, at http://cran.r-project.org/web/packages/gputools. More information about our gputools R package is available at http://brainarray.mbni.med.umich.edu/brainarray/Rgpgpu

  18. Computational Properties of the Hippocampus Increase the Efficiency of Goal-Directed Foraging through Hierarchical Reinforcement Learning

    PubMed Central

    Chalmers, Eric; Luczak, Artur; Gruber, Aaron J.

    2016-01-01

    The mammalian brain is thought to use a version of Model-based Reinforcement Learning (MBRL) to guide “goal-directed” behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals' ability to rapidly adapt to environmental changes, or learn multiple complex tasks. They also require extensive computation, suggesting that goal-directed behavior is cognitively expensive. We propose here that key features of processing in the hippocampus support a flexible MBRL mechanism for spatial navigation that is computationally efficient and can adapt quickly to change. We investigate this idea by implementing a computational MBRL framework that incorporates features inspired by computational properties of the hippocampus: a hierarchical representation of space, “forward sweeps” through future spatial trajectories, and context-driven remapping of place cells. We find that a hierarchical abstraction of space greatly reduces the computational load (mental effort) required for adaptation to changing environmental conditions, and allows efficient scaling to large problems. It also allows abstract knowledge gained at high levels to guide adaptation to new obstacles. Moreover, a context-driven remapping mechanism allows learning and memory of multiple tasks. Simulating dorsal or ventral hippocampal lesions in our computational framework qualitatively reproduces behavioral deficits observed in rodents with analogous lesions. The framework may thus embody key features of how the brain organizes model-based RL to efficiently solve navigation and other difficult tasks. PMID:28018203

  19. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive: for a window size of n x n, each of the n^2 vectors has to be compared with the other n^2 - 1 vectors by distance. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has, to the best of our knowledge, never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimization of the GPU algorithm.
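    The definition translates directly into code: within each n x n window, the output pixel is the vector minimizing the summed distance to all other vectors in the window. A naive CPU sketch in NumPy follows, illustrative only; the paper's contribution is the CUDA version.

```python
import numpy as np

def vector_median_filter(img, n=3):
    """Naive vector median filter for an H x W x 3 color image: in each n x n
    window, pick the pixel vector with the smallest summed distance to the
    other n^2 - 1 vectors. The O(n^4) distance work per pixel is what makes
    GPU acceleration attractive."""
    r = n // 2
    padded = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(img)
    H, W = img.shape[:2]
    for i in range(H):
        for j in range(W):
            win = padded[i:i + n, j:j + n].reshape(-1, 3).astype(float)
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2)
            out[i, j] = win[d.sum(axis=1).argmin()]
    return out
```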

  20. The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive.

    PubMed

    Otto, A Ross; Gershman, Samuel J; Markman, Arthur B; Daw, Nathaniel D

    2013-05-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. In these accounts, a flexible but computationally expensive model-based reinforcement-learning system has been contrasted with a less flexible but more efficient model-free reinforcement-learning system. The factors governing which system controls behavior, and under what circumstances, are still unclear. Following the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrated that having human decision makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement-learning strategy. Further, we showed that, across trials, people negotiate the trade-off between the two systems dynamically as a function of concurrent executive-function demands, and people's choice latencies reflect the computational expenses of the strategy they employ. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources.

  1. The Curse of Planning: Dissecting multiple reinforcement learning systems by taxing the central executive

    PubMed Central

    Otto, A. Ross; Gershman, Samuel J.; Markman, Arthur B.; Daw, Nathaniel D.

    2013-01-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. Along these lines, a flexible but computationally expensive model-based reinforcement learning system has been contrasted with a less flexible but more efficient model-free reinforcement learning system. The factors governing which system controls behavior—and under what circumstances—are still unclear. Based on the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrate that having human decision-makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement learning strategy. Further, we show that across trials, people negotiate this tradeoff dynamically as a function of concurrent executive function demands and their choice latencies reflect the computational expenses of the strategy employed. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources. PMID:23558545
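    The arbitration idea in both versions of this paper can be caricatured as follows: plan with the model only when the central executive is free, otherwise fall back on cached values. Everything below is a hypothetical illustration, not the authors' task or model.

```python
import random

# Hypothetical illustration: model-free control uses cached Q-values (cheap);
# model-based control plans over a transition model (expensive) and is skipped
# when a concurrent secondary task ties up the "central executive".

Q = {}  # model-free cache: (state, action) -> value

def model_free_choice(state, actions):
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def model_based_choice(state, actions, model, reward, depth=3):
    def value(s, d):
        if d == 0:
            return reward(s)
        return reward(s) + max(value(model(s, a), d - 1) for a in actions)
    return max(actions, key=lambda a: value(model(state, a), depth - 1))

def choose(state, actions, model, reward, executive_load):
    """High concurrent load -> rely on the frugal habitual system; low load -> plan."""
    if executive_load > 0.5 or random.random() < 0.05:
        return model_free_choice(state, actions)
    return model_based_choice(state, actions, model, reward)
```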

  2. Using a MaxEnt Classifier for the Automatic Content Scoring of Free-Text Responses

    NASA Astrophysics Data System (ADS)

    Sukkarieh, Jana Z.

    2011-03-01

    Criticisms against multiple-choice item assessments in the USA have prompted researchers and organizations to move towards constructed-response (free-text) items. Constructed-response (CR) items pose many challenges to the education community, one of which is that they are expensive to score by humans. At the same time, there has been widespread movement towards computer-based assessment, and hence assessment organizations are competing to develop automatic content scoring engines for such item types, which we view as a textual entailment task. This paper describes how MaxEnt Modeling is used to help solve the task. MaxEnt has been used in many natural language tasks but this is the first application of the MaxEnt approach to textual entailment and automatic content scoring.
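    MaxEnt classification over word features is equivalent to multinomial logistic regression, so the approach can be sketched with scikit-learn. The toy data below is invented for illustration; this is not the paper's engine or feature set.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data: responses labeled 1 if they entail the key concept, else 0.
responses = [
    "the plant makes food using sunlight",
    "photosynthesis converts light energy into chemical energy",
    "plants are green",
    "i do not know",
]
labels = [1, 1, 0, 0]

# Multinomial logistic regression is a MaxEnt classifier over word features.
scorer = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
scorer.fit(responses, labels)

# With this toy training set, the new response should score as entailing (1).
print(scorer.predict(["light energy is turned into chemical energy"]))
```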

  3. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    PubMed

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with the XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.

  4. QMachine: commodity supercomputing in web browsers.

    PubMed

    Wilkinson, Sean R; Almeida, Jonas S

    2014-06-09

    Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.

  5. Economical Unsteady High-Fidelity Aerodynamics for Structural Optimization with a Flutter Constraint

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Stanford, Bret K.

    2017-01-01

    Structural optimization with a flutter constraint for a vehicle designed to fly in the transonic regime is a particularly difficult task. In this speed range, the flutter boundary is very sensitive to aerodynamic nonlinearities, typically requiring high-fidelity Navier-Stokes simulations. However, the repeated application of unsteady computational fluid dynamics to guide an aeroelastic optimization process is very computationally expensive. This expense has motivated the development of methods that incorporate aspects of the aerodynamic nonlinearity, classical tools of flutter analysis, and more recent methods of optimization. While it is possible to use doublet lattice method aerodynamics, this paper focuses on the use of an unsteady high-fidelity aerodynamic reduced order model combined with successive transformations that allows for an economical way of utilizing high-fidelity aerodynamics in the optimization process. This approach is applied to the Common Research Model wing structural design. As might be expected, the high-fidelity aerodynamics produces a heavier wing than that optimized with doublet lattice aerodynamics. It is found that the optimized lower skin of the wing using high-fidelity aerodynamics differs significantly from that using doublet lattice aerodynamics.

  6. Stereoscopic, Force-Feedback Trainer For Telerobot Operators

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1994-01-01

    Computer-controlled simulator for training technicians to operate remote robots provides both visual and kinesthetic virtual reality. Used during initial stage of training; saves time and expense, increases operational safety, and prevents damage to robots by inexperienced operators. Computes virtual contact forces and torques of compliant robot in real time, providing operator with feel of forces experienced by manipulator as well as view in any of three modes: single view, two split views, or stereoscopic view. From keyboard, user specifies force-reflection gain and stiffness of manipulator hand for three translational and three rotational axes. System offers two simulated telerobotic tasks: insertion of peg in hole in three dimensions, and removal and insertion of drawer.

  7. An approach to the design of wide-angle optical systems with special illumination and IFOV requirements

    NASA Astrophysics Data System (ADS)

    Pravdivtsev, Andrey V.

    2012-06-01

    The article presents an approach to the design of wide-angle optical systems with special illumination and instantaneous field of view (IFOV) requirements. Unevenness of illumination reduces the dynamic range of the system, which negatively influences the system's ability to perform its task. The resulting illumination on the detector depends, among other factors, on IFOV changes. It is also necessary to consider the IFOV in the synthesis of data processing algorithms, as it directly affects the achievable "signal/background" ratio for statistically homogeneous backgrounds. A numerical-analytical approach that simplifies the design of wide-angle optical systems with special illumination and IFOV requirements is presented. The solution can be used for optical systems whose field of view is greater than 180 degrees. Illumination calculation in optical CAD is based on computationally expensive tracing of a large number of rays. The author proposes using analytical expressions for some of the characteristics on which illumination depends; the remaining characteristics are determined numerically with less computationally expensive operands, and this calculation is not performed at every optimization step. The results of the analytical calculation are inserted into the merit function of the optical CAD optimizer. As a result, the optimizer load is reduced, since less computationally expensive operands are used. This reduces the time and resources required to develop a system with the desired characteristics. The proposed approach simplifies the creation and understanding of the requirements for the quality of the optical system, reduces the time and resources required to develop an optical system, and allows more efficient EOS to be created.

  8. Real-time depth processing for embedded platforms

    NASA Astrophysics Data System (ADS)

    Rahnama, Oscar; Makarov, Aleksej; Torr, Philip

    2017-05-01

    Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (e.g., LiDAR, infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications which are often constrained by power consumption, obtaining accurate results in real-time is a challenge. We demonstrate a computationally and memory efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 Watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster scan readout of modern digital image sensors.
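    For reference, the block-matching core that such an FPGA pipeline implements can be written down in a few lines of NumPy: for each pixel, slide a window along the scanline and keep the disparity with the lowest sum of absolute differences. This is a naive CPU sketch, not the paper's implementation.

```python
import numpy as np

def block_match(left, right, max_disp=16, block=5):
    """Naive SAD stereo block matching on grayscale images (H x W, float).
    For each pixel, test disparities 0..max_disp along the same scanline and
    keep the one minimizing the sum of absolute differences over the block."""
    r = block // 2
    H, W = left.shape
    disp = np.zeros((H, W), dtype=np.int32)
    for i in range(r, H - r):
        for j in range(r + max_disp, W - r):
            patch = left[i - r:i + r + 1, j - r:j + r + 1]
            costs = [
                np.abs(patch - right[i - r:i + r + 1, j - d - r:j - d + r + 1]).sum()
                for d in range(max_disp + 1)
            ]
            disp[i, j] = int(np.argmin(costs))
    return disp
```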

  9. GeneImp: Fast Imputation to Large Reference Panels Using Genotype Likelihoods from Ultralow Coverage Sequencing

    PubMed Central

    Spiliopoulou, Athina; Colombo, Marco; Orchard, Peter; Agakov, Felix; McKeigue, Paul

    2017-01-01

    We address the task of genotype imputation to a dense reference panel given genotype likelihoods computed from ultralow coverage sequencing as inputs. In this setting, the data have a high-level of missingness or uncertainty, and are thus more amenable to a probabilistic representation. Most existing imputation algorithms are not well suited for this situation, as they rely on prephasing for computational efficiency, and, without definite genotype calls, the prephasing task becomes computationally expensive. We describe GeneImp, a program for genotype imputation that does not require prephasing and is computationally tractable for whole-genome imputation. GeneImp does not explicitly model recombination, instead it capitalizes on the existence of large reference panels—comprising thousands of reference haplotypes—and assumes that the reference haplotypes can adequately represent the target haplotypes over short regions unaltered. We validate GeneImp based on data from ultralow coverage sequencing (0.5×), and compare its performance to the most recent version of BEAGLE that can perform this task. We show that GeneImp achieves imputation quality very close to that of BEAGLE, using one to two orders of magnitude less time, without an increase in memory complexity. Therefore, GeneImp is the first practical choice for whole-genome imputation to a dense reference panel when prephasing cannot be applied, for instance, in datasets produced via ultralow coverage sequencing. A related future application for GeneImp is whole-genome imputation based on the off-target reads from deep whole-exome sequencing. PMID:28348060
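    The probabilistic core can be illustrated in miniature: combine genotype likelihoods from the reads with a prior derived from the reference panel to obtain posterior genotype probabilities and an expected dosage. The sketch below uses a single allele frequency under Hardy-Weinberg as the prior, a gross simplification of GeneImp's haplotype-based reference prior.

```python
import numpy as np

def posterior_dosage(genotype_likelihoods, ref_allele_freq):
    """Posterior over genotypes g in {0, 1, 2} copies of the alt allele.
    The prior is Hardy-Weinberg from a reference-panel allele frequency, a
    deliberate simplification of GeneImp's local-haplotype reference prior."""
    p = ref_allele_freq
    prior = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    post = genotype_likelihoods * prior
    post /= post.sum()
    return post, float(post @ np.array([0.0, 1.0, 2.0]))  # expected dosage

# One site covered by very few reads (likelihoods invented): the nearly flat
# likelihoods are pulled strongly toward the reference-panel prior.
post, dosage = posterior_dosage(np.array([0.1, 0.45, 0.45]), ref_allele_freq=0.2)
print(post.round(3), round(dosage, 3))
```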

  10. Do dichromats see colours in this way? Assessing simulation tools without colorimetric measurements.

    PubMed

    Lillo Jover, Julio A; Álvaro Llorente, Leticia; Moreira Villegas, Humberto; Melnikova, Anna

    2016-11-01

    Simulcheck evaluates Colour Simulation Tools (CSTs), which transform colours to mimic those seen by colour-vision deficients. Two CSTs (Variantor and Coblis) were used to determine whether the standard Simulcheck version (direct measurement based, DMB) can be substituted by another (RGB values based) that does not require sophisticated measurement instruments. Ten normal trichromats performed the two psychophysical tasks included in the Simulcheck method. The Pseudoachromatic Stimuli Identification task provided the h_uv (hue angle) values of the pseudoachromatic stimuli: colours seen as red or green by normal trichromats but as grey by colour-deficient people. The Minimum Achromatic Contrast task was used to compute the L_R (relative luminance) values of the pseudoachromatic stimuli. The Simulcheck DMB version showed that Variantor was accurate in simulating protanopia, but neither Variantor nor Coblis was accurate in simulating deuteranopia. The Simulcheck RGB version provided accurate h_uv values, so this variable can be adequately estimated when lacking a colorimeter, an expensive and uncommon apparatus. In contrast, the inaccuracy of the L_R estimations provided by the Simulcheck RGB version makes it advisable to compute this variable from measurements performed with a photometer, a cheap and easy-to-find apparatus.

  11. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics’ “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605

  12. Current CFD Practices in Launch Vehicle Applications

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin

    2012-01-01

    The quest for sustained space exploration will require the development of advanced launch vehicles and efficient, reliable operating systems. Development of launch vehicles via a test-fail-fix approach is very expensive and time consuming. For decision making, modeling and simulation (M&S) has played an increasingly important role in many aspects of launch vehicle development. It is therefore essential to develop and maintain the most advanced M&S capability. More specifically, computational fluid dynamics (CFD) has been providing critical data for developing launch vehicles, complementing expensive testing. During the past three decades CFD capability has increased remarkably along with advances in computer hardware and computing technology. However, most of the fundamental CFD capability in launch vehicle applications is derived from past advances. Specific gaps in the solution procedures are being filled primarily through efforts "piggy-backed" on various projects while solving today's problems. Therefore, some of the advanced capabilities are not readily available for various new tasks, and mission-support problems are often analyzed using ad hoc approaches. The current report is intended to present our view of the state of the art (SOA) in CFD and its shortcomings in support of space transport vehicle development. Best practices in solving current issues will be discussed using examples from ascending launch vehicles. Some of the pacing issues will be discussed in conjunction with these examples.

  13. 48 CFR 9904.410-60 - Illustrations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...

  14. 48 CFR 9904.410-60 - Illustrations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...

  15. CombiMotif: A new algorithm for network motifs discovery in protein-protein interaction networks

    NASA Astrophysics Data System (ADS)

    Luo, Jiawei; Li, Guanghui; Song, Dan; Liang, Cheng

    2014-12-01

    Discovering motifs in protein-protein interaction networks is becoming a major challenge in computational biology, since the distribution of the number of network motifs can reveal significant systemic differences among species. However, this task can be computationally expensive because it involves graph isomorphism detection. In this paper, we present a new algorithm (CombiMotif) that incorporates combinatorial techniques to count non-induced occurrences of subgraph topologies in the form of trees. The efficiency of our algorithm is demonstrated by comparing the obtained results with current state-of-the-art subgraph counting algorithms. We also show major differences between unicellular and multicellular organisms. The datasets and source code of CombiMotif are freely available upon request.
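    The combinatorial flavor of counting non-induced tree-shaped subgraphs shows up already in the simplest case: the number of non-induced 3-node paths equals the sum over vertices of C(deg(v), 2), with no isomorphism detection at all. A toy sketch of that identity, not the CombiMotif algorithm itself:

```python
from math import comb

def count_paths_3(adjacency):
    """Count non-induced 3-node paths: each choice of two distinct neighbors
    of a center vertex v gives one path, so the total is sum_v C(deg(v), 2).
    No subgraph isomorphism detection is needed for tree-shaped motifs."""
    return sum(comb(len(neigh), 2) for neigh in adjacency.values())

# Toy protein-interaction-style graph as an adjacency dict.
g = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}
print(count_paths_3(g))  # degrees 2, 3, 2, 1 -> 1 + 3 + 1 + 0 = 5
```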

  16. Variable-Complexity Multidisciplinary Optimization on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.

    1998-01-01

    This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques that exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant were: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; and (3) mathematical and algorithmic development, including support in the integration of parallel computation for items (1) and (2). These tasks were accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations, thereby demonstrating the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations were carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of complex aircraft configurations.

  17. A survey of GPU-based medical image computing techniques

    PubMed Central

    Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming

    2012-01-01

    Medical imaging currently plays a crucial role throughout clinical practice, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets to process in practical clinical applications. With the rapidly enhancing performance of graphics processors, improved programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical image applications. The major purpose of this survey is to provide a comprehensive reference source for starters or researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080

  18. F-16 Instructional Sequencing Plan Report.

    DTIC Science & Technology

    1981-03-01

    information). 2. Interference (learning of some tasks interferes with the learning of other tasks when they possess similar but confusing differences)...profound effect on the total training expense. This increases the desirability of systematic, precise methods of syllabus generation. Inherent in a given...Least cost: the syllabus must make maximum use of the expensive-to-acquire resource; select sequences which provide a least-total-cost method of

  19. Automated symbolic calculations in nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Kröger, Martin; Hütter, Markus

    2010-12-01

    We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration, at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitively and efficiently, and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica notebook which allows one to perform this task conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics.
    Program summary:
    Program title: Poissonbracket.nb
    Catalogue identifier: AEGW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 227 952
    No. of bytes in distributed program, including test data, etc.: 268 918
    Distribution format: tar.gz
    Programming language: Mathematica 7.0
    Computer: Any computer running Mathematica 6.0 and later versions
    Operating system: Linux, MacOS, Windows
    RAM: 100 Mb
    Classification: 4.2, 5, 23
    Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form.
    Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals, at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica.
    Running time: For the test cases, the running time is of the order of seconds; for most typical cases in the literature, minutes.
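    For reference, the identity the notebook tests is the Jacobi identity for the Poisson bracket, which for three functionals A, B, C of the fields reads:

```latex
\{A, \{B, C\}\} + \{B, \{C, A\}\} + \{C, \{A, B\}\} = 0
```

    In the local form described above, each nested bracket is evaluated through variational derivatives, so no partial integration is needed to check the identity.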

  20. Reducing software mass through behavior control. [of planetary roving robots

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1992-01-01

    Attention is given to the tradeoff between communication and computation as regards a planetary rover (both these subsystems are very power-intensive, and both can be the major driver of the rover's power subsystem, and therefore of the minimum mass and size of the rover). Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. Novel approaches to autonomous control, called behavior control, employ an entirely different approach, and for many tasks will yield a similar or superior level of autonomy to traditional control techniques, while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.

  1. As-built data capture of complex piping using photogrammetry technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morray, J.P.; Ziu, C.G.

    1995-11-01

    Plant owners face an increasingly difficult and expensive task of updating drawings, both regarding the plant logic and physical layout. Through the use of photogrammetry technology, H-H spectrum has created a complete operating plant data capture service, with the result that the task of recording accurate plant configurations has become assured and economical. The technology has proven to be extremely valuable for the capture of complex piping configurations, as well as entire plant facilities, and yields accuracy within 1/4 inch. The method uses photographs and workstation technology to quickly document and compute the plant layout, with all components, regardless of size, included in the resulting model. The system has the capability to compute actual 3-D coordinates of any point based on previous triangulations, allowing for an immediate assessment of accuracy. This ensures a consistent level of accuracy, which is impossible to achieve in a manual approach. Due to the speed of the process, the approach is very important in hazardous/difficult environments such as nuclear power facilities or offshore platforms.

  2. User interface support

    NASA Technical Reports Server (NTRS)

    Lewis, Clayton; Wilde, Nick

    1989-01-01

    Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.

  3. 24 CFR 990.170 - Computation of utilities expense level (UEL): Overview.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... level (UEL): Overview. 990.170 Section 990.170 Housing and Urban Development Regulations Relating to... Expenses § 990.170 Computation of utilities expense level (UEL): Overview. (a) General. The UEL for each... by the payable consumption level multiplied by the inflation factor. The UEL is expressed in terms of...

  4. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite comprises four processes: 1) degradation caused by the atmosphere, 2) degradation caused by the optical system, 3) degradation caused by the electronic system of the TDI-CCD, together with a re-sampling process, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution, and Lagrange interpolation, which require powerful CPUs. Even using an Intel Xeon X5550 processor, a conventional serial method takes more than 30 hours for a simulation whose resulting image size is 1500 x 1462. A literature survey found no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, based on WCF[1], that uses a client/server (C/S) architecture and harnesses the free CPU resources on the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity, achieving HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide nearly unlimited computing capacity, provided the network and task-management server can support it, and offers a new HPC solution for TDI-CCD imaging simulation and similar applications.

  5. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  6. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  7. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  8. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  9. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  10. Reducing the Time and Cost of Testing Engines

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Producing a new aircraft engine currently costs approximately $1 billion, with 3 years of development time for a commercial engine and 10 years for a military engine. The high development time and cost make it extremely difficult to transition advanced technologies for cleaner, quieter, and more efficient new engines. To reduce this time and cost, NASA created a vision for the future where designers would use high-fidelity computer simulations early in the design process in order to resolve critical design issues before building the expensive engine hardware. To accomplish this vision, NASA's Glenn Research Center initiated a collaborative effort with the aerospace industry and academia to develop its Numerical Propulsion System Simulation (NPSS), an advanced engineering environment for the analysis and design of aerospace propulsion systems and components. Partners estimate that using NPSS has the potential to dramatically reduce the time, effort, and expense necessary to design and test jet engines by generating sophisticated computer simulations of an aerospace object or system. These simulations will permit an engineer to test various design options without having to conduct costly and time-consuming real-life tests. By accelerating and streamlining the engine system design analysis and test phases, NPSS facilitates bringing the final product to market faster. NASA's NPSS Version (V)1.X effort was a task within the Agency's Computational Aerospace Sciences project of the High Performance Computing and Communication program, which had a mission to accelerate the availability of high-performance computing hardware and software to the U.S. aerospace community for its use in design processes. The technology brings value back to NASA by improving methods of analyzing and testing space transportation components.

  11. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account...

  12. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2013-10-01 2013-10-01 false Computers and data processing equipment (account...

  13. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2011-10-01 2011-10-01 false Computers and data processing equipment (account...

  14. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2014-10-01 2014-10-01 false Computers and data processing equipment (account...

  15. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account...

  16. Kraken: ultrafast metagenomic sequence classification using exact alignments

    PubMed Central

    2014-01-01

    Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/. PMID:24580807
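
    As a rough illustration of the exact k-mer matching idea (not Kraken's actual database, which maps each k-mer to a lowest common ancestor in a taxonomy tree and uses much longer k-mers), the toy Python sketch below labels a read by the reference taxon that shares the most of its k-mers; all sequences and names are invented.

      # Toy illustration of exact k-mer classification (not Kraken itself): build a
      # k-mer -> taxon table from reference sequences, then label a read by the
      # taxon that matches the most of its k-mers.
      from collections import Counter

      K = 5  # Kraken uses much longer k-mers (k = 31 by default)

      def kmers(seq, k=K):
          return [seq[i:i + k] for i in range(len(seq) - k + 1)]

      references = {
          "taxonA": "ACGTACGTACGTGGC",
          "taxonB": "TTGACCTTGACCGTA",
      }

      table = {}
      for taxon, seq in references.items():
          for km in kmers(seq):
              table.setdefault(km, taxon)        # real tools store the LCA of all owners

      def classify(read):
          hits = Counter(table[km] for km in kmers(read) if km in table)
          return hits.most_common(1)[0][0] if hits else "unclassified"

      print(classify("ACGTACGTACG"))             # -> taxonA
      print(classify("GGGGGGGGGGG"))             # -> unclassified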

  17. Test Facilities and Experience on Space Nuclear System Developments at the Kurchatov Institute

    NASA Astrophysics Data System (ADS)

    Ponomarev-Stepnoi, Nikolai N.; Garin, Vladimir P.; Glushkov, Evgeny S.; Kompaniets, George V.; Kukharkin, Nikolai E.; Madeev, Vicktor G.; Papin, Vladimir K.; Polyakov, Dmitry N.; Stepennov, Boris S.; Tchuniyaev, Yevgeny I.; Tikhonov, Lev Ya.; Uksusov, Yevgeny I.

    2004-02-01

    The complexity of space fission systems and the rigidity of requirements on minimizing weight and dimensions, along with the wish to decrease development expenditures, demand experimental work whose results shall be used in design, safety substantiation, and licensing procedures. Experimental facilities are intended to solve the following tasks: obtaining benchmark data for computer code validation, substantiating design solutions when computational efforts are too expensive, quality control in the production process, and "iron" substantiation of criticality safety design solutions for licensing and public relations. The NARCISS and ISKRA critical facilities and the unique ORM shielding-investigation facility at the operating OR nuclear research reactor were created at the Kurchatov Institute to address these tasks. The range of activities performed at these facilities within the previous Russian nuclear power system programs is briefly described in the paper. This experience shall be analyzed in terms of a methodological approach to the development of future space nuclear systems (that analysis is beyond this paper). Because these facilities remain available for experiments, a brief description of their critical assemblies and characteristics is given in this paper.

  18. Computer Assisted Multi-Center Creation of Medical Knowledge Bases

    PubMed Central

    Giuse, Nunzia Bettinsoli; Giuse, Dario A.; Miller, Randolph A.

    1988-01-01

    Computer programs which support different aspects of medical care have been developed in recent years. Their capabilities range from diagnosis to medical imaging, and include hospital management systems and therapy prescription. In spite of their diversity these systems have one commonality: their reliance on a large body of medical knowledge in computer-readable form. This knowledge enables such programs to draw inferences, validate hypotheses, and in general to perform their intended task. As has been clear to developers of such systems, however, the creation and maintenance of medical knowledge bases are very expensive. Practical and economical difficulties encountered during this long-term process have discouraged most attempts. This paper discusses knowledge base creation and maintenance, with special emphasis on medical applications. We first describe the methods currently used and their limitations. We then present our recent work on developing tools and methodologies which will assist in the process of creating a medical knowledge base. We focus, in particular, on the possibility of multi-center creation of the knowledge base.

  19. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  20. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
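
    A sketch of why the brute-force baseline is expensive: a forward finite-difference gradient needs one additional full coupled analysis per design variable, which is exactly the cost the GSE/modal-analysis approach is designed to avoid. In the sketch below, coupled_analysis is a cheap stand-in for the high-fidelity aeroelastic solve, not the method of the paper.

      # Why brute-force finite differencing is costly: each design variable needs at
      # least one additional full coupled analysis.  The expensive aero-structural
      # solve is stood in for by a cheap placeholder function here.
      import numpy as np

      def coupled_analysis(x):
          # Placeholder for a high-fidelity aeroelastic analysis returning, e.g.,
          # a lift coefficient for design variables x.
          return np.sin(x[0]) + x[1] ** 2 + 0.5 * x[0] * x[1]

      def forward_difference_gradient(f, x, h=1e-6):
          f0 = f(x)                               # 1 baseline analysis
          grad = np.zeros_like(x)
          for i in range(len(x)):                 # + 1 analysis per design variable
              xp = x.copy()
              xp[i] += h
              grad[i] = (f(xp) - f0) / h
          return grad

      x = np.array([0.3, 1.2])
      print(forward_difference_gradient(coupled_analysis, x))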

  1. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  2. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  3. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  4. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  5. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  6. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    NASA Astrophysics Data System (ADS)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation for a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. Virtually with this technology the CPU code only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors and less dependency on the underlying architecture and future evolution of the GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as case-study. We implemented the flow accumulation step of this algorithm in CPU, using OpenACC and two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We advance that although OpenACC can not match the performance of a CUDA optimized implementation (×3.5 slower in average), it provides a significant performance improvement against a CPU implementation (×2-6) with by far a simpler code and less implementation effort.
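
    For reference, a compact CPU sketch of the D8 flow-accumulation step that the paper ports to the GPU is given below. It follows the usual formulation (each cell drains to its steepest downslope neighbour, and accumulation is propagated from the highest cell to the lowest) but omits diagonal-distance weighting and flat handling, and is not the authors' CUDA or OpenACC code.

      # Compact CPU sketch of D8 flow accumulation (the step the paper accelerates):
      # every cell drains to its steepest-descent neighbour, and accumulation is
      # propagated by visiting cells from highest to lowest elevation.
      # Note: diagonal distance weighting and flat/tie handling are omitted.
      import numpy as np

      def d8_flow_accumulation(dem):
          rows, cols = dem.shape
          acc = np.ones_like(dem, dtype=np.float64)       # each cell contributes itself
          order = np.argsort(dem, axis=None)[::-1]        # highest elevation first
          neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                        (0, 1), (1, -1), (1, 0), (1, 1)]
          for flat in order:
              r, c = divmod(int(flat), cols)
              best, drop_max = None, 0.0
              for dr, dc in neighbours:
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols:
                      drop = dem[r, c] - dem[nr, nc]
                      if drop > drop_max:
                          drop_max, best = drop, (nr, nc)
              if best is not None:                        # pass flow downslope
                  acc[best] += acc[r, c]
          return acc

      dem = np.array([[5.0, 4.0, 3.0],
                      [4.0, 3.0, 2.0],
                      [3.0, 2.0, 1.0]])
      print(d8_flow_accumulation(dem))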

  7. Robust, Optimal Water Infrastructure Planning Under Deep Uncertainty Using Metamodels

    NASA Astrophysics Data System (ADS)

    Maier, H. R.; Beh, E. H. Y.; Zheng, F.; Dandy, G. C.; Kapelan, Z.

    2015-12-01

    Optimal long-term planning plays an important role in many water infrastructure problems. However, this task is complicated by deep uncertainty about future conditions, such as the impact of population dynamics and climate change. One way to deal with this uncertainty is by means of robustness, which aims to ensure that water infrastructure performs adequately under a range of plausible future conditions. However, as robustness calculations require computationally expensive system models to be run for a large number of scenarios, it is generally computationally intractable to include robustness as an objective in the development of optimal long-term infrastructure plans. In order to overcome this shortcoming, an approach is developed that uses metamodels instead of computationally expensive simulation models in robustness calculations. The approach is demonstrated for the optimal sequencing of water supply augmentation options for the southern portion of the water supply for Adelaide, South Australia. A 100-year planning horizon is subdivided into ten equal decision stages for the purpose of sequencing various water supply augmentation options, including desalination, stormwater harvesting and household rainwater tanks. The objectives include the minimization of average present value of supply augmentation costs, the minimization of average present value of greenhouse gas emissions and the maximization of supply robustness. The uncertain variables are rainfall, per capita water consumption and population. Decision variables are the implementation stages of the different water supply augmentation options. Artificial neural networks are used as metamodels to enable all objectives to be calculated in a computationally efficient manner at each of the decision stages. The results illustrate the importance of identifying optimal staged solutions to ensure robustness and sustainability of water supply into an uncertain long-term future.

  8. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
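
    A minimal sketch of the underlying idea, under simplifying assumptions (a fixed set of training runs rather than the paper's adaptive experimental design, a zero-mean GP with a squared-exponential kernel, and a toy limit-state function g), is shown below: the surrogate is fit to a handful of expensive runs and then sampled cheaply to estimate the failure probability.

      # Minimal sketch: fit a Gaussian-process surrogate to a few runs of an
      # "expensive" limit-state function g(x), then estimate P(g < 0) by cheap
      # Monte Carlo on the surrogate.  The paper's adaptive design is omitted.
      import numpy as np

      def g(x):                                   # stand-in for the expensive model
          return 1.5 - x                          # failure when g(x) < 0

      def rbf_kernel(a, b, length=1.0):
          return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

      # A handful of expensive training runs.
      X = np.linspace(-3.0, 3.0, 7)
      y = g(X)

      K = rbf_kernel(X, X) + 1e-8 * np.eye(len(X))
      alpha = np.linalg.solve(K, y)               # zero-mean GP posterior weights

      def gp_mean(x_new):
          return rbf_kernel(np.atleast_1d(x_new), X) @ alpha

      samples = np.random.normal(0.0, 1.0, 100_000)   # uncertain input x ~ N(0, 1)
      p_fail = np.mean(gp_mean(samples) < 0.0)
      print("estimated failure probability:", p_fail)  # true value is P(x > 1.5)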

  9. Cost-Benefit Arbitration Between Multiple Reinforcement-Learning Systems.

    PubMed

    Kool, Wouter; Gershman, Samuel J; Cushman, Fiery A

    2017-09-01

    Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits. To investigate this proposal, we conducted two experiments showing that people increase model-based control when it achieves greater accuracy than model-free control, and especially when the rewards of accurate performance are amplified. In contrast, they are insensitive to reward amplification when model-based and model-free control yield equivalent accuracy. This suggests that humans adaptively balance habitual and planned action through on-line cost-benefit analysis.
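
    As a loose illustration only (not the authors' model or task), the sketch below contrasts a cheap model-free value update with a model-based valuation recomputed from stored outcomes, blended by an arbitration weight w that a cost-benefit arbiter would set on-line; the task, parameters, and names are invented.

      # Toy contrast of the two controllers on a one-state, two-action task:
      # model-free Q-learning is cheap but adapts slowly; the model-based system
      # revalues actions from stored outcomes each trial.  The weight w stands in
      # for an arbitration that trades accuracy benefits against computation cost.
      import random

      rewards = {"left": 0.2, "right": 0.8}       # true (unknown) expected rewards
      q = {"left": 0.0, "right": 0.0}             # model-free cache
      model = {"left": [], "right": []}           # model-based memory of outcomes
      alpha, w = 0.1, 0.7                         # learning rate, model-based weight

      def model_based_value(a):
          return sum(model[a]) / len(model[a]) if model[a] else 0.0

      for trial in range(500):
          # Blend the two valuations; a cost-benefit arbiter would set w on-line.
          values = {a: w * model_based_value(a) + (1 - w) * q[a] for a in q}
          if random.random() > 0.1:
              action = max(values, key=values.get)
          else:
              action = random.choice(list(q))     # occasional exploration
          r = rewards[action] + random.gauss(0.0, 0.1)
          q[action] += alpha * (r - q[action])    # cheap incremental update
          model[action].append(r)                 # costly: store and replan from data

      print("model-free values:", q)
      print("model-based values:", {a: round(model_based_value(a), 2) for a in model})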

  10. VAX CLuster upgrade: Report of a CPC task force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, J.; Berry, H.; Kessler, P.

    The CSCF VAX cluster provides interactive computing for 100 users during prime time, plus a considerable amount of daytime and overnight batch processing. While this cluster represents less than 10% of the VAX computing power at BNL (6 MIPS out of 70), it has served as an important center for this larger network, supporting special hardware and software too expensive to maintain on every machine. In addition, it is the only unrestricted facility available to VAX/VMS users (other machines are typically dedicated to special projects). This committee's analysis shows that the CPUs on the CSCF cluster are currently badly oversaturated, frequently giving extremely poor interactive response. Short batch jobs (a necessary part of interactive work) typically take 3 to 4 times as long to execute as they would on an idle machine. There is also an immediate need for more scratch disk space and user permanent file space.

  11. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    PubMed

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.

  12. Fast associative memory + slow neural circuitry = the computational model of the brain.

    NASA Astrophysics Data System (ADS)

    Berkovich, Simon; Berkovich, Efraim; Lapir, Gennady

    1997-08-01

    We propose a computational model of the brain based on a fast associative memory and relatively slow neural processors. In this model, processing time is expensive but memory access is not, and therefore most algorithmic tasks would be accomplished by using large look-up tables as opposed to calculating. The essential feature of an associative memory in this context (characteristic for a holographic type memory) is that it works without an explicit mechanism for resolution of multiple responses. As a result, the slow neuronal processing elements, overwhelmed by the flow of information, operate as a set of templates for ranking of the retrieved information. This structure addresses the primary controversy in the brain architecture: distributed organization of memory vs. localization of processing centers. This computational model offers an intriguing explanation of many of the paradoxical features in the brain architecture, such as integration of sensors (through DMA mechanism), subliminal perception, universality of software, interrupts, fault-tolerance, certain bizarre possibilities for rapid arithmetics etc. In conventional computer science the presented type of a computational model did not attract attention as it goes against the technological grain by using a working memory faster than processing elements.

  13. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...

  14. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...

  15. Graphical tactile displays for visually-impaired people.

    PubMed

    Vidal-Verdú, Fernando; Hafez, Moustapha

    2007-03-01

    This paper presents an up-to-date survey of graphical tactile displays. These devices provide information through the sense of touch. At best, they should display both text and graphics (text may be considered a type of graphic). Graphs made with shapeable sheets result in bulky items awkward to store and transport; their production is expensive and time-consuming and they deteriorate quickly. Research is ongoing for a refreshable tactile display that acts as an output device for a computer or other information source and can present the information in text and graphics. The work in this field has branched into diverse areas, from physiological studies to technological aspects and challenges. Moreover, interest in these devices is now being shown by other fields such as virtual reality, minimally invasive surgery and teleoperation. It is attracting more and more people, research and money. Many proposals have been put forward, several of them succeeding in the task of presenting tactile information. However, most are research prototypes and very expensive to produce commercially. Thus the goal of an efficient low-cost tactile display for visually-impaired people has not yet been reached.

  16. Probabilistic Reward- and Punishment-based Learning in Opioid Addiction: Experimental and Computational Data

    PubMed Central

    Myers, Catherine E.; Sheynin, Jony; Baldson, Tarryn; Luzardo, Andre; Beck, Kevin D.; Hogarth, Lee; Haber, Paul; Moustafa, Ahmed A.

    2016-01-01

    Addiction is the continuation of a habit in spite of negative consequences. A vast literature gives evidence that this poor decision-making behavior in individuals addicted to drugs also generalizes to laboratory decision making tasks, suggesting that the impairment in decision-making is not limited to decisions about taking drugs. In the current experiment, opioid-addicted individuals and matched controls with no history of illicit drug use were administered a probabilistic classification task that embeds both reward-based and punishment-based learning trials, and a computational model of decision making was applied to understand the mechanisms describing individuals’ performance on the task. Although behavioral results showed that opioid-addicted individuals performed as well as controls on both reward- and punishment-based learning, the modeling results suggested subtle differences in how decisions were made between the two groups. Specifically, the opioid-addicted group showed decreased tendency to repeat prior responses, meaning that they were more likely to “chase reward” when expectancies were violated, whereas controls were more likely to stick with a previously-successful response rule, despite occasional expectancy violations. This tendency to chase short-term reward, potentially at the expense of developing rules that maximize reward over the long term, may be a contributing factor to opioid addiction. Further work is indicated to better understand whether this tendency arises as a result of brain changes in the wake of continued opioid use/abuse, or might be a pre-existing factor that may contribute to risk for addiction. PMID:26381438

  17. 24 CFR 990.165 - Computation of project expense level (PEL).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Computation of project expense level (PEL). 990.165 Section 990.165 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF...

  18. APHiD: Hierarchical Task Placement to Enable a Tapered Fat Tree Topology for Lower Power and Cost in HPC Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michelogiannakis, George; Ibrahim, Khaled Z.; Shalf, John

    The power and procurement cost of bandwidth in system-wide networks has forced a steady drop in the byte/flop ratio. This trend of computation becoming faster relative to the network is expected to hold. In this paper, we explore how cost-oriented task placement enables reducing the cost of system-wide networks by enabling high performance even on tapered topologies where more bandwidth is provisioned at lower levels. We describe APHiD, an efficient hierarchical placement algorithm that uses new techniques to improve the quality of heuristic solutions and reduce the demand on high-level, expensive bandwidth in hierarchical topologies. We apply APHiD to a tapered fat-tree, demonstrating that APHiD maintains application scalability even for severely tapered network configurations. Using simulation, we show that for tapered networks APHiD improves performance by more than 50% over random placement and even 15% in some cases over costlier, state-of-the-art placement algorithms.

  19. ATTDES: An Expert System for Satellite Attitude Determination and Control. 2

    NASA Technical Reports Server (NTRS)

    Mackison, Donald L.; Gifford, Kevin

    1996-01-01

    The design, analysis, and flight operations of satellite attitude determination and attitude control systems require extensive mathematical formulations, optimization studies, and computer simulation. This is best done by an analyst with extensive education and experience. The development of programs such as ATTDES permits the use of advanced techniques by those with less experience. Typical tasks include the mission analysis to select stabilization and damping schemes, attitude determination sensors and algorithms, and control system designs to meet program requirements. ATTDES is a system that includes all of these activities, including high fidelity orbit environment models that can be used for preliminary analysis, parameter selection, stabilization schemes, the development of estimators, covariance analyses, and optimization, and can support ongoing orbit activities. The modification of existing simulations to model new configurations for these purposes can be an expensive, time-consuming activity that becomes a pacing item in the development and operation of such new systems. The use of an integrated tool such as ATTDES significantly reduces the effort and time required for these tasks.

  20. Performing a global barrier operation in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
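
    A small sketch of the claimed scheme, with Python threads standing in for tasks grouped by "compute node", is given below; threading.Barrier plays the role of both the per-node local barrier and the global barrier joined only by master tasks. The mapping of parallel-computer tasks onto threads is purely illustrative.

      # Sketch of the barrier scheme with threads standing in for tasks: every
      # task on a "node" joins that node's local barrier; only the node's master
      # task additionally joins the global barrier, and it does so only after the
      # local barrier has been satisfied by all tasks on the node.
      import threading

      NODES, TASKS_PER_NODE = 3, 4
      global_barrier = threading.Barrier(NODES)               # one master per node
      local_barriers = [threading.Barrier(TASKS_PER_NODE) for _ in range(NODES)]

      def task(node, rank):
          is_master = (rank == 0)
          # ... per-task computation would happen here ...
          local_barriers[node].wait()                          # every task joins
          if is_master:
              global_barrier.wait()                            # masters only
              print(f"node {node}: global barrier passed")

      threads = [threading.Thread(target=task, args=(n, r))
                 for n in range(NODES) for r in range(TASKS_PER_NODE)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()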

  1. SIMPLEX: Cloud-Enabled Pipeline for the Comprehensive Analysis of Exome Sequencing Data

    PubMed Central

    Fischer, Maria; Snajder, Rene; Pabinger, Stephan; Dander, Andreas; Schossig, Anna; Zschocke, Johannes; Trajanoski, Zlatko; Stocker, Gernot

    2012-01-01

    In recent studies, exome sequencing has proven to be a successful screening tool for the identification of candidate genes causing rare genetic diseases. Although underlying targeted sequencing methods are well established, necessary data handling and focused, structured analysis still remain demanding tasks. Here, we present a cloud-enabled autonomous analysis pipeline, which comprises the complete exome analysis workflow. The pipeline combines several in-house developed and published applications to perform the following steps: (a) initial quality control, (b) intelligent data filtering and pre-processing, (c) sequence alignment to a reference genome, (d) SNP and DIP detection, (e) functional annotation of variants using different approaches, and (f) detailed report generation during various stages of the workflow. The pipeline connects the selected analysis steps, exposes all available parameters for customized usage, performs required data handling, and distributes computationally expensive tasks either on a dedicated high-performance computing infrastructure or on the Amazon cloud environment (EC2). The presented application has already been used in several research projects including studies to elucidate the role of rare genetic diseases. The pipeline is continuously tested and is publicly available under the GPL as a VirtualBox or Cloud image at http://simplex.i-med.ac.at; additional supplementary data is provided at http://www.icbi.at/exome. PMID:22870267

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Juliane

    MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.

  3. 48 CFR 9905.506-60 - Illustrations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., installs a computer service center to begin operations on May 1. The operating expense related to the new... operating expenses of the computer service center for the 8-month part of the cost accounting period may be... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Illustrations. 9905.506-60...

  4. 26 CFR 1.50B-1 - Definitions of WIN expenses and WIN employees.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... employee. (c) Trade or business expenses. The term “WIN expenses” includes only salaries and wages which... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Definitions of WIN expenses and WIN employees. 1... INCOME TAXES Rules for Computing Credit for Expenses of Work Incentive Programs § 1.50B-1 Definitions of...

  5. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Often, computationally expensive engineering simulations can be prohibitive in the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
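
    A minimal linear ROSM-style sketch under stated assumptions (plain PCA via the SVD, SciPy's RBFInterpolator for the reduced coordinates, and a synthetic stand-in for the expensive simulation outputs) is shown below; the thesis's kernel-PCA variant is not reproduced.

      # Minimal linear ROSM-style sketch: compress high-dimensional simulation
      # outputs with an SVD (PCA), interpolate the few reduced coordinates with
      # radial basis functions over the design variables, then reconstruct.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)
      n_samples, n_design, n_output, n_modes = 30, 2, 500, 3

      X = rng.uniform(-1, 1, (n_samples, n_design))            # design points
      # Stand-in for expensive simulations: smooth fields driven by the designs.
      grid = np.linspace(0, 1, n_output)
      Y = np.sin(np.outer(X[:, 0], 4 * grid)) + np.outer(X[:, 1], grid ** 2)

      mean = Y.mean(axis=0)
      U, S, Vt = np.linalg.svd(Y - mean, full_matrices=False)
      coords = U[:, :n_modes] * S[:n_modes]                     # reduced coordinates

      surrogates = [RBFInterpolator(X, coords[:, k]) for k in range(n_modes)]

      def predict(x_new):
          z = np.array([s(np.atleast_2d(x_new))[0] for s in surrogates])
          return mean + z @ Vt[:n_modes]                        # back to full field

      print(predict([0.2, -0.5]).shape)                         # (500,)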

  6. 47 CFR 32.6112 - Motor vehicle expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Motor vehicle expense. 32.6112 Section 32.6112 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS.../or to other Plant Specific Operations Expense accounts. These amounts shall be computed on the basis...

  7. Correlation Filters for Detection of Cellular Nuclei in Histopathology Images.

    PubMed

    Ahmad, Asif; Asif, Amina; Rajpoot, Nasir; Arif, Muhammad; Minhas, Fayyaz Ul Amir Afsar

    2017-11-21

    Nuclei detection in histology images is an essential part of computer aided diagnosis of cancers and tumors. It is a challenging task due to diverse and complicated structures of cells. In this work, we present an automated technique for detection of cellular nuclei in hematoxylin and eosin stained histopathology images. Our proposed approach is based on kernelized correlation filters. Correlation filters have been widely used in object detection and tracking applications but their strength has not been explored in the medical imaging domain up till now. Our experimental results show that the proposed scheme gives state of the art accuracy and can learn complex nuclear morphologies. Like deep learning approaches, the proposed filters do not require engineering of image features as they can operate directly on histopathology images without significant preprocessing. However, unlike deep learning methods, the large-margin correlation filters developed in this work are interpretable, computationally efficient and do not require specialized or expensive computing hardware. A cloud based webserver of the proposed method and its python implementation can be accessed at the following URL: http://faculty.pieas.edu.pk/fayyaz/software.html#corehist .
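
    For orientation, the sketch below implements only a plain frequency-domain matched filter (a zero-mean template correlated with the image via the FFT), not the kernelized large-margin correlation filters of the paper; the image and the nucleus-like template are synthetic.

      # Minimal frequency-domain correlation sketch (a plain matched filter, not
      # the paper's kernelized large-margin filters): correlate a nucleus-like
      # template with an image via the FFT and take peaks as candidate detections.
      import numpy as np

      def correlate_fft(image, template):
          h, w = template.shape
          padded = np.zeros_like(image)
          padded[:h, :w] = template - template.mean()           # zero-mean template
          corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded))).real
          return corr

      rng = np.random.default_rng(1)
      image = rng.normal(0.0, 0.1, (64, 64))
      blob = np.outer(np.hanning(7), np.hanning(7))             # toy "nucleus"
      image[20:27, 40:47] += blob                                # plant one at (20, 40)

      corr = correlate_fft(image, blob)
      peak = np.unravel_index(np.argmax(corr), corr.shape)
      print("detected at:", peak)                                # close to (20, 40)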

  8. LittleQuickWarp: an ultrafast image warping tool.

    PubMed

    Qu, Lei; Peng, Hanchuan

    2015-02-01

    Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  10. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  11. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  12. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  13. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  14. 7 CFR 1484.53 - What are the requirements for documenting and reporting contributions?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... contribution must be documented by the Cooperator, showing the method of computing non-cash contributions, salaries, and travel expenses. (b) Each Cooperator must keep records of the methods used to compute the value of non-cash contributions, and (1) Copies of invoices or receipts for expenses paid by the U.S...

  15. Technology Solutions for Existing Homes Overview: Quantifying the Financial Benefits of Multifamily Retrofits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-04-01

    In this project, the U.S. Department of Energy Building America team Partnership for Advanced Residential Retrofit (PARR) worked with Elevate Energy on three tasks: to conduct pre- and post-retrofit analysis on the income and expense data of 13 Chicago-area multifamily buildings, to compare Chicago income and expense data to two national samples, and to explore the ramifications that energy-efficiency retrofits have on nine Chicago-area neighborhoods. The project team collected building, energy, and income and expense data from multiple private and public sources.

  16. 26 CFR 1.213-1 - Medical, dental, etc., expenses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... medical care includes the diagnosis, cure, mitigation, treatment, or prevention of disease. Expenses paid... taxable year for insurance that constitute expenses paid for medical care shall, for purposes of computing... care of the taxpayer, his spouse, or a dependent of the taxpayer and not be compensated for by...

  17. 26 CFR 1.556-2 - Adjustments to taxable income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... of deductions for trade or business expenses and depreciation which are allocable to the operation... computed without the deduction of the amount disallowed under section 556(b)(5), relating to expenses and... disallowed under section 556(b)(5), relating to expenses and depreciation applicable to property of the...

  18. Surrogate assisted multidisciplinary design optimization for an all-electric GEO satellite

    NASA Astrophysics Data System (ADS)

    Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian; Yuan, Bin

    2017-09-01

    State-of-the-art all-electric geostationary earth orbit (GEO) satellites use electric thrusters to execute all propulsive duties, which significantly differ from the traditional all-chemical ones in orbit-raising, station-keeping, radiation damage protection, and power budget, etc. The design optimization task of an all-electric GEO satellite is therefore a complex multidisciplinary design optimization (MDO) problem involving unique design considerations. However, solving the all-electric GEO satellite MDO problem faces big challenges in disciplinary modeling techniques and efficient optimization strategy. To address these challenges, we present a surrogate assisted MDO framework consisting of several modules, i.e., MDO problem definition, multidisciplinary modeling, multidisciplinary analysis (MDA), and surrogate assisted optimizer. Based on the proposed framework, the all-electric GEO satellite MDO problem is formulated to minimize the total mass of the satellite system under a number of practical constraints. Then considerable efforts are spent on multidisciplinary modeling involving geosynchronous transfer, GEO station-keeping, power, thermal control, attitude control, and structure disciplines. Since the orbit dynamics models and the finite element structural model are computationally expensive, an adaptive response surface surrogate based optimizer is incorporated in the proposed framework to solve the satellite MDO problem with moderate computational cost, where a response surface surrogate is gradually refined to represent the computationally expensive MDA process. After optimization, the total mass of the studied GEO satellite is decreased by 185.3 kg (i.e., 7.3% of the total mass). Finally, the optimal design is further discussed to demonstrate the effectiveness of our proposed framework in coping with all-electric GEO satellite system design optimization problems. This proposed surrogate assisted MDO framework can also provide valuable references for other all-electric spacecraft system designs.

  19. Comparison Of Human Modelling Tools For Efficiency Of Prediction Of EVA Tasks

    NASA Technical Reports Server (NTRS)

    Dischinger, H. Charles, Jr.; Loughead, Tomas E.

    1998-01-01

    Construction of the International Space Station (ISS) will require extensive extravehicular activity (EVA, spacewalks), and estimates of the actual time needed continue to rise. As recently as September, 1996, the amount of time to be spent in EVA was believed to be about 400 hours, excluding spacewalks on the Russian segment. This estimate has recently risen to over 1100 hours, and it could go higher before assembly begins in the summer of 1998. These activities are extremely expensive and hazardous, so any design tools which help assure mission success and improve the efficiency of the astronaut in task completion can pay off in reduced design and EVA costs and increased astronaut safety. The tasks which astronauts can accomplish in EVA are limited by spacesuit mobility. They are therefore relatively simple, from an ergonomic standpoint, requiring gross movements rather than fine motor skills. The actual tasks include driving bolts, mating and demating electric and fluid connectors, and actuating levers; the important characteristics to be considered in design improvement include the ability of the astronaut to see and reach the item to be manipulated and the clearance required to accomplish the manipulation. This makes the tasks amenable to simulation in a Computer-Assisted Design (CAD) environment. For EVA, the spacesuited astronaut must have his or her feet attached to a work platform called a foot restraint to obtain a purchase against which work forces may be actuated. An important component of the design is therefore the proper placement of foot restraints.

  20. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.

  1. Probabilistic reward- and punishment-based learning in opioid addiction: Experimental and computational data.

    PubMed

    Myers, Catherine E; Sheynin, Jony; Balsdon, Tarryn; Luzardo, Andre; Beck, Kevin D; Hogarth, Lee; Haber, Paul; Moustafa, Ahmed A

    2016-01-01

    Addiction is the continuation of a habit in spite of negative consequences. A vast literature gives evidence that this poor decision-making behavior in individuals addicted to drugs also generalizes to laboratory decision making tasks, suggesting that the impairment in decision-making is not limited to decisions about taking drugs. In the current experiment, opioid-addicted individuals and matched controls with no history of illicit drug use were administered a probabilistic classification task that embeds both reward-based and punishment-based learning trials, and a computational model of decision making was applied to understand the mechanisms describing individuals' performance on the task. Although behavioral results showed that opioid-addicted individuals performed as well as controls on both reward- and punishment-based learning, the modeling results suggested subtle differences in how decisions were made between the two groups. Specifically, the opioid-addicted group showed decreased tendency to repeat prior responses, meaning that they were more likely to "chase reward" when expectancies were violated, whereas controls were more likely to stick with a previously-successful response rule, despite occasional expectancy violations. This tendency to chase short-term reward, potentially at the expense of developing rules that maximize reward over the long term, may be a contributing factor to opioid addiction. Further work is indicated to better understand whether this tendency arises as a result of brain changes in the wake of continued opioid use/abuse, or might be a pre-existing factor that may contribute to risk for addiction. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Performing a global barrier operation in a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.

  3. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    PubMed

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

    Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state according to a probability that is proportional to the reaction propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides a favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
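
    For orientation, the baseline direct-method SSA that the composition-rejection scheme improves upon is sketched below for a toy two-reaction network (A -> B -> C); the linear search over propensities and the full propensity recomputation in this version are exactly the per-step costs the paper's algorithm targets.

      # Baseline direct-method SSA for a toy network A -> B -> C.  The two steps
      # the paper speeds up are (a) picking the next reaction from the propensities
      # and (b) updating propensities after each firing.
      import random

      state = {"A": 100, "B": 0, "C": 0}
      k1, k2 = 0.5, 0.3

      def propensities(s):
          # Propensity of each reaction: A -> B and B -> C.
          return [k1 * s["A"], k2 * s["B"]]

      t, t_end = 0.0, 10.0
      while t < t_end:
          a = propensities(state)          # (b) recomputed from scratch every step
          a0 = sum(a)
          if a0 == 0.0:
              break
          t += random.expovariate(a0)      # time to the next reaction firing
          r = random.uniform(0.0, a0)      # (a) linear search over propensities
          if r < a[0]:
              state["A"] -= 1; state["B"] += 1
          else:
              state["B"] -= 1; state["C"] += 1

      print(state)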

  4. Prioritization of Disease Susceptibility Genes Using LSM/SVD.

    PubMed

    Gong, Lejun; Yang, Ronggen; Yan, Qin; Sun, Xiao

    2013-12-01

    Understanding the role of genetics in diseases is one of the most important tasks in the postgenome era. It is generally too expensive and time-consuming to perform experimental validation for all candidate genes related to a disease, so computational methods play important roles in prioritizing these candidates. Herein, we propose an approach to prioritize disease genes using latent semantic mapping based on singular value decomposition. Our hypothesis is that functionally similar genes are likely to cause similar diseases. Measuring the functional similarity between known disease susceptibility genes and unknown genes therefore makes it possible to predict new disease susceptibility genes. Taking autism as an example, analysis of the top ten prioritized genes shows that they might be autism susceptibility genes, which indicates that our approach can discover new disease susceptibility genes. This novel approach to disease gene prioritization can uncover new disease susceptibility genes and latent disease-gene relations, and the prioritized results can also serve as computational evidence supporting interpretation and experimental follow-up by disease researchers.
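
    A hedged sketch of the latent semantic mapping idea is given below: genes are rows of a gene-by-annotation-term matrix, a truncated SVD defines the latent space, and candidate genes are ranked by cosine similarity to the known disease genes; the matrix, gene names, and dimensions are invented for illustration and are not the paper's data.

      # Sketch of SVD-based prioritization: genes are rows of a gene-by-term
      # annotation matrix; a truncated SVD gives a latent space in which candidates
      # are ranked by cosine similarity to the known disease genes.
      import numpy as np

      genes = ["geneA", "geneB", "geneC", "geneD", "geneE"]
      # Binary gene x functional-term matrix (5 genes x 6 terms), made up.
      M = np.array([[1, 1, 0, 0, 1, 0],
                    [1, 1, 1, 0, 0, 0],
                    [0, 0, 1, 1, 0, 1],
                    [1, 0, 0, 0, 1, 0],
                    [0, 1, 1, 0, 0, 1]], dtype=float)

      U, S, Vt = np.linalg.svd(M, full_matrices=False)
      k = 2
      latent = U[:, :k] * S[:k]                          # gene coordinates in latent space

      known_disease_genes = ["geneA", "geneB"]
      profile = latent[[genes.index(g) for g in known_disease_genes]].mean(axis=0)

      def cosine(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      candidates = [g for g in genes if g not in known_disease_genes]
      ranking = sorted(candidates,
                       key=lambda g: cosine(latent[genes.index(g)], profile),
                       reverse=True)
      print(ranking)                                     # most disease-like candidates first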

  5. Extreme-Scale Stochastic Particle Tracing for Uncertain Unsteady Flow Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Hanqi; He, Wenbin; Seo, Sangmin

    2016-11-13

    We present an efficient and scalable solution to estimate uncertain transport behaviors using stochastic flow maps (SFMs) for visualizing and analyzing uncertain unsteady flows. SFM computation is extremely expensive because it requires many Monte Carlo runs to trace densely seeded particles in the flow. We alleviate the computational cost by decoupling the time dependencies in SFMs so that we can process adjacent time steps independently and then compose them together for longer time periods. Adaptive refinement is also used to reduce the number of runs for each location. We then parallelize over tasks (packets of particles in our design) to achieve high efficiency in MPI/thread hybrid programming. Such a task model also enables CPU/GPU coprocessing. We show the scalability on two supercomputers, Mira (up to 1M Blue Gene/Q cores) and Titan (up to 128K Opteron cores and 8K GPUs), that can trace billions of particles in seconds.

  6. Real time target allocation in cooperative unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kudleppanavar, Ganesh

    The prolific development of unmanned aerial vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitations of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real-time target allocation algorithm that considers the available stored energy in each vehicle while minimizing the communication between UAVs. The algorithm uses a nearest-neighbor search to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and Xbee wireless modules shows the capability of the algorithm to execute efficiently on hardware with minimal computational complexity.
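
    A greedy sketch of energy-aware nearest-neighbor allocation; the distance-proportional energy model and the data layout are assumptions for illustration, not the thesis' exact formulation:

        import math

        def allocate(uav_states, targets):
            """Assign each new target to the UAV that can reach it from its
            last assigned target at the lowest energy cost.

            uav_states: list of dicts {"pos": (x, y), "energy": float}
            targets:    list of (x, y) target positions
            Cost model (an assumption): energy spent is proportional to distance flown.
            """
            plan = {i: [] for i in range(len(uav_states))}
            for t in targets:
                best, best_cost = None, math.inf
                for i, u in enumerate(uav_states):
                    cost = math.dist(u["pos"], t)        # nearest-neighbor criterion
                    if cost < best_cost and u["energy"] >= cost:
                        best, best_cost = i, cost
                if best is None:
                    continue                             # no UAV has enough energy left
                plan[best].append(t)
                uav_states[best]["pos"] = t              # UAV ends at the new target
                uav_states[best]["energy"] -= best_cost
            return plan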

  7. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques; however, the FE technique requires a burdensome meshing task, while the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to combine the advantages of the FE and EFG methods: the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is also applied to compute the Jacobian in the inverse problem. Utilizing 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.

  8. GPU-based RFA simulation for minimally invasive cancer treatment of liver tumours.

    PubMed

    Mariappan, Panchatcharam; Weir, Phil; Flanagan, Ronan; Voglreiter, Philip; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Busse, Harald; Futterer, Jurgen; Portugaller, Horst Rupert; Sequeiros, Roberto Blanco; Kolesnik, Marina

    2017-01-01

    Radiofrequency ablation (RFA) is one of the most popular and well-standardized minimally invasive cancer treatments (MICT) for liver tumours, employed where surgical resection has been contraindicated. Less-experienced interventional radiologists (IRs) require an appropriate planning tool for the treatment to help avoid incomplete treatment and so reduce the tumour recurrence risk. Although a few tools are available to predict the ablation lesion geometry, the process is computationally expensive. In addition, our implementation uses a few patient-specific parameters to improve the accuracy of the lesion prediction. Advanced heterogeneous computing using personal computers, incorporating the graphics processing unit (GPU) and the central processing unit (CPU), is proposed to predict the ablation lesion geometry. The most recent GPU technology is used to accelerate the finite element approximation of Pennes' bioheat equation and a three-state cell model. Patient-specific input parameters are used in the bioheat model to improve the accuracy of the predicted lesion. A fast GPU-based RFA solver is developed to predict the lesion by doing most of the computational tasks in the GPU, while reserving the CPU for concurrent tasks such as lesion extraction based on the heat deposition at each finite element node. The solver takes less than 3 min for a treatment duration of 26 min. When the model receives patient-specific input parameters, the deviation between the real and predicted lesion is below 3 mm. A multi-centre retrospective study indicates that the fast RFA solver is capable of providing the IR with the predicted lesion in the short time period before the intervention begins, when the patient has been clinically prepared for the treatment.
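
    For illustration, a simplified explicit finite-difference step of Pennes' bioheat equation; the paper uses a GPU finite element solver, the constants here are order-of-magnitude tissue values rather than patient-specific parameters, and boundaries are periodic via np.roll for brevity:

        import numpy as np

        # Assumed order-of-magnitude tissue constants, not patient-specific.
        RHO_C = 3.6e6   # volumetric heat capacity of tissue [J/(m^3 K)]
        K = 0.5         # thermal conductivity [W/(m K)]
        PERF = 2.0e3    # perfusion term rho_b * c_b * w_b [W/(m^3 K)]
        T_ART = 37.0    # arterial blood temperature [degrees C]

        def bioheat_step(T, Q, dx, dt):
            """One explicit finite-difference step of Pennes' bioheat equation
            on a 3D temperature grid T with volumetric heat source Q."""
            lap = (-6.0 * T
                   + np.roll(T, 1, 0) + np.roll(T, -1, 0)
                   + np.roll(T, 1, 1) + np.roll(T, -1, 1)
                   + np.roll(T, 1, 2) + np.roll(T, -1, 2)) / dx ** 2
            return T + dt * (K * lap + PERF * (T_ART - T) + Q) / RHO_C

        T = np.full((32, 32, 32), 37.0)             # tissue at body temperature
        Q = np.zeros_like(T); Q[16, 16, 16] = 5e7   # RF heat source at the electrode
        for _ in range(100):
            T = bioheat_step(T, Q, dx=1e-3, dt=0.01)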

  9. Part-task vs. whole-task training on a supervisory control task

    NASA Technical Reports Server (NTRS)

    Battiste, Vernol

    1987-01-01

    The efficacy of part-task training for the psychomotor portion of a supervisory control simulation was compared to that of whole-task training, using six subjects in each group, who were asked to perform the task as quickly as possible. Part-task training was provided with the cursor-control device prior to transition to the whole task. The analysis of both the training and experimental trials demonstrated a significant performance advantage for the part-task group: the tasks were performed better and at higher speed. Although the subjects finally achieved the same level of performance in terms of score, the part-task method was preferable for economic reasons, since simple pretraining systems are significantly less expensive than whole-task training systems.

  10. Autism: Hard to Switch from Details to the Whole.

    PubMed

    Soriano, María Felipa; Ibáñez-Molina, Antonio J; Paredes, Natalia; Macizo, Pedro

    2017-12-18

    It has long been proposed that individuals with autism exhibit superior processing of details at the expense of impaired global processing. This theory has received some empirical support, but results are mixed. In this research we studied local and global processing in ASD and typically developing children, with an adaptation of the Navon task designed to measure congruency effects between local and global stimuli and switching cost between local and global tasks. ASD children showed preserved global processing; however, compared to typically developing children, they exhibited more facilitation from congruent local stimuli when they performed the global task. In addition, children with ASD had more switching cost than typically developing children only when they switched from the local to the global task, reflecting a specific difficulty in disengaging from local stimuli. Together, these results suggest that ASD is characterized by a tendency to process local details: children with ASD benefit from the processing of local stimuli, at the expense of an increased cost to disengage from local stimuli when global processing is needed. Thus, this work demonstrates experimentally the advantages and disadvantages of increased local processing in children with ASD.

  11. Two-phase strategy of controlling motor coordination determined by task performance optimality.

    PubMed

    Shimansky, Yury P; Rand, Miya K

    2013-02-01

    A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.

  12. A universal preconditioner for simulating condensed phase materials.

    PubMed

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
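
    A rough sketch of the idea, assuming a plain graph-Laplacian form; the published preconditioner weights entries by interatomic distance and estimates its coefficients automatically:

        import numpy as np
        from scipy.sparse import lil_matrix, identity
        from scipy.sparse.linalg import spsolve

        def neighbour_preconditioner(positions, r_cut=3.0, mu=1.0, c=0.1):
            """Sparse preconditioner from the neighbourhood structure alone: a
            graph Laplacian over atom pairs closer than r_cut, shifted by c*I
            to keep it positive definite (coefficients are placeholders)."""
            n = len(positions)
            P = lil_matrix((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(positions[i] - positions[j]) < r_cut:
                        P[i, j] = P[j, i] = -mu
                        P[i, i] += mu
                        P[j, j] += mu
            return (P + c * identity(n)).tocsr()

        def preconditioned_step(positions, grad, alpha=0.1):
            """Gradient-descent step preconditioned per Cartesian component:
            solve P d = -g instead of stepping along -g directly."""
            P = neighbour_preconditioner(positions)
            d = np.column_stack([spsolve(P, -grad[:, k]) for k in range(3)])
            return positions + alpha * d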

  13. A universal preconditioner for simulating condensed phase materials

    NASA Astrophysics Data System (ADS)

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-01

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
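
    The convolution baseline can be sketched in a few lines; the scales and stacking are illustrative, and the wavelet variant would replace each row with a discrete wavelet decomposition level:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def fingerprint(spectrum, scales=(1, 2, 4, 8, 16)):
            """Stack first-derivative-of-Gaussian responses of a 1D spectrum
            at several scales into one multiresolution feature array."""
            return np.stack([gaussian_filter1d(spectrum, sigma=s, order=1)
                             for s in scales])

        fp = fingerprint(np.sin(np.linspace(0, 6, 211)))   # 5 x 211 fingerprint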

  15. Metamodels for Computer-Based Engineering Design: Survey and Recommendations

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
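
    A minimal example of the metamodel workflow, with a radial basis function surrogate standing in for the surveyed techniques; the toy analysis function and design sizes are invented for illustration:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def expensive_analysis(x):            # stand-in for a costly simulation code
            return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(40, 2))  # a small design of experiments
        y = expensive_analysis(X)

        surrogate = RBFInterpolator(X, y)     # radial basis function metamodel
        X_new = rng.uniform(-1, 1, size=(5, 2))
        print(surrogate(X_new))               # cheap predictions replace the code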

  16. What Would a Graph Look Like in this Layout? A Machine Learning Approach to Large Graph Visualization.

    PubMed

    Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu

    2018-01-01

    Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.

  17. Use of Hilbert Curves in Parallelized CUDA code: Interaction of Interstellar Atoms with the Heliosphere

    NASA Astrophysics Data System (ADS)

    Destefano, Anthony; Heerikhuisen, Jacob

    2015-04-01

    Fully 3D particle simulations can be a computationally and memory intensive task, especially when high resolution grid cells are required. The problem becomes further complicated when parallelization is needed. In this work we focus on computational methods to solve these difficulties. Hilbert curves are used to map the 3D particle space to the 1D contiguous memory space. This method of organization allows for minimized cache misses on the GPU as well as a sorted structure that is equivalent to an octal tree data structure. This type of sorted structure is attractive for use in adaptive mesh implementations due to the logarithmic search time. Implementations using the Message Passing Interface (MPI) library and NVIDIA's parallel computing platform CUDA will be compared, as MPI is commonly used on server nodes with many CPUs. We will also compare static grid structures with those of adaptive mesh structures. The physical test bed will be simulating heavy interstellar atoms interacting with a background plasma, the heliosphere, simulated by a fully consistent coupled MHD/kinetic particle code. It is known that charge exchange is an important factor in space plasmas; specifically, it modifies the structure of the heliosphere itself. We would like to thank the Alabama Supercomputer Authority for the use of their computational resources.
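
    To illustrate the mapping of 3D coordinates to a 1D memory order, here is a Morton (Z-order) sketch; the Hilbert curve used in this work preserves locality better, but its encoding is more involved:

        def morton3d(x, y, z, bits=10):
            """Interleave the bits of integer cell coordinates into one 1D key.
            A Morton (Z-order) curve stands in here for brevity; the Hilbert
            curve of the paper gives stronger locality guarantees."""
            key = 0
            for i in range(bits):
                key |= (((x >> i) & 1) << (3 * i)
                        | ((y >> i) & 1) << (3 * i + 1)
                        | ((z >> i) & 1) << (3 * i + 2))
            return key

        # Sorting particles by their curve key places spatial neighbours close
        # together in memory, which is what reduces GPU cache misses.
        particles = [(5, 3, 9), (5, 3, 8), (63, 0, 1)]
        particles.sort(key=lambda p: morton3d(*p))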

  18. Manifold learning of brain MRIs by deep learning.

    PubMed

    Brosch, Tom; Tam, Roger

    2013-01-01

    Manifold learning of medical images plays a potentially important role for modeling anatomical variability within a population, with applications that include segmentation, registration, and prediction of clinical parameters. This paper describes a novel method for learning the manifold of 3D brain images that, unlike most existing manifold learning methods, does not require the manifold space to be locally linear, and does not require a predefined similarity measure or a prebuilt proximity graph. Our manifold learning method is based on deep learning, a machine learning approach that uses layered networks (called deep belief networks, or DBNs) and that has received much attention recently in the computer vision field due to its success in object recognition tasks. DBNs have traditionally been too computationally expensive for application to 3D images due to the large number of trainable parameters. Our primary contributions are (1) a much more computationally efficient training method for DBNs that makes training on 3D medical images with a resolution of up to 128 x 128 x 128 practical, and (2) the demonstration that DBNs can learn a low-dimensional manifold of brain volumes that detects modes of variations that correlate to demographic and disease parameters.

  19. Recent advances, and unresolved issues, in the application of computational modelling to the prediction of the biological effects of nanomaterials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkler, David A., E-mail: dave.winkler@csiro.au

    2016-05-15

    Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned they cannot assess the potential hazards of these materials adequately, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly those that are QSAR-based, have made in understanding and predicting potentially adverse biological effects of nanomaterials, and also the limitations and pitfalls of these methods. - Highlights: • Nanomaterials regulators need good information to make good decisions. • Nanomaterials and their interactions with biology are very complex. • Computational methods use existing data to predict properties of new nanomaterials. • Statistical, data driven modelling methods have been successfully applied to this task. • Much more must be learnt before robust toolkits will be widely usable by regulators.

  20. Parallel hyperspectral image reconstruction using random projections

    NASA Astrophysics Data System (ADS)

    Sevilla, Jorge; Martín, Gabriel; Nascimento, José M. P.

    2016-10-01

    Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated as an effective and very light way to reduce the number of measurements in hyperspectral data, thus reducing the data to be transmitted to the Earth station. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPU) using the compute unified device architecture (CUDA). Experimental results conducted using synthetic and real hyperspectral datasets on the GPU architecture by NVIDIA: GeForce GTX 980, reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times when compared with the processing time of SpeCA running on one core of the Intel i7-4790K CPU (3.4GHz), with 32 Gbyte memory.
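
    A sketch of subspace-based recovery from random projections; SpeCA estimates the subspace blindly, whereas here the basis E is assumed known, and all sizes are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)
        n, k, m = 224, 10, 40            # bands, subspace dimension, measurements

        E = np.linalg.qr(rng.standard_normal((n, k)))[0]   # low-dim spectral subspace
        x = E @ rng.standard_normal(k)                     # a hyperspectral pixel in it
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random projection matrix

        y = Phi @ x                                        # compressed measurements
        a, *_ = np.linalg.lstsq(Phi @ E, y, rcond=None)    # solve in the subspace
        x_hat = E @ a                                      # reconstructed pixel
        print(np.linalg.norm(x - x_hat))                   # ~0 whenever m >= k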

  1. Using quantum chemistry muscle to flex massive systems: How to respond to something perturbing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Colleen

    Computational chemistry uses the theoretical advances of quantum mechanics and the algorithmic and hardware advances of computer science to give insight into chemical problems. It is currently possible to do highly accurate quantum chemistry calculations, but the most accurate methods are very computationally expensive. Thus it is only feasible to do highly accurate calculations on small molecules, since typically more computationally efficient methods are also less accurate. The overall goal of my dissertation work has been to try to decrease the computational expense of calculations without decreasing the accuracy. In particular, my dissertation work focuses on fragmentation methods, intermolecular interaction methods, analytic gradients, and taking advantage of new hardware.

  2. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...

  3. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...

  4. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...

  5. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms, sometimes also referred to as “multistate” algorithms, model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
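
    A generic two-state sketch of the multiconfigurational idea (not the paper's specific implementation): the system energy is the lowest eigenvalue of a small Hamiltonian whose diagonal holds the energies of the individual bonding topologies:

        import numpy as np

        def two_state_energy(v11, v22, v12):
            """Ground-state energy and per-topology weights for a two-topology
            Hamiltonian H = [[V11, V12], [V12, V22]]."""
            H = np.array([[v11, v12], [v12, v22]])
            energies, vectors = np.linalg.eigh(H)   # eigenvalues in ascending order
            c = vectors[:, 0]                       # ground-state coefficients
            return energies[0], c ** 2

        E0, weights = two_state_energy(v11=-10.0, v22=-9.5, v12=-1.2)
        # Forces follow as the analogous weighted combination of per-topology
        # forces, which is why every step must evaluate all bonding topologies.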

  6. Improved configuration control for redundant robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.; Colbaugh, R.

    1990-01-01

    This article presents a singularity-robust task-prioritized reformulation of the configuration control scheme for redundant robot manipulators. This reformulation suppresses large joint velocities near singularities, at the expense of small task trajectory errors. This is achieved by optimally reducing the joint velocities to induce minimal errors in the task performance by modifying the task trajectories. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion, when both cannot be achieved exactly. The improved configuration control scheme is illustrated for a variety of additional tasks, and extensive simulation results are presented.
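
    The standard damped least-squares step with a null-space term illustrates the machinery involved; configuration control is formulated somewhat differently in the article, and the gains and damping below are placeholders:

        import numpy as np

        def redundant_step(J, dx, grad_h, damping=0.05, k=0.1):
            """Singularity-robust resolved-rate step for a redundant arm.

            J:      end-effector Jacobian (m x n, n > m)
            dx:     desired end-effector velocity (m,)
            grad_h: gradient of an additional-task criterion to ascend (n,)
            The damping term bounds joint velocities near singularities at the
            cost of small task-space error; the null-space term serves the
            lower-priority task without disturbing the end-effector motion.
            """
            m = J.shape[0]
            JJt = J @ J.T + damping ** 2 * np.eye(m)   # damped least-squares inverse
            J_pinv = J.T @ np.linalg.solve(JJt, np.eye(m))
            null_proj = np.eye(J.shape[1]) - J_pinv @ J
            return J_pinv @ dx + null_proj @ (k * grad_h)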

  7. Coupling of a continuum ice sheet model and a discrete element calving model using a scientific workflow system

    NASA Astrophysics Data System (ADS)

    Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut

    2017-04-01

    Scientific communities generate complex simulations through the orchestration of semi-structured analysis pipelines, which involves the execution of large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling the ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case, a workflow, which requires the execution of a continuum ice flow model and a discrete element based calving model in an iterative manner. Apart from the execution, this workflow also contains data format conversion tasks that support the execution of ice flow and calving by means of transitions through sequential, nested and iterative steps. Thus, the management and monitoring of all the processing tasks, including data management and transfer, becomes complex. From the implementation perspective, this workflow model was initially developed as a set of scripts using static data input and output references. As more scripts and modifications were introduced to meet user requirements, debugging and validation of results became increasingly cumbersome. To address these problems, we identified the need for a high-level scientific workflow tool through which all the above-mentioned processes can be achieved in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to different heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for coupling of massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk we present how the use of a high-level scientific workflow middleware makes reproducibility of results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).

  8. Nurse extenders offer a way to trim staff expenses.

    PubMed

    Eastaugh, S R; Regan-Donovan, M

    1990-04-01

    Troubles confronting hospital nursing--from a national shortage of nurses to low morale, high turnover, and rising costs of replacing and retaining staff members--require creative approaches and a rethinking of traditional primary care nursing. Nurse extender programs place non-nursing tasks in the hands of technicians trained to deliver meals, transport patients, take vital signs, and perform other patient care tasks.

  9. Searching for New Answers: The Application of Task-Technology Fit to E-Textbook Usage

    ERIC Educational Resources Information Center

    Gerhart, Natalie; Peak, Daniel A.; Prybutok, Victor R.

    2015-01-01

    Students have been slow to adopt e-textbooks even though they are often less expensive than traditional textbooks. Prior e-textbook research has focused on adoption behavior, with little research to date on how students perceive e-textbooks fitting their needs. This work builds upon Task-Technology Fit (TTF) and Consumer Acceptance and Use of…

  10. A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980-97.

    PubMed

    Mitchell, E; Sullivan, F

    2001-02-03

    To appraise findings from studies examining the impact of computers on primary care consultations. Systematic review of world literature from 1980 to 1997. 5475 references were identified from electronic databases (Medline, Science Citation Index, Social Sciences Citation Index, Index of Scientific and Technical Proceedings, Embase, OCLC FirstSearch Proceedings), bibliographies, books, identified articles, and by authors active in the field. 1892 eligible abstracts were independently rated, and 89 studies met the inclusion criteria. Effect on doctors' performance and patient outcomes; attitudes towards computerisation. 61 studies examined effects of computers on practitioners' performance, 17 evaluated their impact on patient outcome, and 20 studied practitioners' or patients' attitudes. Computer use during consultations lengthened the consultation. Reminder systems for preventive tasks and disease management improved process rates, although some returned to pre-intervention levels when reminders were stopped. Use of computers for issuing prescriptions increased prescribing of generic drugs, and use of computers for test ordering led to cost savings and fewer unnecessary tests. There were no negative effects on those patient outcomes evaluated. Doctors and patients were generally positive about use of computers, but issues of concern included their impact on privacy, the doctor-patient relationship, cost, time, and training needs. Primary care computing systems can improve practitioner performance, particularly for health promotion interventions. This may be at the expense of patient initiated activities, making many practitioners suspicious of the negative impact on relationships with patients. There remains a dearth of evidence evaluating effects on patient outcomes.

  11. A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980-97

    PubMed Central

    Mitchell, Elizabeth; Sullivan, Frank

    2001-01-01

    Objectives To appraise findings from studies examining the impact of computers on primary care consultations. Design Systematic review of world literature from 1980 to 1997. Data sources 5475 references were identified from electronic databases (Medline, Science Citation Index, Social Sciences Citation Index, Index of Scientific and Technical Proceedings, Embase, OCLC FirstSearch Proceedings), bibliographies, books, identified articles, and by authors active in the field. 1892 eligible abstracts were independently rated, and 89 studies met the inclusion criteria. Main outcome measures Effect on doctors' performance and patient outcomes; attitudes towards computerisation. Results 61 studies examined effects of computers on practitioners' performance, 17 evaluated their impact on patient outcome, and 20 studied practitioners' or patients' attitudes. Computer use during consultations lengthened the consultation. Reminder systems for preventive tasks and disease management improved process rates, although some returned to pre-intervention levels when reminders were stopped. Use of computers for issuing prescriptions increased prescribing of generic drugs, and use of computers for test ordering led to cost savings and fewer unnecessary tests. There were no negative effects on those patient outcomes evaluated. Doctors and patients were generally positive about use of computers, but issues of concern included their impact on privacy, the doctor-patient relationship, cost, time, and training needs. Conclusions Primary care computing systems can improve practitioner performance, particularly for health promotion interventions. This may be at the expense of patient initiated activities, making many practitioners suspicious of the negative impact on relationships with patients. There remains a dearth of evidence evaluating effects on patient outcomes. PMID:11157532

  12. An efficient approach to imaging underground hydraulic networks

    NASA Astrophysics Data System (ADS)

    Kumar, Mohi

    2012-07-01

    To better locate natural resources, treat pollution, and monitor underground networks associated with geothermal plants, nuclear waste repositories, and carbon dioxide sequestration sites, scientists need to be able to accurately characterize and image fluid seepage pathways below ground. With these images, scientists can gain knowledge of soil moisture content, the porosity of geologic formations, concentrations and locations of dissolved pollutants, and the locations of oil fields or buried liquid contaminants. Creating images of the unknown hydraulic environments underfoot is a difficult task that has typically relied on broad extrapolations from characteristics and tests of rock units penetrated by sparsely positioned boreholes. Such methods, however, cannot identify small-scale features and are very expensive to reproduce over a broad area. Further, the techniques through which information is extrapolated rely on clunky and mathematically complex statistical approaches requiring large amounts of computational power.

  13. A universal preconditioner for simulating condensed phase materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Packwood, David; Ortner, Christoph, E-mail: c.ortner@warwick.ac.uk; Kermode, James, E-mail: j.r.kermode@warwick.ac.uk

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  14. Deep learning in the small sample size setting: cascaded feed forward neural networks for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke

    2016-03-01

    Deep learning refers to a large set of neural network based algorithms that have emerged as promising machine learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which are different from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging related tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub region of the original image that contains the organ of interest. By layering several such stacks together a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.

  15. DIALIGN P: fast pair-wise and multiple sequence alignment using parallel processors.

    PubMed

    Schmollinger, Martin; Nieselt, Kay; Kaufmann, Michael; Morgenstern, Burkhard

    2004-09-09

    Parallel computing is frequently used to speed up computationally expensive tasks in Bioinformatics. Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments that are used as a first step to multiple alignment account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristic that splits sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the program running time of DIALIGN by up to 97%. By distributing sub-routines to multiple processors, the running time of DIALIGN can be substantially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.
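
    Strategy (a) parallelizes naturally because the pairwise jobs share no state; a sketch with a process pool follows, where the scoring function is a stand-in for DIALIGN's pair alignment routine:

        from itertools import combinations
        from multiprocessing import Pool

        def align_pair(pair):
            """Stand-in for a pairwise alignment routine (the real program
            would call DIALIGN's pair alignment here); returns names and a
            crude identity score."""
            a, b = pair
            return (a[0], b[0]), sum(x == y for x, y in zip(a[1], b[1]))

        if __name__ == "__main__":
            seqs = [("s1", "ACGTAC"), ("s2", "ACGTTC"), ("s3", "AGGTAC")]
            # All pairwise alignments are independent, so they distribute
            # across processors with no effect on the output alignments.
            with Pool() as pool:
                results = pool.map(align_pair, combinations(seqs, 2))
            print(results)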

  16. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.

  17. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
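
    A greedy sketch of the max-min Hamming selection idea; the published VDA implementation may optimize this selection differently:

        import numpy as np

        def vda_like_selection(preds, budget):
            """Greedily pick validation items that maximize the minimum
            Hamming distance between the algorithms' prediction vectors.

            preds:  algorithms x items 0/1 prediction matrix (>= 2 algorithms)
            budget: number of validation experiments to select
            """
            chosen = []
            for _ in range(budget):
                best_item, best_score = None, -1
                for j in range(preds.shape[1]):
                    if j in chosen:
                        continue
                    cols = preds[:, chosen + [j]]
                    # pairwise Hamming distances between algorithm rows
                    d = [(cols[a] != cols[b]).sum()
                         for a in range(len(cols))
                         for b in range(a + 1, len(cols))]
                    if min(d) > best_score:
                        best_item, best_score = j, min(d)
                chosen.append(best_item)
            return chosen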

  18. Blocked inverted indices for exact clustering of large chemical spaces.

    PubMed

    Thiel, Philipp; Sach-Peltason, Lisa; Ottmann, Christian; Kohlbacher, Oliver

    2014-09-22

    The calculation of pairwise compound similarities based on fingerprints is one of the fundamental tasks in chemoinformatics. Methods for efficient calculation of compound similarities are of the utmost importance for various applications like similarity searching or library clustering. With the increasing size of public compound databases, exact clustering of these databases is desirable, but often computationally prohibitively expensive. We present an optimized inverted index algorithm for the calculation of all pairwise similarities on 2D fingerprints of a given data set. In contrast to other algorithms, it neither requires GPU computing nor yields a stochastic approximation of the clustering. The algorithm has been designed to work well with multicore architectures and shows excellent parallel speedup. As an application example of this algorithm, we implemented a deterministic clustering application, which has been designed to decompose virtual libraries comprising tens of millions of compounds in a short time on current hardware. Our results show that our implementation achieves more than 400 million Tanimoto similarity calculations per second on a common desktop CPU. Deterministic clustering of the available chemical space thus can be done on modern multicore machines within a few days.
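
    The core inverted-index idea in sketch form; the paper's blocked, multicore variant adds optimizations well beyond this, but the principle is that only compound pairs sharing at least one feature are ever touched:

        from collections import defaultdict

        def tanimoto_all_pairs(fingerprints):
            """All pairwise Tanimoto similarities via an inverted index.

            fingerprints: list of sets of "on" bit positions per compound
            """
            index = defaultdict(list)              # bit -> compounds containing it
            for cid, bits in enumerate(fingerprints):
                for b in bits:
                    index[b].append(cid)

            common = defaultdict(int)              # (i, j) -> |A & B|
            for members in index.values():
                for i in range(len(members)):
                    for j in range(i + 1, len(members)):
                        common[(members[i], members[j])] += 1

            # Tanimoto(A, B) = |A & B| / (|A| + |B| - |A & B|)
            return {(i, j): c / (len(fingerprints[i]) + len(fingerprints[j]) - c)
                    for (i, j), c in common.items()}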

  19. 76 FR 9349 - Jim Woodruff Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-17

    ... month. Southeastern would compute its purchased power obligation for each delivery point monthly... rates to include a pass-through of purchased power expenses. The capacity and energy charges to preference customers can be reduced because purchased power expenses will be recovered in a separate, pass...

  20. Inferring Regulatory Networks by Combining Perturbation Screens and Steady State Gene Expression Profiles

    PubMed Central

    Michailidis, George

    2014-01-01

    Reconstructing transcriptional regulatory networks is an important task in functional genomics. Data obtained from experiments that perturb genes by knockouts or RNA interference contain useful information for addressing this reconstruction problem. However, such data can be limited in size and/or are expensive to acquire. On the other hand, observational data of the organism in steady state (e.g., wild-type) are more readily available, but their informational content is inadequate for the task at hand. We develop a computational approach to appropriately utilize both data sources for estimating a regulatory network. The proposed approach is based on a three-step algorithm to estimate the underlying directed but cyclic network, that uses as input both perturbation screens and steady state gene expression data. In the first step, the algorithm determines causal orderings of the genes that are consistent with the perturbation data, by combining an exhaustive search method with a fast heuristic that in turn couples a Monte Carlo technique with a fast search algorithm. In the second step, for each obtained causal ordering, a regulatory network is estimated using a penalized likelihood based method, while in the third step a consensus network is constructed from the highest scored ones. Extensive computational experiments show that the algorithm performs well in reconstructing the underlying network and clearly outperforms competing approaches that rely only on a single data source. Further, it is established that the algorithm produces a consistent estimate of the regulatory network. PMID:24586224

  1. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce time required to find a good near optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
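
    The non-dominated sorting at the heart of center selection can be sketched briefly as a first-front filter over the two objectives (both written to be maximized; the point values below are invented):

        def pareto_front(points):
            """First non-dominated front over two objectives, both maximized:
            here (-f(x), minimum distance to previously evaluated points), so
            negating the expensive value turns minimization into maximization."""
            front = []
            for i, p in enumerate(points):
                dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                                for j, q in enumerate(points) if j != i)
                if not dominated:
                    front.append(p)
            return front

        # Two objectives per evaluated point: solution quality and spread.
        evaluated = [(-3.2, 0.9), (-1.1, 0.1), (-3.5, 0.5), (-2.0, 0.7)]
        print(pareto_front(evaluated))   # the dominated point (-3.5, 0.5) drops out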

  2. A Case against Computer Symbolic Manipulation in School Mathematics Today.

    ERIC Educational Resources Information Center

    Waits, Bert K.; Demana, Franklin

    1992-01-01

    Presented are two reasons discouraging the use of computer symbolic manipulation systems in school mathematics at present: the cost of computer laboratories or expensive pocket computers, and the impracticality of exact solution representations. Although development with this technology in mathematics education advances, graphing calculators are recommended to…

  3. Sex differences on a computerized mental rotation task disappear with computer familiarization.

    PubMed

    Roberts, J E; Bell, M A

    2000-12-01

    The area of cognitive research that has produced the most consistent sex differences is spatial ability. Particularly, men consistently perform better on mental rotation tasks than do women. This study examined the effects of familiarization with a computer on performance of a computerized two-dimensional mental rotation task. Two groups of college students (N=44) performed the rotation task, with one group performing a color-matching task that allowed them to become familiar with the computer prior to the rotation task. Among the participants who only performed the rotation task, the 11 men performed better than the 11 women. Among the participants who performed the computer familiarization task before the rotation task, however, there were no sex differences on the mental rotation task between the 10 men and 12 women. These data indicate that sex differences on this two-dimensional task may reflect familiarity with the computer, not the mental rotation component of the task. Further research with larger samples and an increased range of task difficulty is encouraged.

  4. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to the other, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables the efficient combination of parallel storage access routines with sequential image processing operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.

  5. Real-time yield estimation based on deep learning

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Sheppard, Clay

    2017-05-01

    Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers to make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation based on the manual counting of fruits is a very time-consuming and expensive process, and it is not practical for big fields. Robotic systems, including Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently huge amounts of data have been gathered from agricultural fields, but efficient analysis of those data is still a challenging task. Computer vision approaches currently face different challenges in automatic counting of fruits or flowers, including occlusion caused by leaves, branches or other fruits, variance in natural illumination, and scale. In this paper a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination and scale. Experimental results in comparison to the state-of-the-art show the effectiveness of our algorithm.

  6. Biology Inspired Approach for Communal Behavior in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng

    2006-01-01

    Research in wireless sensor network technology has exploded in the last decade. Promises of complex and ubiquitous control of the physical environment by these networks open avenues for new kinds of science and business. Due to the small size and low cost of sensor devices, visionaries promise systems enabled by deployment of massive numbers of sensors working in concert. Although the reduction in size has been phenomenal, it results in severe limitations on the computing, communicating, and power capabilities of these devices. Under these constraints, research efforts have concentrated on developing techniques for performing relatively simple tasks with minimal energy expense, assuming some form of centralized control. Unfortunately, centralized control does not scale to massive size networks, and execution of simple tasks in sparsely populated networks will not lead to the sophisticated applications predicted. These must be enabled by new techniques dependent on local and autonomous cooperation between sensors to effect global functions. As a step in that direction, in this work we detail a technique whereby a large population of sensors can attain a global goal using only local information and by making only local decisions, without any form of centralized control.

  7. Development of a method to analyze orthopaedic practice expenses.

    PubMed

    Brinker, M R; Pierce, P; Siegel, G

    2000-03-01

    The purpose of the current investigation was to present a standard method by which an orthopaedic practice can analyze its practice expenses. To accomplish this, a five-step process was developed to analyze practice expenses using a modified version of activity-based costing. In this method, general ledger expenses were assigned to 17 activities that encompass all the tasks and processes typically performed in an orthopaedic practice. These 17 activities were identified in a practice expense study conducted for the American Academy of Orthopaedic Surgeons. To calculate the cost of each activity, financial data were used from a group of 19 orthopaedic surgeons in Houston, Texas. The activities that consumed the largest portion of the employee work force (person hours) were service patients in office (25.0% of all person hours), maintain medical records (13.6% of all person hours), and resolve collection disputes and rebill charges (12.3% of all person hours). The activities that comprised the largest portion of the total expenses were maintain facility (21.4%), service patients in office (16.0%), and sustain business by managing and coordinating practice (13.8%). The five-step process of analyzing practice expenses was relatively easy to perform and it may be used reliably by most orthopaedic practices.
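
    As a toy illustration of the allocation step in the method above, the sketch below spreads a pooled general-ledger expense over activities in proportion to the person-hours each activity consumes; the activity names echo the abstract, but the dollar figure and hour totals are made-up assumptions, not the Houston group's data.

        person_hours = {
            "service patients in office": 250,                    # hypothetical person-hours
            "maintain medical records": 136,
            "resolve collection disputes and rebill charges": 123,
        }
        ledger_pool = 100_000.00                                  # hypothetical pooled expense
        total_hours = sum(person_hours.values())
        for activity, hours in person_hours.items():
            cost = ledger_pool * hours / total_hours              # allocate in proportion to hours
            print(f"{activity}: ${cost:,.2f} ({hours / total_hours:.1%} of hours)")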

  8. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  9. ESHRE Task Force on Ethics and Law 10: surrogacy.

    PubMed

    Shenfield, F; Pennings, G; Cohen, J; Devroey, P; de Wert, G; Tarlatzis, B

    2005-10-01

    This 10th statement of the Task Force on Ethics and Law considers ethical questions specific to varied surrogacy arrangements. Surrogacy is especially complex as the interests of the intended parents, the surrogate, and the future child may differ. It is concluded that surrogacy is an acceptable method of assisted reproductive technology of the last resort for specific medical indications, for which only reimbursement of reasonable expenses is allowed.

  10. Shortened Nonword Repetition Task (NWR-S): A Simple, Quick, and Less Expensive Outcome to Identify Children with Combined Specific Language and Reading Impairment

    ERIC Educational Resources Information Center

    le Clercq, Carlijn M. P.; van der Schroeff, Marc P.; Rispens, Judith E.; Ruytjens, Liesbet; Goedegebure, André; van Ingen, Gijs; Franken, Marie-Christine

    2017-01-01

    Purpose: The purpose of this research note was to validate a simplified version of the Dutch nonword repetition task (NWR; Rispens & Baker, 2012). The NWR was shortened and scoring was transformed to correct/incorrect nonwords, resulting in the shortened NWR (NWR-S). Method: NWR-S and NWR performance were compared in the previously published…

  11. Continuous Odour Measurement with Chemosensor Systems

    NASA Astrophysics Data System (ADS)

    Boeker, Peter; Haas, T.; Diekmann, B.; Lammer, P. Schulze

    2009-05-01

    Continuous odour measurement is a challenging task for chemosensor systems. Firstly, a long-term, stable measurement mode must be guaranteed in order to preserve the validity of the time-consuming and expensive olfactometric calibration data. Secondly, a method is needed to deal with the incoming sensor data: continuous online detection of signal patterns, the correlated gas emission and the assigned odour data is essential for continuous odour measurement. Thirdly, there is a severe danger of over-fitting during odour calibration because of the high measurement uncertainty of olfactometry. In this contribution we present a technical solution for continuous measurements comprising a hybrid QMB sensor array and electrochemical cells. A set of software tools enables efficient data processing and calibration and computes the calibration parameters. The internal software of the measurement system's microcontroller processes the calibration parameters online to output the desired odour information.

  12. Ship detection in optical remote sensing images based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Zhao, Danpei; Cai, Bowen

    2017-10-01

    Automatic ship detection in optical remote sensing images has attracted wide attention for its broad applications. Major challenges for this task include interference from clouds, waves, and wakes, as well as high computational expense. We propose a fast and robust ship detection algorithm to address these issues. The framework for ship detection is designed based on deep convolutional neural networks (CNNs), which provide the accurate locations of ship targets in an efficient way. First, the deep CNN is designed to extract features. Then, a region proposal network (RPN) is applied to discriminate ship targets and regress the detection bounding boxes, in which the anchors are designed according to the intrinsic shape of ship targets. Experimental results on numerous panchromatic images demonstrate that, in comparison with other state-of-the-art ship detection methods, our method is more efficient and achieves higher detection accuracy and more precise bounding boxes in different complex backgrounds.
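
    A minimal sketch of what "anchors designed according to the intrinsic shape of ship targets" could look like in a region proposal network: anchor boxes are generated with elongated aspect ratios so that proposals match long, thin ship silhouettes. The base size, scales, and aspect ratios below are illustrative assumptions, not the authors' settings.

        import itertools

        def ship_anchors(base=16, scales=(4, 8, 16), aspect_ratios=(1/5, 1/3, 3, 5)):
            """Return (w, h) anchor shapes; elongated ratios reflect the long, thin shape of ships."""
            anchors = []
            for s, ar in itertools.product(scales, aspect_ratios):
                area = (base * s) ** 2
                w = (area * ar) ** 0.5     # width/height ratio = ar while preserving the area
                h = w / ar
                anchors.append((round(w, 1), round(h, 1)))
            return anchors

        print(ship_anchors())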

  13. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to balance global and local search ability. A performance test was carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, achieving effective optimal scheduling of cloud computing tasks.
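
    Below is a minimal differential-evolution sketch for a task-to-VM scheduling problem of the kind described above, with makespan as the fitness. The task lengths, VM speeds, and DE hyper-parameters are illustrative assumptions, and the paper's dynamic selection and mutation strategies are replaced here by plain DE/rand/1 with binomial crossover.

        import random

        task_len = [12, 7, 25, 9, 14, 30, 5, 18]   # hypothetical task lengths (MI)
        vm_speed = [10, 20, 30]                     # hypothetical VM speeds (MIPS)

        def makespan(assign):
            """Fitness: finish time of the busiest VM for a task->VM assignment vector."""
            load = [0.0] * len(vm_speed)
            for t, vm in zip(task_len, assign):
                load[vm] += t / vm_speed[vm]
            return max(load)

        def decode(x):
            # continuous genome in [0, n_vm) -> discrete VM index per task
            return [min(int(v), len(vm_speed) - 1) for v in x]

        NP, F, CR, GENS = 20, 0.5, 0.9, 100
        dim = len(task_len)
        pop = [[random.uniform(0, len(vm_speed)) for _ in range(dim)] for _ in range(NP)]
        for _ in range(GENS):
            for i in range(NP):
                a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                mutant = [min(max(a[d] + F * (b[d] - c[d]), 0), len(vm_speed) - 1e-9) for d in range(dim)]
                trial = [mutant[d] if random.random() < CR else pop[i][d] for d in range(dim)]
                if makespan(decode(trial)) <= makespan(decode(pop[i])):
                    pop[i] = trial
        best = min(pop, key=lambda x: makespan(decode(x)))
        print("assignment:", decode(best), "makespan:", round(makespan(decode(best)), 2))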

  14. Securing SIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as the personal identity, locations or even financial profiles. This observation has recently aroused new research interest on privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise and change in 3D viewpoint and illumination.
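
    A minimal sketch of the "randomly splitting the original image data" step described above: each pixel is split into two additive shares, one per cloud server, so that neither server alone learns the image while the shares still sum back to it. The modulus and toy pixel values are assumptions, and the paper's secure multiplication and comparison protocols are not shown.

        import secrets

        MOD = 2**16  # working modulus for pixel shares (assumption)

        def split(value):
            """Split an integer into two additive shares modulo MOD."""
            r = secrets.randbelow(MOD)
            return r, (value - r) % MOD

        def reconstruct(s1, s2):
            return (s1 + s2) % MOD

        image = [37, 200, 145, 12]   # toy pixel row
        shares_a, shares_b = zip(*(split(p) for p in image))   # server A holds shares_a, server B holds shares_b
        assert [reconstruct(a, b) for a, b in zip(shares_a, shares_b)] == image

        # Additions on shared data stay local to each server:
        sum_a, sum_b = sum(shares_a) % MOD, sum(shares_b) % MOD
        assert reconstruct(sum_a, sum_b) == sum(image) % MOD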

  15. SecSIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as the personal identity, locations or even financial profiles. This observation has recently aroused new research interest on privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise and change in 3D viewpoint and illumination.

  16. Privacy-preserving search for chemical compound databases.

    PubMed

    Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi

    2015-01-01

    Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.
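
    A minimal textbook Paillier sketch illustrating the additive-homomorphic property the protocol relies on: ciphertexts can be multiplied so that the underlying plaintexts add, which is what allows similarity terms to be accumulated without decryption. The tiny primes are for illustration only; this is not the authors' protocol or parameter choice, and real deployments use much larger moduli.

        import math, random

        p, q = 293, 433                  # toy primes; real systems use >=2048-bit moduli
        n, n2 = p * q, (p * q) ** 2
        g = n + 1
        lam = math.lcm(p - 1, q - 1)
        mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # inverse of L(g^lambda mod n^2) mod n

        def enc(m):
            r = random.randrange(1, n)
            while math.gcd(r, n) != 1:
                r = random.randrange(1, n)
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

        def dec(c):
            return ((pow(c, lam, n2) - 1) // n * mu) % n

        c1, c2 = enc(15), enc(27)
        assert dec((c1 * c2) % n2) == 42        # E(a) * E(b) decrypts to a + b
        assert dec(pow(c1, 3, n2)) == 45        # E(a) ^ k decrypts to k * a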

  17. Privacy-preserving search for chemical compound databases

    PubMed Central

    2015-01-01

    Background Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. Conclusion We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650

  18. 78 FR 50374 - Proposed Information Collection; Comment Request; Information and Communication Technology Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... expenses (purchases; and operating leases and rental payments) for four types of information and communication technology equipment and software (computers and peripheral equipment; ICT equipment, excluding computers and peripherals; electromedical and electrotherapeutic apparatus; and computer software, including...

  19. Influence of computer work under time pressure on cardiac activity.

    PubMed

    Shi, Ping; Hu, Sijung; Yu, Hongliu

    2015-03-01

    Computer users are often under stress when required to complete computer work within a required time. Work stress has repeatedly been associated with an increased risk for cardiovascular disease. The present study examined the effects of time pressure workload during computer tasks on cardiac activity in 20 healthy subjects. Heart rate, time domain and frequency domain indices of heart rate variability (HRV) and Poincaré plot parameters were compared among five computer tasks and two rest periods. Faster heart rate and decreased standard deviation of R-R interval were noted in response to computer tasks under time pressure. The Poincaré plot parameters showed significant differences between different levels of time pressure workload during computer tasks, and between computer tasks and the rest periods. In contrast, no significant differences were identified for the frequency domain indices of HRV. The results suggest that the quantitative Poincaré plot analysis used in this study was able to reveal the intrinsic nonlinear nature of the autonomically regulated cardiac rhythm. Specifically, heightened vagal tone occurred during the relaxation computer tasks without time pressure. In contrast, the stressful computer tasks with added time pressure stimulated cardiac sympathetic activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
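
    A minimal sketch of the standard quantitative Poincaré-plot descriptors commonly used in such analyses: SD1 (spread of successive R-R intervals perpendicular to the identity line, reflecting short-term, vagally mediated variability) and SD2 (spread along it, reflecting long-term variability). The R-R series below is made up, and the abstract does not specify which Poincaré parameters were used, so treat this only as an illustration.

        import math

        rr = [812, 798, 805, 820, 790, 802, 815, 793, 808, 800]   # toy R-R intervals in ms

        x = rr[:-1]                      # RR(n)
        y = rr[1:]                       # RR(n+1)
        diffs = [b - a for a, b in zip(x, y)]
        sums = [b + a for a, b in zip(x, y)]

        def sd(vals):
            m = sum(vals) / len(vals)
            return math.sqrt(sum((v - m) ** 2 for v in vals) / (len(vals) - 1))

        sd1 = sd(diffs) / math.sqrt(2)   # short-term (beat-to-beat) variability
        sd2 = sd(sums) / math.sqrt(2)    # long-term variability
        print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms, SD1/SD2 = {sd1 / sd2:.2f}")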

  20. Ceramic Adhesive for High Temperatures

    NASA Technical Reports Server (NTRS)

    Stevens, Everett G.

    1987-01-01

    Fused-silica/magnesium-phosphate adhesive resists high temperatures and vibrations. New adhesive unaffected by extreme temperatures and vibrations. By assuring direct bonding of gap fillers to tile sidewalls, the adhesive obviates the expensive and time-consuming task of removal, treatment, and replacement of tiles.

  1. Measurement and Validation of Bidirectional Reflectance of Space Shuttle and Space Station Materials for Computerized Lighting Models

    NASA Technical Reports Server (NTRS)

    Fletcher, Lauren E.; Aldridge, Ann M.; Wheelwright, Charles; Maida, James

    1997-01-01

    Task illumination has a major impact on human performance: What a person can perceive in his environment significantly affects his ability to perform tasks, especially in space's harsh environment. Training for lighting conditions in space has long depended on physical models and simulations to emulate the effect of lighting, but such tests are expensive and time-consuming. To evaluate lighting conditions not easily simulated on Earth, personnel at NASA Johnson Space Center's (JSC) Graphics Research and Analysis Facility (GRAF) have been developing computerized simulations of various illumination conditions using the ray-tracing program, Radiance, developed by Greg Ward at Lawrence Berkeley Laboratory. Because these computer simulations are only as accurate as the data used, accurate information about the reflectance properties of materials and light distributions is needed. JSC's Lighting Environment Test Facility (LETF) personnel gathered material reflectance properties for a large number of paints, metals, and cloths used in the Space Shuttle and Space Station programs, and processed these data into reflectance parameters needed for the computer simulations. They also gathered lamp distribution data for most of the light sources used, and validated the ability to accurately simulate lighting levels by comparing predictions with measurements for several ground-based tests. The result of this study is a database of material reflectance properties for a wide variety of materials, and lighting information for most of the standard light sources used in the Shuttle/Station programs. The combination of the Radiance program and GRAF's graphics capability form a validated computerized lighting simulation capability for NASA.

  2. Computer usage and task-switching during resident's working day: Disruptive or not?

    PubMed

    Méan, Marie; Garnier, Antoine; Wenger, Nathalie; Castioni, Julien; Waeber, Gérard; Marques-Vidal, Pedro

    2017-01-01

    Recent implementation of electronic health records (EHR) has dramatically changed medical ward organization. While residents in general internal medicine use EHR systems half of their working time, whether computer usage impacts residents' workflow remains uncertain. We aimed to observe the frequency of task-switches occurring during residents' work and to assess whether computer usage was associated with task-switching. In a large Swiss academic university hospital, we conducted, between May 26 and July 24, 2015, a time-motion study to assess how residents in general internal medicine organize their working day. We observed 49 day and 17 evening shifts of 36 residents, amounting to 697 working hours. During day shifts, residents spent 5.4 hours using a computer (mean total working time: 11.6 hours per day). On average, residents switched 15 times per hour from one task to another. Task-switching peaked between 8:00-9:00 and 16:00-17:00. Task-switching was not associated with residents' characteristics, and no association was found between task-switching and extra hours (Spearman r = 0.220, p = 0.137 for day and r = 0.483, p = 0.058 for evening shifts). Computer usage occurred more frequently at the beginning or end of day shifts and was associated with decreased overall task-switching. Task-switching occurs very frequently during residents' working day. Despite the fact that residents used a computer for half of their working time, computer usage was associated with decreased task-switching. Whether frequent task-switches and computer usage impact the quality of patient care and residents' work must be evaluated in further studies.

  3. Web-Based Job Submission Interface for the GAMESS Computational Chemistry Program

    ERIC Educational Resources Information Center

    Perri, M. J.; Weber, S. H.

    2014-01-01

    A Web site is described that facilitates use of the free computational chemistry software: General Atomic and Molecular Electronic Structure System (GAMESS). Its goal is to provide an opportunity for undergraduate students to perform computational chemistry experiments without the need to purchase expensive software.

  4. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.

    PubMed

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in cloud computing systems, this paper proposes a scheduling algorithm based on the driver of dynamic essential path (DDEP). The algorithm applies a predecessor-task layer priority strategy to handle the constraint relations among task nodes: each task node is assigned a priority value reflecting the scheduling order imposed by those constraints, and the task node list is generated from these priority values. To resolve the scheduling order of task nodes that share the same priority value, a dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes from the actual computation and communication costs of the task nodes during scheduling; the task node with the longest dynamic essential path is scheduled first, since the completion time of the task graph is indirectly determined by the finishing times of the task nodes on that path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meet a high-quality performance objective.
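
    A minimal sketch of the longest-path idea used as a tie-breaker above: for a small task DAG with computation and communication costs, the "essential path" length of each node can be computed by dynamic programming over its successors, and ready nodes with equal layer priority can then be ordered by that length. The graph and costs are illustrative, not the paper's benchmark, and the predecessor-layer priority step is omitted.

        from functools import lru_cache

        # toy task DAG: node -> list of (successor, communication cost)
        succ = {"A": [("B", 2), ("C", 3)], "B": [("D", 1)], "C": [("D", 4)], "D": []}
        comp = {"A": 5, "B": 3, "C": 6, "D": 2}        # computation cost per task node

        @lru_cache(maxsize=None)
        def essential_path(node):
            """Longest computation + communication path from `node` to an exit node."""
            tails = [c + essential_path(s) for s, c in succ[node]]
            return comp[node] + (max(tails) if tails else 0)

        # Order nodes with the longest essential path first (the tie-breaking rule).
        order = sorted(comp, key=essential_path, reverse=True)
        print({n: essential_path(n) for n in comp}, "->", order)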

  5. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path

    PubMed Central

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in cloud computing systems, this paper proposes a scheduling algorithm based on the driver of dynamic essential path (DDEP). The algorithm applies a predecessor-task layer priority strategy to handle the constraint relations among task nodes: each task node is assigned a priority value reflecting the scheduling order imposed by those constraints, and the task node list is generated from these priority values. To resolve the scheduling order of task nodes that share the same priority value, a dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes from the actual computation and communication costs of the task nodes during scheduling; the task node with the longest dynamic essential path is scheduled first, since the completion time of the task graph is indirectly determined by the finishing times of the task nodes on that path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meet a high-quality performance objective. PMID:27490901

  6. Computing Systems | High-Performance Computing | NREL

    Science.gov Websites

    investigate, build, and test models of complex phenomena or entire integrated systems that cannot be directly observed or manipulated in the lab, or that would be too expensive or time consuming. Models and visualizations

  7. A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid

    2004-01-01

    Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for use in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented by using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
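
    A minimal one-dimensional analogue of the approach described above (not the NASTRAN implementation): interior mesh points form a chain of linear springs whose stiffness, standing in for the spatially varying Young's modulus, is larger near the moving boundary, and the prescribed boundary displacement is diffused inward by solving the resulting tridiagonal system. The node count and stiffness law are assumptions.

        import numpy as np

        n = 11                                   # mesh nodes between wall (x=0) and far field (x=1)
        x = np.linspace(0.0, 1.0, n)
        k = 1.0 / (x[1:] + 0.05)                 # segment stiffness: stiffer near the moving wall

        # Assemble K u = 0 for interior nodes with Dirichlet data u[0] = d_wall, u[-1] = 0.
        d_wall = 0.2                             # prescribed boundary displacement
        A = np.zeros((n - 2, n - 2)); b = np.zeros(n - 2)
        for i in range(1, n - 1):
            kl, kr = k[i - 1], k[i]
            A[i - 1, i - 1] = kl + kr
            if i - 2 >= 0: A[i - 1, i - 2] = -kl
            if i < n - 2:  A[i - 1, i] = -kr
        b[0] += k[0] * d_wall                    # contribution of the displaced wall node
        u = np.concatenate(([d_wall], np.linalg.solve(A, b), [0.0]))
        print(np.round(x + u, 3))                # deformed node positions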

  8. Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis

    PubMed Central

    Steele, Joe; Bastola, Dhundy

    2014-01-01

    Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base–base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel–Ziv techniques from data compression. PMID:23904502
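
    A minimal sketch of the word-frequency approach discussed above: count k-mers in two sequences and compute the D2 statistic, the inner product of the two count vectors. The sequences and the word length k are illustrative assumptions.

        from collections import Counter

        def kmer_counts(seq, k=3):
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        def d2(seq_a, seq_b, k=3):
            """D2 statistic: sum over shared words of the product of their counts."""
            ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
            return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

        a = "ATGCGATACGCTTGCATGC"
        b = "ATGCGTTACGCTAGCATGC"
        print("D2 =", d2(a, b))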

  9. Entropy-based heavy tailed distribution transformation and visual analytics for monitoring massive network traffic

    NASA Astrophysics Data System (ADS)

    Han, Keesook J.; Hodge, Matthew; Ross, Virginia W.

    2011-06-01

    For monitoring network traffic, there is an enormous cost in collecting, storing, and analyzing network traffic datasets. Data-mining-based network traffic analysis is of growing interest in the cyber security community, but it is computationally expensive to find correlations between attributes in massive network traffic datasets. To lower the cost and reduce computational complexity, it is desirable to perform feasible statistical processing on effective reduced datasets instead of on the original full datasets. Because of the dynamic behavior of network traffic, traffic traces exhibit mixtures of heavy tailed statistical distributions or overdispersion. Heavy tailed network traffic characterization and visualization are important and essential tasks to measure network performance for the Quality of Services. However, heavy tailed distributions are limited in their ability to characterize real-time network traffic due to the difficulty of parameter estimation. The Entropy-Based Heavy Tailed Distribution Transformation (EHTDT) was developed to convert the heavy tailed distribution into a transformed distribution to find the linear approximation. The EHTDT linearization has the advantage of being amenable to characterize and aggregate overdispersion of network traffic in real time. Results of applying the EHTDT for innovative visual analytics to real network traffic data are presented.

  10. A Progressive Damage Model for unidirectional Fibre Reinforced Composites with Application to Impact and Penetration Simulation

    NASA Astrophysics Data System (ADS)

    Kerschbaum, M.; Hopmann, C.

    2016-06-01

    The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer aided engineering methods. This paper presents an original approach for an energy-based continuum damage model which accounts for stress/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution and consideration of load reversal effects. The physically based modelling approach enables experimental determination of all parameters on the ply level to avoid expensive inverse analysis procedures. The modelling strategy, implementation and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens with variation of the impact speed. The simulation results show that the presented approach enables a good representation of the force/displacement curves and, in particular, good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.

  11. Task Selection, Task Switching and Multitasking during Computer-Based Independent Study

    ERIC Educational Resources Information Center

    Judd, Terry

    2015-01-01

    Detailed logs of students' computer use, during independent study sessions, were captured in an open-access computer laboratory. Each log consisted of a chronological sequence of tasks representing either the application or the Internet domain displayed in the workstation's active window. Each task was classified using a three-tier schema…

  12. Monitoring task loading with multivariate EEG measures during complex forms of human-computer interaction

    NASA Technical Reports Server (NTRS)

    Smith, M. E.; Gevins, A.; Brown, H.; Karnik, A.; Du, R.

    2001-01-01

    Electroencephalographic (EEG) recordings were made while 16 participants performed versions of a personal-computer-based flight simulation task of low, moderate, or high difficulty. As task difficulty increased, frontal midline theta EEG activity increased and alpha band activity decreased. A participant-specific function that combined multiple EEG features to create a single load index was derived from a sample of each participant's data and then applied to new test data from that participant. Index values were computed for every 4 s of task data. Across participants, mean task load index values increased systematically with increasing task difficulty and differed significantly between the different task versions. Actual or potential applications of this research include the use of multivariate EEG-based methods to monitor task loading during naturalistic computer-based work.
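
    A minimal sketch in the spirit of the index described above: band power in the theta and alpha ranges is estimated per 4-s epoch with an FFT, and a simple theta/alpha ratio serves as the load index. The synthetic signal, sampling rate, band limits, and the ratio itself are assumptions, since the participant-specific combination function is not given in the abstract.

        import numpy as np

        fs, epoch_s = 256, 4                       # sampling rate (Hz) and epoch length (s), assumed
        t = np.arange(fs * epoch_s) / fs
        eeg = 8 * np.sin(2 * np.pi * 6 * t) + 4 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

        def band_power(x, lo, hi):
            freqs = np.fft.rfftfreq(x.size, 1 / fs)
            psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
            sel = (freqs >= lo) & (freqs < hi)
            return psd[sel].sum()

        theta = band_power(eeg, 4, 8)              # frontal midline theta rises with task load
        alpha = band_power(eeg, 8, 12)             # alpha falls with task load
        print("load index (theta/alpha) =", round(theta / alpha, 2))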

  13. 41 CFR 301-10.301 - How do I compute my mileage reimbursement?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false How do I compute my...-TRANSPORTATION EXPENSES Privately Owned Vehicle (POV) § 301-10.301 How do I compute my mileage reimbursement? You compute mileage reimbursement by multiplying the distance traveled, determined under § 301-10.302 of this...

  14. Placebo effect of medication cost in Parkinson disease: a randomized double-blind study.

    PubMed

    Espay, Alberto J; Norris, Matthew M; Eliassen, James C; Dwivedi, Alok; Smith, Matthew S; Banks, Christi; Allendorfer, Jane B; Lang, Anthony E; Fleck, David E; Linke, Michael J; Szaflarski, Jerzy P

    2015-02-24

    To examine the effect of cost, a traditionally "inactive" trait of intervention, as contributor to the response to therapeutic interventions. We conducted a prospective double-blind study in 12 patients with moderate to severe Parkinson disease and motor fluctuations (mean age 62.4 ± 7.9 years; mean disease duration 11 ± 6 years) who were randomized to a "cheap" or "expensive" subcutaneous "novel injectable dopamine agonist" placebo (normal saline). Patients were crossed over to the alternate arm approximately 4 hours later. Blinded motor assessments in the "practically defined off" state, before and after each intervention, included the Unified Parkinson's Disease Rating Scale motor subscale, the Purdue Pegboard Test, and a tapping task. Measurements of brain activity were performed using a feedback-based visual-motor associative learning functional MRI task. Order effect was examined using stratified analysis. Although both placebos improved motor function, benefit was greater when patients were randomized first to expensive placebo, with a magnitude halfway between that of cheap placebo and levodopa. Brain activation was greater upon first-given cheap but not upon first-given expensive placebo or by levodopa. Regardless of order of administration, only cheap placebo increased activation in the left lateral sensorimotor cortex and other regions. Expensive placebo significantly improved motor function and decreased brain activation in a direction and magnitude comparable to, albeit less than, levodopa. Perceptions of cost are capable of altering the placebo response in clinical studies. This study provides Class III evidence that perception of cost is capable of influencing motor function and brain activation in Parkinson disease. © 2015 American Academy of Neurology.

  15. Efficient and anonymous two-factor user authentication in wireless sensor networks: achieving user anonymity with lightweight sensor computation.

    PubMed

    Nam, Junghyun; Choo, Kim-Kwang Raymond; Han, Sangchul; Kim, Moonseong; Paik, Juryon; Won, Dongho

    2015-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (hereafter referred to as a SCA-WSN scheme) is designed to ensure that only users who possess both a smart card and the corresponding password are allowed to gain access to sensor data and their transmissions. Despite many research efforts in recent years, it remains a challenging task to design an efficient SCA-WSN scheme that achieves user anonymity. The majority of published SCA-WSN schemes use only lightweight cryptographic techniques (rather than public-key cryptographic techniques) for the sake of efficiency, and have been demonstrated to suffer from the inability to provide user anonymity. Some schemes employ elliptic curve cryptography for better security but require sensors with strict resource constraints to perform computationally expensive scalar-point multiplications; despite the increased computational requirements, these schemes do not provide user anonymity. In this paper, we present a new SCA-WSN scheme that not only achieves user anonymity but also is efficient in terms of the computation loads for sensors. Our scheme employs elliptic curve cryptography but restricts its use only to anonymous user-to-gateway authentication, thereby allowing sensors to perform only lightweight cryptographic operations. Our scheme also enjoys provable security in a formal model extended from the widely accepted Bellare-Pointcheval-Rogaway (2000) model to capture the user anonymity property and various SCA-WSN specific attacks (e.g., stolen smart card attacks, node capture attacks, privileged insider attacks, and stolen verifier attacks).
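
    A minimal sketch of the "lightweight sensor computation" idea (not the paper's protocol): the expensive elliptic-curve steps are confined to the user-gateway side, while the sensor only checks a keyed hash over the gateway's request together with a freshness timestamp. The key, message format, and freshness window are assumptions.

        import hmac, hashlib, time

        SENSOR_KEY = b"pre-shared gateway/sensor key"   # assumed to be installed at deployment

        def gateway_request(sensor_id, payload, key=SENSOR_KEY):
            ts = str(int(time.time())).encode()
            msg = sensor_id + b"|" + ts + b"|" + payload
            return msg, hmac.new(key, msg, hashlib.sha256).digest()

        def sensor_verify(msg, tag, key=SENSOR_KEY, max_age=30):
            sensor_id, ts, payload = msg.split(b"|", 2)
            fresh = abs(time.time() - int(ts)) <= max_age          # reject stale/replayed requests
            return fresh and hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())

        msg, tag = gateway_request(b"node-17", b"read temperature")
        print("accepted:", sensor_verify(msg, tag))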

  16. Efficient and Anonymous Two-Factor User Authentication in Wireless Sensor Networks: Achieving User Anonymity with Lightweight Sensor Computation

    PubMed Central

    Nam, Junghyun; Choo, Kim-Kwang Raymond; Han, Sangchul; Kim, Moonseong; Paik, Juryon; Won, Dongho

    2015-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (hereafter referred to as a SCA-WSN scheme) is designed to ensure that only users who possess both a smart card and the corresponding password are allowed to gain access to sensor data and their transmissions. Despite many research efforts in recent years, it remains a challenging task to design an efficient SCA-WSN scheme that achieves user anonymity. The majority of published SCA-WSN schemes use only lightweight cryptographic techniques (rather than public-key cryptographic techniques) for the sake of efficiency, and have been demonstrated to suffer from the inability to provide user anonymity. Some schemes employ elliptic curve cryptography for better security but require sensors with strict resource constraints to perform computationally expensive scalar-point multiplications; despite the increased computational requirements, these schemes do not provide user anonymity. In this paper, we present a new SCA-WSN scheme that not only achieves user anonymity but also is efficient in terms of the computation loads for sensors. Our scheme employs elliptic curve cryptography but restricts its use only to anonymous user-to-gateway authentication, thereby allowing sensors to perform only lightweight cryptographic operations. Our scheme also enjoys provable security in a formal model extended from the widely accepted Bellare-Pointcheval-Rogaway (2000) model to capture the user anonymity property and various SCA-WSN specific attacks (e.g., stolen smart card attacks, node capture attacks, privileged insider attacks, and stolen verifier attacks). PMID:25849359

  17. A real-time control system for the control of suspended interferometers based on hybrid computing techniques

    NASA Astrophysics Data System (ADS)

    Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido

    2004-09-01

    One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is computing power, a direct consequence of the increasing complexity of the digital algorithms necessary for control signal generation. For this specific task, many specialized non-standard real-time architectures have been developed, often very expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution is the integration of the real-time and off-line architectures into a hybrid control system built from standard, readily available components, combining the perfect data synchronization provided by real-time systems with the large computing power available on PC-based systems. Such integration may be achieved by linking the two architectures through a standard Ethernet network, whose data transfer speed has increased substantially in recent years, using the TCP/IP, UDP and raw Ethernet protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performance. Finally, we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3 m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.
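
    A minimal sketch of the kind of Ethernet link described above, using UDP to push one block of control samples from the real-time front end to an off-line PC process. The port, packet layout, and sample count are assumptions, and the TCP and raw-Ethernet variants discussed in the paper are not shown.

        import socket, struct

        ADDR = ("127.0.0.1", 50007)            # assumed loopback endpoint for the demo

        # off-line PC side: receive one block of float32 correction signals
        rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rx.bind(ADDR)
        rx.settimeout(1.0)

        # real-time side: send a sequence-numbered block of 8 samples
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        samples = [0.1 * i for i in range(8)]
        packet = struct.pack("<I8f", 12345, *samples)   # <sequence number><samples...>
        tx.sendto(packet, ADDR)

        data, _ = rx.recvfrom(1024)
        seq, *received = struct.unpack("<I8f", data)
        print("seq", seq, "first sample", round(received[0], 2))
        tx.close(); rx.close()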

  18. Instability Mechanisms of Thermally-Driven Interfacial Flows in Liquid-Encapsulated Crystal Growth

    NASA Technical Reports Server (NTRS)

    Haj-Hariri, Hossein; Borhan, Ali

    1997-01-01

    During the past year, a great deal of effort was focused on the enhancement and refinement of the computational tools developed as part of our previous NASA grant. In particular, the interface mollification algorithm developed earlier was extended to incorporate the effects of surface-rheological properties in order to allow the study of thermocapillary flows in the presence of surface contamination. These tools will be used in the computational component of the proposed research in the remaining years of this grant. A detailed description of the progress made in this area is provided elsewhere. Briefly, the method developed allows for the convection and diffusion of bulk-insoluble surfactants on a moving and deforming interface. The novelty of the method is its grid independence: there is no need for front tracking, surface reconstruction, body-fitted grid generation, or metric evaluations; these are all very expensive computational tasks in three dimensions. For small local radii of curvature there is a need for local grid adaption so that the smearing thickness remains a small fraction of the radius of curvature. A special Neumann boundary condition was devised and applied so that the calculated surfactant concentration has no variations normal to the interface, and it is hence truly a surface-defined quantity. The discretized governing equations are solved subsequently using a time-split integration scheme which updates the concentration and the shape successively. Results demonstrate excellent agreement between the computed and exact solutions.

  19. Is Your School Y2K-OK?

    ERIC Educational Resources Information Center

    Bates, Martine G.

    1999-01-01

    The most vulnerable Y2K areas for schools are networked computers, free-standing personal computers, software, and embedded chips in utilities such as telephones and fire alarms. Expensive, time-consuming procedures and software have been developed for testing and bringing most computers into compliance. Districts need a triage prioritization…

  20. Understanding the Internet.

    ERIC Educational Resources Information Center

    Oblinger, Diana

    The Internet is an international network linking hundreds of smaller computer networks in North America, Europe, and Asia. Using the Internet, computer users can connect to a variety of computers with little effort or expense. The potential for use by college faculty is enormous. The largest problem faced by most users is understanding what such…

  1. "Mini", "Midi" and the Student.

    ERIC Educational Resources Information Center

    Edwards, Perry; Broadwell, Bruce

    Mini- and midi-computers have been introduced into the computer science program at Sierra College to afford students more direct contact with computers. The college's administration combined with the Science and Business departments to share the expense and utilization of the program. The National Cash Register Century 100 and the Data General…

  2. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software. (2) Computer software, as used in this clause, means (i) computer programs which are data... software. The term “data” does not include data incidental to the administration of this contract, such as... this clause, means data, other than computer software, developed at private expense that embody trade...

  3. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
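
    A minimal sketch of the load-equalization goal described above: a greedy allocator assigns each task to the currently least-loaded processor. The task costs and processor count are illustrative assumptions; dynamic reallocation is not shown.

        import heapq

        def allocate(task_costs, n_proc):
            """Greedy static allocation: each task goes to the least-loaded processor."""
            heap = [(0.0, p) for p in range(n_proc)]   # (load, processor id)
            heapq.heapify(heap)
            assignment = {}
            for tid, cost in enumerate(task_costs):
                load, p = heapq.heappop(heap)
                assignment[tid] = p
                heapq.heappush(heap, (load + cost, p))
            return assignment, max(load for load, _ in heap)

        assignment, max_load = allocate([5, 3, 8, 2, 7, 4, 6], n_proc=3)
        print(assignment, "max load:", max_load)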

  4. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  5. Extending Strong Scaling of Quantum Monte Carlo to the Exascale

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul

    Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers, which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  6. Primary School Children's Collaboration: Task Presentation and Gender Issues.

    ERIC Educational Resources Information Center

    Fitzpatrick, Helen; Hardman, Margaret

    2000-01-01

    Explores the characteristics of social interaction during an English language based task in the primary classroom, and the role of the computer in structuring collaboration when compared to a non-computer mode. Explains that seven and nine year old boys and girls (n=120) completed a computer and non-computer task. (CMK)

  7. Virtual manufacturing work cell for engineering

    NASA Astrophysics Data System (ADS)

    Watanabe, Hideo; Ohashi, Kazushi; Takahashi, Nobuyuki; Kato, Kiyotaka; Fujita, Satoru

    1997-12-01

    The life cycles of products have been getting shorter. To meet this rapid turnover, manufacturing systems must be changed frequently as well. Engineering a manufacturing system involves several tasks, such as process planning, layout design, programming, and final testing using actual machines; this development takes a long time and is expensive. To aid this engineering process, we have developed the virtual manufacturing workcell (VMW). This paper describes the concept of the VMW and a design method, computer-aided manufacturing engineering using the VMW (CAME-VMW), related to the above engineering tasks. The VMW holds all design data and reproduces the behavior of equipment and devices using a simulator. The simulator has logical and physical functionality: the former simulates sequence control, while the latter simulates motion control and shape movement in 3D space. The simulator can execute the same control software written for the actual machines, so the behavior can be verified precisely before the workcell is constructed. The VMW creates an engineering work space for several engineers and offers debugging tools such as virtual equipment and virtual controllers. We applied the VMW to the development of a transfer workcell for a vaporization machine in an actual manufacturing system producing plasma display panels (PDP) and confirmed its effectiveness.

  8. Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.

    2006-06-01

    The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.

  9. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
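
    A minimal sketch of treating such a tuning problem as black-box optimization with a genetic algorithm: a population of parameter vectors is evolved against a throughput fitness that can only be sampled point-wise. The parameter names, ranges, and fitness surrogate are made up for illustration, and the multivariate-analysis operator mentioned above is not included.

        import random

        def throughput(params):
            """Stand-in for an expensive simulation run returning events per second."""
            basket, depth = params
            return -((basket - 256) ** 2) / 500.0 - ((depth - 8) ** 2) * 2.0 + 1000.0

        BOUNDS = [(16, 1024), (1, 32)]          # hypothetical tuning ranges (basket size, cache depth)
        POP, GENS = 12, 40

        def clip(v, lo, hi):
            return max(lo, min(hi, v))

        pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=throughput, reverse=True)
            parents = pop[:POP // 2]
            children = []
            while len(children) < POP - len(parents):
                a, b = random.sample(parents, 2)
                child = [(x + y) / 2 for x, y in zip(a, b)]                     # crossover
                child = [clip(v + random.gauss(0, 0.05 * (hi - lo)), lo, hi)    # mutation
                         for v, (lo, hi) in zip(child, BOUNDS)]
                children.append(child)
            pop = parents + children
        best = max(pop, key=throughput)
        print("best params:", [round(v, 1) for v in best], "throughput:", round(throughput(best), 1))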

  10. Modeling and Analysis Compute Environments, Utilizing Virtualization Technology in the Climate and Earth Systems Science domain

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.

    2010-12-01

    Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archiveable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others that are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead, costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.

  11. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  12. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in the case of resource-expensive or time-consuming evaluations of fitness functions, in order to speed up the convergence of the black-box optimization problem.

  13. [Diagnostic possibilities of digital volume tomography].

    PubMed

    Lemkamp, Michael; Filippi, Andreas; Berndt, Dorothea; Lambrecht, J Thomas

    2006-01-01

    Cone beam computed tomography allows high-quality 3D imaging of cranio-facial structures. Although detail resolution is increased, x-ray exposure is reduced compared to classic computed tomography. The volume is analysed in three orthogonal planes, which can be rotated independently without quality loss. Cone beam computed tomography appears to be a less expensive alternative to classic computed tomography with lower x-ray exposure.

  14. hPIN/hTAN: Low-Cost e-Banking Secure against Untrusted Computers

    NASA Astrophysics Data System (ADS)

    Li, Shujun; Sadeghi, Ahmad-Reza; Schmitz, Roland

    We propose hPIN/hTAN, a low-cost token-based e-banking protection scheme for the setting in which the adversary has full control over the user's computer. Compared with existing hardware-based solutions, hPIN/hTAN requires neither a second trusted channel, nor a secure keypad, nor a computationally expensive encryption module.

  15. Phase-change lines, scale breaks, and trend lines using Excel 2013.

    PubMed

    Deochand, Neil; Costello, Mack S; Fuqua, R Wayne

    2015-01-01

    The development of graphing skills for behavior analysts is an ongoing process. Specialized graphing software is often expensive, is not widely disseminated, and may require specific training. Dixon et al. (2009) provided an updated task analysis for graph making in the widely used platform Excel 2007. Vanselow and Bourret (2012) provided online tutorials that outline some alternate methods also using Office 2007. This article serves as an update to those task analyses and includes some alternative and underutilized methods in Excel 2013. To examine the utility of our recommendations, 12 psychology graduate students were presented with the task analyses, and the experimenters evaluated their performance and noted feedback. The task analyses were rated favorably. © Society for the Experimental Analysis of Behavior.

  16. Task allocation model for minimization of completion time in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Wang, Jai-Ping; Steidley, Carl W.

    1993-08-01

    A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
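    As a concrete, if simplified, illustration of the allocation problem, the sketch below enumerates module-to-processor assignments and keeps the one with the smallest completion time, taken here as the longest processor load plus a crude penalty for inter-processor communication. The cost matrices are hypothetical and precedence relationships among modules are ignored.

        from itertools import product

        def best_allocation(exec_cost, comm_cost):
            """Brute-force search over module-to-processor assignments.

            exec_cost[m][p] -- execution time of module m on processor p (hypothetical)
            comm_cost[i][j] -- communication time between modules i and j when they
                               run on different processors (hypothetical)
            """
            n_modules, n_procs = len(exec_cost), len(exec_cost[0])
            best, best_time = None, float("inf")
            for assign in product(range(n_procs), repeat=n_modules):
                loads = [0.0] * n_procs
                for m, p in enumerate(assign):
                    loads[p] += exec_cost[m][p]
                comm = sum(comm_cost[i][j]
                           for i in range(n_modules) for j in range(i + 1, n_modules)
                           if assign[i] != assign[j])
                completion = max(loads) + comm
                if completion < best_time:
                    best, best_time = assign, completion
            return best, best_time

        # Hypothetical example: three modules, two processors.
        exec_cost = [[4, 6], [3, 2], [5, 4]]
        comm_cost = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
        print(best_allocation(exec_cost, comm_cost))   # -> ((0, 1, 1), 9.0)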

  17. A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.

    PubMed

    Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao

    2018-05-23

    The diversity of IoT services and applications brings enormous challenges to improving the scheduling of multiple computer tasks in cross-layer cloud computing systems. Unfortunately, commonly employed frameworks fail to adapt to the new patterns of the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and of computer tasks. Then, we design the scheduling framework based on this analysis and present detailed models to illustrate the procedures for using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, algorithms are given based on the framework, and extensive experiments are presented to validate its effectiveness as well as its superiority.

  18. The AAHA Computer Program. American Animal Hospital Association.

    PubMed

    Albers, J W

    1986-07-01

    The American Animal Hospital Association Computer Program should benefit all small animal practitioners. Through the availability of well-researched and well-developed certified software, veterinarians will have increased confidence in their purchase decisions. With the expansion of computer applications to improve practice management efficiency, veterinary computer systems will further justify their initial expense. The development of the Association's veterinary computer network will provide a variety of important services to the profession.

  19. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm converges even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  20. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface metamodels based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
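    The surrogate idea can be sketched as follows: fit a cheap regression model to a modest number of expensive evaluations and query that model during the evolutionary search, re-evaluating only the most promising candidates exactly. The neural-network regressor, the sample sizes, and the placeholder objective below are illustrative assumptions, not the paper's actual setup.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def expensive_objective(x):
            # stands in for a CFD evaluation of an aerodynamic shape (hypothetical)
            return np.sum((x - 0.3) ** 2)

        rng = np.random.default_rng(0)
        X_train = rng.uniform(0, 1, size=(50, 4))          # sampled design variables
        y_train = np.array([expensive_objective(x) for x in X_train])

        # Cheap response-surface stand-in trained on the expensive samples.
        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        surrogate.fit(X_train, y_train)

        # During the evolutionary search, candidate shapes are scored on the surrogate;
        # only the most promising ones would be re-evaluated with the expensive solver.
        candidates = rng.uniform(0, 1, size=(1000, 4))
        best = candidates[np.argmin(surrogate.predict(candidates))]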

  1. In Praise of Robots

    ERIC Educational Resources Information Center

    Sagan, Carl

    1975-01-01

    The author of this article believes that human survival depends upon the ability to develop and work with machines of high artificial intelligence. He lists uses of such machines, including terrestrial mining, outer space exploration, and other tasks too dangerous, too expensive, or too boring for human beings. (MA)

  2. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    NASA Astrophysics Data System (ADS)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.

  3. Evaluation of Ground Vibrations Induced by Military Noise Sources

    DTIC Science & Technology

    2006-08-01

    Task 2—Determine the acoustic-to-seismic coupling coefficients C1 and C2. Task 3—Computational modeling of acoustically induced ground motion, including a simple model of blast sound interaction with the ground.

  4. Computer task performance by subjects with Duchenne muscular dystrophy.

    PubMed

    Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira

    2016-01-01

    Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.

  5. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    PubMed

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
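    Of the heuristics compared, Min-min is easy to state: repeatedly pick the unscheduled task whose earliest completion time over all machines is smallest and assign it to that machine. A minimal sketch under the usual independent-task assumption follows; the expected-time-to-compute (ETC) values are hypothetical.

        def min_min(etc):
            """Min-min heuristic for independent task scheduling.

            etc[t][m] -- expected time to compute task t on machine m (hypothetical).
            Returns the machine assigned to each task and the resulting makespan.
            """
            n_tasks, n_machines = len(etc), len(etc[0])
            ready = [0.0] * n_machines              # machine availability times
            unscheduled = set(range(n_tasks))
            assignment = [None] * n_tasks
            while unscheduled:
                best_task, best_machine, best_ct = None, None, float("inf")
                for t in unscheduled:               # earliest completion time per task
                    for m in range(n_machines):
                        ct = ready[m] + etc[t][m]
                        if ct < best_ct:
                            best_task, best_machine, best_ct = t, m, ct
                assignment[best_task] = best_machine
                ready[best_machine] = best_ct
                unscheduled.remove(best_task)
            return assignment, max(ready)

        print(min_min([[3, 5], [2, 4], [6, 1]]))   # -> ([0, 0, 1], 5.0)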

  6. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    PubMed Central

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505

  7. Laparoscopic skills training using a webcam trainer.

    PubMed

    Chung, Steve Y; Landsittel, Douglas; Chon, Chris H; Ng, Christopher S; Fuchs, Gerhard J

    2005-01-01

    Many sophisticated and expensive trainers have been developed to assist surgeons in learning basic laparoscopic skills. We developed an inexpensive trainer and evaluated its effectiveness. The webcam laparoscopic training device is composed of a webcam, cardboard box, desk lamp and home computer. This homemade trainer was evaluated against 2 commercially available systems, namely the video Pelvitrainer (Karl Storz Endoscopy, Culver City, California) and the dual mirror Simuview (Simulab Corp., Seattle, Washington). The Pelvitrainer consists of a fiberglass box, single lens optic laparoscope, fiberoptic light source, endoscopic camera and video monitor, while the Simuview trainer uses 2 offset, facing mirrors and an uncovered plastic box. A total of 42 participants without prior laparoscopic training were enrolled in the study and asked to execute 2 tasks, that is peg transfer and pattern cutting. Participants were randomly assigned to 6 groups with each group representing a different permutation of trainers to be used. The time required for participants to complete each task was recorded and differences in performance were calculated. Paired t tests, the Wilcoxon signed rank test and ANOVA were performed to analyze the statistical difference in performance times for all conditions. Statistical analyses of the 2 tasks showed no significant difference for the video and webcam trainers. However, the mirror trainer gave significantly higher outcome values for tasks 1 and 2 compared to the video (p = 0.01 and <0.01) and webcam (p = 0.04 and <0.01, respectively) methods. ANOVA indicated no overall difference for tasks 1 and 2 across the orderings (p = 0.36 and 0.99, respectively). However, by attempt 3 the time required to complete the skill tests decreased significantly for all 3 trainers (each p <0.01). Our homemade webcam system is comparable in function to the more elaborate video trainer but superior to the dual mirror trainer. For novice laparoscopists we believe that the webcam system is an inexpensive and effective laparoscopic training device. Furthermore, the webcam system also allows instant recording and review of techniques.

  8. 26 CFR 1.50B-3 - Estates and trusts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Estates and trusts. 1.50B-3 Section 1.50B-3... Computing Credit for Expenses of Work Incentive Programs § 1.50B-3 Estates and trusts. (a) General rule—(1) In general. In the case of an estate or trust, WIN expenses (as defined in paragraph (a) of § 1.50B-1...

  9. Paradigm Paralysis and the Plight of the PC in Education.

    ERIC Educational Resources Information Center

    O'Neil, Mick

    1998-01-01

    Examines the varied factors involved in providing Internet access in K-12 education, including expense, computer installation and maintenance, and security, and explores how the network computer could be useful in this context. Operating systems and servers are discussed. (MSE)

  10. Computational Modeling in Concert with Laboratory Studies: Application to B Cell Differentiation

    EPA Science Inventory

    Remediation is expensive, so accurate prediction of dose-response is important to help control costs. Dose response is a function of biological mechanisms. Computational models of these mechanisms improve the efficiency of research and provide the capability for prediction.

  11. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

    Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape that depends on the depth and material of the object and on the surrounding material. To obtain these parameters, the shape of the hyperbola has to be fitted. In recent years several methods were developed to automate this task during post-processing. In this paper we show another approach to the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas we apply a simple Hough Transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough Transform, the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems is needed as input for the learning algorithm.
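    A hedged sketch of the detection stage only: loading a pre-trained cascade with OpenCV and marking candidate hyperbola regions in a radargram image. The cascade and image file names are placeholders (a cascade would first have to be trained on labelled hyperbola patches), and the subsequent Hough fit is omitted.

        import cv2

        # Placeholder file names, not artifacts of the cited work.
        cascade = cv2.CascadeClassifier("hyperbola_cascade.xml")
        radargram = cv2.imread("radargram.png", cv2.IMREAD_GRAYSCALE)

        # Each detection is a bounding box that narrows down where the more
        # expensive Hough transform for hyperbolas would then be applied.
        candidates = cascade.detectMultiScale(radargram, scaleFactor=1.1, minNeighbors=3)
        for (x, y, w, h) in candidates:
            cv2.rectangle(radargram, (x, y), (x + w, y + h), 255, 1)
        cv2.imwrite("radargram_candidates.png", radargram)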

  12. A Talking Computers System for Persons with Vision and Speech Handicaps. Final Report.

    ERIC Educational Resources Information Center

    Visek & Maggs, Urbana, IL.

    This final report contains a detailed description of six software systems designed to assist individuals with blindness and/or speech disorders in using inexpensive, off-the-shelf computers rather than expensive custom-made devices. The developed software is not written in the native machine language of any particular brand of computer, but in the…

  13. [Cost analysis for navigation in knee endoprosthetics].

    PubMed

    Cerha, O; Kirschner, S; Günther, K-P; Lützner, J

    2009-12-01

    Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis according to the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of the computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year an additional operating time of 14 mins and a 10 year depreciation of the investment costs, the incremental expenses amount to 300-395 depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economical point of view an amount of more than 50 procedures per year appears to be favourable. The cost-effectiveness could be estimated if long-term results will show a reduction of revisions or a better clinical outcome.

  14. The Time on Task Effect in Reading and Problem Solving Is Moderated by Task Difficulty and Skill: Insights from a Computer-Based Large-Scale Assessment

    ERIC Educational Resources Information Center

    Goldhammer, Frank; Naumann, Johannes; Stelter, Annette; Tóth, Krisztina; Rölke, Heiko; Klieme, Eckhard

    2014-01-01

    Computer-based assessment can provide new insights into behavioral processes of task completion that cannot be uncovered by paper-based instruments. Time presents a major characteristic of the task completion process. Psychologically, time on task has 2 different interpretations, suggesting opposing associations with task outcome: Spending more…

  15. Final Report of the Project "From the finite element method to the virtual element method"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco; Gyrya, Vitaliy

    The Finite Element Method (FEM) is a powerful numerical tool that is being used in a large number of engineering applications. The FEM is constructed on triangular/tetrahedral and quadrilateral/hexahedral meshes. Extending the FEM to general polygonal/polyhedral meshes in a straightforward way turns out to be extremely difficult and leads to very complex and computationally expensive schemes. The reason for this failure is that the construction of the basis functions on elements with a very general shape is a non-trivial and complex task. In this project we developed a new family of numerical methods, dubbed the Virtual Element Method (VEM), for the numerical approximation of partial differential equations (PDE) of elliptic type suitable to polygonal and polyhedral unstructured meshes. We successfully formulated, implemented and tested these methods and studied both theoretically and numerically their stability, robustness and accuracy for diffusion problems, convection-reaction-diffusion problems, the Stokes equations and the biharmonic equations.

  16. Magnetic resonance imaging of granular materials

    NASA Astrophysics Data System (ADS)

    Stannarius, Ralf

    2017-05-01

    Magnetic Resonance Imaging (MRI) has become one of the most important tools to screen humans in medicine; virtually every modern hospital is equipped with a Nuclear Magnetic Resonance (NMR) tomograph. The potential of NMR in 3D imaging tasks is by far greater, but there is only "a handful" of MRI studies of particulate matter. The method is expensive, time-consuming, and requires a deep understanding of pulse sequences, signal acquisition, and processing. We give a short introduction into the physical principles of this imaging technique, describe its advantages and limitations for the screening of granular matter, and present a number of examples of different application purposes, from the exploration of granular packing, via the detection of flow and particle diffusion, to real dynamic measurements. Probably, X-ray computed tomography is preferable in most applications, but fast imaging of single slices with modern MRI techniques is unmatched, and the additional opportunity to retrieve spatially resolved flow and diffusion profiles without particle tracking is a unique feature.

  17. Multi-media authoring - Instruction and training of air traffic controllers based on ASRS incident reports

    NASA Technical Reports Server (NTRS)

    Armstrong, Herbert B.; Roske-Hofstrand, Renate J.

    1989-01-01

    This paper discusses the use of computer-assisted instructions and flight simulations to enhance procedural and perceptual motor task training. Attention is called to the fact that incorporating the accident and incident data contained in reports filed with the Aviation Safety Reporting System (ASRS) would be a valuable training tool which the learner could apply for other situations. The need to segment the events is emphasized; this would make it possible to modify events in order to suit the needs of the training environment. Methods were developed for designing meaningful scenario development on runway incursions on the basis of analysis of ASRS reports. It is noted that, while the development of interactive training tools using the ASRS and other data bases holds much promise, the design and production of interactive video programs and laser disks are very expensive. It is suggested that this problem may be overcome by sharing the costs of production to develop a library of materials available to a broad range of users.

  18. A low-cost touchscreen operant chamber using a Raspberry Pi™.

    PubMed

    O'Leary, James D; O'Leary, Olivia F; Cryan, John F; Nolan, Yvonne M

    2018-03-08

    The development of a touchscreen platform for rodent testing has allowed new methods for cognitive testing that have been back-translated from clinical assessment tools to preclinical animal models. This platform for cognitive assessment in animals is comparable to human neuropsychological tests such as those employed by the Cambridge Neuropsychological Test Automated Battery, and thus has several advantages compared to the standard maze apparatuses typically employed in rodent behavioral testing, such as the Morris water maze. These include improved translation of preclinical models, as well as high throughput and the automation of animal testing. However, these systems are relatively expensive, which can impede progress for researchers with limited resources. Here we describe a low-cost touchscreen operant chamber based on the single-board computer, Raspberry Pi™, which is capable of performing tasks similar to those supported by current state-of-the-art systems. This system provides an affordable alternative for cognitive testing in a touchscreen operant paradigm for researchers with limited funding.

  19. A Method for Automated Detection of Usability Problems from Client User Interface Events

    PubMed Central

    Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.

    2005-01-01

    Think-aloud usability analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and manually computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
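    A minimal sketch of the general idea (not the cited system): log time-stamped interface events against subgoals and flag a subgoal when its duration exceeds a threshold scaled by a think-aloud correction factor. The event names, threshold, and factor below are invented for the example.

        from collections import defaultdict

        CORRECTION_FACTOR = 1.3   # hypothetical slowdown factor for thinking aloud
        THRESHOLD_SECONDS = 20.0  # hypothetical per-subgoal time limit

        def detect_problems(events):
            """events: iterable of (timestamp_seconds, subgoal, event_name) tuples."""
            spans = defaultdict(lambda: [float("inf"), 0.0])
            for ts, subgoal, _name in events:
                spans[subgoal][0] = min(spans[subgoal][0], ts)
                spans[subgoal][1] = max(spans[subgoal][1], ts)
            limit = THRESHOLD_SECONDS * CORRECTION_FACTOR
            return [sg for sg, (start, end) in spans.items() if end - start > limit]

        log = [(0.0, "select_case", "click"), (31.0, "select_case", "click"),
               (32.0, "enter_diagnosis", "keypress"), (40.0, "enter_diagnosis", "submit")]
        print(detect_problems(log))   # -> ['select_case']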

  20. The application of computer-based tools in obtaining the genetic family history.

    PubMed

    Giovanni, Monica A; Murray, Michael F

    2010-07-01

    Family health history is both an adjunct to and a focus of current genetic research, having long been known to be a powerful predictor of individual disease risk. As such, it has been primarily used as a proxy for genetic information. Over the past decade, new roles for family history have emerged, perhaps most importantly as a primary tool for guiding decision-making on the use of expensive genetic testing. The collection of family history information is an important but time-consuming process. Efforts to engage the patient or research subject in preliminary data collection have the potential to improve data accuracy and allow clinicians and researchers more time for analytic tasks. The U.S. Surgeon General, the Centers for Disease Control and Prevention (CDC), and others have developed tools for electronic family history collection. This unit describes the utility of the Web-based My Family Health Portrait (https://familyhistory.hhs.gov) as the prototype for patient-entered family history.

  1. Use of less expensive cigarettes in six cities in China: findings from the International Tobacco Control (ITC) China Survey.

    PubMed

    Li, Qiang; Hyland, Andrew; Fong, Geoffrey T; Jiang, Yuan; Elton-Marshall, Tara

    2010-10-01

    The existence of less expensive cigarettes in China may undermine public health. The aim of the current study is to examine the use of less expensive cigarettes in six cities in China. Data was from the baseline wave of the International Tobacco Control (ITC) China Survey of 4815 adult urban smokers in 6 cities, conducted between April and August 2006. The percentage of smokers who reported buying less expensive cigarettes (the lowest pricing tertile within each city) at last purchase was computed. Complex sample multivariate logistic regression models were used to identify factors associated with use of less expensive cigarettes. The association between the use of less expensive cigarettes and intention to quit smoking was also examined. Smokers who reported buying less expensive cigarettes at last purchase tended to be older, heavier smokers, to have lower education and income, and to think more about the money spent on smoking in the last month. Smokers who bought less expensive cigarettes at the last purchase and who were less knowledgeable about the health harm of smoking were less likely to intend to quit smoking. Measures need to be taken to minimise the price differential among cigarette brands and to increase smokers' health knowledge, which may in turn increase their intentions to quit.

  2. The UAB Informatics Institute and 2016 CEGS N-GRID de-identification shared task challenge.

    PubMed

    Bui, Duy Duc An; Wyatt, Mathew; Cimino, James J

    2017-11-01

    Clinical narratives (the text notes found in patients' medical records) are important information sources for secondary use in research. However, in order to protect patient privacy, they must be de-identified prior to use. Manual de-identification is considered to be the gold standard approach but is tedious, expensive, slow, and impractical for use with large-scale clinical data. Automated or semi-automated de-identification using computer algorithms is a potentially promising alternative. The Informatics Institute of the University of Alabama at Birmingham is applying de-identification to clinical data drawn from the UAB hospital's electronic medical records system before releasing them for research. We participated in a shared task challenge by the Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-Scale and RDoC Individualized Domains (N-GRID) in the de-identification regular track to gain experience developing our own automatic de-identification tool. We focused on the popular and successful methods from previous challenges: rule-based, dictionary-matching, and machine-learning approaches. We also explored new techniques such as disambiguation rules and term ambiguity measurement, and used a multi-pass sieve framework at a micro level. For the challenge's primary measure (strict entity), our submissions achieved competitive results (f-measures: 87.3%, 87.1%, and 86.7%). For our preferred measure (binary token HIPAA), our submissions achieved superior results (f-measures: 93.7%, 93.6%, and 93%). With those encouraging results, we gain the confidence to improve and use the tool for the real de-identification task at the UAB Informatics Institute. Copyright © 2017 Elsevier Inc. All rights reserved.
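    A toy illustration of the rule-based and dictionary-matching layer of such a pipeline (not the UAB system): regular expressions for well-formed identifiers plus a small name dictionary. Real de-identification requires far richer context handling and machine-learned components; all patterns and names below are invented.

        import re

        PATTERNS = {
            "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
            "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
            "MRN":   re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
        }
        NAME_DICTIONARY = {"Smith", "Jones"}   # placeholder dictionary

        def deidentify(note):
            for label, pattern in PATTERNS.items():
                note = pattern.sub(f"[{label}]", note)
            for name in NAME_DICTIONARY:
                note = re.sub(rf"\b{re.escape(name)}\b", "[NAME]", note)
            return note

        print(deidentify("Mr. Smith, MRN: 00123456, seen on 3/14/2016, call 412-555-1234."))
        # -> Mr. [NAME], [MRN], seen on [DATE], call [PHONE].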

  3. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.

  4. Air-Track: a real-world floating environment for active sensing in head-fixed mice.

    PubMed

    Nashaat, Mostafa A; Oraby, Hatem; Sachdev, Robert N S; Winter, York; Larkum, Matthew E

    2016-10-01

    Natural behavior occurs in multiple sensory and motor modalities and in particular is dependent on sensory feedback that constantly adjusts behavior. To investigate the underlying neuronal correlates of natural behavior, it is useful to have access to state-of-the-art recording equipment (e.g., 2-photon imaging, patch recordings, etc.) that frequently requires head fixation. This limitation has been addressed with various approaches such as virtual reality/air ball or treadmill systems. However, achieving multimodal realistic behavior in these systems can be challenging. These systems are often also complex and expensive to implement. Here we present "Air-Track," an easy-to-build head-fixed behavioral environment that requires only minimal computational processing. The Air-Track is a lightweight physical maze floating on an air table that has all the properties of the "real" world, including multiple sensory modalities tightly coupled to motor actions. To test this system, we trained mice in Go/No-Go and two-alternative forced choice tasks in a plus maze. Mice chose lanes and discriminated apertures or textures by moving the Air-Track back and forth and rotating it around themselves. Mice rapidly adapted to moving the track and used visual, auditory, and tactile cues to guide them in performing the tasks. A custom-controlled camera system monitored animal location and generated data that could be used to calculate reaction times in the visual and somatosensory discrimination tasks. We conclude that the Air-Track system is ideal for eliciting natural behavior in concert with virtually any system for monitoring or manipulating brain activity. Copyright © 2016 the American Physiological Society.

  5. Air-Track: a real-world floating environment for active sensing in head-fixed mice

    PubMed Central

    Oraby, Hatem; Sachdev, Robert N. S.; Winter, York

    2016-01-01

    Natural behavior occurs in multiple sensory and motor modalities and in particular is dependent on sensory feedback that constantly adjusts behavior. To investigate the underlying neuronal correlates of natural behavior, it is useful to have access to state-of-the-art recording equipment (e.g., 2-photon imaging, patch recordings, etc.) that frequently requires head fixation. This limitation has been addressed with various approaches such as virtual reality/air ball or treadmill systems. However, achieving multimodal realistic behavior in these systems can be challenging. These systems are often also complex and expensive to implement. Here we present “Air-Track,” an easy-to-build head-fixed behavioral environment that requires only minimal computational processing. The Air-Track is a lightweight physical maze floating on an air table that has all the properties of the “real” world, including multiple sensory modalities tightly coupled to motor actions. To test this system, we trained mice in Go/No-Go and two-alternative forced choice tasks in a plus maze. Mice chose lanes and discriminated apertures or textures by moving the Air-Track back and forth and rotating it around themselves. Mice rapidly adapted to moving the track and used visual, auditory, and tactile cues to guide them in performing the tasks. A custom-controlled camera system monitored animal location and generated data that could be used to calculate reaction times in the visual and somatosensory discrimination tasks. We conclude that the Air-Track system is ideal for eliciting natural behavior in concert with virtually any system for monitoring or manipulating brain activity. PMID:27486102

  6. The Effects of Study Tasks in a Computer-Based Chemistry Learning Environment

    ERIC Educational Resources Information Center

    Urhahne, Detlef; Nick, Sabine; Poepping, Anna Christin; Schulz, Sarah Jayne

    2013-01-01

    The present study examines the effects of different study tasks on the acquisition of knowledge about acids and bases in a computer-based learning environment. Three different task formats were selected to create three treatment conditions: learning with gap-fill and matching tasks, learning with multiple-choice tasks, and learning only from text…

  7. BisQue: cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Fedorov, D.; Miller, R. J.; Kvilekval, K. G.; Doheny, B.; Sampson, S.; Manjunath, B. S.

    2016-02-01

    Logistical and financial limitations of underwater operations are inherent in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision methods, such as convolutional neural networks (CNN), are promising for automated classification and detection tasks but are typically very computationally expensive and require extensive training on large datasets. Therefore, managing and connecting distributed computation, large storage and human annotations of diverse marine datasets is crucial for effective application of these methods. BisQue is a cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery and associated data. Designed to hide the complexity of distributed storage, large computational clusters, diversity of data formats and inhomogeneous computational environments behind a user-friendly web-based interface, BisQue is built around an idea of flexible and hierarchical annotations defined by the user. Such textual and graphical annotations can describe captured attributes and the relationships between data elements. Annotations are powerful enough to describe cells in fluorescent 4D images, fish species in underwater videos and kelp beds in aerial imagery. Presently we are developing BisQue-based analysis modules for automated identification of benthic marine organisms. Recent experiments with dropout and CNN-based classification of several thousand annotated underwater images demonstrated an overall accuracy above 70% for the 15 best performing species and above 85% for the top 5 species. Based on these promising results, we have extended BisQue with a CNN-based classification system allowing continuous training on user-provided data.

  8. Modern Efficiencies for Healthy Schools

    ERIC Educational Resources Information Center

    VanOort, Adam

    2012-01-01

    Facility managers everywhere are tasked with improving energy efficiency to control costs. Those strides cannot be achieved at the expense of system performance and reliability, or the comfort of the people within those properties. There are few places where this is truer than in schools and universities. K-12 schools and university lecture spaces…

  9. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
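    As a small illustration of the linear-algebra step described above, the sketch below finds the stationary (null-space) vector of a toy generator matrix with SciPy. The generator is an invented three-state rate matrix, not a constrained chemical system; for large problems this would be one of many small eigenproblems rather than a single dense solve.

        import numpy as np
        from scipy.linalg import null_space

        # Toy CTMC generator: rows sum to zero, off-diagonal entries are rates.
        Q = np.array([[-2.0,  2.0,  0.0],
                      [ 1.0, -3.0,  2.0],
                      [ 0.0,  4.0, -4.0]])

        # The stationary distribution pi satisfies pi Q = 0, i.e. it spans the
        # null space of Q transposed.
        pi = null_space(Q.T)[:, 0]
        pi = pi / pi.sum()
        print(pi)   # -> approximately [0.25, 0.5, 0.25]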

  10. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
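    A minimal sketch of the fixed-time, scalable-work idea: let the computer refine a result for a fixed benchmarking interval and report how far through the scalable task set it got. The work unit here (a midpoint-rule integral refined by doubling the panel count) is an arbitrary stand-in chosen for illustration, not the patented task set.

        import time

        def benchmark(interval_seconds=1.0):
            """Run ever-finer work units until the benchmarking interval expires."""
            deadline = time.monotonic() + interval_seconds
            panels, result = 1, 0.0
            while time.monotonic() < deadline:
                # one scalable task: midpoint-rule estimate of the integral of x^2 on [0, 1]
                width = 1.0 / panels
                result = sum(((i + 0.5) * width) ** 2 for i in range(panels)) * width
                panels *= 2          # the next task doubles the resolution
            return panels // 2, result   # degree of progress and the refined solution

        print(benchmark())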

  11. Checkpointing for a hybrid computing node

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cher, Chen-Yong

    2016-03-08

    According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
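    As a rough analogy in ordinary Python (the patent concerns state held in accelerator-local memory, which this does not model), the sketch below periodically checkpoints a long-running task's state to a file and resumes from it on restart. The file name and state layout are invented for the example.

        import json, os

        CHECKPOINT = "task_state.json"   # stands in for checkpoint storage

        def run_task(total_steps=1000, checkpoint_every=100):
            state = {"step": 0, "accumulator": 0.0}
            if os.path.exists(CHECKPOINT):              # restart path: reload saved state
                with open(CHECKPOINT) as f:
                    state = json.load(f)
            while state["step"] < total_steps:
                state["accumulator"] += state["step"]   # placeholder unit of work
                state["step"] += 1
                if state["step"] % checkpoint_every == 0:
                    with open(CHECKPOINT, "w") as f:    # persist restartable state
                        json.dump(state, f)
            return state["accumulator"]

        print(run_task())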

  12. The Effects of Study Tasks in a Computer-Based Chemistry Learning Environment

    NASA Astrophysics Data System (ADS)

    Urhahne, Detlef; Nick, Sabine; Poepping, Anna Christin; Schulz, Sarah Jayne

    2013-12-01

    The present study examines the effects of different study tasks on the acquisition of knowledge about acids and bases in a computer-based learning environment. Three different task formats were selected to create three treatment conditions: learning with gap-fill and matching tasks, learning with multiple-choice tasks, and learning only from text and figures without any additional tasks. Participants were 196 ninth-grade students who learned with a self-developed multimedia program in a pretest-posttest control group design. Research results reveal that gap-fill and matching tasks were most effective in promoting knowledge acquisition, followed by multiple-choice tasks, and no tasks at all. The findings are in line with previous research on this topic. The effects can possibly be explained by the generation-recognition model, which predicts that gap-fill and matching tasks trigger more encompassing learning processes than multiple-choice tasks. It is concluded that instructional designers should incorporate more challenging study tasks for enhancing the effectiveness of computer-based learning environments.

  13. WISP information display system user's manual

    NASA Technical Reports Server (NTRS)

    Alley, P. L.; Smith, G. R.

    1978-01-01

    The wind shears program (WISP) supports the collection of data on magnetic tape for permanent storage or analysis. The document provides: (1) the hardware and software configuration required to execute the WISP system, and the start-up procedure from a power-down condition; (2) the data collection task, calculations performed on the incoming data, and a description of the magnetic tape format; (3) the data display task and examples of displays obtained from execution of the real-time simulation program; and (4) the raw data dump task and examples of operator actions required to obtain the desired format. The procedures outlined herein allow continuous data collection at the expense of real-time visual displays.

  14. Integrating in silico prediction methods, molecular docking, and molecular dynamics simulation to predict the impact of ALK missense mutations in structural perspective.

    PubMed

    Doss, C George Priya; Chakraborty, Chiranjib; Chen, Luonan; Zhu, Hailong

    2014-01-01

    Over the past decade, advancements in next-generation sequencing technology have placed personalized genomic medicine on the horizon. Determining whether disease-associated mutations in complex diseases are pathogenic or neutral remains a major task, and is especially difficult in the structural context because the required experiments are time consuming and expensive. Among the various disease-causing mutations, single nucleotide polymorphisms (SNPs) play a vital role in defining an individual's susceptibility to disease and drug response. Understanding the genotype-phenotype relationship through SNPs is the first and most important step in drug research and development. Detailed understanding of the effect of SNPs on patient drug response is a key factor in the establishment of personalized medicine. In this paper, we present a computational pipeline for SNP-centred study of anaplastic lymphoma kinase (ALK) by the application of in silico prediction methods, molecular docking, and molecular dynamics simulation approaches. The combination of computational methods provides a way to understand how deleterious mutations alter protein drug targets and eventually lead to variable patient drug response. We hope this rapid and cost-effective pipeline will also serve as a bridge connecting clinicians and in silico resources in tailoring treatments to a patient's specific genotype.

  15. Edge-SIFT: discriminative binary descriptor for scalable partial-duplicate mobile search.

    PubMed

    Zhang, Shiliang; Tian, Qi; Lu, Ke; Huang, Qingming; Gao, Wen

    2013-07-01

    As the basis of large-scale partial duplicate visual search on mobile devices, image local descriptor is expected to be discriminative, efficient, and compact. Our study shows that the popularly used histogram-based descriptors, such as scale invariant feature transform (SIFT) are not optimal for this task. This is mainly because histogram representation is relatively expensive to compute on mobile platforms and loses significant spatial clues, which are important for improving discriminative power and matching near-duplicate image patches. To address these issues, we propose to extract a novel binary local descriptor named Edge-SIFT from the binary edge maps of scale- and orientation-normalized image patches. By preserving both locations and orientations of edges and compressing the sparse binary edge maps with a boosting strategy, the final Edge-SIFT shows strong discriminative power with compact representation. Furthermore, we propose a fast similarity measurement and an indexing framework with flexible online verification. Hence, the Edge-SIFT allows an accurate and efficient image search and is ideal for computation sensitive scenarios such as a mobile image search. Experiments on a large-scale dataset manifest that the Edge-SIFT shows superior retrieval accuracy to Oriented BRIEF (ORB) and is superior to SIFT in the aspects of retrieval precision, efficiency, compactness, and transmission cost.

  16. Network portal: a database for storage, analysis and visualization of biological networks

    PubMed Central

    Turkarslan, Serdar; Wurtmann, Elisabeth J.; Wu, Wei-Ju; Jiang, Ning; Bare, J. Christopher; Foley, Karen; Reiss, David J.; Novichkov, Pavel; Baliga, Nitin S.

    2014-01-01

    The ease of generating high-throughput data has enabled investigations into organismal complexity at the systems level through the inference of networks of interactions among the various cellular components (genes, RNAs, proteins and metabolites). The wider scientific community, however, currently has limited access to tools for network inference, visualization and analysis because these tasks often require advanced computational knowledge and expensive computing resources. We have designed the network portal (http://networks.systemsbiology.net) to serve as a modular database for the integration of user uploaded and public data, with inference algorithms and tools for the storage, visualization and analysis of biological networks. The portal is fully integrated into the Gaggle framework to seamlessly exchange data with desktop and web applications and to allow the user to create, save and modify workspaces, and it includes social networking capabilities for collaborative projects. While the current release of the database contains networks for 13 prokaryotic organisms from diverse phylogenetic clades (4678 co-regulated gene modules, 3466 regulators and 9291 cis-regulatory motifs), it will be rapidly populated with prokaryotic and eukaryotic organisms as relevant data become available in public repositories and through user input. The modular architecture, simple data formats and open API support community development of the portal. PMID:24271392

  17. Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis.

    PubMed

    Bonham-Carter, Oliver; Steele, Joe; Bastola, Dhundy

    2014-11-01

    Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base-base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel-Ziv techniques from data compression. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
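    To illustrate the word-frequency flavour of these methods, the sketch below computes k-mer counts for two sequences and the basic D2 statistic (the inner product of their count vectors). The sequences and the choice of k are arbitrary examples.

        from collections import Counter

        def kmer_counts(seq, k=3):
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        def d2(seq_a, seq_b, k=3):
            """Basic D2 statistic: inner product of the two k-mer count vectors."""
            a, b = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
            return sum(a[w] * b[w] for w in a.keys() & b.keys())

        print(d2("ACGTACGTGG", "ACGTTTACGA"))   # -> 7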

  18. VCSim3: a VR simulator for cardiovascular interventions.

    PubMed

    Korzeniowski, Przemyslaw; White, Ruth J; Bello, Fernando

    2018-01-01

    Effective and safe performance of cardiovascular interventions requires excellent catheter/guidewire manipulation skills. These skills are currently mainly gained through an apprenticeship on real patients, which may not be safe or cost-effective. Computer simulation offers an alternative for core skills training. However, replicating the physical behaviour of real instruments navigated through blood vessels is a challenging task. We have developed VCSim3, a virtual reality simulator for cardiovascular interventions. The simulator leverages an inextensible Cosserat rod to model virtual catheters and guidewires. Their mechanical properties were optimized with respect to their real counterparts scanned in a silicone phantom using X-ray CT imaging. The instruments are manipulated via a VSP haptic device. Supporting solutions such as fluoroscopic visualization, contrast flow propagation, cardiac motion, balloon inflation, and stent deployment enable performing a complete angioplasty procedure. We present detailed results of simulation accuracy of the virtual instruments, along with their computational performance. In addition, the results of a preliminary face and content validation study conducted with a group of 17 interventional radiologists are given. VR simulation of cardiovascular procedures can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues or requiring expensive animal or cadaver facilities. VCSim3 is still a prototype, yet the initial results indicate that it provides promising foundations for further development.

  19. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well-understood and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine-tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally-representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  20. QuickProbs—A Fast Multiple Sequence Alignment Algorithm Designed for Graphics Processors

    PubMed Central

    Gudyś, Adam; Deorowicz, Sebastian

    2014-01-01

    Multiple sequence alignment is a crucial task in a number of biological analyses like secondary structure prediction, domain searching, phylogeny, etc. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In the paper we present QuickProbs, a variant of MSAProbs customised for graphics processors. We selected the two most time-consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrices calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on a quad-core PC equipped with a high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than the original CPU-parallel MSAProbs. Additional tests performed on several protein families from the Pfam database give an overall speed-up of 6.7. Compared to other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at a similar speed. Additionally, we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors. PMID:24586435

  1. 29 CFR 778.313 - Computing overtime pay under the Act for employees compensated on task basis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Computing overtime pay under the Act for employees compensated on task basis. 778.313 Section 778.313 Labor Regulations Relating to Labor (Continued) WAGE AND... TO REGULATIONS OVERTIME COMPENSATION Special Problems "Task" Basis of Payment § 778.313 Computing...

  2. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  3. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  4. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  5. Computer-assisted coding and clinical documentation: first things first.

    PubMed

    Tully, Melinda; Carmichael, Angela

    2012-10-01

    Computer-assisted coding tools have the potential to drive improvements in seven areas: transparency of coding; productivity (generally by 20 to 25 percent for inpatient claims); accuracy (by improving specificity of documentation); cost containment (by reducing overtime expenses, audit fees, and denials); compliance; efficiency; and consistency.

  6. Reinforcement learning in computer vision

    NASA Astrophysics Data System (ADS)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, various complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that, when solving computer vision tasks, we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as the processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.

  7. The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.

    PubMed

    Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli

    2009-11-18

    We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.

  8. Multi-Attribute Task Battery - Applications in pilot workload and strategic behavior research

    NASA Technical Reports Server (NTRS)

    Arnegard, Ruth J.; Comstock, J. R., Jr.

    1991-01-01

    The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.

  9. The multi-attribute task battery for human operator workload and strategic behavior research

    NASA Technical Reports Server (NTRS)

    Comstock, J. Raymond, Jr.; Arnegard, Ruth J.

    1992-01-01

    The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.

  10. Ergonomic assessment for the task of repairing computers in a manufacturing company: A case study.

    PubMed

    Maldonado-Macías, Aidé; Realyvásquez, Arturo; Hernández, Juan Luis; García-Alcaraz, Jorge

    2015-01-01

    Manufacturing industry workers who repair computers may be exposed to ergonomic risk factors. This project analyzes the tasks involved in the computer repair process to (1) find the risk level for musculoskeletal disorders (MSDs) and (2) propose ergonomic interventions to address any ergonomic issues. Work procedures and main body postures were video recorded and analyzed using task analysis, the Rapid Entire Body Assessment (REBA) postural method, and biomechanical analysis. High risk for MSDs was found in every subtask using REBA. Although biomechanical analysis found an acceptable center-of-mass displacement during tasks, a hazardous level of compression on the lower back was detected during transportation of the computer. This assessment found ergonomic risks mainly in the trunk, arm/forearm, and legs; the neck and hand/wrist were also compromised. Opportunities for ergonomic analyses and interventions in the design and execution of computer repair tasks are discussed.

  11. MIDAS: Regionally linear multivariate discriminative statistical mapping.

    PubMed

    Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos

    2018-07-01

    Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. Critically, MIDAS efficiently assesses the statistical significance of the derived statistic by analytically approximating its null distribution without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves relatively higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.
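    A heavily simplified, one-dimensional sketch of the regional discriminative idea follows (an illustration only, not the released MIDAS software): a linear discriminant is fitted in every sliding neighborhood, and each position accumulates the discriminant weights of all neighborhoods containing it. The neighborhood radius and the use of scikit-learn's LDA are assumptions, and the analytic null-distribution approximation described above is omitted.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def regional_discriminative_map(X, y, radius=2):
            """X: (subjects, positions) data, y: binary group labels.

            Returns a per-position statistic that sums, over all neighborhoods
            containing a position, the discriminant weight assigned to it.
            """
            n_sub, n_pos = X.shape
            stat = np.zeros(n_pos)
            for c in range(radius, n_pos - radius):
                idx = np.arange(c - radius, c + radius + 1)     # local neighborhood
                lda = LinearDiscriminantAnalysis()
                lda.fit(X[:, idx], y)
                stat[idx] += np.abs(lda.coef_[0])               # accumulate weights
            return stat

        # Toy usage: simulated "atrophy" in the middle of the signal for group 1.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 60))
        y = np.repeat([0, 1], 20)
        X[y == 1, 28:32] -= 1.0
        print(np.argsort(regional_discriminative_map(X, y))[-5:])   # most informative positions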

  12. Health literacy and task environment influence parents' burden for data entry on child-specific health information: randomized controlled trial.

    PubMed

    Porter, Stephen C; Guo, Chao-Yu; Bacic, Janine; Chan, Eugenia

    2011-01-26

    Health care systems increasingly rely on patients' data entry efforts to organize and assist in care delivery through health information exchange. We sought to determine (1) the variation in burden imposed on parents by data entry efforts across paper-based and computer-based environments, and (2) the impact, if any, of parents' health literacy on the task burden. We completed a randomized controlled trial of parent-completed data entry tasks. Parents of children with attention deficit hyperactivity disorder (ADHD) were randomized based on the Test of Functional Health Literacy in Adults (TOFHLA) to either a paper-based or computer-based environment for entry of health information on their children. The primary outcome was the National Aeronautics and Space Administration Task Load Index (TLX) total weighted score. We screened 271 parents: 194 (71.6%) were eligible, and 180 of these (92.8%) constituted the study cohort. We analyzed 90 participants from each arm. Parents who completed information tasks on paper reported a higher task burden than those who worked in the computer environment: mean (SD) TLX scores were 22.8 (20.6) for paper and 16.3 (16.1) for computer. Assignment to the paper environment conferred a significant risk of higher task burden (F(1,178) = 4.05, P = .046). Adequate literacy was associated with lower task burden (decrease in burden score of 1.15 SD, P = .003). After adjusting for relevant child and parent factors, parents' TOFHLA score (beta = -.02, P = .02) and task environment (beta = .31, P = .03) remained significantly associated with task burden. A tailored computer-based environment provided an improved task experience for data entry compared to the same tasks completed on paper. Health literacy was inversely related to task burden.

  13. The ability of non-computer tasks to increase biomechanical exposure variability in computer-intensive office work.

    PubMed

    Barbieri, Dechristian França; Srinivasan, Divya; Mathiassen, Svend Erik; Nogueira, Helen Cristina; Oliveira, Ana Beatriz

    2015-01-01

    Postures and muscle activity in the upper body were recorded from 50 academic office workers during 2 hours of normal work, categorised by observation into computer work (CW) and three non-computer (NC) tasks (NC seated work, NC standing/walking work and breaks). NC tasks differed significantly in exposures from CW, with standing/walking NC tasks representing the largest contrasts for most of the exposure variables. For the majority of workers, exposure variability was larger in their present job than in CW alone, as measured by the job variance ratio (JVR), i.e. the ratio between min-min variabilities in the job and in CW. Calculations of JVRs for simulated jobs containing different proportions of CW showed that variability could, indeed, be increased by redistributing available tasks, but that substantial increases could only be achieved by introducing more vigorous tasks in the job, in this case illustrated by cleaning.
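    As an illustration of the job variance ratio (a reading of the metric on made-up numbers, not the study's analysis code), the sketch below takes minute-to-minute variability as the standard deviation of minute-level exposure means and divides the value for the whole job by the value for computer work alone.

        import numpy as np

        def minute_means(samples, samples_per_minute=60):
            """Collapse a 1-D exposure recording into minute-level means."""
            n = len(samples) // samples_per_minute * samples_per_minute
            return samples[:n].reshape(-1, samples_per_minute).mean(axis=1)

        def job_variance_ratio(job_exposure, cw_exposure, samples_per_minute=60):
            """JVR: minute-to-minute variability in the whole job over that in CW."""
            sd_job = minute_means(job_exposure, samples_per_minute).std(ddof=1)
            sd_cw = minute_means(cw_exposure, samples_per_minute).std(ddof=1)
            return sd_job / sd_cw

        # Toy example: adding varied non-computer tasks raises the ratio above 1.
        rng = np.random.default_rng(1)
        cw = rng.normal(10, 1, size=60 * 60)        # 60 min of computer work
        nc = rng.normal(25, 5, size=60 * 30)        # 30 min of standing/walking work
        print(job_variance_ratio(np.concatenate([cw, nc]), cw))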

  14. Hybrid techniques for the digital control of mechanical and optical systems

    NASA Astrophysics Data System (ADS)

    Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido

    2004-07-01

    One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is computing power, a direct consequence of the increasing complexity of the digital algorithms necessary for control signal generation. For this specific task, many specialised non-standard real-time architectures have been developed, often very expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution is the integration of the real-time and off-line architectures into a hybrid control system built from standard, available components, combining the perfect data synchronization provided by real-time systems with the large computing power available on PC-based systems. Such integration may be achieved by linking the two architectures through a standard Ethernet network, whose data transfer speed has been increasing rapidly in recent years, using the TCP/IP and UDP protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performance. Finally, we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3 m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.

  15. Quantum Search in Hilbert Space

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    A proposed quantum-computing algorithm would perform a search for an item of information in a database stored in a Hilbert-space memory structure. The algorithm is intended to make it possible to search relatively quickly through a large database under conditions in which available computing resources would otherwise be considered inadequate to perform such a task. The algorithm would apply, more specifically, to a relational database in which information would be stored in a set of N complex orthonormal vectors, each of N dimensions (where N can be exponentially large). Each vector would constitute one row of a unitary matrix, from which one would derive the Hamiltonian operator (and hence the evolutionary operator) of a quantum system. In other words, all the stored information would be mapped onto a unitary operator acting on a quantum state that would represent the item of information to be retrieved. Then one could exploit quantum parallelism: one could pose all search queries simultaneously by performing a quantum measurement on the system. In so doing, one would effectively solve the search problem in one computational step. One could exploit the direct- and inner-product decomposability of the unitary matrix to make the dimensionality of the memory space exponentially large by use of only linear resources. However, inasmuch as the necessary preprocessing (the mapping of the stored information into a Hilbert space) could be exponentially expensive, the proposed algorithm would likely be most beneficial in applications in which the resources available for preprocessing were much greater than those available for searching.
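    A purely classical NumPy emulation can make the storage scheme tangible (this is an illustration, not a quantum implementation): the stored records are the orthonormal rows of a unitary matrix, and a query reduces to inner products whose squared magnitudes play the role of measurement probabilities.

        import numpy as np

        # The database rows form a unitary matrix U, i.e. N orthonormal complex
        # vectors of dimension N.  Q from a full QR factorization of a random
        # square matrix is unitary, so its rows are orthonormal.
        rng = np.random.default_rng(0)
        N = 8
        A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        U, _ = np.linalg.qr(A)

        def query(U, psi):
            """Probability of retrieving each stored record for a query state psi.

            The probability of record i is |<u_i, psi>|^2, computed directly here.
            """
            psi = psi / np.linalg.norm(psi)
            amplitudes = U.conj() @ psi          # inner products with each stored row
            return np.abs(amplitudes) ** 2

        # A query equal to stored record 3 is retrieved with probability ~1.
        print(np.round(query(U, U[3]), 3))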

  16. SMV⊥: Simplex of maximal volume based upon the Gram-Schmidt process

    NASA Astrophysics Data System (ADS)

    Salazar-Vazquez, Jairo; Mendez-Vazquez, Andres

    2015-10-01

    In recent years, different algorithms for Hyperspectral Image (HI) analysis have been introduced. The high spectral resolution of these images makes it possible to develop algorithms for target detection, material mapping, and material identification for applications in agriculture, security and defense, industry, etc. Therefore, from a computer science point of view, there is a fertile field of research for improving and developing algorithms in HI analysis. In some applications, the spectral pixels of an HI can be classified using laboratory spectral signatures. Nevertheless, for many others, there is not enough prior information or spectral signatures available, making any analysis a difficult task. One of the most popular algorithms for HI analysis is N-FINDR, because it is easy to understand and provides a way to unmix the original HI into the respective material compositions. However, N-FINDR is computationally expensive and its performance depends on a random initialization process. This paper proposes a novel idea to reduce the complexity of N-FINDR by implementing a bottom-up approach based on an observation from linear algebra and the use of the Gram-Schmidt process. Accordingly, the Simplex of Maximal Volume Perpendicular (SMV⊥) algorithm is proposed for fast endmember extraction in hyperspectral imagery. This novel algorithm has complexity O(n) with respect to the number of pixels. In addition, the evidence shows that SMV⊥ finds a simplex of larger volume and has lower computational time complexity than other popular algorithms on synthetic and real scenarios.
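    To convey the flavour of a Gram-Schmidt-based, bottom-up endmember search (a generic sketch, not the published SMV⊥ code), the routine below repeatedly picks the pixel spectrum with the largest residual after projection onto the span of the endmembers selected so far; each round costs a single pass over the pixels.

        import numpy as np

        def gram_schmidt_endmembers(pixels, p):
            """pixels: (n_pixels, n_bands) array; p: number of endmembers to extract.

            Greedy selection: start with the strongest pixel, then repeatedly add
            the pixel whose residual after projection onto the current orthonormal
            basis (a Gram-Schmidt step) is largest.
            """
            X = np.asarray(pixels, dtype=float)
            norms = np.linalg.norm(X, axis=1)
            selected = [int(np.argmax(norms))]
            basis = [X[selected[0]] / norms[selected[0]]]
            for _ in range(1, p):
                B = np.stack(basis)                    # (k, n_bands) orthonormal rows
                residual = X - (X @ B.T) @ B           # remove span of current basis
                idx = int(np.argmax(np.linalg.norm(residual, axis=1)))
                selected.append(idx)
                v = residual[idx]
                basis.append(v / np.linalg.norm(v))
            return selected

        # Toy cube: 3 distinct "materials" plus noisy mixtures of them; the pure
        # spectra are appended at indices 1000-1002.
        rng = np.random.default_rng(0)
        E = rng.uniform(0, 1, size=(3, 50))
        abund = rng.dirichlet(np.ones(3), size=1000)
        cube = abund @ E + 0.01 * rng.normal(size=(1000, 50))
        print(gram_schmidt_endmembers(np.vstack([cube, E]), p=3))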

  17. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    PubMed

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence each task has to be assigned to the most appropriate VM at the initial placement itself. Practically, the arriving jobs consist of multiple interdependent tasks, and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with the existing methods.
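    A minimal sketch of weighted round-robin placement that accounts for VM capacity and task length follows (an illustration of the general idea, not the authors' improved algorithm); the MIPS ratings and task lengths are made-up example values.

        from itertools import cycle

        def weighted_round_robin(vm_mips, task_lengths):
            """Assign tasks to VMs in proportion to VM capacity (MIPS).

            vm_mips: list of VM speeds; task_lengths: list of task sizes (MI).
            Returns a list mapping each task index to a VM index.
            """
            base = min(vm_mips)
            weights = [max(1, round(m / base)) for m in vm_mips]     # slots per cycle
            schedule = [vm for vm, w in enumerate(weights) for _ in range(w)]
            rr = cycle(schedule)
            return [next(rr) for _ in task_lengths]

        def estimated_finish_times(assignment, vm_mips, task_lengths):
            """Per-VM completion time if tasks run back-to-back on their VM."""
            load = [0.0] * len(vm_mips)
            for task, vm in enumerate(assignment):
                load[vm] += task_lengths[task] / vm_mips[vm]
            return load

        vm_mips = [1000, 2000, 4000]             # heterogeneous VM capacities
        tasks = [500, 800, 1200, 300, 900, 1500, 700, 400]
        plan = weighted_round_robin(vm_mips, tasks)
        print(plan, estimated_finish_times(plan, vm_mips, tasks))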

  18. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    PubMed Central

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence each task has to be assigned to the most appropriate VM at the initial placement itself. Practically, the arriving jobs consist of multiple interdependent tasks, and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with the existing methods. PMID:26955656

  19. Women and Computers: Effects of Stereotype Threat on Attribution of Failure

    ERIC Educational Resources Information Center

    Koch, Sabine C.; Muller, Stephanie M.; Sieverding, Monika

    2008-01-01

    This study investigated whether stereotype threat can influence women's attributions of failure in a computer task. Male and female college-age students (n = 86, 16-21 years old) from Germany were asked to work on a computer task and were told beforehand that in this task, either (a) men usually perform better than women do (negative threat…

  20. Measurement and Evidence of Computer-Based Task Switching and Multitasking by "Net Generation" Students

    ERIC Educational Resources Information Center

    Judd, Terry; Kennedy, Gregor

    2011-01-01

    Logs of on-campus computer and Internet usage were used to conduct a study of computer-based task switching and multitasking by undergraduate medical students. A detailed analysis of over 6000 individual sessions revealed that while a majority of students engaged in both task switching and multitasking behaviours, they did so less frequently than…

  1. Diagnosing Pre-Service Science Teachers' Understanding of Chemistry Concepts by Using Computer-Mediated Predict-Observe-Explain Tasks

    ERIC Educational Resources Information Center

    Sesn, Burcin Acar

    2013-01-01

    The purpose of this study was to investigate pre-service science teachers' understanding of surface tension, cohesion and adhesion forces by using computer-mediated predict-observe-explain tasks. 22 third-year pre-service science teachers participated in this study. Three computer-mediated predict-observe-explain tasks were developed and applied…

  2. Report of the Task Force on Computer Charging.

    ERIC Educational Resources Information Center

    Computer Co-ordination Group, Ottawa (Ontario).

    The objectives of the Task Force on Computer Charging as approved by the Committee of Presidents of Universities of Ontario were: (1) to identify alternative methods of costing computing services; (2) to identify alternative methods of pricing computing services; (3) to develop guidelines for the pricing of computing services; (4) to identify…

  3. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... expense). Do not use the clause when the only deliverable items are computer software or computer software... architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013 with its Alternate I in... Software Previously Delivered to the Government, in solicitations when the resulting contract will require...

  4. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... expense). Do not use the clause when the only deliverable items are computer software or computer software... architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013 with its Alternate I in... Software Previously Delivered to the Government, in solicitations when the resulting contract will require...

  5. A COMPUTATIONALLY EFFICIENT HYBRID APPROACH FOR DYNAMIC GAS/AEROSOL TRANSFER IN AIR QUALITY MODELS. (R826371C005)

    EPA Science Inventory

    Dynamic mass transfer methods have been developed to better describe the interaction of the aerosol population with semi-volatile species such as nitrate, ammonia, and chloride. Unfortunately, these dynamic methods are computationally expensive. Assumptions are often made to r...

  6. Looking At Display Technologies

    ERIC Educational Resources Information Center

    Bull, Glen; Bull, Gina

    2005-01-01

    A projection system in a classroom with an Internet connection provides a window on the world. Until recently, projectors were expensive and difficult to maintain. Technological advances have resulted in solid-state projectors that require little maintenance and cost no more than a computer. Adding a second or third computer to a classroom…

  7. Application of the graphics processor unit to simulate a near field diffraction

    NASA Astrophysics Data System (ADS)

    Zinchik, Alexander A.; Topalov, Oleg K.; Muzychenko, Yana B.

    2017-06-01

    For many years, computer modeling programs have been used for lecture demonstrations. Most of the existing commercial software, such as Virtual Lab from the LightTrans GmbH company, is quite expensive and has surplus capabilities for educational tasks. The difficulty of diffraction demonstrations in the near zone is due to the large amount of calculation required to obtain the two-dimensional distribution of the amplitude and phase. To date, there are no demonstrations that allow showing the resulting distribution of amplitude and phase without a noticeable time delay. Even when Fast Fourier Transform (FFT) algorithms are used, the diffraction calculation in the near zone takes tens of seconds for input complex amplitude distributions larger than 2000 × 2000 pixels. Our program selects the appropriate propagation operator from a prescribed set of operators, including spectrum-of-plane-waves propagation and Rayleigh-Sommerfeld propagation (using convolution). After implementation, we compare the calculation time for near-field diffraction on the GPU and the CPU, showing that using the GPU to calculate the diffraction pattern in the near zone does increase the overall speed of the algorithm for images of 2048 × 2048 sampling points and larger. The modules are implemented as separate dynamic-link libraries and can be used for lecture demonstrations, workshops, self-study, and by students in solving various problems such as the phase retrieval task.
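    The spectrum-of-plane-waves propagator mentioned above fits in a few lines of NumPy; this is a generic angular-spectrum sketch with arbitrary example values for wavelength, pixel pitch, and propagation distance, not the authors' GPU implementation.

        import numpy as np

        def angular_spectrum_propagate(u0, wavelength, dx, z):
            """Propagate a complex field u0 (N x N, sample pitch dx) a distance z."""
            n = u0.shape[0]
            fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)           # drop evanescent waves
            return np.fft.ifft2(np.fft.fft2(u0) * H)

        # Example: near-field diffraction of a square aperture under plane-wave illumination.
        n, dx, wavelength = 1024, 10e-6, 633e-9           # 10 um pitch, HeNe laser
        u0 = np.zeros((n, n), dtype=complex)
        u0[n // 2 - 50:n // 2 + 50, n // 2 - 50:n // 2 + 50] = 1.0
        u_z = angular_spectrum_propagate(u0, wavelength, dx, z=0.05)
        intensity = np.abs(u_z) ** 2                      # pattern to display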

  8. An evaluation method of computer usability based on human-to-computer information transmission model.

    PubMed

    Ogawa, K

    1992-01-01

    This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI), and the computer independent information transmission rate (RCI). The method utilizes the RDI and RCI rates to evaluate the relative usability of software and device operations on different computer systems. Experiments with a graphical information input task on three different systems confirm that the method offers an efficient way of determining computer usability.
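    The two rates are simple ratios of transmitted information to completion time; the snippet below, using purely hypothetical numbers, shows how RDI and RCI would be computed and compared across two systems.

        def usability_rates(di_bits, ci_bits, completion_time_s):
            """Device-independent and computer-independent information transmission rates."""
            rdi = di_bits / completion_time_s     # RDI = DI / T  (bits per second)
            rci = ci_bits / completion_time_s     # RCI = CI / T  (bits per second)
            return rdi, rci

        # Hypothetical measurements for the same graphical input task on two systems.
        for name, di, ci, t in [("system A", 420.0, 300.0, 95.0),
                                ("system B", 420.0, 300.0, 130.0)]:
            rdi, rci = usability_rates(di, ci, t)
            print(f"{name}: RDI = {rdi:.2f} bit/s, RCI = {rci:.2f} bit/s")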

  9. Large-scale expensive black-box function optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Bailey, William; Couët, Benoît

    2012-09-01

    This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
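    A minimal sketch of an iterative proxy-based loop of this kind follows (a generic surrogate scheme, not the authors' reservoir workflow): fit a Gaussian RBF surrogate to all points evaluated so far, pick the candidate with the best surrogate value, evaluate it expensively, and refit.

        import numpy as np

        def fit_rbf(X, y, eps=1.0):
            """Fit Gaussian RBF weights by solving the interpolation system."""
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            Phi = np.exp(-(eps * d) ** 2)
            w = np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), y)
            return lambda Z: np.exp(-(eps * np.linalg.norm(
                Z[:, None, :] - X[None, :, :], axis=2)) ** 2) @ w

        def surrogate_minimize(f, bounds, n_init=10, n_iter=30, n_cand=2000, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            X = rng.uniform(lo, hi, size=(n_init, len(lo)))
            y = np.array([f(x) for x in X])               # expensive evaluations
            for _ in range(n_iter):
                model = fit_rbf(X, y)
                cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
                x_new = cand[np.argmin(model(cand))]      # cheap proxy search
                X = np.vstack([X, x_new])
                y = np.append(y, f(x_new))                # one expensive call per iteration
            return X[np.argmin(y)], y.min()

        # Toy "expensive" function standing in for a reservoir simulator.
        best_x, best_y = surrogate_minimize(lambda x: np.sum((x - 0.3) ** 2),
                                            bounds=[(0, 1)] * 4)
        print(best_x, best_y)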

  10. Use of less expensive cigarettes in six cities in China: findings from the International Tobacco Control (ITC) China Survey

    PubMed Central

    Hyland, Andrew; Fong, Geoffrey T; Jiang, Yuan; Elton-Marshall, Tara

    2010-01-01

    Objective The existence of less expensive cigarettes in China may undermine public health. The aim of the current study is to examine the use of less expensive cigarettes in six cities in China. Methods Data was from the baseline wave of the International Tobacco Control (ITC) China Survey of 4815 adult urban smokers in 6 cities, conducted between April and August 2006. The percentage of smokers who reported buying less expensive cigarettes (the lowest pricing tertile within each city) at last purchase was computed. Complex sample multivariate logistic regression models were used to identify factors associated with use of less expensive cigarettes. The association between the use of less expensive cigarettes and intention to quit smoking was also examined. Results Smokers who reported buying less expensive cigarettes at last purchase tended to be older, heavier smokers, to have lower education and income, and to think more about the money spent on smoking in the last month. Smokers who bought less expensive cigarettes at the last purchase and who were less knowledgeable about the health harm of smoking were less likely to intend to quit smoking. Conclusions Measures need to be taken to minimise the price differential among cigarette brands and to increase smokers' health knowledge, which may in turn increase their intentions to quit. PMID:20935199
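    The "lowest pricing tertile within each city" computation can be illustrated with a few lines of pandas on made-up purchase records (not ITC survey data):

        import pandas as pd

        # Illustrative purchase records (city, price paid per pack).
        df = pd.DataFrame({
            "city": ["Beijing", "Beijing", "Beijing", "Shanghai", "Shanghai", "Shanghai"],
            "price": [3.0, 5.0, 12.0, 4.0, 8.0, 20.0],
        })

        # Flag purchases falling in the lowest pricing tertile within each city.
        cutoff = df.groupby("city")["price"].transform(lambda s: s.quantile(1 / 3))
        df["less_expensive"] = df["price"] <= cutoff
        print(df)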

  11. The Advantages of Using Technology in Second Language Education: Technology Integration in Foreign Language Teaching Demonstrates the Shift from a Behavioral to a Constructivist Learning Approach

    ERIC Educational Resources Information Center

    Wang, Li

    2005-01-01

    With the advent of networked computers and Internet technology, computer-based instruction has been widely used in language classrooms throughout the United States. Computer technologies have dramatically changed the way people gather information, conduct research and communicate with others worldwide. Considering the tremendous startup expenses,…

  12. Job Management and Task Bundling

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Jansen, Gustav R.; McElvain, Kenneth; Walker-Loud, André

    2018-03-01

    High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, that can intelligently backfill to consume idle time without substantial changes to users' current workflows or executables.
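    To illustrate the bundling idea (a toy greedy packer, not METAQ or mpi_jm themselves), the sketch below groups small tasks into bundles that each fit within a fixed machine-size node allocation.

        def bundle_tasks(task_nodes, nodes_per_job):
            """Greedily pack tasks (each needing task_nodes[i] nodes) into large jobs.

            Returns a list of bundles; each bundle's total node request fits within
            a single allocation of nodes_per_job nodes (first-fit decreasing).
            """
            bundles = []
            for task, need in sorted(enumerate(task_nodes), key=lambda t: -t[1]):
                for bundle in bundles:
                    if bundle["used"] + need <= nodes_per_job:
                        bundle["tasks"].append(task)
                        bundle["used"] += need
                        break
                else:
                    bundles.append({"tasks": [task], "used": need})
            return bundles

        # Example: many small lattice-QCD measurement tasks packed into 512-node jobs.
        tasks = [32, 64, 128, 32, 256, 64, 32, 128, 64, 256]
        for i, b in enumerate(bundle_tasks(tasks, nodes_per_job=512)):
            print(f"job {i}: tasks {b['tasks']} using {b['used']} nodes")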

  13. Configuration-Control Scheme Copes With Singularities

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Colbaugh, Richard D.

    1993-01-01

    Improved configuration-control scheme for robotic manipulator having redundant degrees of freedom suppresses large joint velocities near singularities, at expense of small trajectory errors. Provides means to enforce order of priority of tasks assigned to robot. Basic concept of configuration control of redundant robot described in "Increasing The Dexterity Of Redundant Robots" (NPO-17801).

  14. Device-Task Fidelity and Transfer of Training: Aircraft Cockpit Procedures Training.

    ERIC Educational Resources Information Center

    Prophet, Wallace W.; Boyd, H. Alton

    An evaluation was made of the training effectiveness of two cockpit procedures training devices, differing greatly in physical fidelity and cost, for use on the ground for a twin-engine, turboprop, fixed-wing aircraft. One group of students received training in cockpit procedures in a relatively expensive, sophisticated, computerized trainer,…

  15. Enterprise Resource Planning Systems: Assessment of Risk Factors by California Community College Leaders

    ERIC Educational Resources Information Center

    Valente, Mario Manuel

    2011-01-01

    Most California Community Colleges have chosen to purchase and implement a Management Information Systems software solution also known as an Enterprise Resource Planning (ERP) system in order to monitor, control, and automate their administrative tasks. ERP implementations are complex, expensive, high profile, and therefore high risk. To reduce…

  16. Optimal Designs for Performance Assessments: The Subject Factor.

    ERIC Educational Resources Information Center

    Parkes, Jay

    Much speculation abounds concerning how expensive performance assessments are or are going to be. Recent projections indicate that, in order to achieve an acceptably high generalizability coefficient, many additional tasks may need to be added, which will enlarge costs. Such projections are, to some degree, correct, and to some degree simplistic.…

  17. Does the medium matter? The interaction of task type and technology on group performance and member reactions.

    PubMed

    Straus, S G; McGrath, J E

    1994-02-01

    The authors investigated the hypothesis that as group tasks pose greater requirements for member interdependence, communication media that transmit more social context cues will foster group performance and satisfaction. Seventy-two 3-person groups of undergraduate students worked in either computer-mediated or face-to-face meetings on 3 tasks with increasing levels of interdependence: an idea-generation task, an intellective task, and a judgment task. Results showed few differences between computer-mediated and face-to-face groups in the quality of the work completed but large differences in productivity favoring face-to-face groups. Analysis of productivity and of members' reactions supported the predicted interaction of tasks and media, with greater discrepancies between media conditions for tasks requiring higher levels of coordination. Results are discussed in terms of the implications of using computer-mediated communications systems for group work.

  18. The effect of psychosocial stress on muscle activity during computer work: Comparative study between desktop computer and mobile computing products.

    PubMed

    Taib, Mohd Firdaus Mohd; Bahn, Sangwoo; Yun, Myung Hwan

    2016-06-27

    The popularity of mobile computing products is well known. Thus, it is crucial to evaluate their contribution to musculoskeletal disorders during computer usage in both comfortable and stressful environments. This study explores the effect on muscle activity of using different computer products while performing tasks designed to induce psychosocial stress. Fourteen male subjects performed computer tasks: sixteen combinations of four different computer products with four different tasks used to induce stress. Electromyography for four muscles in the forearm, shoulder, and neck regions, together with task performance, was recorded. The increase in trapezius muscle activity depended on the task used to induce the stress, with a higher level of stress producing a greater increase. However, this relationship was not found in the other three muscles. In addition, compared to desktop and laptop use, the lowest activity for all muscles was obtained during the use of a tablet or smart phone. The best net performance was obtained in a comfortable environment. However, under stressful conditions, the best performance can be obtained using the device that a user is most comfortable or experienced with. Different computer products and different levels of stress play a large role in muscle activity during computer work. Both of these factors must be taken into account in order to reduce the occurrence of musculoskeletal disorders or problems.

  19. Categories of Computer Use and Their Relationships with Attitudes toward Computers.

    ERIC Educational Resources Information Center

    Mitra, Anandra

    1998-01-01

    Analysis of attitude and use questionnaires completed by undergraduates (n = 1,444) at Wake Forest University determined that computers were used most frequently for word processing. Other uses were e-mail for task and non-task activities and mathematical and statistical computation. Results suggest that the level of computer use was related to…

  20. Exploring methodological frameworks for a mental task-based near-infrared spectroscopy brain-computer interface.

    PubMed

    Weyand, Sabine; Takehara-Nishiuchi, Kaori; Chau, Tom

    2015-10-30

    Near-infrared spectroscopy (NIRS) brain-computer interfaces (BCIs) enable users to interact with their environment using only cognitive activities. This paper presents the results of a comparison of four methodological frameworks used to select a pair of tasks to control a binary NIRS-BCI; specifically, three novel personalized task paradigms and the state-of-the-art prescribed task framework were explored. Three types of personalized task selection approaches were compared, including: user-selected mental tasks using weighted slope scores (WS-scores), user-selected mental tasks using pair-wise accuracy rankings (PWAR), and researcher-selected mental tasks using PWAR. These paradigms, along with the state-of-the-art prescribed mental task framework, where mental tasks are selected based on the most commonly used tasks in literature, were tested by ten able-bodied participants who took part in five NIRS-BCI sessions. The frameworks were compared in terms of their accuracy, perceived ease-of-use, computational time, user preference, and length of training. Most notably, researcher-selected personalized tasks resulted in significantly higher accuracies, while user-selected personalized tasks resulted in significantly higher perceived ease-of-use. It was also concluded that PWAR minimized the amount of data that needed to be collected; while, WS-scores maximized user satisfaction and minimized computational time. In comparison to the state-of-the-art prescribed mental tasks, our findings show that overall, personalized tasks appear to be superior to prescribed tasks with respect to accuracy and perceived ease-of-use. The deployment of personalized rather than prescribed mental tasks ought to be considered and further investigated in future NIRS-BCI studies. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Cloud computing can simplify HIT infrastructure management.

    PubMed

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  2. Novel Analog For Muscle Deconditioning

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Bloomberg, Jacob

    2010-01-01

    Existing models of muscle deconditioning are cumbersome and expensive (ex: bedrest). We propose a new model utilizing a weighted suit to manipulate strength, power or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and post-flight astronaut performance data using the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (most difficult task) showed thresholds of: leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/ BW of 79 J/kg, knee extension (KE) isokinetic/BW of 6 Nm/Kg and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of strength / BW has promise as an appropriate analog for spaceflight-induced loss of muscle function for predicting occupational task performance and establishing operationally relevant exercise targets.
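    As an illustration of how such a threshold could be located with a piecewise ("broken-stick") regression, the sketch below grid-searches the breakpoint of a two-segment linear fit on made-up data; it is not the study's analysis and the numbers are invented.

        import numpy as np

        def broken_stick_threshold(x, y, n_grid=200):
            """Fit y ~ piecewise-linear in x with one breakpoint; return the best breakpoint."""
            best = (np.inf, None)
            for bp in np.linspace(np.quantile(x, 0.1), np.quantile(x, 0.9), n_grid):
                # design matrix: intercept, slope below bp, extra slope above bp
                A = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0.0)])
                coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                sse = np.sum((A @ coef - y) ** 2)
                if sse < best[0]:
                    best = (sse, bp)
            return best[1]

        # Made-up data: task time rises sharply once strength/BW drops below ~18 N/kg.
        rng = np.random.default_rng(0)
        strength = rng.uniform(5, 40, 150)                   # e.g. LP force / body weight
        time = 30 + np.maximum(18 - strength, 0) * 4 + rng.normal(0, 2, 150)
        print(broken_stick_threshold(strength, time))        # expected to be near 18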

  3. Novel Analog For Muscle Deconditioning

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Ploutz-Snyder, Robert; Bloomberg, Jacob

    2011-01-01

    Existing models (such as bed rest) of muscle deconditioning are cumbersome and expensive. We propose a new model utilizing a weighted suit to manipulate strength, power, or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and post-flight astronaut performance data for the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (most difficult task) showed thresholds of leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, isokinetic knee extension (KE)/BW of 6 Nm/kg, and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of relative strength has promise as an appropriate analog for spaceflight-induced loss of muscle function, for predicting occupational task performance and establishing operationally relevant strength thresholds.

  4. The Use of Human Factors Simulation to Conserve Operations Expense

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Dischinger, H. Charles, Jr.; Wu, Hsin-I.

    1999-01-01

    In preparation for on-orbit operations, NASA performs experiments aboard a KC-135, which flies parabolic maneuvers resulting in short periods of microgravity. While considerably less expensive than space operations, the use of this aircraft is costly. Simulation of tasks to be performed during the flight can allow the participants to optimize hardware configuration and crew interaction prior to flight. This presentation will demonstrate the utility of such simulation. The experiment simulated is the fluid dynamics of epoxy components which may be used in a patch kit in the event of meteoroid damage to the International Space Station. Improved configuration and operational efficiencies were reflected in early and increased data collection.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
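    The center-selection step can be illustrated with a small non-dominated sort on the two stated objectives, the function value (minimized) and the minimum distance to previously evaluated points (maximized); this is a schematic sketch, not the published SOP code.

        import numpy as np

        def select_centers(X, y, P):
            """Pick up to P centers by non-dominated sorting on (value, -min distance).

            X: (n, d) previously evaluated points; y: (n,) expensive function values.
            Both objectives are treated as minimization: y, and negative min distance.
            """
            D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            np.fill_diagonal(D, np.inf)
            obj = np.column_stack([y, -D.min(axis=1)])     # minimize both columns
            remaining = list(range(len(X)))
            centers = []
            while remaining and len(centers) < P:
                front = [i for i in remaining
                         if not any(np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
                                    for j in remaining if j != i)]
                centers.extend(front)
                remaining = [i for i in remaining if i not in front]
            return centers[:P]

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(30, 2))
        y = np.sum((X - 0.5) ** 2, axis=1)                 # stand-in for expensive values
        print(select_centers(X, y, P=4))                   # centers for parallel perturbation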

  6. Computer Assistance in Information Work. Part I: Conceptual Framework for Improving the Computer/User Interface in Information Work. Part II: Catalog of Acceleration, Augmentation, and Delegation Functions in Information Work.

    ERIC Educational Resources Information Center

    Paisley, William; Butler, Matilda

    This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…

  7. All framing effects are not created equal: Low convergent validity between two classic measurements of framing

    PubMed Central

    Zhen, Shanshan; Yu, Rongjun

    2016-01-01

    Human risk-taking attitudes can be influenced by two logically equivalent but descriptively different frames, termed the framing effect. The classic hypothetical vignette-based task (Asian disease problem) and a recently developed reward-based gambling task have been widely used to assess individual differences in the framing effect. Previous studies treat framing bias as a stable trait that has genetic basis. However, these two paradigms differ in terms of task domain (loss vs. gain) and task context (vignette-based vs. reward-based) and the convergent validity of these measurements remains unknown. Here, we developed a vignette-based task and a gambling task in both gain and loss domains and tested correlations of the framing effect among these tasks in 159 young adults. Our results revealed no significant correlation between the vignette-based task in the loss domain and the gambling task in the gain domain, indicating low convergent validity. The current findings raise the question of how to measure the framing effect precisely, especially in individual difference studies using large samples and expensive neuroscience methods. Our results suggest that the framing effect is influenced by both task domain and task context and future research should be cautious about the operationalization of the framing effect. PMID:27436680

  8. All framing effects are not created equal: Low convergent validity between two classic measurements of framing.

    PubMed

    Zhen, Shanshan; Yu, Rongjun

    2016-07-20

    Human risk-taking attitudes can be influenced by two logically equivalent but descriptively different frames, termed the framing effect. The classic hypothetical vignette-based task (Asian disease problem) and a recently developed reward-based gambling task have been widely used to assess individual differences in the framing effect. Previous studies treat framing bias as a stable trait that has genetic basis. However, these two paradigms differ in terms of task domain (loss vs. gain) and task context (vignette-based vs. reward-based) and the convergent validity of these measurements remains unknown. Here, we developed a vignette-based task and a gambling task in both gain and loss domains and tested correlations of the framing effect among these tasks in 159 young adults. Our results revealed no significant correlation between the vignette-based task in the loss domain and the gambling task in the gain domain, indicating low convergent validity. The current findings raise the question of how to measure the framing effect precisely, especially in individual difference studies using large samples and expensive neuroscience methods. Our results suggest that the framing effect is influenced by both task domain and task context and future research should be cautious about the operationalization of the framing effect.

  9. Assessment of Computer and Information Literacy in ICILS 2013: Do Different Item Types Measure the Same Construct?

    ERIC Educational Resources Information Center

    Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia

    2017-01-01

    The combination of different item formats is found quite often in large scale assessments, and analyses on the dimensionality often indicate multi-dimensionality of tests regarding the task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…

  10. The Chemical Engineer's Toolbox: A Glass Box Approach to Numerical Problem Solving

    ERIC Educational Resources Information Center

    Coronell, Daniel G.; Hariri, M. Hossein

    2009-01-01

    Computer programming in undergraduate engineering education all too often begins and ends with the freshman programming course. Improvements in computer technology and curriculum revision have improved this situation, but often at the expense of the students' learning due to the use of commercial "black box" software. This paper describes the…

  11. Superintendents' Perceptions of 1:1 Initiative Implementation and Sustainability

    ERIC Educational Resources Information Center

    Cole, Bobby Virgil, Jr.; Sauers, Nicholas J.

    2018-01-01

    One of the fastest growing, most discussed, and most expensive technology initiatives over the last decade has been one-to-one (1:1) computing initiatives. The purpose of this study was to examine key factors that influenced implementing and sustaining 1:1 computing initiatives from the perspective of school superintendents. Nine superintendents…

  12. Data Bases at a State Institution--Costs, Uses and Needs. AIR Forum Paper 1978.

    ERIC Educational Resources Information Center

    McLaughlin, Gerald W.

    The cost-benefit of administrative data at a state college is placed in perspective relative to the institutional involvement in computer use. The costs of computer operations, personnel, and peripheral equipment expenses related to instruction are analyzed. Data bases and systems support institutional activities, such as registration, and aid…

  13. Film Library Information Management System.

    ERIC Educational Resources Information Center

    Minnella, C. Vincent; And Others

    The computer program described not only allows the user to quickly determine rental sources for a particular film title, but also to select the least expensive of those sources. This program, developed at SUNY Cortland's Sperry Learning Resources Center and Computer Center, is designed to maintain accurate data on rental and purchase films in both…

  14. Costs and benefits of integrating information between the cerebral hemispheres: a computational perspective.

    PubMed

    Belger, A; Banich, M T

    1998-07-01

    Because interaction of the cerebral hemispheres has been found to aid task performance under demanding conditions, the present study examined how this effect is moderated by computational complexity, the degree of lateralization for a task, and individual differences in asymmetric hemispheric activation (AHA). Computational complexity was manipulated across tasks either by increasing the number of inputs to be processed or by increasing the number of steps to a decision. Comparison of within- and across-hemisphere trials indicated that the size of the between-hemisphere advantage increased as a function of task complexity, except for a highly lateralized rhyme decision task that can only be performed by the left hemisphere. Measures of individual differences in AHA revealed that when task demands and an individual's AHA both load on the same hemisphere, the ability to divide the processing between the hemispheres is limited. Thus, interhemispheric division of processing improves performance at higher levels of computational complexity only when the required operations can be divided between the hemispheres.

  15. Data Structures for Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahan, Simon

    As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations, once thought to require expensive hardware designs and/or complex, special-purpose programming, may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose “latency-tolerant” programming framework. One important application developed from the ideas underlying this framework is graph database technology supporting social network pattern matching, used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy Laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.

  16. A new parallel DNA algorithm to solve the task scheduling problem based on inspired computational model.

    PubMed

    Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei

    2017-12-01

    As a promising approach to solving computationally intractable problems, the method based on DNA computing is an emerging research area spanning mathematics, computer science and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and finds the minimum execution time of the last finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm to solve the task scheduling problem by basic DNA molecular operations. We then design flexible-length DNA strands to represent the elements of the allocation matrix, carry out appropriate biological experimental operations, and obtain solutions of the task scheduling problem within a proper length range with less than O(n²) time complexity. Copyright © 2017. Published by Elsevier B.V.
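
    For readers unfamiliar with the underlying problem, the task scheduling (makespan minimization) instance tackled by the DNA procedure can be stated compactly in conventional code. The sketch below is a classical greedy longest-processing-time heuristic in Python, included only to make the problem concrete; it is not the parallel DNA algorithm of the paper and returns an approximate, not exact, minimum.

```python
import heapq

def lpt_schedule(durations, m):
    """Longest-Processing-Time-first heuristic: assign each job to the least-loaded
    of m machines, longest jobs first; returns (makespan, job -> machine map)."""
    loads = [(0.0, i) for i in range(m)]          # (current load, machine id)
    heapq.heapify(loads)
    assignment = {}
    for job, d in sorted(enumerate(durations), key=lambda x: -x[1]):
        load, machine = heapq.heappop(loads)      # least-loaded machine so far
        assignment[job] = machine
        heapq.heappush(loads, (load + d, machine))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

# Toy instance: 6 jobs on 3 machines.
print(lpt_schedule([5, 3, 8, 2, 7, 4], m=3))
```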

  17. Health Literacy and Task Environment Influence Parents' Burden for Data Entry on Child-Specific Health Information: Randomized Controlled Trial

    PubMed Central

    Guo, Chao-Yu; Bacic, Janine; Chan, Eugenia

    2011-01-01

    Background Health care systems increasingly rely on patients’ data entry efforts to organize and assist in care delivery through health information exchange. Objectives We sought to determine (1) the variation in burden imposed on parents by data entry efforts across paper-based and computer-based environments, and (2) the impact, if any, of parents’ health literacy on the task burden. Methods We completed a randomized controlled trial of parent-completed data entry tasks. Parents of children with attention deficit hyperactivity disorder (ADHD) were randomized based on the Test of Functional Health Literacy in Adults (TOFHLA) to either a paper-based or computer-based environment for entry of health information on their children. The primary outcome was the National Aeronautics and Space Administration Task Load Index (TLX) total weighted score. Results We screened 271 parents: 194 (71.6%) were eligible, and 180 of these (92.8%) constituted the study cohort. We analyzed 90 participants from each arm. Parents who completed information tasks on paper reported a higher task burden than those who worked in the computer environment: mean (SD) TLX scores were 22.8 (20.6) for paper and 16.3 (16.1) for computer. Assignment to the paper environment conferred a significant risk of higher task burden (F1,178 = 4.05, P = .046). Adequate literacy was associated with lower task burden (decrease in burden score of 1.15 SD, P = .003). After adjusting for relevant child and parent factors, parents’ TOFHLA score (beta = -.02, P = .02) and task environment (beta = .31, P = .03) remained significantly associated with task burden. Conclusions A tailored computer-based environment provided an improved task experience for data entry compared to the same tasks completed on paper. Health literacy was inversely related to task burden. Trial registration Clinicaltrials.gov NCT00543257; http://www.clinicaltrials.gov/ct2/show/NCT00543257 (Archived by WebCite at http://www.webcitation.org/5vUVH2DYR) PMID:21269990

  18. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  19. Modeling choice and reaction time during arbitrary visuomotor learning through the coordination of adaptive working memory and reinforcement learning

    PubMed Central

    Viejo, Guillaume; Khamassi, Mehdi; Brovelli, Andrea; Girard, Benoît

    2015-01-01

    Current learning theory provides a comprehensive description of how humans and other animals learn, and places behavioral flexibility and automaticity at the heart of adaptive behaviors. However, the computations supporting the interactions between goal-directed and habitual decision-making systems are still poorly understood. Previous functional magnetic resonance imaging (fMRI) results suggest that the brain hosts complementary computations that may differentially support goal-directed and habitual processes in the form of a dynamical interplay rather than a serial recruitment of strategies. To better elucidate the computations underlying flexible behavior, we develop a dual-system computational model that can predict both performance (i.e., participants' choices) and modulations in reaction times during learning of a stimulus–response association task. The habitual system is modeled with a simple Q-Learning algorithm (QL). For the goal-directed system, we propose a new Bayesian Working Memory (BWM) model that searches for information in the history of previous trials in order to minimize Shannon entropy. We propose a model for QL and BWM coordination such that the expensive memory manipulation is under control of, among others, the level of convergence of the habitual learning. We test the ability of QL or BWM alone to explain human behavior, and compare them with the performance of model combinations, to highlight the need for such combinations to explain behavior. Two of the tested combination models are derived from the literature, while the last is our new proposal. In conclusion, all subjects were better explained by model combinations, and the majority of them were explained by our new coordination proposal. PMID:26379518
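
    The habitual component described above is a standard delta-rule value learner. As a rough illustration (not the authors' full QL + BWM coordination model), the following numpy sketch implements Q-learning with softmax action selection on a hypothetical stimulus-response association task; the learning rate, inverse temperature and reward scheme are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

n_stimuli, n_actions = 3, 4
alpha, beta = 0.1, 3.0            # learning rate and inverse temperature (assumed values)
Q = np.zeros((n_stimuli, n_actions))

def choose(stimulus):
    """Softmax action selection over the Q-values for the presented stimulus."""
    p = np.exp(beta * Q[stimulus])
    p /= p.sum()
    return rng.choice(n_actions, p=p)

# Arbitrary stimulus-response mapping the learner has to discover (hypothetical).
correct = {0: 2, 1: 0, 2: 3}

for trial in range(300):
    s = rng.integers(n_stimuli)
    a = choose(s)
    r = 1.0 if a == correct[s] else 0.0
    Q[s, a] += alpha * (r - Q[s, a])   # delta-rule update of the habitual system

print(np.argmax(Q, axis=1))            # learned response for each stimulus
```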

  20. Automatic Domain Adaptation of Word Sense Disambiguation Based on Sublanguage Semantic Schemata Applied to Clinical Narrative

    ERIC Educational Resources Information Center

    Patterson, Olga

    2012-01-01

    Domain adaptation of natural language processing systems is challenging because it requires human expertise. While manual effort is effective in creating a high quality knowledge base, it is expensive and time consuming. Clinical text adds another layer of complexity to the task due to privacy and confidentiality restrictions that hinder the…

  1. What if Best Practice Is Too Expensive? Feedback on Oral Presentations and Efficient Use of Resources

    ERIC Educational Resources Information Center

    Leger, Lawrence A.; Glass, Karligash; Katsiampa, Paraskevi; Liu, Shibo; Sirichand, Kavita

    2017-01-01

    We evaluate feedback methods for oral presentations used in training non-quantitative research skills (literature review and various associated tasks). Training is provided through a credit-bearing module taught to MSc students of banking, economics and finance in the UK. Monitoring oral presentations and providing "best practice"…

  2. Effectiveness of an Alternative Delivery System for In-Service Vocational Teacher Education. Final Report.

    ERIC Educational Resources Information Center

    Richardson, Donald L.; And Others

    The project was designed to provide vocational teacher educators in Colorado with an alternative delivery system for inservice vocational teacher education which would overcome barriers of distance (and difficult winter travel), expense, and low student density. A task force composed of staff members of the State Board for Community Colleges and…

  3. Study to design and develop remote manipulator system. [computer simulation of human performance

    NASA Technical Reports Server (NTRS)

    Hill, J. W.; Mcgovern, D. E.; Sword, A. J.

    1974-01-01

    Modeling of human performance in remote manipulation tasks is reported by automated procedures using computers to analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and in some cases the trajectory and velocity of the hand itself. In this way the operator's strategies with different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Some progress is described in obtaining a set of standard tasks and difficulty measures for evaluating manipulator performance.

  4. Motivation and Performance within a Collaborative Computer-Based Modeling Task: Relations between Students' Achievement Goal Orientation, Self-Efficacy, Cognitive Processing, and Achievement

    ERIC Educational Resources Information Center

    Sins, Patrick H. M.; van Joolingen, Wouter R.; Savelsbergh, Elwin R.; van Hout-Wolters, Bernadette

    2008-01-01

    Purpose of the present study was to test a conceptual model of relations among achievement goal orientation, self-efficacy, cognitive processing, and achievement of students working within a particular collaborative task context. The task involved a collaborative computer-based modeling task. In order to test the model, group measures of…

  5. Modeling Cognitive Strategies during Complex Task Performing Process

    ERIC Educational Resources Information Center

    Mazman, Sacide Guzin; Altun, Arif

    2012-01-01

    The purpose of this study is to examine individuals' computer-based complex task performance processes and strategies in order to determine the reasons for failure, using the cognitive task analysis method and cued retrospective think-aloud with eye movement data. The study group was five senior students from Computer Education and Instructional Technologies…

  6. Automated Instructional Monitors for Complex Operational Tasks. Final Report.

    ERIC Educational Resources Information Center

    Feurzeig, Wallace

    A computer-based instructional system is described which incorporates diagnosis of students' difficulties in acquiring complex concepts and skills. A computer automatically generated a simulated display. It then monitored and analyzed a student's work in the performance of assigned training tasks. Two major tasks were studied. The first,…

  7. Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI

    USGS Publications Warehouse

    Donato, David I.

    2017-01-01

    In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
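
    The worker-initiated "bag of tasks" pattern evaluated in the study can be sketched in a few lines. The example below uses mpi4py rather than the paper's C reference implementation, and the task payload is a hypothetical placeholder; it only illustrates the on-demand pull protocol, in which idle workers request work from a coordinator holding the bag.

```python
# Minimal on-demand ("pull") bag-of-tasks scheme, assuming mpi4py is installed.
# Run with e.g.: mpiexec -n 4 python bag_of_tasks.py
from mpi4py import MPI

TAG_WORK, TAG_DONE, TAG_STOP = 1, 2, 3

def run_model(task):
    # Placeholder for one independent modelling run or Monte Carlo trial.
    return task * task

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                   # coordinator holds the bag of tasks
    tasks = list(range(100))
    status = MPI.Status()
    active = size - 1
    while active > 0:
        comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        worker = status.Get_source()            # any message means "give me work"
        if tasks:
            comm.send(tasks.pop(), dest=worker, tag=TAG_WORK)
        else:
            comm.send(None, dest=worker, tag=TAG_STOP)
            active -= 1
else:                                           # workers pull work whenever idle
    comm.send(None, dest=0, tag=TAG_DONE)       # initial request for work
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(run_model(task), dest=0, tag=TAG_DONE)
```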

  8. Correlation energy extrapolation by many-body expansion

    DOE PAGES

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...

    2017-01-09

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines a MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly fewer computational resources.

  9. Correlation energy extrapolation by many-body expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines a MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly fewer computational resources.

  10. Parallel processing using an optical delay-based reservoir computer

    NASA Astrophysics Data System (ADS)

    Van der Sande, Guy; Nguimdo, Romain Modeste; Verschaffelt, Guy

    2016-04-01

    Delay systems subject to delayed optical feedback have recently shown great potential in solving computationally hard tasks. By implementing a neuro-inspired computational scheme relying on the transient response to optical data injection, high processing speeds have been demonstrated. However, the delay-based reservoir computing systems discussed in the literature are designed by coupling many different stand-alone components, which leads to bulky, non-monolithic systems that lack long-term stability. Here we numerically investigate the possibility of implementing reservoir computing schemes based on semiconductor ring lasers (SRLs). Semiconductor ring lasers are semiconductor lasers whose cavity consists of a ring-shaped waveguide. SRLs are highly integrable and scalable, making them ideal candidates for key components in photonic integrated circuits. SRLs can generate light in two counterpropagating directions, between which bistability has been demonstrated. We demonstrate that two independent machine learning tasks, even with input data signals of a different nature, can be computed simultaneously using a single photonic nonlinear node, relying on the parallelism offered by photonics. We illustrate the performance on simultaneous chaotic time series prediction and nonlinear channel equalization. We take advantage of the two directional modes to process the individual tasks: each directional mode processes one task, which mitigates possible crosstalk between the tasks. Our results indicate that prediction/classification errors comparable to state-of-the-art performance can be obtained, even with noise, despite the two tasks being computed simultaneously. We also find that good performance is obtained for both tasks over a broad range of parameters. The results are discussed in detail in [Nguimdo et al., IEEE Trans. Neural Netw. Learn. Syst. 26, pp. 3301-3307, 2015].
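
    The directional-mode parallelism above is specific to the photonic hardware, but the underlying reservoir computing recipe (a fixed random recurrent system plus a trained linear readout) is easy to illustrate in software. The sketch below is a generic discrete-time echo state network in numpy, with hypothetical sizes and a toy one-step prediction task; it is not a model of the delay-based or ring-laser systems discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_reservoir = 1, 200
spectral_radius, leak = 0.9, 0.3        # assumed hyperparameters for the sketch

# Random input and recurrent weights; rescale recurrent weights to the target spectral radius.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with an input sequence u (T x n_inputs); return states (T x n_reservoir)."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
T = 2000
u = np.sin(0.2 * np.arange(T))[:, None] + 0.01 * rng.standard_normal((T, 1))
y = u[1:]                                # targets: next sample
X = run_reservoir(u[:-1])

# Train only the linear readout by ridge regression (the reservoir itself stays fixed).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
print("train MSE:", float(np.mean((X @ W_out - y) ** 2)))
```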

  11. 45 CFR 2507.5 - How does the Corporation process requests for records?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... compelled to create new records or do statistical computations. For example, the Corporation is not required... feasible way to respond to a request. The Corporation is not required to perform any research for the... duplicating all of them. For example, if it requires less time and expense to provide a computer record as a...

  12. 26 CFR 1.179-5 - Time and manner of making election.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... desktop computer costing $1,500. On Taxpayer's 2003 Federal tax return filed on April 15, 2004, Taxpayer elected to expense under section 179 the full cost of the laptop computer and the full cost of the desktop... provided by the Internal Revenue Code, the regulations under the Code, or other guidance published in the...

  13. Innovative Leaders Take the Phone and Run: Profiles of Four Trailblazing Programs

    ERIC Educational Resources Information Center

    Norris, Cathleen; Soloway, Elliot; Menchhofer, Kyle; Bauman, Billie Diane; Dickerson, Mindy; Schad, Lenny; Tomko, Sue

    2010-01-01

    While the Internet changed everything, mobile will change everything squared. The Internet is just a roadway, and computers--the equivalent of cars for the Internet--have been expensive. The keepers of the information roadway--the telecommunication companies--will give one a "computer," such as cell phone, mobile learning device, or MLD,…

  14. 75 FR 25161 - Defense Federal Acquisition Regulation Supplement; Presumption of Development at Private Expense

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-07

    ... asserted restrictions on technical data and computer software. DATES: Comments on the proposed rule should... restrictions on technical data and computer software. More specifically, the proposed rule affects these...) items (as defined at 41 U.S.C. 431(c)). Since COTS items are a subtype of commercial items, this change...

  15. 17 CFR 240.17a-3 - Records to be made by certain exchange members, brokers and dealers.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... records) reflecting all assets and liabilities, income and expense and capital accounts. (3) Ledger..., and a record of the computation of aggregate indebtedness and net capital, as of the trial balance...) thereof shall make a record of the computation of aggregate indebtedness and net capital as of the trial...

  16. Application of Sequence Comparison Methods to Multisensor Data Fusion and Target Recognition

    DTIC Science & Technology

    1993-06-18

    linear comparison). A particularly attractive aspect of the proposed fusion scheme is that it has the potential to work for any object with (1...radar sensing is a historical custom - however, the reader should keep in mind that the fundamental issue in this research is to explore and exploit...reduce the computationally expensive need to compute partial derivatives. In usual practice, the computationally more attractive filter design is

  17. Psychology of computer use: XXXII. Computer screen-savers as distractors.

    PubMed

    Volk, F A; Halcomb, C G

    1994-12-01

    The differences in performance of 16 male and 16 female undergraduates on three cognitive tasks were investigated in the presence of visual distractors (computer-generated dynamic graphic images). These tasks included skilled and unskilled proofreading and listening comprehension. The visually demanding task of proofreading (skilled and unskilled) showed no significant decreases in performance in the distractor conditions. Results showed significant decrements, however, in performance on listening comprehension in at least one of the distractor conditions.

  18. A Single-Session Preliminary Evaluation of an Affordable BCI-Controlled Arm Exoskeleton and Motor-Proprioception Platform.

    PubMed

    Elnady, Ahmed Mohamed; Zhang, Xin; Xiao, Zhen Gang; Yong, Xinyi; Randhawa, Bubblepreet Kaur; Boyd, Lara; Menon, Carlo

    2015-01-01

    Traditional, hospital-based stroke rehabilitation can be labor-intensive and expensive. Furthermore, outcomes from rehabilitation are inconsistent across individuals and recovery is hard to predict. Given these uncertainties, numerous technological approaches have been tested in an effort to improve rehabilitation outcomes and reduce the cost of stroke rehabilitation. These techniques include brain-computer interface (BCI), robotic exoskeletons, functional electrical stimulation (FES), and proprioceptive feedback. However, to the best of our knowledge, no studies have combined all these approaches into a rehabilitation platform that facilitates goal-directed motor movements. Therefore, in this paper, we combined all these technologies to test the feasibility of using a BCI-driven exoskeleton with FES (robotic training device) to facilitate motor task completion among individuals with stroke. The robotic training device operated to assist a pre-defined goal-directed motor task. Because it is hard to predict who can utilize this type of technology, we considered whether the ability to adapt skilled movements with proprioceptive feedback would predict who could learn to control a BCI-driven robotic device. To accomplish this aim, we developed a motor task that requires proprioception for completion to assess motor-proprioception ability. Next, we tested the feasibility of robotic training system in individuals with chronic stroke (n = 9) and found that the training device was well tolerated by all the participants. Ability on the motor-proprioception task did not predict the time to completion of the BCI-driven task. Both participants who could accurately target (n = 6) and those who could not (n = 3), were able to learn to control the BCI device, with each BCI trial lasting on average 2.47 min. Our results showed that the participants' ability to use proprioception to control motor output did not affect their ability to use the BCI-driven exoskeleton with FES. Based on our preliminary results, we show that our robotic training device has potential for use as therapy for a broad range of individuals with stroke.

  19. A Single-Session Preliminary Evaluation of an Affordable BCI-Controlled Arm Exoskeleton and Motor-Proprioception Platform

    PubMed Central

    Elnady, Ahmed Mohamed; Zhang, Xin; Xiao, Zhen Gang; Yong, Xinyi; Randhawa, Bubblepreet Kaur; Boyd, Lara; Menon, Carlo

    2015-01-01

    Traditional, hospital-based stroke rehabilitation can be labor-intensive and expensive. Furthermore, outcomes from rehabilitation are inconsistent across individuals and recovery is hard to predict. Given these uncertainties, numerous technological approaches have been tested in an effort to improve rehabilitation outcomes and reduce the cost of stroke rehabilitation. These techniques include brain–computer interface (BCI), robotic exoskeletons, functional electrical stimulation (FES), and proprioceptive feedback. However, to the best of our knowledge, no studies have combined all these approaches into a rehabilitation platform that facilitates goal-directed motor movements. Therefore, in this paper, we combined all these technologies to test the feasibility of using a BCI-driven exoskeleton with FES (robotic training device) to facilitate motor task completion among individuals with stroke. The robotic training device operated to assist a pre-defined goal-directed motor task. Because it is hard to predict who can utilize this type of technology, we considered whether the ability to adapt skilled movements with proprioceptive feedback would predict who could learn to control a BCI-driven robotic device. To accomplish this aim, we developed a motor task that requires proprioception for completion to assess motor-proprioception ability. Next, we tested the feasibility of robotic training system in individuals with chronic stroke (n = 9) and found that the training device was well tolerated by all the participants. Ability on the motor-proprioception task did not predict the time to completion of the BCI-driven task. Both participants who could accurately target (n = 6) and those who could not (n = 3), were able to learn to control the BCI device, with each BCI trial lasting on average 2.47 min. Our results showed that the participants’ ability to use proprioception to control motor output did not affect their ability to use the BCI-driven exoskeleton with FES. Based on our preliminary results, we show that our robotic training device has potential for use as therapy for a broad range of individuals with stroke. PMID:25870554

  20. Real-Time Non-Intrusive Assessment of Viewing Distance during Computer Use.

    PubMed

    Argilés, Marc; Cardona, Genís; Pérez-Cabré, Elisabet; Pérez-Magrané, Ramon; Morcego, Bernardo; Gispets, Joan

    2016-12-01

    To develop and test the sensitivity of an ultrasound-based sensor to assess the viewing distance of visual display terminal operators in real-time conditions. A modified ultrasound sensor was attached to a computer display to assess viewing distance in real time. Sensor functionality was tested on a sample of 20 healthy participants while they conducted four 10-minute, randomly presented, typical computer tasks (a match-three puzzle game, a video documentary, a task requiring participants to complete a series of sentences, and a predefined internet search). The ultrasound sensor offered good measurement repeatability. Game, text completion, and web search tasks were conducted at shorter viewing distances (54.4 cm [95% CI 51.3-57.5 cm], 54.5 cm [95% CI 51.1-58.0 cm], and 54.5 cm [95% CI 51.4-57.7 cm], respectively) than the video task (62.3 cm [95% CI 58.9-65.7 cm]). Statistically significant differences were found between the video task and the other three tasks (all p < 0.05). The range of viewing distances (from 22 to 27 cm) was similar for all tasks (F = 0.996; p = 0.413). Real-time assessment of the viewing distance of computer users with a non-intrusive ultrasonic device disclosed a task-dependent pattern.

  1. Differences in the activation and co-activation ratios of the four subdivisions of trapezius between genders following a computer typing task.

    PubMed

    Szucs, Kimberly A; Molnar, Megan

    2017-04-01

    The aim of this study was to provide a description of gender differences in the activation patterns of the four subdivisions of the trapezius (clavicular, upper, middle, lower) following a 60 min computer work task. Surface EMG was collected from these subdivisions from 21 healthy subjects during bilateral arm elevation pre-/post-task. Subjects completed a standardized 60 min computer work task at a standard, ergonomic workstation. Normalized activation and activation ratios of each trapezius subdivision were compared between genders and condition with repeated measures ANOVAs. The interaction effect of Gender×Condition for upper trapezius % activation approached significance at p = 0.051, with males demonstrating greater activation post-task. The main effect of Condition was statistically significant for % activation of the middle and lower trapezius (p<0.05), with both muscles demonstrating increased activation post-task. There was a statistically significant interaction effect of Gender×Condition for the Middle Trapezius/Upper Trapezius ratio and a main effect of Condition for the Clavicular Trapezius/Upper Trapezius ratio, with a decreased ratio post-typing. Gender differences exist following 60 min of a low-force computer typing task. Imbalances in muscle activation and activation ratios following computer work may affect total shoulder kinematics and should be further explored. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    PubMed Central

    Schmidhuber, Jürgen

    2013-01-01

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require to achieve a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771

  3. Aviation Technician Training I and Task Analyses: Semester II. Field Review Copy.

    ERIC Educational Resources Information Center

    Upchurch, Richard

    This guide for aviation technician training begins with a course description, resource information, and a course outline. Tasks/competencies are categorized into 16 concept/duty areas: understanding technical symbols and abbreviations; understanding mathematical terms, symbols, and formulas; computing decimals; computing fractions; computing ratio…

  4. Lower cost offshore field development utilizing autonomous vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frisbie, F.R.; Vie, K.J.; Welch, D.W.

    1996-12-31

    The offshore oil and gas industry has the requirement to inspect offshore oil and gas pipelines for scour, corrosion and damage, as well as to inspect and intervene on satellite production facilities. This task is currently performed with Remotely Operated Vehicles (ROVs) operated from dynamically positioned (DP) offshore supply or diving support boats. Currently, these tasks are expensive due to the high day rates for DP ships and the slow, umbilical-impeded, 1-knot inspection rates of the tethered ROVs. Emerging Autonomous Undersea Vehicle (AUV) technologies offer opportunities to perform these same inspection tasks at 50-75% lower cost, with comparable or improved quality. The new generation LAPV (Linked Autonomous Power Vehicles) will operate from fixed facilities such as TLPs or FPFs and cover an operating field 10 km in diameter.

  5. An intelligent tutoring system for the investigation of high performance skill acquisition

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.; Herren, L. Tandy; Regian, J. Wesley

    1991-01-01

    The issue of training high performance skills is of increasing concern. These skills include tasks such as driving a car, playing the piano, and flying an aircraft. Traditionally, the training of high performance skills has been accomplished through the use of expensive, high-fidelity, 3-D simulators, and/or on-the-job training using the actual equipment. Such an approach to training is quite expensive. The design, implementation, and deployment of an intelligent tutoring system developed for the purpose of studying the effectiveness of skill acquisition using lower-cost, lower-physical-fidelity, 2-D simulation are described. Preliminary experimental results are quite encouraging, indicating that intelligent tutoring systems are a cost-effective means of training high performance skills.

  6. A comparison of symptoms after viewing text on a computer screen and hardcopy.

    PubMed

    Chu, Christina; Rosenfield, Mark; Portello, Joan K; Benzoni, Jaclyn A; Collier, Juanita D

    2011-01-01

    Computer vision syndrome (CVS) is a complex of eye and vision problems experienced during or related to computer use. Ocular symptoms may include asthenopia, accommodative and vergence difficulties and dry eye. CVS occurs in up to 90% of computer workers, and given the almost universal use of these devices, it is important to identify whether these symptoms are specific to computer operation, or are simply a manifestation of performing a sustained near-vision task. This study compared ocular symptoms immediately following a sustained near task. 30 young, visually-normal subjects read text aloud either from a desktop computer screen or a printed hardcopy page at a viewing distance of 50 cm for a continuous 20 min period. Identical text was used in the two sessions, which was matched for size and contrast. Target viewing angle and luminance were similar for the two conditions. Immediately following completion of the reading task, subjects completed a written questionnaire asking about their level of ocular discomfort during the task. When comparing the computer and hardcopy conditions, significant differences in median symptom scores were reported with regard to blurred vision during the task (t = 147.0; p = 0.03) and the mean symptom score (t = 102.5; p = 0.04). In both cases, symptoms were higher during computer use. Symptoms following sustained computer use were significantly worse than those reported after hard copy fixation under similar viewing conditions. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will allow practitioners to optimize visual comfort and efficiency during computer operation.

  7. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  8. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  9. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  10. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  11. Pattern of Non-Task Interactions in Asynchronous Computer-Supported Collaborative Learning Courses

    ERIC Educational Resources Information Center

    Abedin, Babak; Daneshgar, Farhad; D'Ambra, John

    2014-01-01

    Despite the importance of the non-task interactions in computer-supported collaborative learning (CSCL) environments as emphasized in the literature, few studies have investigated online behavior of people in the CSCL environments. This paper studies the pattern of non-task interactions among postgraduate students in an Australian university. The…

  12. Strategy Generalization across Orientation Tasks: Testing a Computational Cognitive Model

    ERIC Educational Resources Information Center

    Gunzelmann, Glenn

    2008-01-01

    Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human…

  13. Learner Use of Holistic Language Units in Multimodal, Task-Based Synchronous Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Collentine, Karina

    2009-01-01

    Second language acquisition (SLA) researchers strive to understand the language and exchanges that learners generate in synchronous computer-mediated communication (SCMC). Doughty and Long (2003) advocate replacing open-ended SCMC with task-based language teaching (TBLT) design principles. Since most task-based SCMC (TB-SCMC) research addresses an…

  14. Physicians' perspectives of adopting computer-assisted navigation in orthopedic surgery.

    PubMed

    Hsu, Hui-Mei; Chang, I-Chiu; Lai, Ta-Wei

    2016-10-01

    Using a computer-assisted orthopedic navigation surgery system (CAOS) has many advantages, but its use is not mandatory during orthopedic surgery. Therefore, opinions obtained from clinical orthopedists who have used this system are valuable. This paper integrates the technology acceptance model and the theory of planned behavior to examine the determinants of continued CAOS use in order to facilitate user management. Opinions from orthopedists who had used a CAOS for at least two years were collected through a cross-sectional survey to verify the research framework. Follow-up interviews with an expert panel, based on their experiences with CAOS, were conducted to explain the impacts of the factors in the research framework. The results show that "perceived usefulness" and "facilitating condition" determine the intention to continue using CAOS, and "perceived usefulness" was driven by "complexity of task" and "social influence". Additionally, support in practice from high-level managers had an influence on orthopedists' satisfaction after using a CAOS. The aging population is accompanied by increasing requirements for medical care and attendant medical expenses, especially in total knee replacement. More precision and improved survivorship of patients' artificial joints are needed. This study offers suggestions for user management when obstacles are encountered in implementing a CAOS. Based on these findings, scientific and practical implications are then discussed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Large-scale virtual screening on public cloud resources with Apache Spark.

    PubMed

    Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola

    2017-01-01

    Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, by docking a publicly available target receptor against ~2.2 M compounds. The performance experiments show good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
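
    The trivially parallel structure of docking-based screening maps naturally onto Spark's RDD API. The fragment below is a minimal PySpark sketch of that pattern, with a hypothetical dock_score function and a toy compound list standing in for a real docking tool and library; the actual Spark-VS implementation is the one available from the GitHub repository cited above.

```python
# Minimal MapReduce-style screening sketch, assuming PySpark is available.
from pyspark.sql import SparkSession

def dock_score(smiles):
    # Placeholder for invoking docking software on one compound against the target receptor.
    return (smiles, float(len(smiles)))          # fake score, for illustration only

spark = SparkSession.builder.appName("docking-screen").getOrCreate()
sc = spark.sparkContext

compounds = ["CCO", "c1ccccc1", "CC(=O)O"]       # hypothetical molecular library
top_hits = (sc.parallelize(compounds, numSlices=3)   # partitions distributed to executors
              .map(dock_score)                       # trivially parallel docking step
              .takeOrdered(10, key=lambda x: x[1]))  # keep the best-ranked hits

print(top_hits)
spark.stop()
```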

  16. Probing Quark-Gluon-Plasma properties with a Bayesian model-to-data comparison

    NASA Astrophysics Data System (ADS)

    Cai, Tianji; Bernhard, Jonah; Ke, Weiyao; Bass, Steffen; Duke QCD Group Team

    2016-09-01

    Experiments at RHIC and the LHC study a special state of matter called the Quark Gluon Plasma (QGP), in which quarks and gluons roam freely, by colliding relativistic heavy ions. Given the transitory nature of the QGP, its properties can only be explored by comparing computational models of its formation and evolution to experimental data. The models fall, roughly speaking, into two categories: those solely using relativistic viscous hydrodynamics (pure-hydro models) and those that, in addition, couple to a microscopic Boltzmann transport for the later evolution of the hadronic decay products (hybrid models). Each of these models has multiple parameters that encode the physical properties we want to probe and that need to be calibrated to experimental data, a task which is computationally expensive but necessary for knowledge extraction and for determining the models' quality. Our group has developed an analysis technique based on Bayesian statistics to perform the model calibration and to extract probability distributions for each model parameter. Following previous work that applied the technique to the hybrid model, we now perform a similar analysis on a pure-hydro model and display the posterior distributions for the same set of model parameters. We also develop a set of criteria to assess the quality of the two models with respect to their ability to describe current experimental data. Funded by a Duke University Goldman Sachs Research Fellowship.

  17. Formal ontologies in biomedical knowledge representation.

    PubMed

    Schulz, S; Jansen, L

    2013-01-01

    Medical decision support and other intelligent applications in the life sciences depend on increasing amounts of digital information. Knowledge bases as well as formal ontologies are being used to organize biomedical knowledge and data. However, these two kinds of artefacts are not always clearly distinguished. Whereas the popular RDF(S) standard provides an intuitive triple-based representation, it is semantically weak. Description logics based ontology languages like OWL-DL carry a clear-cut semantics, but they are computationally expensive, and they are often misinterpreted to encode all kinds of statements, including those which are not ontological. We distinguish four kinds of statements needed to comprehensively represent domain knowledge: universal statements, terminological statements, statements about particulars and contingent statements. We argue that the task of formal ontologies is solely to represent universal statements, while the non-ontological kinds of statements can nevertheless be connected with ontological representations. To illustrate these four types of representations, we use a running example from parasitology. We finally formulate recommendations for semantically adequate ontologies that can efficiently be used as a stable framework for more context-dependent biomedical knowledge representation and reasoning applications like clinical decision support systems.
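
    The distinction between the four kinds of statements discussed above can be made concrete with a few triples. The fragment below uses rdflib (assumed available) and hypothetical parasitology terms purely for illustration; it is not taken from the article's running example.

```python
# Illustrative triples only; the example.org namespace and terms are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/parasitology#")
g = Graph()

# Universal (ontological) statement: every Plasmodium is a Parasite.
g.add((EX.Plasmodium, RDFS.subClassOf, EX.Parasite))
# Terminological statement: a preferred label for the class.
g.add((EX.Plasmodium, RDFS.label, Literal("Plasmodium")))
# Statement about a particular: this specimen is an instance of Plasmodium.
g.add((EX.specimen42, RDF.type, EX.Plasmodium))
# Contingent statement: this specimen was found in a given patient (not a universal truth).
g.add((EX.specimen42, EX.foundIn, EX.patientP1))

print(g.serialize(format="turtle"))
```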

  18. Network Inference via the Time-Varying Graphical Lasso

    PubMed Central

    Hallac, David; Park, Youngsuk; Boyd, Stephen; Leskovec, Jure

    2018-01-01

    Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable message-passing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability. PMID:29770256
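
    As a rough, much simpler stand-in for TVGL, one can fit an independent sparse inverse covariance per sliding window with scikit-learn's GraphicalLasso, as sketched below on synthetic data; the authors' method additionally couples neighbouring windows through a temporal penalty and solves the resulting problem with ADMM, which this baseline does not do.

```python
# Sliding-window graphical lasso baseline, assuming scikit-learn is installed.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
T, n_vars, window = 600, 5, 100
X = rng.standard_normal((T, n_vars))           # stand-in for the observed time series

networks = []
for start in range(0, T - window + 1, window):
    model = GraphicalLasso(alpha=0.2).fit(X[start:start + window])
    precision = model.precision_               # sparse inverse covariance for this window
    networks.append(np.abs(precision) > 1e-3)  # edge = nonzero partial correlation

print(len(networks))
print(networks[0].astype(int))                 # adjacency (with self-loops) for the first window
```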

  19. Accelerating literature curation with text-mining tools: a case study of using PubTator to curate genes in PubMed abstracts

    PubMed Central

    Lu, Zhiyong

    2012-01-01

    Today’s biomedical research has become heavily dependent on access to the biological knowledge encoded in expert curated biological databases. As the volume of biological literature grows rapidly, it becomes increasingly difficult for biocurators to keep up with the literature because manual curation is an expensive and time-consuming endeavour. Past research has suggested that computer-assisted curation can improve efficiency, but few text-mining systems have been formally evaluated in this regard. Through participation in the interactive text-mining track of the BioCreative 2012 workshop, we developed PubTator, a PubMed-like system that assists with two specific human curation tasks: document triage and bioconcept annotation. On the basis of evaluation results from two external user groups, we find that the accuracy of PubTator-assisted curation is comparable with that of manual curation and that PubTator can significantly increase human curatorial speed. These encouraging findings warrant further investigation with a larger number of publications to be annotated. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/ PMID:23160414

  20. A Fast Goal Recognition Technique Based on Interaction Estimates

    NASA Technical Reports Server (NTRS)

    E-Martin, Yolanda; R-Moreno, Maria D.; Smith, David E.

    2015-01-01

    Goal Recognition is the task of inferring an actor's goals given some or all of the actor's observed actions. There is considerable interest in Goal Recognition for use in intelligent personal assistants, smart environments, intelligent tutoring systems, and monitoring user's needs. In much of this work, the actor's observed actions are compared against a generated library of plans. Recent work by Ramirez and Geffner makes use of AI planning to determine how closely a sequence of observed actions matches plans for each possible goal. For each goal, this is done by comparing the cost of a plan for that goal with the cost of a plan for that goal that includes the observed actions. This approach yields useful rankings, but is impractical for real-time goal recognition in large domains because of the computational expense of constructing plans for each possible goal. In this paper, we introduce an approach that propagates cost and interaction information in a plan graph, and uses this information to estimate goal probabilities. We show that this approach is much faster, but still yields high quality results.
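
    The cost-difference idea from Ramirez and Geffner can be illustrated with a toy calculation. In the sketch below the plan costs are invented and the Boltzmann (softmax) weighting is one common choice for turning cost differences into goal probabilities; the paper's contribution is to estimate such quantities from a plan graph instead of running a planner for each goal.

```python
import math

# Hypothetical costs per goal: (optimal plan cost, cost of a plan forced through the observations O).
plan_costs = {"goal_A": (10.0, 11.0), "goal_B": (10.0, 16.0), "goal_C": (12.0, 12.0)}
beta = 1.0   # rationality parameter (assumed)

# Goals whose plans barely change when constrained to include O get the most weight.
weights = {g: math.exp(-beta * (c_with_obs - c_free))
           for g, (c_free, c_with_obs) in plan_costs.items()}
Z = sum(weights.values())
posteriors = {g: w / Z for g, w in weights.items()}
print(posteriors)
```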

  1. Squared exponential covariance function for prediction of hydrocarbon in seabed logging application

    NASA Astrophysics Data System (ADS)

    Mukhtar, Siti Mariam; Daud, Hanita; Dass, Sarat Chandra

    2016-11-01

    Seabed Logging (SBL) technology has progressively emerged as one of the in-demand technologies in the Exploration and Production (E&P) industry. Hydrocarbon prediction in deep water areas is a crucial task for a driller in any oil and gas company, as drilling is very expensive. Simulation data generated by Computer Software Technology (CST) are used to predict the presence of hydrocarbon, where the models replicate the real SBL environment. These models indicate that hydrocarbon-filled reservoirs are more resistive than the surrounding water-filled sediments. As hydrocarbon depth increases, it becomes more challenging to differentiate data with and without hydrocarbon. MATLAB is used for data extraction and for the curve-fitting process using Gaussian processes (GP). GP methods can address both regression and classification problems; this work focuses only on the Gaussian process regression (GPR) problem. The most popular choice of covariance function for GPR is the squared exponential (SE), as it provides stable, probabilistic predictions on large amounts of data. Hence, SE is used to predict the presence or absence of hydrocarbon in the reservoir from the generated data.
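
    A minimal version of SE-kernel regression of the kind described here can be put together with scikit-learn, as sketched below; the offsets and responses are synthetic stand-ins for the CST simulation data, and the kernel hyperparameters are illustrative rather than those used in the study.

```python
# GPR with a squared exponential (RBF) kernel, assuming scikit-learn is installed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 30)[:, None]                            # e.g. source-receiver offset (km)
y = np.exp(-0.3 * X[:, 0]) + 0.01 * rng.standard_normal(30)    # synthetic response curve

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)           # squared exponential covariance
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-4, normalize_y=True).fit(X, y)

X_new = np.linspace(0, 10, 5)[:, None]
mean, std = gp.predict(X_new, return_std=True)                 # probabilistic prediction
print(np.round(mean, 3), np.round(std, 3))
```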

  2. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are fundamental clues for many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods, but these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single, possibly noisy, RGB image. The imaging depth information is used as an auxiliary input to help our model make better decisions.

  3. Two-voice fundamental frequency estimation

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain

    2002-05-01

    An algorithm is presented that estimates the fundamental frequencies of two concurrent voices or instruments. The algorithm models each voice as a periodic function of time, and jointly estimates both periods by cancellation according to a previously proposed method [de Cheveigné and Kawahara, Speech Commun. 27, 175-185 (1999)]. The new algorithm improves on the old in several respects: it allows an unrestricted search range, effectively avoids harmonic and subharmonic errors, is more accurate (it uses two-dimensional parabolic interpolation), and is computationally less costly. It remains subject to unavoidable errors when the periods are in certain simple ratios and the task is inherently ambiguous. The algorithm is evaluated on a small database including speech, singing voice, and instrumental sounds. It can be extended in several ways: to decide the number of voices, to handle amplitude variations, and to estimate more than two voices (at the expense of increased processing cost and decreased reliability). It makes no use of instrument models, learned or otherwise, although it could usefully be combined with such models. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
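
    The cancellation principle is easy to illustrate: each hypothesized period is removed with a difference (comb) filter, and the period pair minimizing the residual power wins. The brute-force Python sketch below omits the paper's normalization, parabolic interpolation, and efficiency refinements, and assumes the signal is much longer than the largest candidate period; all names are illustrative.

        import numpy as np

        def cancel(x, period):
            """Comb cancellation: subtract the signal delayed by one period."""
            return x[period:] - x[:-period]

        def joint_period_estimate(x, fs, pmin, pmax):
            """Grid search over period pairs, cascading cancellation of both voices."""
            best, best_pair = np.inf, (pmin, pmin)
            for t1 in range(pmin, pmax):
                r1 = cancel(x, t1)                   # remove voice 1
                for t2 in range(t1, pmax):
                    resid = cancel(r1, t2)           # then remove voice 2
                    power = np.mean(resid ** 2) / np.mean(x ** 2)
                    if power < best:
                        best, best_pair = power, (t1, t2)
            return fs / best_pair[0], fs / best_pair[1]   # the two F0 estimates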

  4. General Approach to Quantum Channel Impossibility by Local Operations and Classical Communication.

    PubMed

    Cohen, Scott M

    2017-01-13

    We describe a general approach to proving the impossibility of implementing a quantum channel by local operations and classical communication (LOCC), even with an infinite number of rounds, and find that this can often be demonstrated by solving a set of linear equations. The method also allows one to design a LOCC protocol to implement the channel whenever such a protocol exists in any finite number of rounds. Perhaps surprisingly, the computational expense for analyzing LOCC channels is not much greater than that for LOCC measurements. We apply the method to several examples, two of which provide numerical evidence that the set of quantum channels that are not LOCC is not closed and that there exist channels that can be implemented by LOCC either in one round or in three rounds that are on the boundary of the set of all LOCC channels. Although every LOCC protocol must implement a separable quantum channel, it is a very difficult task to determine whether or not a given channel is separable. Fortunately, prior knowledge that the channel is separable is not required for application of our method.

  5. Can genetic algorithms help virus writers reshape their creations and avoid detection?

    NASA Astrophysics Data System (ADS)

    Abu Doush, Iyad; Al-Saleh, Mohammed I.

    2017-11-01

    Different attack and defence techniques have evolved over time through the actions and reactions of the black-hat and white-hat communities. Encryption, polymorphism, metamorphism and obfuscation are among the techniques used by attackers to bypass security controls; pattern matching, algorithmic scanning, emulation and heuristics are used by the defence team. The antivirus (AV) is a vital security control used against a variety of threats. The AV mainly scans data against its database of virus signatures and flags a virus if a match is found. This paper seeks to find the minimal possible changes that can be made to a virus so that it appears normal when scanned by the AV. Brute-force search through all possible changes would be a computationally expensive task, so this paper instead applies a genetic algorithm to the problem. Our proposed algorithm is tested on seven different malware instances. The results show that in all the tested malware instances, only a small change to each instance was enough to bypass the AV.

  6. MONTANA PALLADIUM RESEARCH INITIATIVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, John; McCloskey, Jay; Douglas, Trevor

    2012-05-09

    Project Objective: The overarching objective of the Montana Palladium Research Initiative is to perform scientific research on the properties and uses of palladium in the context of the U.S. Department of Energy's Hydrogen, Fuel Cells and Infrastructure Technologies Program. The purpose of the research is to explore palladium as a possible alternative to platinum in hydrogen-economy applications. To achieve this objective, the Initiative's activities will focus on several cutting-edge research approaches across a range of disciplines, including metallurgy, biomimetics, instrumentation development, and systems analysis. Background: Platinum-group elements (PGEs) play significant roles in processing hydrogen, an element that shows high potential to address the need in the U.S. and the world for inexpensive, reliable, clean energy. Platinum, however, is a very expensive component of current and planned systems, so less-expensive alternatives that have similar physical properties are being sought. To this end, several tasks have been defined under the rubric of the Montana Palladium Research Initiative. This broad swath of activities will allow progress on several fronts. The membrane-related activities of Task 1 employ state-of-the-art and leading-edge technologies to develop new, ceramic-substrate metallic membranes for the production of high-purity hydrogen, and to develop techniques for the production of thin, defect-free platinum-group-element catalytic membranes for energy production and pollution control. The biomimetic work in Task 2 explores the use of substrate-attached hydrogen-producing enzymes and the encapsulation of palladium in virion-based protein coats to determine their utility for distributed hydrogen production. Task 3 involves developing laser-induced breakdown spectroscopy (LIBS) as a real-time, in situ diagnostic technique to characterize PGE nanoparticles for process monitoring and control. The systems engineering work in Task 4 will determine how fuel cells, taken as systems, behave over periods of time that should show how their reformers and other subsystems deteriorate with time.

  7. A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam

    In this paper, we introduce Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of work (tasks), represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load-balancing runtime. We demonstrate the improved performance, scalability, and flexibility of the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.
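
    The decomposition idea can be sketched in a few lines of Python: a contraction such as C[i,j] += sum_k A[i,k]*B[k,j] is split into output-tile tasks that independent workers pull from a shared queue. This toy uses threads and NumPy on one machine and omits DLTC's iterator abstraction and distributed-memory machinery; names and block size are illustrative.

        import numpy as np, queue, threading

        def contract_blocked(A, B, bs=64, n_workers=4):
            """C = A @ B computed as dynamically scheduled output-tile tasks."""
            m, n = A.shape[0], B.shape[1]
            C = np.zeros((m, n))
            tasks = queue.Queue()
            for i in range(0, m, bs):
                for j in range(0, n, bs):
                    tasks.put((i, j))                # one unit task per tile

            def worker():
                while True:
                    try:
                        i, j = tasks.get_nowait()    # dynamic load balancing:
                    except queue.Empty:              # idle workers pull new tiles
                        return
                    # each tile is an independent contraction over the shared index
                    C[i:i+bs, j:j+bs] = A[i:i+bs, :] @ B[:, j:j+bs]

            threads = [threading.Thread(target=worker) for _ in range(n_workers)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            return C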

  8. Mobile and fixed computer use by doctors and nurses on hospital wards: multi-method study on the relationships between clinician role, clinical task, and device choice.

    PubMed

    Andersen, Pia; Lindgaard, Anne-Mette; Prgomet, Mirela; Creswick, Nerida; Westbrook, Johanna I

    2009-08-04

    Selecting the right mix of stationary and mobile computing devices is a significant challenge for system planners and implementers. There is very limited research evidence upon which to base such decisions. We aimed to investigate the relationships between clinician role, clinical task, and selection of a computer hardware device in hospital wards. Twenty-seven nurses and eight doctors were observed for a total of 80 hours as they used a range of computing devices to access a computerized provider order entry system on two wards at a major Sydney teaching hospital. Observers used a checklist to record the clinical tasks completed, devices used, and location of the activities. Field notes were also documented during observations. Semi-structured interviews were conducted after observation sessions. Assessment of the physical attributes of three devices-stationary PCs, computers on wheels (COWs) and tablet PCs-was made. Two types of COWs were available on the wards: generic COWs (laptops mounted on trolleys) and ergonomic COWs (an integrated computer and cart device). Heuristic evaluation of the user interfaces was also carried out. The majority (93.1%) of observed nursing tasks were conducted using generic COWs. Most nursing tasks were performed in patients' rooms (57%) or in the corridors (36%), with a small percentage at a patient's bedside (5%). Most nursing tasks related to the preparation and administration of drugs. Doctors on ward rounds conducted 57.3% of observed clinical tasks on generic COWs and 35.9% on tablet PCs. On rounds, 56% of doctors' tasks were performed in the corridors, 29% in patients' rooms, and 3% at the bedside. Doctors not on a ward round conducted 93.6% of tasks using stationary PCs, most often within the doctors' office. Nurses and doctors were observed performing workarounds, such as transcribing medication orders from the computer to paper. The choice of device was related to clinical role, nature of the clinical task, degree of mobility required, including where task completion occurs, and device design. Nurses' work, and clinical tasks performed by doctors during ward rounds, require highly mobile computer devices. Nurses and doctors on ward rounds showed a strong preference for generic COWs over all other devices. Tablet PCs were selected by doctors for only a small proportion of clinical tasks. Even when using mobile devices clinicians completed a very low proportion of observed tasks at the bedside. The design of the devices and ward space configurations place limitations on how and where devices are used and on the mobility of clinical work. In such circumstances, clinicians will initiate workarounds to compensate. In selecting hardware devices, consideration should be given to who will be using the devices, the nature of their work, and the physical layout of the ward.

  9. Mobile and Fixed Computer Use by Doctors and Nurses on Hospital Wards: Multi-method Study on the Relationships Between Clinician Role, Clinical Task, and Device Choice

    PubMed Central

    Andersen, Pia; Lindgaard, Anne-Mette; Prgomet, Mirela; Creswick, Nerida

    2009-01-01

    Background Selecting the right mix of stationary and mobile computing devices is a significant challenge for system planners and implementers. There is very limited research evidence upon which to base such decisions. Objective We aimed to investigate the relationships between clinician role, clinical task, and selection of a computer hardware device in hospital wards. Methods Twenty-seven nurses and eight doctors were observed for a total of 80 hours as they used a range of computing devices to access a computerized provider order entry system on two wards at a major Sydney teaching hospital. Observers used a checklist to record the clinical tasks completed, devices used, and location of the activities. Field notes were also documented during observations. Semi-structured interviews were conducted after observation sessions. Assessment of the physical attributes of three devices—stationary PCs, computers on wheels (COWs) and tablet PCs—was made. Two types of COWs were available on the wards: generic COWs (laptops mounted on trolleys) and ergonomic COWs (an integrated computer and cart device). Heuristic evaluation of the user interfaces was also carried out. Results The majority (93.1%) of observed nursing tasks were conducted using generic COWs. Most nursing tasks were performed in patients’ rooms (57%) or in the corridors (36%), with a small percentage at a patient’s bedside (5%). Most nursing tasks related to the preparation and administration of drugs. Doctors on ward rounds conducted 57.3% of observed clinical tasks on generic COWs and 35.9% on tablet PCs. On rounds, 56% of doctors’ tasks were performed in the corridors, 29% in patients’ rooms, and 3% at the bedside. Doctors not on a ward round conducted 93.6% of tasks using stationary PCs, most often within the doctors’ office. Nurses and doctors were observed performing workarounds, such as transcribing medication orders from the computer to paper. Conclusions The choice of device was related to clinical role, nature of the clinical task, degree of mobility required, including where task completion occurs, and device design. Nurses’ work, and clinical tasks performed by doctors during ward rounds, require highly mobile computer devices. Nurses and doctors on ward rounds showed a strong preference for generic COWs over all other devices. Tablet PCs were selected by doctors for only a small proportion of clinical tasks. Even when using mobile devices clinicians completed a very low proportion of observed tasks at the bedside. The design of the devices and ward space configurations place limitations on how and where devices are used and on the mobility of clinical work. In such circumstances, clinicians will initiate workarounds to compensate. In selecting hardware devices, consideration should be given to who will be using the devices, the nature of their work, and the physical layout of the ward. PMID:19674959

  10. An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics

    PubMed Central

    Eskinazi, Ilan

    2016-01-01

    Goal Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods SCMT interacts with the third party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761
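
    SCMT itself drives FEBio sampling and Matlab network training; purely as a hedged Python analogue of the fit-a-surrogate step, one might write the following (the library choice, dimensions, and stand-in data are mine, not the toolbox's):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # stand-ins for sampled contact data: inputs would be relative bone
        # poses, outputs the elastic model's contact loads (names hypothetical)
        pose_samples = rng.uniform(-1.0, 1.0, size=(500, 6))
        contact_force = np.sin(pose_samples).sum(axis=1)

        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                 random_state=0)
        surrogate.fit(pose_samples, contact_force)   # fit the sampled input-output data

        # inside a movement simulation, the cheap surrogate replaces the FE solve
        new_pose = rng.uniform(-1.0, 1.0, size=(1, 6))
        predicted_force = surrogate.predict(new_pose)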

  11. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    PubMed

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large numbers of them will eventually be available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and can therefore help in understanding the driving factors of microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny-based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we deployed nearest-neighbor search algorithms capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover's Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query microbiome sample is its contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including bar-chart-based compositional comparisons and a ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny-based distance metric, while providing numerous advantages over the current state-of-the-art tool.

  12. A design methodology for portable software on parallel computers

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.

    1993-01-01

    This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second difficulty is transporting software between dissimilar parallel computers. In general, we expect that more hardware-specific information will be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently. Trying to keep one's software on the current high performance hardware, a software developer almost continually faces yet another expensive software transportation. The problem of the proposed research is to create a design methodology that helps designers to more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal which describes our research associated with the issues of software portability and high performance. The list of research tasks are specified in the proposal. The proposal 'A Design Methodology for Portable Software on Parallel Computers' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation. We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.

  13. The Differential Effects of Two Types of Task Repetition on the Complexity, Accuracy, and Fluency in Computer-Mediated L2 Written Production: A Focus on Computer Anxiety

    ERIC Educational Resources Information Center

    Amiryousefi, Mohammad

    2016-01-01

    Previous task repetition studies have primarily focused on how task repetition characteristics affect the complexity, accuracy, and fluency in L2 oral production with little attention to L2 written production. The main purpose of the study reported in this paper was to examine the effects of task repetition versus procedural repetition on the…

  14. Biomedical Imaging

    DTIC Science & Technology

    1994-04-01

    Distribution unlimited. United States Army Aeromedical Research Laboratory, Fort Rucker, Alabama 36362-0577. …times larger. Usually they are expensive, with commercially available units starting at around $100,000. Triangulation sensors are capable of range…

  15. Applied Behavior Analysis Is Ideal for the Development of a Land Mine Detection Technology Using Animals

    ERIC Educational Resources Information Center

    Jones, B. M.

    2011-01-01

    The detection and subsequent removal of land mines and unexploded ordnance (UXO) from many developing countries are slow, expensive, and dangerous tasks, but have the potential to improve the well-being of millions of people. Consequently, those involved with humanitarian mine and UXO clearance are actively searching for new and more efficient…

  16. Brief Report: The Effect of Delayed Matching to Sample on Stimulus Over-Selectivity

    ERIC Educational Resources Information Center

    Reed, Phil

    2012-01-01

    Stimulus over-selectivity occurs when one aspect of the environment controls behavior at the expense of other equally salient aspects. Participants were trained on a match-to-sample (MTS) discrimination task. Levels of over-selectivity in a group of children (4-18 years) with Autism Spectrum Disorders (ASD) were compared with a mental-aged matched…

  17. SOLVE II: A Technique to Improve Efficiency and Solve Problems in Hardwood Sawmills

    Treesearch

    Edward L. Adams; Daniel E. Dunmire

    1977-01-01

    The squeeze between rising costs and product values is getting tighter for sawmill managers. So, they are taking a closer look at the efficiency of their sawmills by making a complete analysis of their milling situation. Such an analysis requires considerable time and expense. To aid the manager with this task, the USDA Forest Service's Northeastern Forest...

  18. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.

  19. Integration of active pauses and pattern of muscular activity during computer work.

    PubMed

    St-Onge, Nancy; Samani, Afshin; Madeleine, Pascal

    2017-09-01

    Submaximal isometric muscle contractions have been reported to increase variability of muscle activation during computer work; however, other types of active contractions may be more beneficial. Our objective was to determine which type of active pause vs. rest is more efficient in changing muscle activity pattern during a computer task. Asymptomatic regular computer users performed a standardised 20-min computer task four times, integrating a different type of pause: sub-maximal isometric contraction, dynamic contraction, postural exercise and rest. Surface electromyographic (SEMG) activity was recorded bilaterally from five neck/shoulder muscles. Root-mean-square decreased with isometric pauses in the cervical paraspinals, upper trapezius and middle trapezius, whereas it increased with rest. Variability in the pattern of muscular activity was not affected by any type of pause. Overall, no detrimental effects on the level of SEMG during active pauses were found suggesting that they could be implemented without a cost on activation level or variability. Practitioner Summary: We aimed to determine which type of active pause vs. rest is best in changing muscle activity pattern during a computer task. Asymptomatic computer users performed a standardised computer task integrating different types of pauses. Muscle activation decreased with isometric pauses in neck/shoulder muscles, suggesting their implementation during computer work.

  20. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time-step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
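
    To make the bottleneck concrete, correlated Brownian displacements require factoring the diffusion tensor at every step; a bare-bones NumPy sketch is shown below, with kT set to 1 and D a stand-in positive-definite diffusion matrix. The paper's scheme replaces the per-step Cholesky of the instantaneous tensor with one computed from an averaged matrix, refreshed only occasionally.

        import numpy as np

        def bd_step(x, forces, D, dt, rng):
            """One Brownian dynamics step with hydrodynamically correlated noise.

            D is the 3N x 3N diffusion tensor (positive definite); kT is set to 1.
            The Cholesky factor L gives noise with covariance 2*D*dt -- the O(N^3)
            operation the averaged-matrix scheme avoids recomputing every step.
            """
            L = np.linalg.cholesky(D)
            drift = D @ forces * dt
            noise = np.sqrt(2.0 * dt) * (L @ rng.standard_normal(len(x)))
            return x + drift + noise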

  1. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can simply be filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data set and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds, outside of which all data are rejected, was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the yield of the tossed-out data (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, saving needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
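
    A stripped-down version of this search, in Python: each genome holds one (left, right) acceptance window per input dimension, and fitness rewards rejecting improper runs while keeping proper ones. The population size, operators, and equal fitness weighting are illustrative choices, not JPL's.

        import random

        def make_genome(n_dims):
            # one (left, right) acceptance window per dimension, features in [0, 1]
            return [tuple(sorted((random.random(), random.random())))
                    for _ in range(n_dims)]

        def accepts(genome, point):
            return all(lo <= v <= hi for (lo, hi), v in zip(genome, point))

        def fitness(genome, good, bad):
            kept = sum(accepts(genome, p) for p in good) / len(good)
            rejected = 1.0 - sum(accepts(genome, p) for p in bad) / len(bad)
            return kept + rejected            # equal weighting, for illustration

        def evolve(good, bad, n_dims, pop=60, gens=100):
            population = [make_genome(n_dims) for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=lambda g: fitness(g, good, bad), reverse=True)
                parents = population[:pop // 4]
                children = []
                while len(parents) + len(children) < pop:
                    a, b = random.sample(parents, 2)
                    child = [random.choice(pair) for pair in zip(a, b)]  # crossover
                    i = random.randrange(n_dims)                         # mutation
                    child[i] = tuple(sorted((random.random(), random.random())))
                    children.append(child)
                population = parents + children
            return max(population, key=lambda g: fitness(g, good, bad))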

  2. Prototype part task trainer: A remote manipulator system simulator

    NASA Technical Reports Server (NTRS)

    Shores, David

    1989-01-01

    The Part Task Trainer (PTT) program is a kinematic simulation of the Remote Manipulator System (RMS) for the orbiter. The purpose of the PTT is to supply a low-cost, man-in-the-loop simulator, allowing the student to learn operational procedures which can then be used in the more expensive full-scale simulators. PTT will allow crew members to work on their arm operation skills without the need for other people to run the simulation. The controlling algorithms for the arm were coded from the Functional Subsystem Requirements Document to ensure realistic operation of the simulation. Relying on the hardware of the workstation to provide fast refresh rates for fully shaded images allows the simulation to be run on small, low-cost, stand-alone workstations, removing the need to be tied to a multi-million-dollar computer for the simulation. PTT will allow the student to make errors which in full-scale mock-up simulators might cause failures or damage hardware. On the screen, the user is shown a graphical representation of the RMS control panel in the aft cockpit of the orbiter, along with a main view window and up to six trunnion and guide windows. The dials drawn on the panel may be turned to select the desired mode of operation. The inputs controlling the arm are read from a chair with a Translational Hand Controller (THC) and a Rotational Hand Controller (RHC) attached to it.

  3. Planning paths to multiple targets: memory involvement and planning heuristics in spatial problem solving.

    PubMed

    Wiener, J M; Ehbauer, N N; Mallot, H A

    2009-09-01

    For large numbers of targets, path planning is a complex and computationally expensive task. Humans, however, usually solve such tasks quickly and efficiently. We present experiments studying human path planning performance and the cognitive processes and heuristics involved. Twenty-five places were arranged on a regular grid in a large room. Participants were repeatedly asked to solve traveling salesman problems (TSP), i.e., to find the shortest closed loop connecting a start location with multiple target locations. In Experiment 1, we tested whether humans employed the nearest neighbor (NN) strategy when solving the TSP. Results showed that subjects outperform the NN strategy, suggesting that it is not sufficient to explain human route planning behavior. As a second possible strategy, we tested a hierarchical planning heuristic in Experiment 2, demonstrating that participants first plan a coarse route at the region level, which is refined during navigation. To test the relevance of spatial working memory (SWM) and spatial long-term memory (LTM) for planning performance and the planning heuristics applied, we varied the memory demands between conditions in Experiment 2. In one condition the target locations were directly marked, such that no memory was required; a second condition required participants to memorize the target locations during path planning (SWM); in a third condition, the locations of the targets additionally had to be retrieved from LTM (SWM and LTM). Results showed that navigation performance decreased with increasing memory demands while the dependence on the hierarchical planning heuristic increased.
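
    For reference, the nearest-neighbor baseline tested in Experiment 1 amounts to a few lines of Python; that humans beat this greedy tour is what motivates the hierarchical account. The point representation and names are illustrative.

        import math

        def nearest_neighbor_tour(start, targets):
            """Greedy TSP heuristic: always walk to the closest unvisited target."""
            tour, remaining, here = [start], set(targets), start
            while remaining:
                here = min(remaining, key=lambda t: math.dist(here, t))
                remaining.remove(here)
                tour.append(here)
            return tour + [start]        # closed loop back to the start location

        # e.g. nearest_neighbor_tour((0, 0), [(1, 2), (3, 1), (0, 4)])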

  4. Evaluation of a computerized aid for creating human behavioral representations of human-computer interaction.

    PubMed

    Williams, Kent E; Voigt, Jeffrey R

    2004-01-01

    The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.

  5. Computing LORAN time differences with an HP-25 hand calculator

    NASA Technical Reports Server (NTRS)

    Jones, E. D.

    1978-01-01

    A program for an HP-25 or HP-25C hand calculator that will calculate accurate LORAN-C time differences is described and presented. The program is most useful when checking the accuracy of a LORAN-C receiver at a known latitude and longitude without the aid of an expensive computer. It can thus be used to compute time differences for known landmarks or waypoints to predict in advance the approximate readings during a navigation mission.

  6. Design Trade-off Between Performance and Fault-Tolerance of Space Onboard Computers

    NASA Astrophysics Data System (ADS)

    Gorbunov, M. S.; Antonov, A. A.

    2017-01-01

    It is well known that there is a trade-off between performance and power consumption in onboard computers. Fault tolerance is another important factor affecting performance, chip area and power consumption. Employing special SRAM cells and error-correcting codes is often too expensive in relation to the performance needed. We discuss the possibility of finding optimal solutions for a modern onboard computer for scientific apparatus, focusing on multi-level cache memory design.

  7. Harness That S.O.B.: Distributing Remote Sensing Analysis in a Small Office/Business

    NASA Astrophysics Data System (ADS)

    Kramer, J.; Combe, J.; McCord, T. B.

    2009-12-01

    Researchers in a small office/business (SOB) operate with limited funding, equipment, and software availability. To mitigate these issues, we developed a distributed computing framework that: 1) leverages open source software to implement functionality otherwise reliant on proprietary software and 2) harnesses the unused power of (semi-)idle office computers with mixed operating systems (OSes). This abstract outlines some reasons for the effort, its conceptual basis and implementation, and provides brief speedup results. The Multiple-Endmember Linear Spectral Unmixing Model (MELSUM)1 processes remote-sensing (hyper-)spectral images. The algorithm is computationally expensive, sometimes taking a full week or more for a 1-million-pixel/100-wavelength image. Analysis of pixels is independent, so a large benefit can be gained from parallel processing techniques. Job concurrency is limited by the number of active processing units. MELSUM was originally written in the Interactive Data Language (IDL). Despite its multi-threading capabilities, an IDL instance executes on a single machine, and so concurrency is limited by the machine's number of central processing units (CPUs). Network distribution can access more CPUs to provide a greater speedup, while also taking advantage of (often) underutilized extant equipment. Appropriately integrating open source software magnifies the impact by avoiding the purchase of additional licenses. Our method of distribution breaks into four conceptual parts: 1) the top- or task-level user interface; 2) a mid-level program that manages hosts and jobs, called the distribution server; 3) a low-level executable for individual pixel calculations; and 4) a control program to synchronize sequential sub-tasks. Each part is a separate OS process, passing information via shell commands and/or temporary files. While the control and low-level executables are short-lived, the top-level program and distribution server run (at least) for the entirety of a task. While any language that supports spawning of OS processes can serve as the top-level interface, our solution, d-MELSUM, has been integrated with the IDL code. Doing so extracts the core calculation from IDL, but otherwise preserves IDL features and functionality. The distribution server is an extension of ADE2 mobile robot software, written in Java. Network connections rely on a secure shell (SSH) implementation, whether natively available (e.g., Linux or OS X) or user installed (e.g., OpenSSH available via Cygwin on Windows). Both the low-level and control programs are relatively small C++ programs (~54K, or 1500 lines, total) that were developed in-house and use GNU's g++ compiler. The low-level code also relies on Linear Algebra PACKage (LAPACK) libraries for pixel calculations. Although performance is contingent to some degree on data size, CPU speed, and network communication rate and latency, results have generally demonstrated a time reduction by a factor proportional to the number of open connections (one per CPU). For example, the task mentioned above requiring a week to process took 18 hours with d-MELSUM, using 10 CPUs on 2 computers. 1 J.-Ph. Combe, et al., PSS 56, 2008. 2 J. Kramer and M. Scheutz, IROS 2006, 2006.
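
    The per-pixel independence that d-MELSUM exploits can be demonstrated on one machine with Python's standard library; this hedged sketch mirrors the farm-out pattern only, omitting the SSH-based, mixed-OS distribution, and the function body is a placeholder, not the MELSUM math.

        from multiprocessing import Pool

        def unmix_pixel(spectrum):
            """Placeholder for the independent per-pixel MELSUM computation."""
            return sum(spectrum) / len(spectrum)

        def unmix_image(pixels, n_workers=10):
            # pixels: one spectrum per pixel; results come back in input order
            with Pool(n_workers) as pool:
                return pool.map(unmix_pixel, pixels, chunksize=256)

        if __name__ == "__main__":
            image = [[float(i + j) for j in range(100)] for i in range(10000)]
            concentrations = unmix_image(image)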

  8. Sustaining Moore's law with 3D chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik P.; Badaroglu, Mustafa; Chen, An

    Here, rather than continue the expensive and time-consuming quest for transistor replacement, the authors argue that 3D chips coupled with new computer architectures can keep Moore's law on its traditional scaling path.

  9. Sustaining Moore's law with 3D chips

    DOE PAGES

    DeBenedictis, Erik P.; Badaroglu, Mustafa; Chen, An; ...

    2017-08-01

    Here, rather than continue the expensive and time-consuming quest for transistor replacement, the authors argue that 3D chips coupled with new computer architectures can keep Moore's law on its traditional scaling path.

  10. 46 CFR 404.5 - Guidelines for the recognition of expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to the extent that they conform to depreciation plus an allowance for return on investment (computed... ratemaking purposes. The Director reviews non-pilotage activities to determine if any adversely impact the...

  11. Interactive Computer Based Assessment Tasks: How Problem-Solving Process Data Can Inform Instruction

    ERIC Educational Resources Information Center

    Zoanetti, Nathan

    2010-01-01

    This article presents key steps in the design and analysis of a computer based problem-solving assessment featuring interactive tasks. The purpose of the assessment is to support targeted instruction for students by diagnosing strengths and weaknesses at different stages of problem-solving. The first focus of this article is the task piloting…

  12. Item Mass and Complexity and the Arithmetic Computation of Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    Cawley, John F.; Shepard, Teri; Smith, Maureen; Parmar, Rene S.

    1997-01-01

    The performance of 76 students (ages 10 to 15) with learning disabilities on four tasks of arithmetic computation within each of the four basic operations was examined. Tasks varied in difficulty level and number of strokes needed to complete all items. Intercorrelations between task sets and operations were examined as was the use of…

  13. Task Scheduling in Desktop Grids: Open Problems

    NASA Astrophysics Data System (ADS)

    Chernov, Ilya; Nikitina, Natalia; Ivashko, Evgeny

    2017-12-01

    We survey the areas of Desktop Grid task scheduling that seem to be insufficiently studied so far and are promising for efficiency, reliability, and quality of Desktop Grid computing. These topics include optimal task grouping, "needle in a haystack" paradigm, game-theoretical scheduling, domain-imposed approaches, special optimization of the final stage of the batch computation, and Enterprise Desktop Grids.

  14. Computer-Mediated Communication in English for Specific Purposes: A Case Study with Computer Science Students at Universiti Teknologi Malaysia

    ERIC Educational Resources Information Center

    Shamsudin, Sarimah; Nesi, Hilary

    2006-01-01

    This paper will describe an ESP approach to the design and implementation of computer-mediated communication (CMC) tasks for computer science students at Universiti Teknologi Malaysia, and discuss the effectiveness of the chat feature of Windows NetMeeting as a tool for developing specified language skills. CMC tasks were set within a programme of…

  15. Simplified Distributed Computing

    NASA Astrophysics Data System (ADS)

    Li, G. G.

    2006-05-01

    Distributed computing ranges from high-performance parallel computing and grid computing to environments where the idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications, based on existing technology and hardware resources. The system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for that job. The job handler pre-processes the job, partitions it into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes them, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request once all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage software downloads and report status. The processing queue is the key to the success of this distributed system. We use BEA's WebLogic JMS queue in our implementation. It guarantees message delivery and has message-priority and retry features, so tasks never get lost. The entire system is built on J2EE technology and can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adaptors are provided to connect existing applications to the system, so that applications and algorithms running on Unix, Linux and Windows can all work together. The system is easy and fast to develop, being based on the industry's well-adopted technology. It is highly scalable and heterogeneous. It is an open system: any number and type of machines can join to provide computational power. This asynchronous, message-based system can achieve response times on the order of seconds. For efficiency, communications between distributed tasks are usually done at the start and end of the tasks, but intermediate task status can also be provided.
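
    The job-handler/task-handler cycle described above can be mocked up with a thread-safe in-process queue standing in for the WebLogic JMS queue; everything here (names, the stand-in computation, four worker threads) is illustrative rather than the deployed system.

        import queue
        import threading

        task_q, result_q = queue.Queue(), queue.Queue()

        def job_handler(job, n_tasks=8):
            """Partition the job, enqueue tasks, then assemble the results."""
            for part in range(n_tasks):
                task_q.put((job, part))
            partials = [result_q.get() for _ in range(n_tasks)]
            return sum(value for _, value in sorted(partials))

        def task_handler():
            while True:
                job, part = task_q.get()
                result_q.put((part, part * part))    # stand-in computation
                task_q.task_done()

        for _ in range(4):                           # four participating nodes
            threading.Thread(target=task_handler, daemon=True).start()

        print(job_handler("demo-job"))               # prints 140 (sum of squares 0..7)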

  16. Computer programs: Information retrieval and data analysis, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The items presented in this compilation are divided into two sections. Section one treats of computer usage devoted to the retrieval of information that affords the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.

  17. Improving communication when seeking informed consent: a randomised controlled study of a computer-based method for providing information to prospective clinical trial participants.

    PubMed

    Karunaratne, Asuntha S; Korenman, Stanley G; Thomas, Samantha L; Myles, Paul S; Komesaroff, Paul A

    2010-04-05

    To assess the efficacy, with respect to participant understanding of information, of a computer-based approach to communication about complex, technical issues that commonly arise when seeking informed consent for clinical research trials. An open, randomised controlled study of 60 patients with diabetes mellitus, aged 27-70 years, recruited between August 2006 and October 2007 from the Department of Diabetes and Endocrinology at the Alfred Hospital and Baker IDI Heart and Diabetes Institute, Melbourne. Participants were asked to read information about a mock study via a computer-based presentation (n = 30) or a conventional paper-based information statement (n = 30). The computer-based presentation contained visual aids, including diagrams, video, hyperlinks and quiz pages. Understanding of information as assessed by quantitative and qualitative means. Assessment scores used to measure level of understanding were significantly higher in the group that completed the computer-based task than the group that completed the paper-based task (82% v 73%; P = 0.005). More participants in the group that completed the computer-based task expressed interest in taking part in the mock study (23 v 17 participants; P = 0.01). Most participants from both groups preferred the idea of a computer-based presentation to the paper-based statement (21 in the computer-based task group, 18 in the paper-based task group). A computer-based method of providing information may help overcome existing deficiencies in communication about clinical research, and may reduce costs and improve efficiency in recruiting participants for clinical trials.

  18. Towards Wearable Cognitive Assistance

    DTIC Science & Technology

    2013-12-01

    Keywords: mobile computing, cloud… It presents a multi-tiered mobile system architecture that offers tight end-to-end latency bounds on compute-intensive cognitive assistance… to an entire neighborhood or an entire city is extremely expensive and time-consuming. Physical infrastructure in public spaces tends to evolve very…

  19. Behavior-Based Fault Monitoring

    DTIC Science & Technology

    1990-12-03

    …processor targeted for avionics and space applications. It appears that the signature monitoring technique can be extended to detect computer viruses as… The most common approach is structural duplication. Although effective, duplication is too expensive for all but a few applications. Redundancy can also be… K.D. Wilken and J.P. Shen, "Signature Monitoring and Encryption," Int. Conf. on Dependable Computing for Critical Applications, August 1989.

  20. Artificial Intelligence Methods: Challenge in Computer Based Polymer Design

    NASA Astrophysics Data System (ADS)

    Rusu, Teodora; Pinteala, Mariana; Cartwright, Hugh

    2009-08-01

    This paper deals with the use of Artificial Intelligence Methods (AI) in the design of new molecules possessing desired physical, chemical and biological properties. This is an important and difficult problem in the chemical, material and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, but computer-assisted approaches offer many advantages in the automation of molecular design.

  1. Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case

    NASA Astrophysics Data System (ADS)

    Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John

    2018-04-01

    The US Air Force seeks to implement damage tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing the B-scan for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ~15% for an estimated reduction in computational runtime of ~95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.

  2. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
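
    A common concrete form of such a merit function subtracts a multiple of the distance to previously sampled points (a proxy for approximation error) from the surrogate's prediction; in this hedged Python sketch, the weight rho and the surrogate itself are illustrative, not the paper's specific choices.

        import numpy as np

        def merit(x, surrogate, sampled_points, rho=1.0):
            """Surrogate prediction minus a bonus for distance from known samples.

            Minimizing this trades off improving the objective (first term)
            against improving the approximation where it is least informed
            (second term); rho controls the balance.
            """
            dist = min(np.linalg.norm(x - s) for s in sampled_points)
            return surrogate(x) - rho * dist

        # e.g. merit(np.array([0.5, 0.5]), lambda x: x @ x,
        #            [np.zeros(2), np.ones(2)], rho=0.3)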

  3. Advanced information processing system: Local system services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter

    1989-01-01

    The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault-and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.

  4. A resource management architecture based on complex network theory in cloud computing federation

    NASA Astrophysics Data System (ADS)

    Zhang, Zehua; Zhang, Xuejie

    2011-10-01

    Cloud Computing Federation is a main trend in cloud computing. Resource management has a significant effect on the design, realization, and efficiency of a Cloud Computing Federation. A Cloud Computing Federation has the typical characteristics of a complex system; therefore, we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated RMABC) in this paper, with detailed designs of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers to evolve the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance, and adaptive ability. The results of the model experiment confirmed the advantage of RMABC in resource discovery performance.

  5. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. This paper integrates a cloud-computing-specific model into aircraft design, and the work has achieved good results in sharing licenses for large-scale, expensive software such as CFD (computational fluid dynamics) packages, UG, and CATIA.

  6. Adaptive Allocation of Decision Making Responsibility Between Human and Computer in Multi-Task Situations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chu, Y. Y.

    1978-01-01

    A unified formulation of computer-aided, multi-task decision making is presented. A strategy for the allocation of decision-making responsibility between human and computer is developed. The plans of a flight management system are studied. A model based on queueing theory was implemented.

  7. Computer-Based Simulations for Maintenance Training: Current ARI Research. Technical Report 544.

    ERIC Educational Resources Information Center

    Knerr, Bruce W.; And Others

    Three research efforts that used computer-based simulations for maintenance training were in progress when this report was written: Game-Based Learning, which investigated the use of computer-based games to train electronics diagnostic skills; Human Performance in Fault Diagnosis Tasks, which evaluated the use of context-free tasks to train…

  8. Evaluating the Efficacy of the Cloud for Cluster Computation

    NASA Technical Reports Server (NTRS)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 µs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29 instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.

  9. Adapting to the surface: A comparison of handwriting measures when writing on a tablet computer and on paper.

    PubMed

    Gerth, Sabrina; Dolk, Thomas; Klassert, Annegret; Fliesser, Michael; Fischer, Martin H; Nottbusch, Guido; Festman, Julia

    2016-08-01

    Our study addresses the following research questions: Are there differences between handwriting movements on paper and on a tablet computer? Can experienced writers, such as most adults, adapt their graphomotor execution during writing to a rather unfamiliar surface, for instance a tablet computer? We examined the handwriting performance of adults in three tasks of different complexity: (a) graphomotor abilities, (b) visuomotor abilities, and (c) handwriting. Each participant performed each task twice, once on paper and once on a tablet computer with a pen. We tested 25 participants by measuring their writing duration, in-air time, number of pen lifts, writing velocity, and number of inversions in velocity. The data were analyzed using linear mixed-effects modeling with repeated measures. Our results reveal differences between writing on paper and on a tablet computer which were partly task-dependent. Our findings also show that participants were able to adapt their graphomotor execution to the smoother surface of the tablet computer during the tasks. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. MAT - MULTI-ATTRIBUTE TASK BATTERY FOR HUMAN OPERATOR WORKLOAD AND STRATEGIC BEHAVIOR RESEARCH

    NASA Technical Reports Server (NTRS)

    Comstock, J. R.

    1994-01-01

    MAT, a Multi-Attribute Task battery, gives the researcher the capability of performing multi-task workload and performance experiments. The battery provides a benchmark set of tasks for use in a wide range of laboratory studies of operator performance and workload. MAT incorporates tasks analogous to activities that aircraft crew members perform in flight, while providing a high degree of experiment control, performance data on each subtask, and freedom to use non-pilot test subjects. The MAT battery primary display is composed of four separate task windows which are as follows: a monitoring task window which includes gauges and warning lights, a tracking task window for the demands of manual control, a communication task window to simulate air traffic control communications, and a resource management task window which permits maintaining target levels on a fuel management task. In addition, a scheduling task window gives the researcher information about future task demands. The battery also provides the option of manual or automated control of tasks. The task generates performance data for each subtask. The task battery may be paused and onscreen workload rating scales presented to the subject. The MAT battery was designed to use a serially linked second computer to generate the voice messages for the Communications task. The MATREMX program and support files, which are included in the MAT package, were designed to work with the Heath Voice Card (Model HV-2000, available through the Heath Company, Benton Harbor, Michigan 49022); however, the MATREMX program and support files may easily be modified to work with other voice synthesizer or digitizer cards. The MAT battery task computer may also be used independent of the voice computer if no computer synthesized voice messages are desired or if some other method of presenting auditory messages is devised. MAT is written in QuickBasic and assembly language for IBM PC series and compatible computers running MS-DOS. The code in MAT is written for Microsoft QuickBasic 4.5 and Microsoft Macro Assembler 5.1. This package requires a joystick and EGA or VGA color graphics. An 80286, 386, or 486 processor machine is highly recommended. The standard distribution medium for MAT is a 5.25 inch 360K MS-DOS format diskette. The files are compressed using the PKZIP file compression utility. PKUNZIP is included on the distribution diskette. MAT was developed in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS, Microsoft QuickBasic, and Microsoft Macro Assembler are registered trademarks of Microsoft Corporation. PKZIP and PKUNZIP are registered trademarks of PKWare, Inc.

  11. Time Sharing Between Robotics and Process Control: Validating a Model of Attention Switching.

    PubMed

    Wickens, Christopher Dow; Gutzwiller, Robert S; Vieane, Alex; Clegg, Benjamin A; Sebok, Angelia; Janes, Jess

    2016-03-01

    The aim of this study was to validate the strategic task overload management (STOM) model, which predicts task switching when concurrence is impossible. The STOM model predicts that, in overload, tasks will be switched to according to how attractive they are on the task attributes of high priority, interest, and salience and low difficulty; more-difficult tasks, however, are less likely to be switched away from once they are being performed. In Experiment 1, participants performed four tasks of the Multi-Attribute Task Battery and provided task-switching data to inform the roles of difficulty and priority. In Experiment 2, participants concurrently performed an environmental control task and a robotic arm simulation. Workload was varied by automating arm movement and by manipulating both the phase of environmental control and the availability of decision support for fault management. Attention to the two tasks was measured using a head tracker. Experiment 1 revealed the lack of influence of task priority and confirmed the differing roles of task difficulty. In Experiment 2, the percentage of attention allocated across the eight conditions was predicted by the STOM model when participants rated the four attributes. Model predictions were compared against empirical data and accounted for over 95% of the variance in task allocation. More-difficult tasks were performed longer than easier tasks. Task priority does not influence allocation. The multiattribute decision model provided a good fit to the data. The STOM model is useful for predicting cognitive tunneling, given that human-in-the-loop simulation is time-consuming and expensive. © 2016, Human Factors and Ergonomics Society.

  12. Checkpoint triggering in a computer system

    DOEpatents

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
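
    The patent abstract above outlines a simple control loop: run the task, periodically read a monitor for a metric, derive a threshold from the metric value, and create a checkpoint of the task state when the threshold is crossed. A minimal sketch of such a loop is given below; the function names (task_step, read_metric, make_checkpoint) and the threshold rule are illustrative assumptions, not details of the patented method.

```python
import time

def run_with_checkpoints(task_step, read_metric, make_checkpoint,
                         poll_interval_s=5.0, threshold_fn=lambda v: v * 1.5):
    """Minimal sketch of metric-triggered checkpointing (names are illustrative).

    task_step       -- callable advancing the task one unit; returns False when done
    read_metric     -- callable returning the current value of the monitored metric
    make_checkpoint -- callable that persists the task's state for a later restart
    threshold_fn    -- derives the triggering threshold from an observed metric value
    """
    next_poll = time.monotonic() + poll_interval_s
    threshold = None
    while task_step():                            # execute the task incrementally
        if time.monotonic() >= next_poll:         # time to read the monitor?
            value = read_metric()
            if threshold is None:
                threshold = threshold_fn(value)   # set threshold from the metric
            elif value >= threshold:
                make_checkpoint()                 # metric crossed threshold: checkpoint
                threshold = threshold_fn(value)   # re-derive threshold for next interval
            next_poll = time.monotonic() + poll_interval_s
```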

  13. Computing technology in the 1980's. [computers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.

  14. Airborne Intelligent Display (AID) Phase I Software Description,

    DTIC Science & Technology

    1983-10-24

    [Table-of-contents excerpt] On-Board Computer Characteristics; 3.0 Software General Description; 3.1 Overview; 3.2 System Software; 3.2.1 System Startup; 3.2.1.1 Initial... A-2 Task States; A-3 Task Program Structure; A-4 Task States and State Change Mechanisms; A-5 Computing Return Addresses: RUNADR, SLPADR... 2.2 Design Approach: the stated objectives were met by (1) distributing the processing load among multiple Z80 single-board computers (SBCs). This

  15. One Task, Divergent Solutions: High- versus Low-Status Sources and Social Comparison Guide Adaptation in a Computer-Supported Socio-Cognitive Conflict Task

    ERIC Educational Resources Information Center

    Baumeister, Antonia E.; Engelmann, Tanja; Hesse, Friedrich W.

    2017-01-01

    This experimental study extends conflict elaboration theory (1) by revealing social influence dynamics for a knowledge-rich computer-supported socio-cognitive conflict task not investigated in the context of this theory before and (2) by showing the impact of individual differences in social comparison orientation. Students in two conditions…

  16. What and When Second-Language Learners Revise When Responding to Timed Writing Tasks on the Computer: The Roles of Task Type, Second Language Proficiency, and Keyboarding Skills

    ERIC Educational Resources Information Center

    Barkaoui, Khaled

    2016-01-01

    This study contributes to the literature on second language (L2) learners' revision behavior by describing what, when, and how often L2 learners revise their texts when responding to timed writing tasks on the computer and by examining the effects of task type, L2 proficiency, and keyboarding skills on what and when L2 learners revise. Each of 54…

  17. Fast by Nature - How Stress Patterns Define Human Experience and Performance in Dexterous Tasks

    PubMed Central

    Pavlidis, I.; Tsiamyrtzis, P.; Shastri, D.; Wesley, A.; Zhou, Y.; Lindner, P.; Buddharaju, P.; Joseph, R.; Mandapati, A.; Dunkin, B.; Bass, B.

    2012-01-01

    In the present study we quantify stress by measuring transient perspiratory responses on the perinasal area through thermal imaging. These responses prove to be sympathetically driven and hence a likely indicator of stress processes in the brain. Armed with the unobtrusive measurement methodology we developed, we were able to monitor stress responses in the context of surgical training, the quintessence of human dexterity. We show that in dexterous tasking under critical conditions, novices attempt to perform a task's steps as fast as experienced individuals. We further show that while fast behavior in experienced individuals is afforded by skill, fast behavior in novices is likely instigated by high stress levels, at the expense of accuracy. Humans avoid adjusting speed to skill and instead grow their skill to a predetermined speed level, likely defined by neurophysiological latency. PMID:22396852

  18. Switching from computer to microcomputer architecture education

    NASA Astrophysics Data System (ADS)

    Bolanakis, Dimosthenis E.; Kotsis, Konstantinos T.; Laopoulos, Theodore

    2010-03-01

    In recent decades, the technological and scientific evolution of the computing discipline has widely affected research in software engineering education, which nowadays advocates more enlightened and liberal ideas. This article reviews cross-disciplinary research on a computer architecture class in consideration of its switch to microcomputer architecture. The authors present their strategies towards a successful crossing of boundaries between engineering disciplines. This communication aims to provide a different perspective on professional courses that are, nowadays, addressed at the expense of traditional courses.

  19. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    DOE PAGES

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...

    2015-11-09

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. In this paper, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.

  20. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task

    NASA Astrophysics Data System (ADS)

    Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.

    2014-12-01

    Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  1. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task.

    PubMed

    Revechkis, Boris; Aflalo, Tyson N S; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A

    2014-12-01

    To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like 'Face in a Crowd' task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the 'Crowd') using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a 'Crowd Off' condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  2. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
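
    The core Levenberg-Marquardt step described above solves a damped normal-equation system for each candidate damping parameter. The sketch below illustrates one way to try several damping parameters concurrently and keep the best update; it uses a dense direct solver rather than the Krylov-subspace projection and recycling described in the abstract, so it is only a conceptual illustration and not the MADS implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def lm_step(residual_fn, jacobian_fn, x, lambdas=(1e-3, 1e-2, 1e-1, 1.0)):
    """One parallel Levenberg-Marquardt step: evaluate several damping parameters
    concurrently and keep the update that most reduces the residual norm."""
    r = residual_fn(x)
    J = jacobian_fn(x)
    JTJ, JTr = J.T @ J, J.T @ r

    def trial(lam):
        # Damped normal equations: (J^T J + lam I) delta = -J^T r
        delta = np.linalg.solve(JTJ + lam * np.eye(JTJ.shape[0]), -JTr)
        x_new = x + delta
        return np.linalg.norm(residual_fn(x_new)), x_new

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(trial, lambdas))
    return min(results, key=lambda t: t[0])[1]
```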

  3. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  4. Psychological Issues in Online Adaptive Task Allocation

    NASA Technical Reports Server (NTRS)

    Morris, N. M.; Rouse, W. B.; Ward, S. L.; Frey, P. R.

    1984-01-01

    Adaptive aiding is an idea that offers potential for improvement over many current approaches to aiding in human-computer systems. The expected return of tailoring the system to fit the user could be in the form of improved system performance and/or increased user satisfaction. Issues such as the manner in which information is shared between human and computer, the appropriate division of labor between them, and the level of autonomy of the aid are explored. A simulated visual search task was developed. Subjects are required to identify targets in a moving display while performing a compensatory sub-critical tracking task. By manipulating characteristics of the situation such as imposed task-related workload and effort required to communicate with the computer, it is possible to create conditions in which interaction with the computer would be more or less desirable. The results of preliminary research using this experimental scenario are presented, and future directions for this research effort are discussed.

  5. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  6. Seeing the Forest "and" the Trees: Default Local Processing in Individuals with High Autistic Traits Does Not Come at the Expense of Global Attention

    ERIC Educational Resources Information Center

    Stevenson, Ryan A.; Sun, Sol Z.; Hazlett, Naomi; Cant, Jonathan S.; Barense, Morgan D.; Ferber, Susanne

    2018-01-01

    Atypical sensory perception is one of the most ubiquitous symptoms of autism, including a tendency towards a local-processing bias. We investigated whether local-processing biases were associated with global-processing impairments on a global/local attentional-scope paradigm in conjunction with a composite-face task. Behavioural results were…

  7. Creating Diverse Ensemble Classifiers to Reduce Supervision

    DTIC Science & Technology

    2005-12-01

    artificial examples. Quite often training with noise improves network generalization (Bishop, 1995; Raviv & Intrator, 1996). Adding noise to training...full training set, as seen by comparing to the total dataset sizes. Hence, improving on the data utilization of DECORATE is a fairly difficult task...prohibitively expensive, except (perhaps) with an incremental learner such as Naive Bayes. Our AFA framework is significantly more efficient because

  8. Local Histograms for Per-Pixel Classification

    DTIC Science & Technology

    2012-03-01

    few axioms for such models are presented. These axioms are shown to be satisfied using the convergence of random wavelet expansions. The authors of...pathologists can accurately and consistently identify and delineate tissues and their pathologies, it is an expensive and time-consuming task, therefore...Automatic Identification and Delineation of Tissues and Pathologies in H&E Stained Images. PhD Thesis. Carnegie Mellon University, Pittsburgh, PA (September

  9. Task-Based Assessment of Students' Computational Thinking Skills Developed through Visual Programming or Tangible Coding Environments

    ERIC Educational Resources Information Center

    Djambong, Takam; Freiman, Viktor

    2016-01-01

    While today's schools in several countries, like Canada, are about to bring back programming to their curricula, a new conceptual angle, namely one of computational thinking, draws attention of researchers. In order to understand the articulation between computational thinking tasks in one side, student's targeted skills, and the types of problems…

  10. Mediated Activity in the Primary Classroom: Girls, Boys and Computers.

    ERIC Educational Resources Information Center

    Fitzpatrick, Helen; Hardman, Margaret

    2000-01-01

    Studied the social interaction of 7- and 9-year-olds working in the same or mixed gender pairs on language-based computer and noncomputer tasks. At both ages, mixed gender pairs showed more assertive and less transactive (collaborative) interaction than same gender pairs on both tasks. Discusses the mediational role of the computer and the social…

  11. Task-Relevant Sound and User Experience in Computer-Mediated Firefighter Training

    ERIC Educational Resources Information Center

    Houtkamp, Joske M.; Toet, Alexander; Bos, Frank A.

    2012-01-01

    The authors added task-relevant sounds to a computer-mediated instructor in-the-loop virtual training for firefighter commanders in an attempt to raise the engagement and arousal of the users. Computer-mediated training for crew commanders should provide a sensory experience that is sufficiently intense to make the training viable and effective.…

  12. Distributed computation of graphics primitives on a transputer network

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformations and scan-conversion tasks that would normally be assigned to a single transputer-based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. A change toward more computation per node for every communication (performing more complex tasks on each node) may yield the desired increase in throughput.

  13. Brain-computer interface control along instructed paths

    NASA Astrophysics Data System (ADS)

    Sadtler, P. T.; Ryu, S. I.; Tyler-Kabara, E. C.; Yu, B. M.; Batista, A. P.

    2015-02-01

    Objective. Brain-computer interfaces (BCIs) are being developed to assist paralyzed people and amputees by translating neural activity into movements of a computer cursor or prosthetic limb. Here we introduce a novel BCI task paradigm, intended to help accelerate improvements to BCI systems. Through this task, we can push the performance limits of BCI systems, we can quantify more accurately how well a BCI system captures the user’s intent, and we can increase the richness of the BCI movement repertoire. Approach. We have implemented an instructed path task, wherein the user must drive a cursor along a visible path. The instructed path task provides a versatile framework to increase the difficulty of the task and thereby push the limits of performance. Relative to traditional point-to-point tasks, the instructed path task allows more thorough analysis of decoding performance and greater richness of movement kinematics. Main results. We demonstrate that monkeys are able to perform the instructed path task in a closed-loop BCI setting. We further investigate how the performance under BCI control compares to native arm control, whether users can decrease their movement variability in the face of a more demanding task, and how the kinematic richness is enhanced in this task. Significance. The use of the instructed path task has the potential to accelerate the development of BCI systems and their clinical translation.

  14. Digital video technology, today and tomorrow

    NASA Astrophysics Data System (ADS)

    Liberman, J.

    1994-10-01

    Digital video is probably computing's fastest moving technology today. Just three years ago, the zenith of digital video technology on the PC was the successful marriage of digital text and graphics with analog audio and video by means of expensive analog laser disc players and video overlay boards. The state of the art involves two different approaches to fully digital video on computers: hardware-assisted and software-only solutions.

  15. Learning the ideal observer for SKE detection tasks by use of convolutional neural networks (Cum Laude Poster Award)

    NASA Astrophysics Data System (ADS)

    Zhou, Weimin; Anastasio, Mark A.

    2018-03-01

    It has been advocated that task-based measures of image quality (IQ) should be employed to evaluate and optimize imaging systems. Task-based measures of IQ quantify the performance of an observer on a medically relevant task. The Bayesian Ideal Observer (IO), which employs complete statistical information of the object and noise, achieves the upper limit of the performance for a binary signal classification task. However, computing the IO performance is generally analytically intractable and can be computationally burdensome when Markov-chain Monte Carlo (MCMC) techniques are employed. In this paper, supervised learning with convolutional neural networks (CNNs) is employed to approximate the IO test statistics for a signal-known-exactly and background-known-exactly (SKE/BKE) binary detection task. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are compared to those produced by the analytically computed IO. The advantages of the proposed supervised learning approach for approximating the IO are demonstrated.

  16. Characterization of a laboratory model of computer mouse use - applications for studying risk factors for musculoskeletal disorders.

    PubMed

    Flodgren, G; Heiden, M; Lyskov, E; Crenshaw, A G

    2007-03-01

    In the present study, we assessed the wrist kinetics (range of motion, mean position, velocity and mean power frequency in radial/ulnar deviation, flexion/extension, and pronation/supination) associated with performing a mouse-operated computerized task involving painting rectangles on a computer screen. Furthermore, we evaluated the effects of the painting task on subjective perception of fatigue and wrist position sense. The results showed that the painting task required constrained wrist movements, and repetitive movements of about the same magnitude as those performed in mouse-operated design tasks. In addition, the painting task induced a perception of muscle fatigue in the upper extremity (Borg CR-scale: 3.5, p<0.001) and caused a reduction in the position sense accuracy of the wrist (error before: 4.6 degrees , error after: 5.6 degrees , p<0.05). This standardized painting task appears suitable for studying relevant risk factors, and therefore it offers a potential for investigating the pathophysiological mechanisms behind musculoskeletal disorders related to computer mouse use.

  17. A fast CT reconstruction scheme for a general multi-core PC.

    PubMed

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.
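
    As a rough illustration of the per-view parallelism that filtered backprojection admits, the sketch below distributes the backprojection of individual projection angles across CPU cores for an idealized parallel-beam geometry. It omits the geometric symmetry, SIMD, and precision considerations discussed in the abstract and is not the authors' scheme; array shapes and the simple nearest-neighbor interpolation are assumptions made for brevity.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def backproject_angle(args):
    """Backproject one filtered projection (idealized parallel-beam sketch)."""
    proj, theta, size = args                       # proj: 1-D filtered projection of length `size`
    xs = np.arange(size) - size / 2.0
    X, Y = np.meshgrid(xs, xs)
    t = X * np.cos(theta) + Y * np.sin(theta) + size / 2.0   # detector coordinate per pixel
    idx = np.clip(t.astype(int), 0, proj.shape[0] - 1)       # nearest-neighbor lookup
    return proj[idx]

def fbp(filtered_sinogram, thetas, size, workers=4):
    """Sum per-angle backprojections, distributing the angles over CPU cores."""
    jobs = [(filtered_sinogram[i], th, size) for i, th in enumerate(thetas)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(backproject_angle, jobs)) * np.pi / (2 * len(thetas))
```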

  18. A Fast CT Reconstruction Scheme for a General Multi-Core PC

    PubMed Central

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors. PMID:18256731

  19. Quantifying the Financial Benefits of Multifamily Retrofits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philbrick, D.; Scheu, R.; Brand, L.

    Increasing the adoption of energy efficient building practices will require the energy sector to increase their understanding of the way that retrofits affect multifamily financial performance as well as how those indicators are interpreted by the lending and appraisal industries. This project analyzed building, energy, and financial program data as well as other public and private data to examine the relationship between energy efficiency retrofits and financial performance on three levels: building, city, and community. The project goals were to increase the data and analysis in the growing body of multifamily financial benefits work as well as to provide a framework for other geographies to produce a similar characterization. The goals are accomplished through three tasks. Task one: A pre- and post-retrofit analysis of thirteen Chicago multifamily buildings. Task two: A comparison of Chicago income and expenses to two national datasets. Task three: An in-depth look at multifamily market sales data and the subsequent impact of buildings that undergo retrofits.

  20. Building America Case Study: Quantifying the Financial Benefits of Multifamily Retrofits, Chicago, Illinois

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Increasing the adoption of energy efficient building practices will require the energy sector to increase their understanding of the way that retrofits affect multifamily financial performance as well as how those indicators are interpreted by the lending and appraisal industries. This project analyzed building, energy, and financial program data as well as other public and private data to examine the relationship between energy efficiency retrofits and financial performance on three levels: building, city, and community. The project goals were to increase the data and analysis in the growing body of multifamily financial benefits work as well as to provide a framework for other geographies to produce a similar characterization. The goals are accomplished through three tasks: Task one: A pre- and post-retrofit analysis of thirteen Chicago multifamily buildings. Task two: A comparison of Chicago income and expenses to two national datasets. Task three: An in-depth look at multifamily market sales data and the subsequent impact of buildings that undergo retrofits.

  1. A resource-sharing model based on a repeated game in fog computing.

    PubMed

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadically distributed resources that are more flexible and mobile than those of a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.

  2. The Differential Effects of Collaborative vs. Individual Prewriting Planning on Computer-Mediated L2 Writing: Transferability of Task-Based Linguistic Skills in Focus

    ERIC Educational Resources Information Center

    Amiryousefi, Mohammad

    2017-01-01

    The current study aimed at investigating the effects of three types of prewriting planning conditions, namely teacher-monitored collaborative planning (TMCP), student-led collaborative planning (SLCP), and individual planning (IP) on EFL learners' computer-mediated L2 written production and learning transfer from a pedagogic task to a new task of…

  3. A queueing model of pilot decision making in a multi-task flight management situation

    NASA Technical Reports Server (NTRS)

    Walden, R. S.; Rouse, W. B.

    1977-01-01

    Allocation of decision making responsibility between pilot and computer is considered and a flight management task, designed for the study of pilot-computer interaction, is discussed. A queueing theory model of pilot decision making in this multi-task, control and monitoring situation is presented. An experimental investigation of pilot decision making and the resulting model parameters are discussed.

  4. 25 CFR 700.163 - Expenses in searching for replacement location-nonresidential moves.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., including— (a) Transportation computed at prevailing federal per diem and mileage allowance schedules; meals and lodging away from home; (b) Time spent searching, based on reasonable earnings; (c) Fees paid to a...

  5. 25 CFR 700.163 - Expenses in searching for replacement location-nonresidential moves.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., including— (a) Transportation computed at prevailing federal per diem and mileage allowance schedules; meals and lodging away from home; (b) Time spent searching, based on reasonable earnings; (c) Fees paid to a...

  6. SIMULATING ATMOSPHERIC EXPOSURE USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia Risk assessments require the temporal integration of atmospheric concentration and deposition estimates with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-ter...

  7. 47 CFR 54.639 - Ineligible expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., including the following: i. Computers, including servers, and related hardware (e.g., printers, scanners, laptops), unless used exclusively for network management, maintenance, or other network operations; ii... installation/construction; marketing studies, marketing activities, or outreach to potential network members...

  8. 47 CFR 54.639 - Ineligible expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., including the following: i. Computers, including servers, and related hardware (e.g., printers, scanners, laptops), unless used exclusively for network management, maintenance, or other network operations; ii... installation/construction; marketing studies, marketing activities, or outreach to potential network members...

  9. Inner Space Perturbation Theory in Matrix Product States: Replacing Expensive Iterative Diagonalization.

    PubMed

    Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang

    2016-10-11

    We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group (DMRG) method. The retained reduced density matrix eigenstates are partitioned into an active space and a secondary space. The first-order wave function and the second- and third-order energies are easily computed using a one-step Davidson iteration. Our formulation has several advantages, including (i) keeping a balance between efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, and (iii) recovery of the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.
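
    For orientation, a second-order energy correction built from a secondary space has the familiar Rayleigh-Schrodinger structure sketched below. This is a generic textbook form, assumed here only to indicate the shape of such a correction; it omits the isPT-specific partitioning, renormalized basis, and Davidson-step details described in the abstract.

```latex
% Generic second-order correction over secondary-space states \Phi_k
% (not the isPT-specific working equations).
E^{(2)} \;=\; \sum_{k \,\in\, \text{secondary}}
\frac{\left| \langle \Phi_k \,|\, \hat{H} \,|\, \Psi^{(0)} \rangle \right|^2}
     {E^{(0)} \;-\; \langle \Phi_k \,|\, \hat{H} \,|\, \Phi_k \rangle}
```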

  10. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    NASA Astrophysics Data System (ADS)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; i.e., for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
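
    A toy illustration of running several chains from dispersed starting points and checking their agreement is sketched below. The chains here are fully independent (they do not exchange state information as in the multi-chain method described above), and the Gelman-Rubin diagnostic is a standard convergence check used for illustration, not the criterion from the study.

```python
import numpy as np

def metropolis_chain(log_post, x0, n_steps, step=0.5, rng=None):
    """Single random-walk Metropolis chain (sketch)."""
    rng = rng or np.random.default_rng()
    x, lp = np.array(x0, float), log_post(x0)
    samples = np.empty((n_steps, len(x0)))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(len(x))
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for chains of shape (m, n, d)."""
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    between = n * chain_means.var(axis=0, ddof=1)       # between-chain variance
    within = chains.var(axis=1, ddof=1).mean(axis=0)    # within-chain variance
    var_hat = (n - 1) / n * within + between / n
    return np.sqrt(var_hat / within)

# Several chains from dispersed starting points on a toy standard-normal posterior
log_post = lambda x: -0.5 * np.sum(np.asarray(x) ** 2)
starts = [np.full(3, s) for s in (-2.0, 0.0, 2.0, 4.0)]
chains = np.stack([metropolis_chain(log_post, s, 2000) for s in starts])
print("R-hat per dimension:", gelman_rubin(chains))    # values near 1 indicate convergence
```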

  11. On Using Surrogates with Genetic Programming.

    PubMed

    Hildebrandt, Torsten; Branke, Jürgen

    2015-01-01

    One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
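
    One simple way to realize the phenotypic-characterization idea is sketched below: an individual's phenotype is its vector of outputs on a fixed set of reference cases, and a nearest-neighbor lookup over previously evaluated phenotypes supplies a cheap fitness estimate. The class, the distance measure, and the reference-case scheme are illustrative assumptions rather than the authors' exact surrogate construction.

```python
import numpy as np

def phenotype(rule, reference_cases):
    """Phenotypic characterization: the rule's outputs on a fixed set of reference cases."""
    return np.array([rule(case) for case in reference_cases])

class NearestNeighborSurrogate:
    """Estimate the fitness of a new GP individual from the most similar
    previously (fully) evaluated individual, compared in phenotype space."""
    def __init__(self):
        self.phenotypes, self.fitnesses = [], []

    def add(self, pheno, fitness):
        # Record a fully evaluated individual for later reuse
        self.phenotypes.append(np.asarray(pheno, float))
        self.fitnesses.append(fitness)

    def estimate(self, pheno):
        # Cheap surrogate estimate: fitness of the nearest stored phenotype
        dists = [np.linalg.norm(np.asarray(pheno, float) - p) for p in self.phenotypes]
        return self.fitnesses[int(np.argmin(dists))]
```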

  12. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product/process using a computer simulator is a much less expensive option than the real physical testing. However, simulation using computationally intensive computer models can be time consuming, and therefore directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given to a recently developed space-filling design called maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
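
    As a concrete example of a space-filling design, the sketch below generates a basic Latin hypercube sample on the unit cube; maximum projection designs further optimize the placement of these points with a projection-based criterion that is not shown here.

```python
import numpy as np

def latin_hypercube(n_runs, n_factors, rng=None):
    """Basic Latin hypercube design on [0, 1]^d: each factor is stratified into
    n_runs equal bins and one point is drawn per bin, with bins randomly paired."""
    rng = rng or np.random.default_rng()
    u = rng.random((n_runs, n_factors))        # jitter within each bin
    design = np.empty_like(u)
    for j in range(n_factors):
        perm = rng.permutation(n_runs)         # random bin order per factor
        design[:, j] = (perm + u[:, j]) / n_runs
    return design

# A 20-run design for a computer simulator with 4 inputs
print(latin_hypercube(20, 4))
```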

  13. Space-filling designs for computer experiments: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan

    Improving the quality of a product/process using a computer simulator is a much less expensive option than the real physical testing. However, simulation using computationally intensive computer models can be time consuming, and therefore directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given to a recently developed space-filling design called maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.

  14. GPU-computing in econophysics and statistical physics

    NASA Astrophysics Data System (ADS)

    Preis, T.

    2011-03-01

    A recent trend in computer science and related fields is general-purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction to the field of GPU computing and includes examples. In particular, computationally expensive analyses employed in a financial market context are coded on a graphics card architecture, which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics, the Ising model, is ported to a graphics card architecture as well, resulting in large speedup values.
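
    For reference, a plain CPU/NumPy Metropolis sweep for the 2D Ising model is sketched below; a GPU port would parallelize these spin updates (for example with a checkerboard decomposition), which is where the large speedups mentioned in the article come from. The lattice size and temperature are arbitrary illustration values.

```python
import numpy as np

def ising_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model (plain CPU reference sketch)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb                       # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):  # Metropolis acceptance rule
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(100):
    ising_sweep(spins, beta=0.44, rng=rng)                # near the critical coupling
print("magnetization per spin:", spins.mean())
```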

  15. Learners' Field Dependence and the Effects of Personalized Narration on Learners' Computer Perceptions and Task-Related Attitudes in Multimedia Learning

    ERIC Educational Resources Information Center

    Liew, Tze Wei; Tan, Su-Mae; Seydali, Rouzbeh

    2014-01-01

    In this article, the effects of personalized narration in multimedia learning on learners' computer perceptions and task-related attitudes were examined. Twenty-six field independent and 22 field dependent participants studied the computer-based multimedia lessons on C-Programming, either with personalized narration or non-personalized narration.…

  16. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  17. Characterizing quantum supremacy in near-term devices

    NASA Astrophysics Data System (ADS)

    Boixo, Sergio; Isakov, Sergei V.; Smelyanskiy, Vadim N.; Babbush, Ryan; Ding, Nan; Jiang, Zhang; Bremner, Michael J.; Martinis, John M.; Neven, Hartmut

    2018-06-01

    A critical question for quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of supercomputers. Such a demonstration of what is referred to as quantum supremacy requires a reliable evaluation of the resources required to solve tasks with classical approaches. Here, we propose the task of sampling from the output distribution of random quantum circuits as a demonstration of quantum supremacy. We extend previous results in computational complexity to argue that this sampling task must take exponential time in a classical computer. We introduce cross-entropy benchmarking to obtain the experimental fidelity of complex multiqubit dynamics. This can be estimated and extrapolated to give a success metric for a quantum supremacy demonstration. We study the computational cost of relevant classical algorithms and conclude that quantum supremacy can be achieved with circuits in a two-dimensional lattice of 7 × 7 qubits and around 40 clock cycles. This requires an error rate of around 0.5% for two-qubit gates (0.05% for one-qubit gates), and it would demonstrate the basic building blocks for a fault-tolerant quantum computer.

  18. Application of a fast skyline computation algorithm for serendipitous searching problems

    NASA Astrophysics Data System (ADS)

    Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary

    2018-02-01

    Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information about non-skyline entries must be stored, since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels in order to accelerate tree construction and traversal. In this study, we presented the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
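
    The basic skyline definition underlying the article can be illustrated with a naive quadratic-time filter, sketched below; the JR-tree is an indexing structure for computing this (and its continuous variant) much faster and is not reproduced here. A larger-is-better convention is assumed for the attributes.

```python
def skyline(points):
    """Naive O(n^2) skyline filter: keep points not dominated by any other point,
    where 'dominates' means >= in every attribute and > in at least one."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Entries with two attributes; only the Pareto-optimal ones survive
entries = [(3, 7), (5, 5), (2, 2), (5, 6), (1, 9)]
print(skyline(entries))   # (5, 5) and (2, 2) are dominated and therefore removed
```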

  19. A computer-based physics laboratory apparatus: Signal generator software

    NASA Astrophysics Data System (ADS)

    Thanakittiviroon, Tharest; Liangrocapart, Sompong

    2005-09-01

    This paper describes a computer-based physics laboratory apparatus to replace expensive instruments such as high-precision signal generators. The apparatus uses the sound card in a common personal computer to produce sinusoidal signals with accurate frequencies and can be programmed to generate different frequencies repeatedly. An experiment on standing waves on an oscillating string uses this apparatus. In conjunction with interactive lab manuals, which have been developed using personal computers in our university, we achieve a complete set of low-cost, accurate, and easy-to-use equipment for teaching a physics laboratory.
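
    A sketch of the core idea, generating an accurately pitched sine tone in software for playback through a PC sound card, is shown below; the loop over frequencies mimics repeatedly re-driving a standing-wave experiment. The playback call mentioned in the comment (via the `sounddevice` library) is an assumption for illustration, not part of the paper's software.

```python
import numpy as np

def sine_samples(freq_hz, duration_s, sample_rate=44100, amplitude=0.5):
    """Generate a sine tone as floating-point samples suitable for a PC sound card."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return (amplitude * np.sin(2 * np.pi * freq_hz * t)).astype(np.float32)

# Step through several driving frequencies, as in a standing-wave experiment.
# Playback could use an audio library such as `sounddevice` (assumed installed):
#   sounddevice.play(samples, 44100, blocking=True)
for f in (20.0, 40.0, 60.0, 80.0):
    samples = sine_samples(f, duration_s=2.0)
    print(f"{f:5.1f} Hz -> {len(samples)} samples generated")
```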

  20. A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies

    DOE PAGES

    Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; ...

    2015-01-21

    Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. Furthermore, a tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial.

  1. A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies

    PubMed Central

    Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; Blackwood, Christopher B.; Rosen, Gail L.

    2015-01-01

    Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. A tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial. Availability: http://www.ece.drexel.edu/gailr/EESI/tutorial.php. PMID:25607539

  2. On the costs of parallel processing in dual-task performance: The case of lexical processing in word production.

    PubMed

    Paucke, Madlen; Oppermann, Frank; Koch, Iring; Jescheniak, Jörg D

    2015-12-01

    Previous dual-task picture-naming studies suggest that lexical processes require capacity-limited processes and prevent other tasks from being carried out in parallel. However, studies involving the processing of multiple pictures suggest that parallel lexical processing is possible. The present study investigated the specific costs that may arise when such parallel processing occurs. We used a novel dual-task paradigm by presenting 2 visual objects associated with different tasks and manipulating between-task similarity. With high similarity, a picture-naming task (T1) was combined with a phoneme-decision task (T2), so that lexical processes were shared across tasks. With low similarity, picture naming was combined with a size-decision T2 (nonshared lexical processes). In Experiment 1, we found that a manipulation of lexical processes (lexical frequency of the T1 object name) showed an additive propagation with low between-task similarity and an overadditive propagation with high between-task similarity. Experiment 2 replicated this differential forward propagation of the lexical effect and showed that it disappeared with longer stimulus onset asynchronies. Moreover, both experiments showed backward crosstalk, indexed as worse T1 performance with high between-task similarity compared with low similarity. Together, these findings suggest that conditions of high between-task similarity can lead to parallel lexical processing in both tasks, which, however, does not result in benefits but rather in extra performance costs. These costs can be attributed to crosstalk based on the dual-task binding problem arising from parallel processing. Hence, the present study reveals that capacity-limited lexical processing can run in parallel across dual tasks, but only at the expense of extraordinarily high costs. (c) 2015 APA, all rights reserved.

  3. Can low-cost motion-tracking systems substitute a Polhemus system when researching social motor coordination in children?

    PubMed

    Romero, Veronica; Amaral, Joseph; Fitzpatrick, Paula; Schmidt, R C; Duncan, Amie W; Richardson, Michael J

    2017-04-01

    Functionally stable and robust interpersonal motor coordination has been found to play an integral role in the effectiveness of social interactions. However, the motion-tracking equipment required to record and objectively measure the dynamic limb and body movements during social interaction has been very costly, cumbersome, and impractical within a non-clinical or non-laboratory setting. Here we examined whether three low-cost motion-tracking options (Microsoft Kinect skeletal tracking of either one limb or the whole body, and a video-based pixel change method) can be employed to investigate social motor coordination. Of particular interest was the degree to which these low-cost methods of motion tracking could be used to capture and index the coordination dynamics that occurred between a child and an experimenter for three simple social motor coordination tasks, in comparison to a more expensive, laboratory-grade motion-tracking system (i.e., a Polhemus Latus system). Overall, the results demonstrated that these low-cost systems cannot substitute for the Polhemus system in some tasks. The lower-cost Microsoft Kinect skeletal tracking and video pixel change methods were, however, successfully able to index differences in social motor coordination in tasks that involved larger-scale, naturalistic whole-body movements, which can be cumbersome and expensive to record with a Polhemus. We found the Kinect to be particularly vulnerable to occlusion, and the pixel change method to movements that cross the video frame midline. Therefore, particular care needs to be taken in choosing the motion-tracking system that is best suited for the particular research.

  4. Functional Neuroanatomy Involved in Automatic order Mental Arithmetic and Recitation of the Multiplication Table

    NASA Astrophysics Data System (ADS)

    Wang, Li-Qun; Saito, Masao

    We used 1.5T functional magnetic resonance imaging (fMRI) to explore which brain areas contribute uniquely to numeric computation. The BOLD activation pattern of a mental arithmetic task (successive subtraction: an actual calculation task) was compared with the response to a multiplication table repetition task (a rote verbal arithmetic memory task). The activation found in the right parietal lobule during the mental arithmetic task suggested that quantitative cognition or numeric computation may need the assistance of sensory conversion, such as spatial imagination and spatial sensory conversion. In addition, this mechanism may be an 'analog algorithm' in simple mental arithmetic processing.

  5. Assessing the effects of manual dexterity and playing computer games on catheter-wire manipulation for inexperienced operators.

    PubMed

    Alsafi, Z; Hameed, Y; Amin, P; Shamsad, S; Raja, U; Alsafi, A; Hamady, M S

    2017-09-01

    To investigate the effect of playing computer games and manual dexterity on catheter-wire manipulation in a mechanical aortic model. Medical student volunteers filled in a preprocedure questionnaire assessing their exposure to computer games. Their manual dexterity was measured using a smartphone game. They were then shown a video clip demonstrating renal artery cannulation and were asked to reproduce this. All attempts were timed. Two-tailed Student's t-test was used to compare continuous data, while Fisher's exact test was used for categorical data. Fifty students aged 18-22 years took part in the study. Forty-six completed the task at an average of 168 seconds (range 103-301 seconds). There was no significant difference in the dexterity score or time to cannulate the renal artery between male and female students. Students who played computer games for >10 hours per week had better dexterity scores than those who did not play computer games: 9.1 versus 10.2 seconds (p=0.0237). Four of 19 students who did not play computer games failed to complete the task, while all of those who played computer games regularly completed the task (p=0.0168). Playing computer games is associated with better manual dexterity and ability to complete a basic interventional radiology task for novices. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  6. 29 CFR 541.707 - Occasional tasks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.707 Occasional tasks. Occasional, infrequently recurring tasks...

  7. 29 CFR 541.707 - Occasional tasks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.707 Occasional tasks. Occasional, infrequently recurring tasks...

  8. 29 CFR 541.707 - Occasional tasks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.707 Occasional tasks. Occasional, infrequently recurring tasks...

  9. 29 CFR 541.707 - Occasional tasks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Definitions and Miscellaneous Provisions § 541.707 Occasional tasks. Occasional, infrequently recurring tasks...

  10. Dual-Arm Generalized Compliant Motion With Shared Control

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1994-01-01

    Dual-Arm Generalized Compliant Motion (DAGCM) primitive computer program implementing improved unified control scheme for two manipulator arms cooperating in task in which both grasp same object. Provides capabilities for autonomous, teleoperation, and shared control of two robot arms. Unifies cooperative dual-arm control with multi-sensor-based task control and makes complete task-control capability available to higher-level task-planning computer system via large set of input parameters used to describe desired force and position trajectories followed by manipulator arms. Some concepts discussed in "A Generalized-Compliant-Motion Primitive" (NPO-18134).

  11. Numerical Study of Boundary-Layer in Aerodynamics

    NASA Technical Reports Server (NTRS)

    Shih, Tom I-P.

    1997-01-01

    The accomplishments made in the following three tasks are described: (1) The first task was to study shock-wave boundary-layer interactions with bleed - this study is relevant to boundary-layer control in external and mixed-compression inlets of supersonic aircraft; (2) The second task was to test RAAKE, a code developed for computing turbulence quantities; and (3) The third task was to compute flow around the Ames ER-2 aircraft that has been retrofitted with containers over its wings and fuselage. The appendices include two reports submitted to AIAA for publication.

  12. Dynamically allocating sets of fine-grained processors to running computations

    NASA Technical Reports Server (NTRS)

    Middleton, David

    1988-01-01

    Researchers explore an approach to using general purpose parallel computers which involves mapping hardware resources onto computations instead of mapping computations onto hardware. Problems such as processor allocation, task scheduling and load balancing, which have traditionally proven to be challenging, change significantly under this approach and may become amenable to new attacks. Researchers describe the implementation of this approach used by the FFP Machine whose computation and communication resources are repeatedly partitioned into disjoint groups that match the needs of available tasks from moment to moment. Several consequences of this system are examined.

  13. Image Processing and Computer Aided Diagnosis in Computed Tomography of the Breast

    DTIC Science & Technology

    2007-03-01

    Subject terms: breast imaging, breast CT, scatter compensation, denoising, CAD, cone-beam CT. ... clinical projection images. The CAD tool based on the signal-known-exactly (SKE) scenario is under development. Task 6: Test and compare the ... performances of the CAD developed in Task 5 applied to processed projection data from Task 1 with the CAD performance on the projection data without Bayesian ...

  14. The Effects of Synchronous Text-Based Computer-Mediated Communication Tasks on the Development of L2 and Academic Literacy: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Li, Jinrong

    2012-01-01

    The dissertation examines how synchronous text-based computer-mediated communication (SCMC) tasks may affect English as a Second Language (ESL) learners' development of second language (L2) and academic literacy. The study is motivated by two issues concerning the use of SCMC tasks in L2 writing classes. First, although some of the alleged…

  15. Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks

    PubMed Central

    Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong

    2011-01-01

    In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks has been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to the limitations of WSNs such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA) with a well-designed particle position code and fitness function is proposed. A mutation operator which can effectively improve the algorithm's ability of global search and population diversity is also introduced in this algorithm. Finally, the simulation results show that the proposed solution can achieve significantly better performance than other algorithms. PMID:22163971
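
    The DPSO-DA particle encoding and mutation operator are not reproduced here; as a minimal, hypothetical sketch of the underlying allocation problem, the Python fragment below scores a task-to-node assignment by the load of the busiest node and improves it with a naive random search standing in for the swarm. The cost matrix is invented.

        import random
        from typing import List

        # Hypothetical problem data: execution cost of each task on each sensor node.
        COSTS = [
            [4.0, 6.0, 5.0],   # task 0 on nodes 0..2
            [3.0, 2.0, 7.0],
            [5.0, 4.0, 3.0],
            [6.0, 5.0, 4.0],
        ]

        def makespan(assignment: List[int]) -> float:
            # Fitness: the load of the busiest node (lower is better).
            load = [0.0] * len(COSTS[0])
            for task, node in enumerate(assignment):
                load[node] += COSTS[task][node]
            return max(load)

        def random_search(iterations: int = 2000) -> List[int]:
            # Naive stand-in for the swarm search: sample random assignments.
            n_tasks, n_nodes = len(COSTS), len(COSTS[0])
            best = [random.randrange(n_nodes) for _ in range(n_tasks)]
            for _ in range(iterations):
                cand = [random.randrange(n_nodes) for _ in range(n_tasks)]
                if makespan(cand) < makespan(best):
                    best = cand
            return best

        if __name__ == "__main__":
            best = random_search()
            print(best, makespan(best))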

  16. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there is no physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task, a bound similar to the "encoding" bound governing how much the algorithmic information complexity of a Turing machine calculation can differ for two reference universal Turing machines. Finally, it is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike algorithmic information complexity), in that there is one and only one version of it that can be applicable throughout our universe.

  17. Air traffic surveillance and control using hybrid estimation and protocol-based conflict resolution

    NASA Astrophysics Data System (ADS)

    Hwang, Inseok

    The continued growth of air travel and recent advances in new technologies for navigation, surveillance, and communication have led to proposals by the Federal Aviation Administration (FAA) to provide reliable and efficient tools to aid Air Traffic Control (ATC) in performing their tasks. In this dissertation, we address four problems frequently encountered in air traffic surveillance and control: multiple target tracking and identity management, conflict detection, conflict resolution, and safety verification. We develop a set of algorithms and tools to aid ATC; these algorithms have the provable properties of safety, computational efficiency, and convergence. Firstly, we develop a multiple-maneuvering-target tracking and identity management algorithm which can keep track of maneuvering aircraft in noisy environments and of their identities. Secondly, we propose a hybrid probabilistic conflict detection algorithm between multiple aircraft which uses flight mode estimates as well as aircraft current state estimates. Our algorithm is based on hybrid models of aircraft, which incorporate both continuous dynamics and discrete mode switching. Thirdly, we develop an algorithm for multiple (greater than two) aircraft conflict avoidance that is based on a closed-form analytic solution and thus provides guarantees of safety. Finally, we consider the problem of safety verification of control laws for safety critical systems, with application to air traffic control systems. We approach safety verification through reachability analysis, which is a computationally expensive problem. We develop an over-approximate method for reachable set computation using polytopic approximation methods and dynamic optimization. These algorithms may be used either in a fully autonomous way, or as supporting tools to increase controllers' situational awareness and to reduce their workload.

  18. Open Land-Use Map: A Regional Land-Use Mapping Strategy for Incorporating OpenStreetMap with Earth Observations

    NASA Astrophysics Data System (ADS)

    Yang, D.; Fu, C. S.; Binford, M. W.

    2017-12-01

    The southeastern United States has high landscape heterogeneity, with heavily managed forestlands, highly developed agricultural lands, and multiple metropolitan areas. Human activities are transforming and altering land patterns and structures in both negative and positive manners. A land-use map at the regional scale is a heavy computational task but is critical to most landowners, researchers, and decision makers, enabling them to make informed decisions for varying objectives. There are two major difficulties in generating classification maps at the regional scale: the necessity of large training point sets and the expensive computation cost, in terms of both money and time, of classifier modeling. Volunteered Geographic Information (VGI) opens a new era in mapping and visualizing our world, where the platform is open for collecting valuable georeferenced information by volunteer citizens, and the data is freely available to the public. As one of the most well-known VGI initiatives, OpenStreetMap (OSM) contributes not only road network distribution, but also the potential for using this data to justify land cover and land use classifications. Google Earth Engine (GEE) is a platform designed for cloud-based mapping with robust and fast computing power. Most large-scale and national mapping approaches confuse "land cover" and "land use", or build up the land-use database based on modeled land cover datasets. Unlike most other large-scale approaches, we distinguish and differentiate land use from land cover. By focusing on our prime objective of mapping land use and management practices, a robust regional land-use mapping approach is developed by incorporating the OpenStreetMap dataset into Earth observation remote sensing imagery instead of the often-used land cover base maps.

  19. Attitude to the Use of the Computer for Learning Biological Concepts and Achievement of Students in an Environment Dominated by Indigenous Technology.

    ERIC Educational Resources Information Center

    Jegede, Olugbemiro J.; And Others

    The use of computers to facilitate learning is yet to make an appreciable in-road into the teaching-learning process in most developing Third World countries. The purchase cost and maintenance expenses of the equipment are the major inhibiting factors related to adoption of this high technology in these countries. This study investigated: (1) the…

  20. Analysis of Disaster Preparedness Planning Measures in DoD Computer Facilities

    DTIC Science & Technology

    1993-09-01

    Topics covered include computer disaster recovery, PC and LAN lessons learned, distributed architectures, and backups. "... amount of expense, but no client problems." (Leeke, 1993, p. 8) 2. Distributed Architectures: The majority of operations that were disrupted by the ...

  1. Network Support for Group Coordination

    DTIC Science & Technology

    2000-01-01

    Telecommuting and ubiquitous computing [40], the advent of networked multimedia, and less expensive technology have shifted telecollaboration into ... participants A and B, the payoff structure for choosing two actions i and j is P = A_ij + B_ij. If P = 0, then the interaction is called a zero-sum game, and ...

  2. High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems

    DTIC Science & Technology

    2017-05-01

    In addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API ... expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU ...

  3. Development of Multidisciplinary, Multifidelity Analysis, Integration, and Optimization of Aerospace Vehicles

    DTIC Science & Technology

    2010-02-27

    ... investigated in more detail. The intermediate level of fidelity, though more expensive, is then used to refine the analysis, add geometric detail, and ... design stage is used to further refine the analysis, narrowing the design to a handful of options. (Figure 1: Integrated Hierarchical Framework.) ... computational structural and computational fluid modeling. For the structural analysis tool we used McIntosh Structural Dynamics' finite element code CNEVAL.

  4. COST FUNCTION STUDIES FOR POWER REACTORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heestand, J.; Wos, L.T.

    1961-11-01

    A function to evaluate the cost of electricity produced by a nuclear power reactor was developed. The basic equation, revenue = capital charges + profit + operating expenses, was expanded in terms of various cost parameters to enable analysis of multiregion nuclear reactors with uranium and/or plutonium for fuel. A corresponding IBM 704 computer program, which will compute either the price of electricity or the value of plutonium, is presented in detail. (auth)
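
    To make the stated balance concrete, the small Python sketch below solves revenue = capital charges + profit + operating expenses for the price of electricity; all figures are invented placeholders, and the original IBM 704 program's multi-region fuel and plutonium accounting is not reproduced.

        def electricity_price(capital_charges: float,
                              profit: float,
                              operating_expenses: float,
                              energy_sold_kwh: float) -> float:
            # Price per kWh at which revenue exactly covers charges, profit, and expenses.
            required_revenue = capital_charges + profit + operating_expenses
            return required_revenue / energy_sold_kwh

        if __name__ == "__main__":
            # Invented yearly figures for a single-region plant.
            price = electricity_price(capital_charges=12_000_000.0,
                                      profit=2_000_000.0,
                                      operating_expenses=9_000_000.0,
                                      energy_sold_kwh=1_750_000_000.0)
            print(f"{price:.4f} $/kWh")   # ~0.0131 $/kWh with these placeholder numbers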

  5. Gaze entropy reflects surgical task load.

    PubMed

    Di Stasi, Leandro L; Diaz-Piedra, Carolina; Rieiro, Héctor; Sánchez Carrión, José M; Martin Berrido, Mercedes; Olivares, Gonzalo; Catena, Andrés

    2016-11-01

    Task (over-)load imposed on surgeons is a main contributing factor to surgical errors. Recent research has shown that gaze metrics represent a valid and objective index to assess operator task load in non-surgical scenarios. Thus, gaze metrics have the potential to improve workplace safety by providing accurate measurements of task load variations. However, the direct relationship between gaze metrics and surgical task load has not been investigated yet. We studied the effects of surgical task complexity on the gaze metrics of surgical trainees. We recorded the eye movements of 18 surgical residents, using a mobile eye tracker system, during the performance of three high-fidelity virtual simulations of laparoscopic exercises of increasing complexity level: the Clip Applying exercise, the Cutting Big exercise, and the Translocation of Objects exercise. We also measured performance accuracy and subjective ratings of complexity. Gaze entropy and velocity increased linearly with task complexity: the visual exploration pattern became less stereotyped (i.e., more random) and faster during the more complex exercises. Residents performed the Clip Applying exercise and the Cutting Big exercise better than the Translocation of Objects exercise, and their perceived task complexity differed accordingly. Our data show that gaze metrics are a valid and reliable surgical task load index. These findings have the potential to improve patient safety by providing accurate measurements of surgeon task (over-)load and might provide future indices to assess residents' learning curves, independently of expensive virtual simulators or time-consuming expert evaluation.
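
    The entropy measure is not specified in the abstract; one common choice, sketched here as an assumption, is the Shannon entropy of the distribution of fixated areas of interest (AOIs), where a higher value indicates a less stereotyped scan pattern. The AOI labels below are invented.

        import math
        from collections import Counter
        from typing import Sequence

        def gaze_entropy(aoi_sequence: Sequence[str]) -> float:
            # Shannon entropy (bits) of the distribution of fixated AOIs.
            counts = Counter(aoi_sequence)
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        if __name__ == "__main__":
            stereotyped = ["tool", "tissue", "tool", "tissue", "tool", "tissue"]
            dispersed   = ["tool", "monitor", "tissue", "clip", "monitor", "assistant"]
            print(gaze_entropy(stereotyped))  # lower entropy: repetitive scan pattern
            print(gaze_entropy(dispersed))    # higher entropy: more random exploration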

  6. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
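
    As a minimal illustration of the kind of approximate reanalysis surveyed above, the NumPy sketch below reuses the right and left eigenvectors of a baseline non-Hermitian matrix in a generalized Rayleigh quotient to estimate the eigenvalues of a slightly modified matrix without re-solving the eigenproblem; the matrices are invented, and the trace-theorem and normalization refinements discussed in the paper are not included.

        import numpy as np

        def rayleigh_estimates(A0: np.ndarray, A1: np.ndarray) -> np.ndarray:
            # Approximate eigenvalues of the modified matrix A1 using the right and
            # left eigenvectors of the baseline matrix A0 in a generalized Rayleigh quotient.
            _, right = np.linalg.eig(A0)                 # right eigenvectors of A0
            left = np.linalg.inv(right).conj().T         # columns are left eigenvectors of A0
            estimates = []
            for k in range(A0.shape[0]):
                y, x = left[:, k], right[:, k]
                estimates.append((y.conj() @ A1 @ x) / (y.conj() @ x))
            return np.array(estimates)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            A0 = rng.normal(size=(5, 5))
            A1 = A0 + 0.01 * rng.normal(size=(5, 5))     # small design modification
            print(np.sort_complex(rayleigh_estimates(A0, A1)))
            print(np.sort_complex(np.linalg.eigvals(A1)))  # close for small perturbations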

  7. Internet-Based Software Tools for Analysis and Processing of LIDAR Point Cloud Data via the OpenTopography Portal

    NASA Astrophysics Data System (ADS)

    Nandigam, V.; Crosby, C. J.; Baru, C.; Arrowsmith, R.

    2009-12-01

    LIDAR is an excellent example of the new generation of powerful remote sensing data now available to Earth science researchers. Capable of producing digital elevation models (DEMs) more than an order of magnitude higher resolution than those currently available, LIDAR data allows earth scientists to study the processes that contribute to landscape evolution at resolutions not previously possible, yet essential for their appropriate representation. Along with these high-resolution datasets comes an increase in the volume and complexity of data that the user must efficiently manage and process in order for it to be scientifically useful. Although there are expensive commercial LIDAR software applications available, processing and analysis of these datasets are typically computationally inefficient on the conventional hardware and software that is currently available to most of the Earth science community. We have designed and implemented an Internet-based system, the OpenTopography Portal, that provides integrated access to high-resolution LIDAR data as well as web-based tools for processing of these datasets. By using remote data storage and high performance compute resources, the OpenTopography Portal attempts to simplify data access and standard LIDAR processing tasks for the Earth Science community. The OpenTopography Portal allows users to access massive amounts of raw point cloud LIDAR data as well as a suite of DEM generation tools to enable users to generate custom digital elevation models to best fit their science applications. The Cyberinfrastructure software tools for processing the data are freely available via the portal and conveniently integrated with the data selection in a single user-friendly interface. The ability to run these tools on powerful Cyberinfrastructure resources instead of their own labs provides a huge advantage in terms of performance and compute power. The system also encourages users to explore data processing methods and the variations in algorithm parameters since all of the processing is done remotely and numerous jobs can be submitted in sequence. The web-based software also eliminates the need for users to deal with the hassles and costs associated with software installation and licensing while providing adequate disk space for storage and personal job archival capability. Although currently limited to data access and DEM generation tasks, the OpenTopography system is modular in design and can be modified to accommodate new processing tools as they become available. We are currently exploring implementation of higher-level DEM analysis tasks in OpenTopography, since such processing is often computationally intensive and thus lends itself to utilization of cyberinfrastructure. Products derived from OpenTopography processing are available in a variety of formats ranging from simple Google Earth visualizations of LIDAR-derived hillshades to various GIS-compatible grid formats. To serve community users less interested in data processing, OpenTopography also hosts 1 km^2 digital elevation model tiles as well as Google Earth image overlays for a synoptic view of the data.

  8. Affirmative Action: A Course for the Future. Affirmative Action Task Force for the Study "New Directions: African Americans in a Diversifying Nation."

    ERIC Educational Resources Information Center

    Joint Center for Political and Economic Studies, Washington, DC.

    A primary social dilemma today is that current strategies have led to the perception that affirmative action favors some population groups at the expense of others, that in a sense it uses one form of discrimination to combat another. It is essential to reconsider affirmative action strategies to implement those that are most appropriate for today…

  9. Simulation Learning: PC-Screen Based (PCSB) versus High Fidelity Simulation (HFS)

    DTIC Science & Technology

    2012-08-01

    ... methods for the use of simulation for teaching clinical skills to military and civilian clinicians. High-fidelity simulation is an expensive method of ... without the knowledge and approval of the IRB. Changes include, but are not limited to, modifications in study design, recruitment process, and number of ... (Flowchart: Person C-Collar simulation algorithm, Pathway A, Scenario A - Spinal stabilization: sub-processes.)

  10. Fast perceptual image hash based on cascade algorithm

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya

    2017-09-01

    In this paper, we propose a perceptual image hash algorithm based on a cascade algorithm, which can be applied in image authentication, retrieval, and indexing. A perceptual image hash is used for image retrieval in the sense of human perception, robust against distortions caused by compression, noise, common signal processing, and geometrical modifications. The main disadvantage of perceptual hashing is its high time expense. The proposed cascade algorithm initializes retrieval with short hashes, and then a full hash is applied to the surviving results. Computer simulation results show that the proposed hash algorithm yields a good performance in terms of robustness, discriminability, and time expenses.
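
    The paper's specific hash construction is not given in the abstract; the sketch below illustrates the cascade idea with a generic average hash on a grayscale image, where a cheap 16-bit hash screens out obvious non-matches and only the survivors are compared with a longer 64-bit hash. The thresholds and images are arbitrary choices.

        from typing import List

        def average_hash(img: List[List[float]], size: int) -> int:
            # Block-average the grayscale image down to size x size,
            # then set one bit per cell that is brighter than the mean.
            h, w = len(img), len(img[0])
            cells = []
            for i in range(size):
                for j in range(size):
                    block = [img[y][x]
                             for y in range(i * h // size, (i + 1) * h // size)
                             for x in range(j * w // size, (j + 1) * w // size)]
                    cells.append(sum(block) / len(block))
            mean = sum(cells) / len(cells)
            bits = 0
            for c in cells:
                bits = (bits << 1) | (1 if c > mean else 0)
            return bits

        def hamming(a: int, b: int) -> int:
            return bin(a ^ b).count("1")

        def cascade_match(query, candidates, short_thr=3, full_thr=8):
            # Cheap 4x4 (16-bit) hash first; expensive 8x8 (64-bit) hash only for survivors.
            q_short, q_full = average_hash(query, 4), average_hash(query, 8)
            survivors = [c for c in candidates
                         if hamming(average_hash(c, 4), q_short) <= short_thr]
            return [c for c in survivors
                    if hamming(average_hash(c, 8), q_full) <= full_thr]

        if __name__ == "__main__":
            base = [[float(x + y) for x in range(16)] for y in range(16)]
            similar = [[v + 0.3 for v in row] for row in base]                  # near-duplicate
            different = [[float(30 - (x + y)) for x in range(16)] for y in range(16)]
            print(len(cascade_match(base, [similar, different])))   # 1: only the near-duplicate survives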

  11. Impact of 2D and 3D vision on performance of novice subjects using da Vinci robotic system.

    PubMed

    Blavier, A; Gaudissart, Q; Cadière, G B; Nyssen, A S

    2006-01-01

    The aim of this study was to evaluate the impact of 3D and 2D vision on the performance of novice subjects using the da Vinci robotic system. 224 nurses without any surgical experience were divided into two groups and executed a motor task with the robotic system, in 2D for one group and in 3D for the other group. Time to perform the task was recorded. Our data showed significantly better time performance in the 3D view (24.67 +/- 11.2) than in the 2D view (40.26 +/- 17.49, P < 0.001). Our findings emphasized the advantage of 3D vision over 2D vision in performing a surgical task, encouraging the development of efficient and less expensive 3D systems in order to improve the accuracy of surgical gestures, resident training, and operating time.

  12. Age-Group Differences in Interference from Young and Older Emotional Faces.

    PubMed

    Ebner, Natalie C; Johnson, Marcia K

    2010-11-01

    Human attention is selective, focusing on some aspects of events at the expense of others. In particular, angry faces engage attention. Most studies have used pictures of young faces, even when comparing young and older age groups. Two experiments asked (1) whether task-irrelevant faces of young and older individuals with happy, angry, and neutral expressions disrupt performance on a face-unrelated task, (2) whether interference varies for faces of different ages and different facial expressions, and (3) whether young and older adults differ in this regard. Participants gave speeded responses on a number task while irrelevant faces appeared in the background. Both age groups were more distracted by own than other-age faces. In addition, young participants' responses were slower for angry than happy faces, whereas older participants' responses were slower for happy than angry faces. Factors underlying age-group differences in interference from emotional faces of different ages are discussed.

  13. Computer control improves ethylene plant operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitehead, B.D.; Parnis, M.

    ICIA Australia ordered a turnkey 250,000-tpy ethylene plant to be built at the Botany site, Sydney, Australia. Following a feasibility study, an additional order was placed for a process computer system for advanced process control and optimization. This article gives a broad outline of the process computer tasks, how the tasks were implemented, what problems were met, what lessons were learned and what results were achieved.

  14. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    PubMed

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of simulation showed that hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
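
    The SOS organism phases are not reproduced here; as a minimal, hypothetical sketch of the simulated-annealing ingredient applied to the same kind of problem, the Python fragment below refines a random task-to-VM schedule with single-task moves, using makespan as the fitness. Task lengths and VM speeds are invented.

        import math
        import random
        from typing import List

        # Hypothetical task lengths (million instructions) and VM speeds (MIPS).
        TASKS = [400, 220, 310, 150, 500, 280, 90, 360]
        VM_MIPS = [500, 750, 1000]

        def makespan(assignment: List[int]) -> float:
            # Completion time of the most loaded VM (lower is better).
            finish = [0.0] * len(VM_MIPS)
            for task, vm in enumerate(assignment):
                finish[vm] += TASKS[task] / VM_MIPS[vm]
            return max(finish)

        def anneal(iterations: int = 5000, t0: float = 1.0, cooling: float = 0.999) -> List[int]:
            # Move one task to another VM per step; accept worse moves with shrinking probability.
            current = [random.randrange(len(VM_MIPS)) for _ in TASKS]
            best, temperature = list(current), t0
            for _ in range(iterations):
                candidate = list(current)
                candidate[random.randrange(len(TASKS))] = random.randrange(len(VM_MIPS))
                delta = makespan(candidate) - makespan(current)
                if delta < 0 or random.random() < math.exp(-delta / temperature):
                    current = candidate
                    if makespan(current) < makespan(best):
                        best = list(current)
                temperature *= cooling
            return best

        if __name__ == "__main__":
            schedule = anneal()
            print(schedule, round(makespan(schedule), 3))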

  15. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment

    PubMed Central

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of simulation showed that hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127

  16. Prediction of Drug-Target Interaction Networks from the Integration of Protein Sequences and Drug Chemical Structures.

    PubMed

    Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Zhou, Yong; An, Ji-Yong

    2017-07-05

    Knowledge of drug-target interaction (DTI) plays an important role in discovering new drug candidates. Unfortunately, experimental methods for predicting DTI have unavoidable shortcomings, including their time-consuming and expensive nature. This motivates us to develop an effective computational method to predict DTI based on protein sequence. In the paper, we proposed a novel computational approach based on protein sequence, namely PDTPS (Predicting Drug Targets with Protein Sequence), to predict DTI. The PDTPS method combines Bi-gram Probabilities (BIGP), Position Specific Scoring Matrix (PSSM), and Principal Component Analysis (PCA) with a Relevance Vector Machine (RVM). In order to evaluate the prediction capacity of PDTPS, experiments were carried out on enzyme, ion channel, GPCR, and nuclear receptor datasets using five-fold cross-validation tests. The proposed PDTPS method achieved average accuracies of 97.73%, 93.12%, 86.78%, and 87.78% on the enzyme, ion channel, GPCR, and nuclear receptor datasets, respectively. The experimental results showed that our method has good prediction performance. Furthermore, in order to further evaluate the prediction performance of the proposed PDTPS method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the enzyme and ion channel datasets, and with other existing methods on all four datasets. The promising comparison results further demonstrate the efficiency and robustness of the proposed PDTPS method. This makes it a useful tool, suitable for predicting DTI as well as other bioinformatics tasks.
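
    The exact PDTPS feature pipeline cannot be reconstructed from the abstract alone; as an illustration of the bi-gram-probabilities (BIGP) step it mentions, the NumPy sketch below uses one common formulation in which consecutive rows of the PSSM are combined into a fixed 400-dimensional descriptor. The PSSM here is a random placeholder.

        import numpy as np

        def bigram_features(pssm: np.ndarray) -> np.ndarray:
            # One common formulation of bi-gram probabilities from a PSSM:
            # B[m, n] = sum over consecutive residue positions i of P[i, m] * P[i+1, n],
            # flattened into a fixed 20 x 20 = 400-dimensional feature vector.
            L, n_aa = pssm.shape            # L residues x 20 amino-acid columns
            B = np.zeros((n_aa, n_aa))
            for i in range(L - 1):
                B += np.outer(pssm[i], pssm[i + 1])
            return B.flatten()

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            fake_pssm = rng.random((120, 20))   # placeholder for a real PSSM profile
            feat = bigram_features(fake_pssm)
            print(feat.shape)                   # (400,) fixed-length protein descriptor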

  17. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing.

    PubMed

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis G; Atkins, David C; Narayanan, Shrikanth S

    2015-01-01

    The technology for evaluating patient-provider interactions in psychotherapy-observational coding-has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks, including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally derived empathy ratings was evaluated against human ratings for each provider. Computationally derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and an F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracy, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.

  18. UFO: a web server for ultra-fast functional profiling of whole genome protein sequences.

    PubMed

    Meinicke, Peter

    2009-09-02

    Functional profiling is a key technique to characterize and compare the functional potential of entire genomes. The estimation of profiles according to an assignment of sequences to functional categories is a computationally expensive task because it requires the comparison of all protein sequences from a genome with a usually large database of annotated sequences or sequence families. Based on machine learning techniques for Pfam domain detection, the UFO web server for ultra-fast functional profiling allows researchers to process large protein sequence collections instantaneously. Besides the frequencies of Pfam and GO categories, the user also obtains the sequence specific assignments to Pfam domain families. In addition, a comparison with existing genomes provides dissimilarity scores with respect to 821 reference proteomes. Considering the underlying UFO domain detection, the results on 206 test genomes indicate a high sensitivity of the approach. In comparison with current state-of-the-art HMMs, the runtime measurements show a considerable speed up in the range of four orders of magnitude. For an average size prokaryotic genome, the computation of a functional profile together with its comparison typically requires about 10 seconds of processing time. For the first time the UFO web server makes it possible to get a quick overview on the functional inventory of newly sequenced organisms. The genome scale comparison with a large number of precomputed profiles allows a first guess about functionally related organisms. The service is freely available and does not require user registration or specification of a valid email address.
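
    The server's scoring and domain-detection models are not reproduced here; as a minimal illustration of what a functional profile and a genome-to-genome dissimilarity might look like, the Python sketch below turns per-sequence Pfam family assignments into relative frequencies and compares two hypothetical genomes with a Bray-Curtis-style score.

        from collections import Counter
        from typing import Dict, Iterable

        def functional_profile(domain_hits: Iterable[str]) -> Dict[str, float]:
            # Relative frequency of each functional category (e.g., Pfam family)
            # over all assigned protein sequences of a genome.
            counts = Counter(domain_hits)
            total = sum(counts.values())
            return {family: n / total for family, n in counts.items()}

        def dissimilarity(p: Dict[str, float], q: Dict[str, float]) -> float:
            # Bray-Curtis-style dissimilarity between two profiles (0 = identical).
            families = set(p) | set(q)
            return 0.5 * sum(abs(p.get(f, 0.0) - q.get(f, 0.0)) for f in families)

        if __name__ == "__main__":
            genome_a = ["PF00005", "PF00005", "PF07690", "PF00072", "PF00072", "PF00072"]
            genome_b = ["PF00005", "PF07690", "PF07690", "PF00072", "PF02518"]
            print(dissimilarity(functional_profile(genome_a), functional_profile(genome_b)))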

  19. Real-time classification and sensor fusion with a spiking deep belief network.

    PubMed

    O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael

    2013-01-01

    Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.

  20. Grid Task Execution

    NASA Technical Reports Server (NTRS)

    Hu, Chaumin

    2007-01-01

    IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
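
    As a simplified, sequential sketch of the starting-condition idea described above (the actual IPG Execution Service state names, parallelism, and persistence are not reproduced), the Python fragment below runs each pending task once the states of the other tasks satisfy its condition, so a task can be set to run when another succeeds, fails, or either.

        from typing import Callable, Dict

        # Task states used by this sketch (the real service's state names may differ).
        PENDING, SUCCEEDED, FAILED = "PENDING", "SUCCEEDED", "FAILED"

        class Task:
            def __init__(self, name: str, action: Callable[[], bool],
                         start_when: Callable[[Dict[str, str]], bool]):
                self.name, self.action, self.start_when = name, action, start_when
                self.state = PENDING

        def run_job(tasks: Dict[str, Task]) -> Dict[str, str]:
            # Repeatedly start any pending task whose starting condition over the
            # states of the other tasks is satisfied, until nothing more can run.
            progressed = True
            while progressed:
                progressed = False
                states = {name: t.state for name, t in tasks.items()}
                for t in tasks.values():
                    if t.state == PENDING and t.start_when(states):
                        t.state = SUCCEEDED if t.action() else FAILED
                        progressed = True
            return {name: t.state for name, t in tasks.items()}

        if __name__ == "__main__":
            job = {
                "stage_data": Task("stage_data", lambda: True, lambda s: True),
                "run_app":    Task("run_app",    lambda: False,
                                   lambda s: s["stage_data"] == SUCCEEDED),
                "cleanup":    Task("cleanup",    lambda: True,
                                   lambda s: s["run_app"] in (SUCCEEDED, FAILED)),
            }
            print(run_job(job))   # cleanup still runs even though run_app failed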

  1. Crew/computer communications study. Volume 1: Final report. [onboard computerized communications system for spacecrews

    NASA Technical Reports Server (NTRS)

    Johannes, J. D.

    1974-01-01

    Techniques, methods, and system requirements are reported for an onboard computerized communications system that provides on-line computing capability during manned space exploration. Communications between man and computer take place by sequential execution of each discrete step of a procedure, by interactive progression through a tree-type structure to initiate tasks or by interactive optimization of a task requiring man to furnish a set of parameters. Effective communication between astronaut and computer utilizes structured vocabulary techniques and a word recognition system.

  2. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  3. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
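
    The Krylov-subspace projection and recycling described above are not reproduced here; as a minimal NumPy sketch of the underlying Levenberg-Marquardt update, the fragment below solves the damped normal equations (J^T J + lambda I) delta = -J^T r for a toy exponential-fit problem with invented data.

        import numpy as np

        def lm_step(residual, jacobian, params, lam):
            # One Levenberg-Marquardt update: solve (J^T J + lam * I) delta = -J^T r.
            r = residual(params)
            J = jacobian(params)
            A = J.T @ J + lam * np.eye(params.size)
            delta = np.linalg.solve(A, -J.T @ r)
            return params + delta

        if __name__ == "__main__":
            # Toy nonlinear least squares: fit y = a * exp(b * x) to noisy data.
            rng = np.random.default_rng(0)
            x = np.linspace(0.0, 1.0, 50)
            y = 2.0 * np.exp(1.5 * x) + 0.01 * rng.normal(size=x.size)

            residual = lambda p: p[0] * np.exp(p[1] * x) - y
            jacobian = lambda p: np.column_stack([np.exp(p[1] * x),
                                                  p[0] * x * np.exp(p[1] * x)])
            p, lam = np.array([1.0, 1.0]), 1e-2
            for _ in range(20):
                p = lm_step(residual, jacobian, p, lam)
            print(p)   # should approach [2.0, 1.5]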

  4. CREASE 6.0 Catalog of Resources for Education in Ada and Software Engineering

    DTIC Science & Technology

    1992-02-01

    Concepts: Programming, Software Engineering, Strong Typing, Tasking. Audience: Computer Scientists. Textbook(s): Barnes, J., Programming in Ada, 3rd ed., Addison-Wesley ... Ada. Concepts: Abstract Data Types, Management Overview, Package, Real-Time Programming, Tasking. Audience: Computer Scientists. Textbook(s): Barnes, J.

  5. Strategy generalization across orientation tasks: testing a computational cognitive model.

    PubMed

    Gunzelmann, Glenn

    2008-07-08

    Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human performance was measured on an orientation task requiring participants to identify the location of a target either on a map (find-on-map) or within an egocentric view of a space (find-in-scene). A general strategy instantiated in a computational cognitive model of the find-on-map task, based on the results from Gunzelmann and Anderson (2006), was adapted to perform both tasks and used to generate performance predictions for a new study. The qualitative fit of the model to the human data supports the view that participants were able to tailor a general strategy to the requirements of particular spatial tasks. The quantitative differences between the predictions of the model and the performance of human participants in the new experiment expose individual differences in sample populations. The model provides a means of accounting for those differences and a framework for understanding how human spatial abilities are applied to naturalistic spatial tasks that involve reasoning with maps. 2008 Cognitive Science Society, Inc.

  6. From gaze cueing to perspective taking: Revisiting the claim that we automatically compute where or what other people are looking at

    PubMed Central

    Bukowski, Henryk; Hietanen, Jari K.; Samson, Dana

    2015-01-01

    Two paradigms have shown that people automatically compute what or where another person is looking at. In the visual perspective-taking paradigm, participants judge how many objects they see; whereas, in the gaze cueing paradigm, participants identify a target. Unlike in the former task, in the latter task, the influence of what or where the other person is looking at is only observed when the other person is presented alone before the task-relevant objects. We show that this discrepancy across the two paradigms is not due to differences in visual settings (Experiment 1) or available time to extract the directional information (Experiment 2), but that it is caused by how attention is deployed in response to task instructions (Experiment 3). Thus, the mere presence of another person in the field of view is not sufficient to compute where/what that person is looking at, which qualifies the claimed automaticity of such computations. PMID:26924936

  7. Climate@Home: Crowdsourcing Climate Change Research

    NASA Astrophysics Data System (ADS)

    Xu, C.; Yang, C.; Li, J.; Sun, M.; Bambacus, M.

    2011-12-01

    Climate change deeply impacts human wellbeing. Significant amounts of resources have been invested in building super-computers that are capable of running advanced climate models, which help scientists understand climate change mechanisms and predict its trend. Although climate change influences all human beings, the general public is largely excluded from the research. On the other hand, scientists are eagerly seeking communication mediums for effectively enlightening the public on climate change and its consequences. The Climate@Home project is devoted to connecting the two ends with an innovative solution: crowdsourcing climate computing to the general public by harvesting volunteered computing resources from the participants. A distributed web-based computing platform will be built to support climate computing, and the general public can 'plug in' their personal computers to participate in the research. People contribute the spare computing power of their computers to run a computer model, which is used by scientists to predict climate change. Traditionally, only super-computers could handle such a large processing load. By orchestrating massive numbers of personal computers to perform atomized data processing tasks, investments in new super-computers, the energy they consume, and the carbon they release are all reduced. Meanwhile, the platform forms a social network of climate researchers and the general public, which may be leveraged to raise climate awareness among the participants. A portal is to be built as the gateway to the Climate@Home project. Three types of roles and the corresponding functionalities are designed and supported. The end users include citizen participants, climate scientists, and project managers. Citizen participants connect their computing resources to the platform by downloading and installing a computing engine on their personal computers. Computer climate models are defined at the server side. Climate scientists configure model parameters through the portal user interface. After model configuration, scientists launch the computing task. Next, data are atomized and distributed to computing engines running on citizen participants' computers. Scientists receive notifications on the completion of computing tasks and examine modeling results via the visualization modules of the portal. Computing tasks, computing resources, and participants are managed by project managers via portal tools. A portal prototype has been built for proof of concept. Three forums have been set up for different groups of users to share information on the science, technology, and educational outreach aspects of the project. A Facebook account has been set up to distribute messages via the most popular social networking platform. New threads are synchronized from the forums to Facebook. A mapping tool displays the geographic locations of the participants and the status of tasks on each client node. A group of users has been invited to test functions such as the forums, blogs, and computing resource monitoring.
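
    The record describes an architecture rather than an algorithm, but the central loop (atomize a model run into work units, hand them to volunteer machines, collect results) is easy to sketch. The toy Python stand-in below uses threads in a single process in place of real volunteer PCs and an invented run_model; the actual Climate@Home engine and its protocol are not specified in the record.

```python
import queue
import threading

def make_work_units(param_sets):
    """Server side: atomize one climate experiment into independent units,
    e.g. one unit per model parameter configuration."""
    work = queue.Queue()
    for unit_id, params in enumerate(param_sets):
        work.put({"id": unit_id, "params": params})
    return work

def volunteer_client(work, results, run_model):
    """Client side: the downloaded computing engine pulls units, runs the
    model on spare cycles, and reports results back to the portal."""
    while True:
        try:
            unit = work.get_nowait()
        except queue.Empty:
            return
        results.put({"id": unit["id"], "output": run_model(unit["params"])})

# Toy demo: a fake "climate model" and three simulated volunteer machines.
run_model = lambda p: p["co2"] * (1.0 - p["albedo"])
work = make_work_units([{"co2": c, "albedo": a}
                        for c in (280, 560) for a in (0.1, 0.3)])
results = queue.Queue()
clients = [threading.Thread(target=volunteer_client,
                            args=(work, results, run_model))
           for _ in range(3)]
for c in clients:
    c.start()
for c in clients:
    c.join()
while not results.empty():
    print(results.get())
```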

  8. Do monkeys choose to choose?

    PubMed

    Perdue, Bonnie M; Evans, Theodore A; Washburn, David A; Rumbaugh, Duane M; Beran, Michael J

    2014-06-01

    Both empirical and anecdotal evidence supports the idea that choice is preferred by humans. Previous research has demonstrated that this preference extends to nonhuman animals, but it remains largely unknown whether animals will actively seek out or prefer opportunities to choose. Here we explored the issue of whether capuchin and rhesus monkeys choose to choose. We used a modified version of the SELECT task, a computer program in which monkeys can choose the order of completion of various psychomotor and cognitive tasks. In the present experiments, each trial began with a choice between two icons, one of which allowed the monkey to select the order of task completion, and the other of which led to the assignment of a task order by the computer. In either case, subjects still had to complete the same number of tasks and the same number of task trials. The tasks were relatively easy, and the monkeys responded correctly on most trials. Thus, global reinforcement rates were approximately equated across conditions. The only difference was whether the monkey chose the task order or it was assigned, thus isolating the act of choosing. Given sufficient experience with the task icons, all monkeys showed a significant preference for choice when the alternative was a randomly assigned order of tasks. To a lesser extent, some of the monkeys maintained a preference for choice over a preferred, but computer-assigned, task order that was yoked to their own previous choice selection. The results indicated that monkeys prefer to choose when all other aspects of the task are equated.

  9. Loosely Coupled GPS-Aided Inertial Navigation System for Range Safety

    NASA Technical Reports Server (NTRS)

    Heatwole, Scott; Lanzi, Raymond J.

    2010-01-01

    The Autonomous Flight Safety System (AFSS) aims to replace the human element of range safety operations, as well as to reduce reliance on expensive downrange assets for launches of expendable launch vehicles (ELVs). The system consists of multiple navigation sensors and flight computers that provide a highly reliable platform. It is designed to ensure that single-event failures in a flight computer or sensor will not bring down the whole system. The flight computer uses a rules-based structure derived from range safety requirements to decide whether or not to destroy the rocket.
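
    The abstract gives no detail on the rule set, so the following is only a hypothetical illustration of what a rules-based destruct decision could look like: predicate rules evaluated over a vehicle state estimate, with termination recommended if any rule fires. All rule names, state variables, and thresholds here are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    violated: Callable[[dict], bool]  # predicate over the current state estimate

# Hypothetical rules in the spirit of range-safety requirements; the real
# AFSS rule set and state variables are not given in this record.
RULES = [
    Rule("outside flight corridor", lambda s: abs(s["crossrange_km"]) > 10.0),
    Rule("impact point in keep-out zone", lambda s: s["iip_in_keepout"]),
]

def destruct_decision(state: dict) -> bool:
    """Recommend flight termination if any safety rule is violated."""
    return any(rule.violated(state) for rule in RULES)

print(destruct_decision({"crossrange_km": 3.2, "iip_in_keepout": False}))  # False
```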

  10. Application of the System Identification Technique to Goal-Directed Saccades.

    DTIC Science & Technology

    1984-07-30

    ...1983 to May 31, 1984 by the AFOSR under Grant No. AFOSR-83-0187. Budget: 1. Salaries & Wages $7,257; 2. Employee Benefits $4,186; 3. Indirect Costs $1,177; 4. Equipment $2,127 (DEC VT100 terminal, computer terminal table & chair, computer interface); 5. Travel $672; 6. Miscellaneous Expenses $281 (computer costs, telephone, xeroxing, report costs). Total: $12,000.

  11. Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing

    DTIC Science & Technology

    1994-07-01

    ...implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes interconnected by buses. 2.1 Run Time Partitioning The...nodes respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1...short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing

  12. Non-Evolutionary Algorithms for Scheduling Dependent Tasks in Distributed Heterogeneous Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne F. Boyer; Gurdeep S. Hura

    2005-09-01

    The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which the task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, requires less memory, and has fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
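
    The two ingredients of RS, a randomized topological sort and a heuristic mapping from task order to schedule, translate directly into code. The Python sketch below is our reading of the abstract, not the paper's exact formulation; the earliest-finish-time mapping heuristic in particular is an assumption.

```python
import random

def random_topological_order(tasks, preds):
    """Draw one valid task order: repeatedly pick a random task whose
    predecessors have all been ordered (randomized topological sort)."""
    waiting = {t: set(preds[t]) for t in tasks}
    ready = [t for t in tasks if not waiting[t]]
    order = []
    while ready:
        t = ready.pop(random.randrange(len(ready)))
        order.append(t)
        for u in tasks:
            if t in waiting[u]:
                waiting[u].remove(t)
                if not waiting[u]:
                    ready.append(u)
    return order

def makespan(order, preds, exec_time, machines):
    """Heuristic mapping from task order to schedule: each task goes to the
    machine that finishes it earliest (run times are machine-dependent)."""
    free = dict.fromkeys(machines, 0.0)
    finish = {}
    for t in order:
        ready_at = max((finish[p] for p in preds[t]), default=0.0)
        best = min(machines,
                   key=lambda m: max(free[m], ready_at) + exec_time[t][m])
        finish[t] = max(free[best], ready_at) + exec_time[t][best]
        free[best] = finish[t]
    return max(finish.values())  # Cmax

def random_scheduling(tasks, preds, exec_time, machines, n_samples=1000):
    """RS: a succession of randomized task orderings, keeping the best one."""
    return min(makespan(random_topological_order(tasks, preds),
                        preds, exec_time, machines)
               for _ in range(n_samples))
```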

  13. GPSS/360 computer models to simulate aircraft passenger emergency evacuations.

    DOT National Transportation Integrated Search

    1972-09-01

    Live tests of emergency evacuation of transport aircraft are becoming increasingly expensive as the planes grow to a size seating hundreds of passengers. Repeated tests, to cope with random variations, increase these costs, as well as risks of injuri...

  14. Another View of "PC vs. Mac."

    ERIC Educational Resources Information Center

    DeMillion, John A.

    1998-01-01

    An article by Nan Wodarz in the November 1997 issue listed reasons why the Microsoft computer operating system was superior to the Apple Macintosh platform. This rebuttal contends the Macintosh is less expensive, lasts longer, and requires less technical staff for support. (MLF)

  15. Experimental CAD Course Uses Low-Cost Systems.

    ERIC Educational Resources Information Center

    Wohlers, Terry

    1984-01-01

    Describes the outstanding results obtained when a department of industrial sciences used special software on microcomputers to teach computer-aided design (CAD) as an alternative to much more expensive equipment. The systems used and prospects for the future are also considered. (JN)

  16. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    NASA Astrophysics Data System (ADS)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
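
    A rough Python sketch of the local/global split may help. It is illustrative rather than the paper's exact GPM: each node owns one row of the covariance matrix and computes its entry of R*b locally, while the norm and the final Rayleigh quotient, the global tasks, are obtained by average consensus, here realized as repeated multiplication by a doubly stochastic mixing matrix W matched to the communication graph.

```python
import numpy as np

def consensus_average(values, W, n_iters=50):
    """Global task via gossip: iterate x <- W x with a doubly stochastic
    mixing matrix W; every node's entry converges to the network average."""
    x = np.asarray(values, dtype=float)
    for _ in range(n_iters):
        x = W @ x
    return x

def decentralized_top_eigenvalue(R_rows, W, n_power_iters=30):
    """Power-method outer loop. Node i stores row i of the sample covariance
    R and computes (R b)_i locally; the norm and the Rayleigh quotient are
    the global tasks, replaced here by average consensus. (For brevity the
    sketch lets every node read the full iterate b.)"""
    n = len(R_rows)
    b = np.ones(n) / np.sqrt(n)
    for _ in range(n_power_iters):
        local = np.array([R_rows[i] @ b for i in range(n)])  # local task
        norm = np.sqrt(n * consensus_average(local**2, W))   # global task
        b = local / norm              # per-node normalization estimates
    Rb = np.array([R_rows[i] @ b for i in range(n)])
    num = consensus_average(b * Rb, W)  # ~ mean of b_i (R b)_i
    den = consensus_average(b * b, W)   # ~ mean of b_i^2
    return num / den  # each node's estimate of the dominant eigenvalue
```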

  17. Improved Neural Signal Classification in a Rapid Serial Visual Presentation Task Using Active Learning.

    PubMed

    Marathe, Amar R; Lawhern, Vernon J; Wu, Dongrui; Slayback, David; Lance, Brent J

    2016-03-01

    The application space for brain-computer interface (BCI) technologies is rapidly expanding with improvements in technology. However, most real-time BCIs require extensive individualized calibration prior to use, and systems often have to be recalibrated to account for changes in the neural signals due to a variety of factors including changes in human state, the surrounding environment, and task conditions. Novel approaches to reduce calibration time or effort will dramatically improve the usability of BCI systems. Active Learning (AL) is an iterative semi-supervised learning technique for learning in situations in which data may be abundant, but labels for the data are difficult or expensive to obtain. In this paper, we apply AL to a simulated BCI system for target identification using data from a rapid serial visual presentation (RSVP) paradigm to minimize the amount of training samples needed to initially calibrate a neural classifier. Our results show AL can produce similar overall classification accuracy with significantly less labeled data (in some cases less than 20%) when compared to alternative calibration approaches. In fact, AL classification performance matches performance of 10-fold cross-validation (CV) in over 70% of subjects when training with less than 50% of the data. To our knowledge, this is the first work to demonstrate the use of AL for offline electroencephalography (EEG) calibration in a simulated BCI paradigm. While AL itself is not often amenable for use in real-time systems, this work opens the door to alternative AL-like systems that are more amenable for BCI applications and thus enables future efforts for developing highly adaptive BCI systems.
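
    The abstract does not spell out the query strategy, so the sketch below uses pool-based uncertainty sampling, one of the most common AL schemes, on pre-extracted EEG epoch features with scikit-learn: the classifier repeatedly asks for the label of the epoch it is least certain about, instead of requiring labels for the whole calibration set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_calibration(X, y_oracle, n_init=10, n_queries=40, seed=0):
    """Pool-based uncertainty sampling on pre-extracted RSVP epoch features.
    y_oracle stands in for the experimenter supplying a label on request.
    (Assumes the random seed set contains both target and non-target epochs.)"""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X), size=n_init, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        clf.fit(X[labeled], y_oracle[labeled])
        p_target = clf.predict_proba(X[pool])[:, 1]
        query = pool[int(np.argmin(np.abs(p_target - 0.5)))]  # least certain
        labeled.append(query)       # ask the oracle for just this label
        pool.remove(query)
    return clf.fit(X[labeled], y_oracle[labeled])
```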

  18. EHR-based phenotyping: Bulk learning and evaluation.

    PubMed

    Chiu, Po-Hsiang; Hripcsak, George

    2017-06-01

    In data-driven phenotyping, a core computational task is to identify medical concepts and their variations from sources of electronic health records (EHR) to stratify phenotypic cohorts. A conventional analytic framework for phenotyping largely uses a manual knowledge engineering approach or a supervised learning approach where clinical cases are represented by variables encompassing diagnoses, medicinal treatments and laboratory tests, among others. In such a framework, tasks associated with feature engineering and data annotation remain a tedious and expensive exercise, resulting in poor scalability. In addition, certain clinical conditions, such as those that are rare and acute in nature, may never accumulate sufficient data over time, which poses a challenge to establishing accurate and informative statistical models. In this paper, we use infectious diseases as the domain of study to demonstrate a hierarchical learning method based on ensemble learning that attempts to address these issues through feature abstraction. We use a sparse annotation set to train and evaluate many phenotypes at once, which we call bulk learning. In this batch-phenotyping framework, disease cohort definitions can be learned from within the abstract feature space established by using multiple diseases as a substrate and diagnostic codes as surrogates. In particular, using surrogate labels for model training renders possible its subsequent evaluation using only a sparse annotated sample. Moreover, statistical models can be trained and evaluated, using the same sparse annotation, from within the abstract feature space of low dimensionality that encapsulates the shared clinical traits of these target diseases, collectively referred to as the bulk learning set. Copyright © 2017 Elsevier Inc. All rights reserved.
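
    The surrogate-label device is the part that translates most directly into code. The sketch below is our reading with invented data structures: one model per disease is trained on cheap labels derived from diagnostic codes, and all models are then evaluated against the single sparse annotated sample; the feature matrix X is the shared abstract feature space, with the surrogate codes themselves excluded.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def bulk_phenotype(X, code_labels, gold, diseases):
    """Train one model per disease on surrogate labels derived from
    diagnostic codes, then evaluate on the sparse annotated sample.
    X           : (n_patients, n_features) shared abstract feature space
                  (surrogate codes excluded from the features)
    code_labels : dict disease -> bool array, 'patient has the code'
    gold        : dict disease -> {patient_index: annotated label}"""
    auc = {}
    for d in diseases:
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X, code_labels[d])                    # cheap, noisy labels
        idx = np.fromiter(gold[d].keys(), dtype=int)  # sparse, costly labels
        y_true = np.fromiter(gold[d].values(), dtype=int)
        auc[d] = roc_auc_score(y_true, clf.predict_proba(X[idx])[:, 1])
    return auc
```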

  19. An opportunity cost model of subjective effort and task performance

    PubMed Central

    Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus

    2013-01-01

    Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775
