Sample records for WLCG common computing

  1. Unified Monitoring Architecture for IT and Grid Services

    NASA Astrophysics Data System (ADS)

    Aimar, A.; Aguado Corman, A.; Andrade, P.; Belov, S.; Delgado Fernandez, J.; Garrido Bear, B.; Georgiou, M.; Karavakis, E.; Magnoni, L.; Rama Ballesteros, R.; Riahi, H.; Rodriguez Martinez, J.; Saiz, P.; Zolnai, D.

    2017-10-01

    This paper provides a detailed overview of the Unified Monitoring Architecture (UMA) that aims at merging the monitoring of the CERN IT data centres and the WLCG monitoring using common and widely-adopted open source technologies such as Flume, Elasticsearch, Hadoop, Spark, Kibana, Grafana and Zeppelin. It provides insights and details on the lessons learned, explaining the work performed in order to monitor the CERN IT data centres and the WLCG computing activities such as the job processing, data access and transfers, and the status of sites and services.
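
    As an illustration of the kind of ingestion the UMA data flow ends in, the sketch below pushes a single monitoring event into Elasticsearch so it can be charted in Kibana or Grafana. It is a minimal sketch only: the endpoint, index name and event fields are assumptions, not the actual UMA schema, and the production pipeline ingests events through Flume rather than direct client calls.

    ```python
    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch  # pip install elasticsearch

    # Hypothetical endpoint and index; the production UMA pipeline feeds Elasticsearch via Flume.
    es = Elasticsearch("http://localhost:9200")

    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "producer": "fts",               # illustrative producer name (a transfer service)
        "site": "CERN-PROD",
        "metric": "transfer_rate_mbps",
        "value": 742.5,
    }

    # Index the event so dashboards (Kibana/Grafana) can query and visualise it.
    es.index(index="monit-transfers-2017.10", document=event)
    ```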

  2. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    NASA Astrophysics Data System (ADS)

    Magnoni, L.; Suthakar, U.; Cordeiro, C.; Georgiou, M.; Andreeva, J.; Khan, A.; Smith, D. R.

    2015-12-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks, in order to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, process and serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group at the CERN IT department, which uses a variety of technologies, each targeting specific aspects of big-scale distributed data processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file formats (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technologies for the serving layer (e.g. Elasticsearch) and a description of a proof-of-concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates the detection of problems and failures.
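
    To make the batch-layer idea concrete, the sketch below aggregates transfer logs with Apache Spark, in the spirit of the Hadoop processing described above. The HDFS paths, column names and Avro input format are assumptions for illustration; the actual schema of the WLCG transfer logs is not reproduced here.

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative schema: src_site, dst_site, bytes, state are assumed column names.
    spark = SparkSession.builder.appName("wlcg-transfer-aggregation").getOrCreate()

    logs = spark.read.format("avro").load("hdfs:///monitoring/transfers/2015/10/")

    # Batch-layer view: completed transfers and volume per site pair.
    summary = (
        logs.filter(F.col("state") == "DONE")
            .groupBy("src_site", "dst_site")
            .agg(F.count("*").alias("transfers"),
                 F.sum("bytes").alias("bytes_transferred"))
    )

    summary.write.mode("overwrite").parquet("hdfs:///monitoring/summaries/transfers_by_link/")
    ```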

  3. WLCG scale testing during CMS data challenges

    NASA Astrophysics Data System (ADS)

    Gutsche, O.; Hajdu, C.

    2008-07-01

    The CMS computing model to process and analyze LHC collision data follows a data-location driven approach and uses the WLCG infrastructure to provide access to Grid resources. As a preparation for data taking, CMS tests its computing model during dedicated data challenges. An important part of these challenges is the test of user analysis, which poses a special challenge for the infrastructure with its random, distributed access patterns. The CMS Remote Analysis Builder (CRAB) handles all interactions with the WLCG infrastructure transparently for the user. During the 2006 challenge, CMS set its goal to test the infrastructure at a scale of 50,000 user jobs per day using CRAB. Both direct submissions by individual users and automated submissions by robots were used to achieve this goal. A report will be given about the outcome of the user analysis part of the challenge using both the EGEE and OSG parts of the WLCG. In particular, the difference in submission between the two Grid middlewares (resource broker vs. direct submission) will be discussed. Finally, an outlook for the 2007 data challenge is given.

  4. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at Kurchatov Institute (NRC-KI) in Moscow is part of the WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, Kurchatov Institute operates supercomputers with a peak performance of 0.12 PFLOPS. The delegation of even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014, the development of a portal combining the Tier-1 and a supercomputer at Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences such as biology (genome sequencing analysis) and astrophysics (cosmic-ray analysis, antimatter and dark matter searches).

  5. WLCG and IPv6 - The HEPiX IPv6 working group

    DOE PAGES

    Campana, S.; Chadwick, K.; Chen, G.; ...

    2014-06-11

    The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). Furthermore, this paper describes the work done by the working group and its future plans.

  6. WLCG and IPv6 - the HEPiX IPv6 working group

    NASA Astrophysics Data System (ADS)

    Campana, S.; Chadwick, K.; Chen, G.; Chudoba, J.; Clarke, P.; Eliáš, M.; Elwell, A.; Fayer, S.; Finnern, T.; Goossens, L.; Grigoras, C.; Hoeft, B.; Kelsey, D. P.; Kouba, T.; López Muñoz, F.; Martelli, E.; Mitchell, M.; Nairz, A.; Ohrenberg, K.; Pfeiffer, A.; Prelz, F.; Qi, F.; Rand, D.; Reale, M.; Rozsa, S.; Sciaba, A.; Voicu, R.; Walker, C. J.; Wildish, T.

    2014-06-01

    The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). This paper describes the work done by the working group and its future plans.
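
    A minimal, self-contained check of the kind the readiness testing implies is sketched below: it asks the resolver which address families a service name resolves to, a rough indicator of whether a dual-stack endpoint is published. The hostname is only an example.

    ```python
    import socket

    def stacks_available(host: str, port: int = 443) -> set:
        """Return the address families ('IPv4'/'IPv6') a service name resolves to."""
        names = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}
        found = set()
        for family, _, _, _, _ in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP):
            if family in names:
                found.add(names[family])
        return found

    # Illustrative host; any outward-facing WLCG service could be probed the same way.
    print(stacks_available("www.cern.ch"))
    ```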

  7. WLCG Transfers Dashboard: a Unified Monitoring Tool for Heterogeneous Data Transfers

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Kadochnikov, I.; Saiz, P.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large: the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. Another challenge is the heterogeneity of data transfer technologies. Currently there are two main alternatives for data transfers on the WLCG: the File Transfer Service and the XRootD protocol. Each LHC VO has its own monitoring system which is limited to the scope of that particular VO. There is a need for a global system which would provide a complete cross-VO and cross-technology picture of all WLCG data transfers. We present a unified monitoring tool - the WLCG Transfers Dashboard - where all the VOs and technologies coexist and are monitored together. The scale of the activity and the heterogeneity of the system raise a number of technical challenges. Each technology comes with its own monitoring specificities and some of the VOs use several of these technologies. This paper describes the implementation of the system with particular focus on the design principles applied to ensure the necessary scalability and performance, and to easily integrate any new technology providing additional functionality which might be specific to that technology.

  8. The “Common Solutions” Strategy of the Experiment Support group at CERN for the LHC Experiments

    NASA Astrophysics Data System (ADS)

    Girone, M.; Andreeva, J.; Barreiro Megino, F. H.; Campana, S.; Cinquilli, M.; Di Girolamo, A.; Dimou, M.; Giordano, D.; Karavakis, E.; Kenyon, M. J.; Kokozkiewicz, L.; Lanciotti, E.; Litmaath, M.; Magini, N.; Negri, G.; Roiser, S.; Saiz, P.; Saiz Santos, M. D.; Schovancova, J.; Sciabà, A.; Spiga, D.; Trentadue, R.; Tuckett, D.; Valassi, A.; Van der Ster, D. C.; Shiers, J. D.

    2012-12-01

    After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management monitoring, File Transfer monitoring) and the Site Status Board. This talk focuses primarily on the strategic aspects of providing such common solutions and how this relates to the overall goals of long-term sustainability and the relationship to the various WLCG Technical Evolution Groups. The success of the service components has given us confidence in the process, and has developed the trust of the stakeholders. We are now attempting to expand the development of common solutions into the more critical workflows. The first is a feasibility study of common analysis workflow execution elements between ATLAS and CMS. We look forward to additional common development in the future.

  9. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    NASA Astrophysics Data System (ADS)

    Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.

    2011-12-01

    In a distributed computing model such as the WLCG, experiment-specific application software has to be efficiently distributed to every site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier-1 site of the WLCG. The test bed used and the results are presented in this paper.
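
    In the CernVM-FS model the job no longer depends on a site-managed shared software area; it only needs the read-only /cvmfs mount to be reachable on the worker node. The sketch below shows a job-side sanity check along those lines; the repository name is an example and the abort behaviour is hypothetical.

    ```python
    import os

    # Example repository; each experiment publishes its software under /cvmfs/<repository>.
    REPO = "/cvmfs/lhcb.cern.ch"

    def software_area_visible(repo: str = REPO) -> bool:
        """Listing the repository root triggers the CernVM-FS automount and confirms
        that the read-only software area is reachable on this worker node."""
        try:
            return len(os.listdir(repo)) > 0
        except OSError:
            return False

    if not software_area_visible():
        raise SystemExit(f"{REPO} not mounted on this worker node; aborting the job")
    ```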

  10. WLCG Monitoring Consolidation and further evolution

    NASA Astrophysics Data System (ADS)

    Saiz, P.; Aimar, A.; Andreeva, J.; Babik, M.; Cons, L.; Dzhunov, I.; Forti, A.; di Girolamo, A.; Karavakis, E.; Litmaath, M.; Magini, N.; Magnoni, L.; de los Rios, H. Martin; Roiser, S.; Sciaba, A.; Schulz, M.; Tarragon, J.; Tuckett, D.

    2015-12-01

    The WLCG monitoring system addresses the challenging task of keeping track of the LHC computing activities on the WLCG infrastructure, ensuring the health and performance of the distributed services at more than 170 sites. The challenge consists of decreasing the effort needed to operate the monitoring service while satisfying the constantly growing requirements for its scalability and performance. This contribution describes the recent consolidation work aimed at reducing the complexity of the system and ensuring more effective operations, support and service management. This was done by unifying, where possible, the implementation of the monitoring components. The contribution also covers further steps such as the evaluation of new technologies for data storage, processing and visualization, and the migration to a new technology stack.

  11. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE PAGES

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; ...

    2017-10-01

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  12. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  13. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    NASA Astrophysics Data System (ADS)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea

    2017-10-01

    The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.
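
    Since CRIC is described as exposing APIs for service discovery, a client would typically query it over HTTP and filter the returned descriptions. The sketch below illustrates that pattern; the URL, query parameters and JSON fields are assumptions, not the actual CRIC API.

    ```python
    import requests

    # Hypothetical CRIC endpoint and response layout, used only to illustrate the pattern.
    CRIC_URL = "https://cric.example.org/api/core/service/query/?json"

    resp = requests.get(CRIC_URL, timeout=30)
    resp.raise_for_status()
    services = resp.json()

    # Example filter: active storage endpoints of a given flavour.
    storage = [s for s in services.values()
               if s.get("flavour") == "SRM" and s.get("state") == "ACTIVE"]
    print(f"{len(storage)} active SRM endpoints described by the catalogue")
    ```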

  14. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.

    2012-12-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is exercised daily in the monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to its visualization on the MyWLCG[27] portal where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy for the community to use. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.

  15. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    NASA Astrophysics Data System (ADS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which proved successful and still ensures its goals. However, Grid technology has not spread much over other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their current computing model with cloud deployments and to take advantage of the so-called opportunistic resources (including HPC facilities) which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static splitting of the computing resources in the Tier-1 farm, while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition, according to suitable policies for the request and release of computing resources. Nodes requested by a partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as Worker Nodes in the batch system farm to cloud compute nodes made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation and its integration with our current batch system, LSF.
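
    The essence of the dynamic partitioning described above is a policy that decides, cycle by cycle, how many hosts should change role between the batch (LSF) partition and the cloud partition. The sketch below is a schematic of such a policy only; the thresholds, inputs and role names are assumptions, not the CNAF implementation.

    ```python
    # Schematic rebalancing policy between batch and cloud partitions.
    PENDING_CLOUD_THRESHOLD = 10   # cloud requests waiting for capacity (illustrative value)

    def rebalance(pending_cloud_requests: int, idle_batch_nodes: int, idle_cloud_nodes: int):
        """Decide how many hosts switch role in this cycle and in which direction."""
        if pending_cloud_requests > PENDING_CLOUD_THRESHOLD and idle_batch_nodes > 0:
            # Drain idle batch workers and hand them to the cloud partition.
            return ("batch->cloud", min(idle_batch_nodes, pending_cloud_requests))
        if pending_cloud_requests == 0 and idle_cloud_nodes > 0:
            # No cloud demand: give idle compute nodes back to the batch system.
            return ("cloud->batch", idle_cloud_nodes)
        return ("noop", 0)

    print(rebalance(pending_cloud_requests=25, idle_batch_nodes=8, idle_cloud_nodes=0))
    ```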

  16. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.

  17. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  18. Multicore job scheduling in the Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.

    2015-12-01

    After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originated from the scheduling mechanisms, and without imposing unnecessary complexities in the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments and the knowledge gained from scale tests of the different proposed job submission strategies.

  19. IPv6 Security

    NASA Astrophysics Data System (ADS)

    Babik, M.; Chudoba, J.; Dewhurst, A.; Finnern, T.; Froy, T.; Grigoras, C.; Hafeez, K.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Martelli, E.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Traynor, D.; Wartel, R.

    2017-10-01

    IPv4 network addresses are running out and the deployment of IPv6 networking in many places is now well underway. Following the work of the HEPiX IPv6 Working Group, a growing number of sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) are deploying dual-stack IPv6/IPv4 services. The aim of this is to support the use of IPv6-only clients, i.e. worker nodes, virtual machines or containers. The IPv6 networking protocols, while they do contain features aimed at improving security, also bring new challenges for operational IT security. The lack of maturity of IPv6 implementations together with the increased complexity of some of the protocol standards raise many new issues for operational security teams. The HEPiX IPv6 Working Group is producing guidance on best practices in this area. This paper considers some of the security concerns for WLCG in an IPv6 world and presents the HEPiX IPv6 working group guidance for the system administrators who manage IT services on the WLCG distributed infrastructure, for their related site security and networking teams and for developers and software engineers working on WLCG applications.

  20. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    NASA Astrophysics Data System (ADS)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options is available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.

  1. A scalable architecture for online anomaly detection of WLCG batch jobs

    NASA Astrophysics Data System (ADS)

    Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.

    2016-10-01

    For data centres it is increasingly important to monitor the network usage and to learn from network usage patterns. In particular, configuration issues or misbehaving batch jobs preventing a smooth operation need to be detected as early as possible. At the GridKa data and computing centre we therefore operate a tool, BPNetMon, for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information by itself is not sufficient to detect anomalies, for several reasons: e.g. the underlying job distribution on a single worker node might change, or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale in terms of network communication or computational cost. We therefore propose a scalable architecture based on concepts of a super-peer network.
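
    The super-peer idea can be summarised as: each worker node reduces its local job traffic to a compact aggregate, and only those aggregates travel to a super-peer that looks for outliers. The sketch below illustrates that split with a simple z-score rule; the feature, the rule and all names are illustrative and not the BPNetMon implementation.

    ```python
    from statistics import mean, stdev

    def local_summary(job_rates):
        """Per-worker-node aggregate of observed per-job traffic (bytes/s)."""
        return {"n": len(job_rates), "mean": mean(job_rates), "max": max(job_rates)}

    def super_peer_flags(summaries, z_cut=3.0):
        """Flag worker nodes whose mean job traffic deviates strongly from their peers."""
        means = [s["mean"] for s in summaries.values()]
        mu, sigma = mean(means), stdev(means)
        return [node for node, s in summaries.items()
                if sigma > 0 and abs(s["mean"] - mu) / sigma > z_cut]

    # 50 well-behaved nodes plus one whose jobs generate far more traffic than usual.
    summaries = {f"wn{i:03d}": local_summary([100.0 + i, 120.0 + i]) for i in range(50)}
    summaries["wn999"] = local_summary([50000.0, 60000.0])
    print(super_peer_flags(summaries))  # expected to flag wn999 only
    ```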

  2. From the CMS Computing Experience in the WLCG STEP'09 Challenge to the First Data Taking of the LHC Era

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Gutsche, O.

    The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of LHC data taking. The "Scale Test for the Experiment Program" (STEP'09) was performed mainly in June 2009, with further selected tests in September and October 2009, and emphasized the simultaneous testing of the computing systems of all four LHC experiments. CMS tested its Tier-0 tape writing and processing capabilities. The Tier-1 tape systems were stress tested using the complete range of Tier-1 work-flows: transfer from Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this talk, we will report on the different tests performed and present their post-mortem analysis.

  3. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. OpenStack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  4. Analyzing data flows of WLCG jobs at batch job level

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-05-01

    With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses, firewall configurations, as well as the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at batch job level, a new tool has been developed and put into operation at the GridKa Tier 1 center for monitoring continuous data streams and characteristics of WLCG jobs and pilots. Long term measurements and data collection are in progress. These measurements have already proven useful for analyzing misbehaviors and various issues. Therefore we aim for an automated, real-time approach to anomaly detection. As a requirement, prototypes for standard workflows have to be examined. Based on measurements over several months, different features of HEP jobs are evaluated regarding their effectiveness for data mining approaches to identify these common workflows. The paper introduces the actual measurement approach and statistics as well as the general concept and first results in classifying different HEP job workflows derived from the measurements at GridKa.
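
    As an illustration of the data-mining step mentioned above, the sketch below clusters synthetic per-job traffic features into two workflow classes with scikit-learn. The feature columns, the synthetic values and the choice of two clusters are assumptions for illustration only, not results from the GridKa measurements.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    # Assumed feature columns: [mean inbound MB/s, mean outbound MB/s, CPU efficiency]
    simulation_like = rng.normal([0.1, 0.5, 0.95], 0.05, size=(100, 3))
    analysis_like = rng.normal([8.0, 0.2, 0.60], 0.50, size=(100, 3))
    features = np.vstack([simulation_like, analysis_like])

    # Scale the features and group jobs into two candidate workflow classes.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        StandardScaler().fit_transform(features))
    print(np.bincount(labels))  # expect two clusters of roughly 100 jobs each
    ```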

  5. The production deployment of IPv6 on WLCG

    NASA Astrophysics Data System (ADS)

    Bernier, J.; Campana, S.; Chadwick, K.; Chudoba, J.; Dewhurst, A.; Eliáš, M.; Fayer, S.; Finnern, T.; Grigoras, C.; Hartmann, T.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Macmahon, E.; Martelli, E.; Millar, A. P.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Voicu, R.; Walker, C. J.; Wildish, T.

    2015-12-01

    The world is rapidly running out of IPv4 addresses; the number of IPv6 end systems connected to the internet is increasing; WLCG and the LHC experiments may soon have access to worker nodes and/or virtual machines (VMs) possessing only an IPv6 routable address. The HEPiX IPv6 Working Group has been investigating, testing and planning for dual-stack services on WLCG for several years. Following feedback from our working group, many of the storage technologies in use on WLCG have recently been made IPv6-capable. This paper presents the IPv6 requirements, tests and plans of the LHC experiments together with the tests performed on the group's IPv6 test-bed. This is primarily aimed at IPv6-only worker nodes or VMs accessing several different implementations of a global dual-stack federated storage service. Finally the plans for deployment of production dual-stack WLCG services are presented.

  6. Web Proxy Auto Discovery for the WLCG

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. The responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
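
    From a client's point of view the mechanism boils down to fetching a PAC file from a WPAD URL and using the proxies it names. The sketch below retrieves a PAC file and naively lists its PROXY entries; the URL is a placeholder for whichever WPAD server a site or the WLCG service publishes, and real clients (Frontier, CVMFS) evaluate the PAC JavaScript properly rather than pattern-matching it.

    ```python
    import re
    import urllib.request

    # Placeholder WPAD URL; a site may publish http://wpad/wpad.dat or point at the WLCG service.
    PAC_URL = "http://wpad.example.org/wpad.dat"

    with urllib.request.urlopen(PAC_URL, timeout=10) as resp:
        pac_text = resp.read().decode("utf-8", errors="replace")

    # A PAC file is JavaScript returning strings like "PROXY host:port; DIRECT".
    proxies = re.findall(r"PROXY\s+([\w.\-]+:\d+)", pac_text)
    print("Web proxies offered by the PAC file:", proxies)
    ```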

  7. Web Proxy Auto Discovery for the WLCG

    DOE PAGES

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; ...

    2017-11-23

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.

  8. Web Proxy Auto Discovery for the WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.

  9. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues and compare cost and operational efficiency with our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  10. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues and compare cost and operational efficiency with our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  11. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For the processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and providing a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in unifying the description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  12. The WLCG Messaging Service and its Future

    NASA Astrophysics Data System (ADS)

    Cons, Lionel; Paladin, Massimo

    2012-12-01

    Enterprise messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low level monitoring to experiment dashboards. The production messaging service currently used by WLCG includes four tightly coupled brokers operated by EGI (running Apache ActiveMQ and designed to host the Grid operational tools such as SAM) as well as two dedicated services for ATLAS-DDM and experiment dashboards (currently also running Apache ActiveMQ). In the future, this service is expected to grow in the number of applications supported, brokers and technologies. The WLCG Messaging Roadmap identified three areas with room for improvement (security, scalability and availability/reliability) as well as ten practical recommendations to address them. This paper describes a messaging service architecture that is in line with these recommendations as well as a software architecture based on reusable components that ease interactions with the messaging service. These two architectures will support the growth of the WLCG messaging service.
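
    Producers and consumers typically talk to the ActiveMQ brokers over a standard protocol such as STOMP. The sketch below publishes one JSON monitoring message with the stomp.py client; the broker host, credentials and destination are placeholders, not the actual WLCG broker configuration.

    ```python
    import json
    import time
    import stomp  # pip install stomp.py

    # Placeholder broker endpoint, credentials and destination.
    conn = stomp.Connection([("broker.example.org", 61613)])
    conn.connect("monitoring-user", "secret", wait=True)

    message = {"site": "CERN-PROD", "service": "SRM", "status": "OK", "timestamp": time.time()}
    conn.send(destination="/topic/grid.probe.results",
              body=json.dumps(message),
              headers={"persistent": "true"})
    conn.disconnect()
    ```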

  13. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  14. Integration of end-user Cloud storage for CMS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, which is implemented and commissioned over the world’s largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  15. Integration of end-user Cloud storage for CMS analysis

    DOE PAGES

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...

    2017-05-19

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, which is implemented and commissioned over the world’s largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  16. Making the most of cloud storage - a toolkit for exploitation by WLCG experiments

    NASA Astrophysics Data System (ADS)

    Alvarez Ayllon, Alejandro; Arsuaga Rios, Maria; Bitzes, Georgios; Furano, Fabrizio; Keeble, Oliver; Manzi, Andrea

    2017-10-01

    Understanding how cloud storage can be effectively used, either standalone or in support of its associated compute, is now an important consideration for WLCG. We report on a suite of extensions to familiar tools targeted at enabling the integration of cloud object stores into traditional grid infrastructures and workflows. Notable updates include support for a number of object store flavours in FTS3, Davix and gfal2, including mitigations for lack of vector reads; the extension of Dynafed to operate as a bridge between grid and cloud domains; protocol translation in FTS3; the implementation of extensions to DPM (also implemented by the dCache project) to allow 3rd party transfers over HTTP. The result is a toolkit which facilitates data movement and access between grid and cloud infrastructures, broadening the range of workflows suitable for cloud. We report on deployment scenarios and prototype experience, explaining how, for example, an Amazon S3 or Azure allocation can be exploited by grid workflows.
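
    As a rough illustration of the kind of access this toolkit enables, the sketch below uses the gfal2 Python bindings to stat a file in an object store and copy it locally; the S3 endpoint, bucket and paths are hypothetical, and credentials are assumed to be set in the gfal2 configuration:

        import gfal2  # Python bindings shipped with the gfal2 library

        ctx = gfal2.creat_context()

        # Hypothetical object-store URL; gfal2 plugins map s3:// (and similar schemes)
        # onto the corresponding REST operations.
        src = "s3://s3.example.com/mybucket/datasets/2017/events.root"
        info = ctx.stat(src)
        print("size:", info.st_size)

        # Copy the object to local disk using default transfer parameters.
        params = ctx.transfer_parameters()
        ctx.filecopy(params, src, "file:///tmp/events.root")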

  17. FermiGrid—experience and future plans

    NASA Astrophysics Data System (ADS)

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.

    2008-07-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.

  18. FermiGrid - experience and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chadwick, K.; Berman, E.; Canal, P.

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems.

  19. Evaluation of ZFS as an efficient WLCG storage backend

    NASA Astrophysics Data System (ADS)

    Ebert, M.; Washbrook, A.

    2017-10-01

    A ZFS based software raid system was tested for performance against a hardware raid system providing storage based on the traditional Linux file systems XFS and EXT4. These tests were done for a healthy raid array as well as for a degraded raid array and during the rebuild of a raid array. It was found that ZFS performs better in almost all test scenarios. In addition, distinct features of ZFS were tested for WLCG data storage use, like compression and higher raid levels with triple redundancy information. The long term reliability was observed after converting all production storage servers at the Edinburgh WLCG Tier-2 site to ZFS, resulting in about 1.2PB of ZFS based storage at this site.

  20. Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid

    NASA Astrophysics Data System (ADS)

    Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration

    2014-06-01

    The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.

  1. Effects of a multidisciplinary stress treatment programme on patient return to work rate and symptom reduction: results from a randomised, wait-list controlled trial.

    PubMed

    Netterstrøm, Bo; Friebel, Lene; Ladegaard, Yun

    2013-01-01

    To evaluate the efficacy of a multidisciplinary stress treatment programme. General practitioners referred 198 employed patients on sick leave with symptoms of persistent work-related stress. Using a waitlisted randomised controlled trial design, the participants were randomly divided into the following three groups: the intervention group (IG, 69 participants); the treatment-as-usual control group (TAUCG, 71 participants), which received 12 consultations with a psychologist; and the waitlisted control group (WLCG, 58 participants). The stress treatment intervention consisted of nine 1-hour sessions conducted over 3 months. The goals of the sessions were the following: (1) identifying relevant stressors; (2) changing the participant's coping strategies; (3) adjusting the participant's workload and tasks, and (4) improving workplace dialogue. Each participant also attended a mindfulness-based stress reduction (MBSR) course for 2 h a week over 8 weeks. The IG and TAUCG showed significantly greater symptom level (Symptom Check List 92) reductions compared to the WLCG. Regarding the return to work (RTW) rate, 67% of participants in the IG returned to full-time work after treatment, which was a significantly higher rate than in the TAUCG (36%) and WLCG (24%). Significantly more participants in the IG (97%) increased their working hours during treatment compared with the participants in the control groups, TAUCG (71%) and WLCG (64%). The stress treatment programme--a combination of workplace-focused psychotherapy and MBSR--significantly reduced stress symptom levels and increased RTW rates compared with the WLCG and TAUCG. Copyright © 2013 S. Karger AG, Basel.

  2. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  3. Deployment of 464XLAT (RFC6877) alongside IPv6-only CPU resources at WLCG sites

    NASA Astrophysics Data System (ADS)

    Froy, T. S.; Traynor, D. P.; Walker, C. J.

    2017-10-01

    IPv4 is now officially deprecated by the IETF. A significant amount of effort has already been expended by the HEPiX IPv6 Working Group on testing dual-stacked hosts and IPv6-only CPU resources. Dual-stack adds complexity and administrative overhead to sites that may already be starved of resource. This has resulted in a very slow uptake of IPv6 from WLCG sites. 464XLAT (RFC6877) is intended for IPv6 single-stack environments that require the ability to communicate with IPv4-only endpoints. This paper will present a deployment strategy for 464XLAT, operational experiences of using 464XLAT in production at a WLCG site and important information to consider prior to deploying 464XLAT.

  4. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  5. ATLAS and LHC computing on CRAY

    NASA Astrophysics Data System (ADS)

    Sciacca, F. G.; Haug, S.; ATLAS Collaboration

    2017-10-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, and offer potential gains due to economies of scale. Technical solutions, performance, expected return and future plans are discussed.

  6. Future computing platforms for science in a power constrained era

    DOE PAGES

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; ...

    2015-12-23

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).

  7. Development, deployment and operations of ATLAS databases

    NASA Astrophysics Data System (ADS)

    Vaniachine, A. V.; Schmitt, J. G. v. d.

    2008-07-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  8. Distributed Monte Carlo production for DZero

    NASA Astrophysics Data System (ADS)

    Snow, Joel; DØ Collaboration

    2010-04-01

    The DZero collaboration uses a variety of resources on four continents to pursue a strategy of flexibility and automation in the generation of simulation data. This strategy provides a resilient and opportunistic system which ensures an adequate and timely supply of simulation data to support DZero's physics analyses. A mixture of facilities, dedicated and opportunistic, specialized and generic, large and small, grid job enabled and not, are used to provide a production system that has adapted to newly developing technologies. This strategy has increased the event production rate by a factor of seven and the data production rate by a factor of ten in the last three years despite diminishing manpower. Common to all production facilities is the SAM (Sequential Access to Metadata) data-grid. Job submission to the grid uses SAMGrid middleware which may forward jobs to the OSG, the WLCG, or native SAMGrid sites. The distributed computing and data handling system used by DZero will be described and the results of MC production since the deployment of grid technologies will be presented.

  9. Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Jayatilaka, B.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mohapatra, A.; Marra Da Silva, J.; Mason, D.; Perez-Calero Yzquierdo, A.; Piperov, S.; Tiradani, A.; Verguilov, V.; CMS Collaboration

    2017-10-01

    The connection of diverse and sometimes non-Grid enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users of beyond WLCG pledged resources at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.

  10. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be allocated dynamically and efficiently to any application and virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing the downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  11. The JINR Tier1 Site Simulation for Research and Development Purposes

    NASA Astrophysics Data System (ADS)

    Korenkov, V.; Nechaevskiy, A.; Ososkov, G.; Pryahina, D.; Trofimov, V.; Uzhinskiy, A.; Voytishin, N.

    2016-02-01

    Distributed complex computing systems for data storage and processing are in common use in the majority of modern scientific centers. The design of such systems is usually based on recommendations obtained via a preliminary simulated model that is used and executed only once. However, big experiments last for years or decades, and the development of their computing systems continues, not only quantitatively but also qualitatively. Even with the substantial efforts invested in the design phase to understand the system's configuration, it would be hard to develop a system without additional research into its future evolution. The developers and operators face the problem of predicting the system's behaviour after planned modifications. A system for grid and cloud services simulation is developed at LIT (JINR, Dubna). This simulation system is focused on improving the efficiency of grid/cloud structure development by using the work quality indicators of some real system. The development of such software is very important for building new grid/cloud infrastructures for big scientific experiments such as the JINR Tier1 site for WLCG. The simulation of some processes of the Tier1 site is considered as an example of our application approach.

  12. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    NASA Astrophysics Data System (ADS)

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. In the case of CERN, the Tier 0 of the WLCG, the resource and configuration management of the computing centre is being completely restructured under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one placed in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.
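
    To give a concrete flavour of the dynamic provisioning mentioned above, the sketch below boots a worker virtual machine through the OpenStack API using the openstacksdk Python library; the cloud name, image, flavor and network are hypothetical placeholders:

        import openstack  # openstacksdk

        # 'private-cloud' refers to a hypothetical entry in clouds.yaml holding the
        # Keystone endpoint and credentials.
        conn = openstack.connect(cloud="private-cloud")

        image = conn.compute.find_image("worker-node-image")   # hypothetical image name
        flavor = conn.compute.find_flavor("m1.large")          # hypothetical flavor
        network = conn.network.find_network("grid-net")        # hypothetical network

        server = conn.compute.create_server(
            name="wn-001", image_id=image.id, flavor_id=flavor.id,
            networks=[{"uuid": network.id}])

        # Block until the VM is ACTIVE, then report its status.
        server = conn.compute.wait_for_server(server)
        print(server.name, server.status)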

  13. x509-free access to WLCG resources

    NASA Astrophysics Data System (ADS)

    Short, H.; Manzi, A.; De Notaris, V.; Keeble, O.; Kiryanov, A.; Mikkonen, H.; Tedesco, P.; Wartel, R.

    2017-10-01

    Access to WLCG resources is authenticated using an x509 and PKI infrastructure. Even though HEP users have always been exposed to certificates directly, the development of modern Web Applications by the LHC experiments calls for simplified authentication processes keeping the underlying software unmodified. In this work we will show a solution with the goal of providing access to WLCG resources using credentials from the user's home organisation, without the need for user-acquired x509 certificates. In particular, we focus on identity providers within eduGAIN, which interconnects research and education organisations worldwide, and enables the trustworthy exchange of identity-related information. eduGAIN has been integrated at CERN in the SSO infrastructure so that users can authenticate without the need of a CERN account. This solution achieves x509-free access to Grid resources with the help of two services: STS and an online CA. The STS (Security Token Service) allows credential translation from the SAML2 format used by Identity Federations to the VOMS-enabled x509 used by most of the Grid. The IOTA CA (Identifier-Only Trust Assurance Certification Authority) is responsible for the automatic issuing of short-lived x509 certificates. The IOTA CA deployed at CERN has been accepted by EUGridPMA as the CERN LCG IOTA CA, included in the IGTF trust anchor distribution and installed by the sites in WLCG. We will also describe the first pilot projects which are integrating the solution.

  14. Experiences with http/WebDAV protocols for data access in high throughput computing

    NASA Astrophysics Data System (ADS)

    Bernabeu, Gerard; Martinez, Francisco; Acción, Esther; Bria, Arnau; Caubet, Marc; Delfino, Manuel; Espinal, Xavier

    2011-12-01

    In the past, access to remote storage was considered to be at least one order of magnitude slower than local disk access. Improvements in network technologies provide the alternative of using remote disk. For those accesses one can today reach levels of throughput similar to or exceeding those of local disks. Common choices as access protocols in the WLCG collaboration are RFIO, [GSI]DCAP, GRIDFTP, XROOTD and NFS. The HTTP protocol is a promising alternative as it is simple and lightweight. It also enables the use of standard technologies such as HTTP caching or load balancing, which can be used to improve service resilience and scalability or to boost performance for some use cases seen in HEP such as the "hot files". WebDAV extensions allow writing data, giving the protocol enough functionality to work as a remote access protocol. This paper will show our experiences with the WebDAV door for dCache, in terms of functionality and performance, applied to some of the HEP work flows in the LHC Tier1 at PIC.
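
    Because WebDAV is carried over plain HTTP, simple clients can exercise a storage door directly; the sketch below writes a file with a PUT and reads part of it back with a ranged GET using the Python requests library, authenticating with a Grid proxy used as a client certificate. The door URL, path and proxy location are hypothetical:

        import requests

        base = "https://dcache-door.example-t1.org:2880/pnfs/example.org/data/user/jdoe"
        proxy = "/tmp/x509up_u1000"          # hypothetical Grid proxy (cert and key in one file)
        ca_dir = "/etc/grid-security/certificates"

        # Upload a local file with a WebDAV/HTTP PUT.
        with open("hits.root", "rb") as f:
            r = requests.put(f"{base}/hits.root", data=f, cert=proxy, verify=ca_dir)
        r.raise_for_status()

        # Read back the first kilobyte with an HTTP Range request.
        r = requests.get(f"{base}/hits.root", headers={"Range": "bytes=0-1023"},
                         cert=proxy, verify=ca_dir)
        print(r.status_code, len(r.content))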

  15. High-throughput landslide modelling using computational grids

    NASA Astrophysics Data System (ADS)

    Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.

    2012-04-01

    Landslides are an increasing problem in developing countries. Multiple landslides can be triggered by heavy rainfall resulting in loss of life, homes and critical infrastructure. Through computer simulation of individual slopes it is possible to predict the causes, timing and magnitude of landslides and estimate the potential physical impact. Geographical scientists at the University of Bristol have developed software that integrates a physically-based slope hydrology and stability model (CHASM) with an econometric model (QUESTA) in order to predict landslide risk over time. These models allow multiple scenarios to be evaluated for each slope, accounting for data uncertainties, different engineering interventions, risk management approaches and rainfall patterns. Individual scenarios can be computationally intensive; however, each scenario is independent and so multiple scenarios can be executed in parallel. As more simulations are carried out, the overhead involved in managing input and output data becomes significant. This is a greater problem if multiple slopes are considered concurrently, as is required both for landslide research and for effective disaster planning at national levels. There are two critical factors in this context: generated data volumes can be in the order of tens of terabytes, and greater numbers of simulations result in long total runtimes. Users of such models, in both the research community and in developing countries, need to develop a means for handling the generation and submission of landslide modelling experiments, and the storage and analysis of the resulting datasets. Additionally, governments in developing countries typically lack the necessary computing resources and infrastructure. Consequently, knowledge that could be gained by aggregating simulation results from many different scenarios across many different slopes remains hidden within the data. To address these data and workload management issues, University of Bristol particle physicists and geographical scientists are collaborating to develop methods for providing simple and effective access to landslide models and associated simulation data. Particle physicists have valuable experience in dealing with data complexity and management due to the scale of data generated by particle accelerators such as the Large Hadron Collider (LHC). The LHC generates tens of petabytes of data every year which is stored and analysed using the Worldwide LHC Computing Grid (WLCG). Tools and concepts from the WLCG are being used to drive the development of a Software-as-a-Service (SaaS) platform to provide access to hosted landslide simulation software and data. It contains advanced data management features and allows landslide simulations to be run on the WLCG, dramatically reducing simulation runtimes by parallel execution. The simulations are accessed using a web page through which users can enter and browse input data, submit jobs and visualise results. Replication of the data ensures a local copy can be accessed should a connection to the platform be unavailable. The platform does not know the details of the simulation software it runs, so it is therefore possible to use it to run alternative models at similar scales. This creates the opportunity for activities such as model sensitivity analysis and performance comparison at scales that are impractical using standalone software.

  16. A Messaging Infrastructure for WLCG

    NASA Astrophysics Data System (ADS)

    Casey, James; Cons, Lionel; Lapka, Wojciech; Paladin, Massimo; Skaburskas, Konstantin

    2011-12-01

    During the EGEE-III project operational tools such as SAM, Nagios, Gridview, the regional Dashboard and GGUS moved to a communication architecture based on ActiveMQ, an open-source enterprise messaging solution. LHC experiments, in particular ATLAS, developed prototypes of systems using the same messaging infrastructure, validating the system for their use-cases. In this paper we describe the WLCG messaging use cases and outline an improved messaging architecture based on the experience gained during the EGEE-III period. We show how this provides a solid basis for many applications, including the grid middleware, to improve their resilience and reliability.

  17. Grid today, clouds on the horizon

    NASA Astrophysics Data System (ADS)

    Shiers, Jamie

    2009-04-01

    By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 degrees Kelvin and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy ( 7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, like the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of "Cloud Computing" are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements from production application communities, in terms of stability and continuity in the medium to long term.

  18. Open Science Grid (OSG) Ticket Synchronization: Keeping Your Home Field Advantage In A Distributed Environment

    NASA Astrophysics Data System (ADS)

    Gross, Kyle; Hayashi, Soichi; Teige, Scott; Quick, Robert

    2012-12-01

    Large distributed computing collaborations, such as the Worldwide LHC Computing Grid (WLCG), face many issues when it comes to providing a working grid environment for their users. One of these is exchanging tickets between various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schema that must be addressed in order to provide a reliable exchange of information between support entities and users in different grid environments. To combat this problem, OSG Operations has created a ticket synchronization interface called GOC-TX that relies on web services instead of error-prone email parsing methods of the past. Synchronizing tickets between different ticketing systems allows any user or support entity to work on a ticket in their home environment, thus providing a familiar and comfortable place to provide updates without having to learn another ticketing system. The interface is generic enough that it can be customized for nearly any ticketing system with a web-service interface with only minor changes. This allows us to be flexible and rapidly bring new ticket synchronization online. Synchronization can be triggered by different methods including mail, web services interface, and active messaging. GOC-TX currently interfaces with Global Grid User Support (GGUS) for WLCG, Remedy at Brookhaven National Lab (BNL), and Request Tracker (RT) at the Virtual Data Toolkit (VDT). Work is progressing on the Fermi National Accelerator Laboratory (FNAL) ServiceNow synchronization. This paper will explain the problems faced by OSG and how they led OSG to create and implement this ticket synchronization system, along with the technical details that allow synchronization to be performed at a production level.

  19. GLUE 2 deployment: Ensuring quality in the EGI/WLCG information system

    NASA Astrophysics Data System (ADS)

    Burke, Stephen; Alandes Pradillo, Maria; Field, Laurence; Keeble, Oliver

    2014-06-01

    The GLUE 2 information model is now fully supported in the production EGI/WLCG information system. However, to make it usable and allow clients to rely on the published information it is important that the meaning is clearly defined, and that information providers and site configurations are validated to ensure as far as possible that what they publish is correct. In this paper we describe the definition of a detailed schema usage profile, the implementation of a software tool to validate published information according to the profile and the use of the tool in the production Grid, and also summarise the overall state of GLUE 2 deployment.
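
    For reference, published GLUE 2 information can be inspected with a plain LDAP query against a BDII, which is essentially what a validation tool walks over; the sketch below uses the python-ldap bindings, and the BDII host and the attributes requested are illustrative assumptions:

        import ldap  # python-ldap bindings

        # Hypothetical top-level BDII; GLUE 2 data is published under the o=glue base
        # on port 2170.
        conn = ldap.initialize("ldap://bdii.example.org:2170")
        results = conn.search_s(
            "o=glue", ldap.SCOPE_SUBTREE,
            "(objectClass=GLUE2ComputingService)",
            ["GLUE2ServiceID", "GLUE2ServiceType"])

        for dn, attrs in results:
            # python-ldap returns attribute values as bytes.
            service_id = attrs.get("GLUE2ServiceID", [b"?"])[0].decode()
            service_type = attrs.get("GLUE2ServiceType", [b"?"])[0].decode()
            print(service_id, service_type)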

  20. How Much Higher Can HTCondor Fly?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fajardo, E. M.; Dost, J. M.; Holzman, B.

    The HTCondor high throughput computing system is heavily used in the high energy physics (HEP) community as the batch system for several Worldwide LHC Computing Grid (WLCG) resources. Moreover, it is the backbone of GlideinWMS, the pilot system used by the computing organization of the Compact Muon Solenoid (CMS) experiment. To prepare for LHC Run 2, we probed the scalability limits of new versions and configurations of HTCondor with a goal of reaching 200,000 simultaneous running jobs in a single internationally distributed dynamic pool. In this paper, we first describe how we created an opportunistic distributed testbed capable of exercising runs with 200,000 simultaneous jobs without impacting production. This testbed methodology is appropriate not only for scale testing HTCondor, but potentially for many other services. In addition to the test conditions and the testbed topology, we include the suggested configuration options used to obtain the scaling results, and describe some of the changes to HTCondor inspired by our testing that enabled sustained operations at scales well beyond previous limits.
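
    As a much smaller-scale illustration of driving such a pool, the sketch below submits a burst of short sleep jobs through the HTCondor Python bindings; the job count and resource requests are hypothetical and far below the 200,000-job scale discussed in the paper, and the submit() call shown assumes a recent version of the bindings:

        import htcondor  # HTCondor Python bindings

        schedd = htcondor.Schedd()

        # A trivial "sleep" job, similar in spirit to the synthetic load used for
        # scale testing; the numbers here are purely illustrative.
        sub = htcondor.Submit({
            "executable": "/bin/sleep",
            "arguments": "300",
            "request_cpus": "1",
            "request_memory": "100MB",
            "log": "scale_test.log",
        })

        result = schedd.submit(sub, count=1000)
        print("submitted cluster", result.cluster())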

  1. Maintaining Traceability in an Evolving Distributed Computing Environment

    NASA Astrophysics Data System (ADS)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event, including at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store information required to ensure traceability of events at their sites. With the increased use of virtualisation and of private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the required information. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their logging infrastructures. VOs indeed need to fulfil a new operational role and become fully active participants in the incident response process. We present an analysis of the changing requirements to maintain traceability for virtualised and cloud based workflows, with particular reference to the work of the WLCG Traceability Working Group.

  2. Exploiting analytics techniques in CMS computing monitoring

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Kuznetsov, V.; Magini, N.; Repečka, A.; Vaandering, E.

    2017-10-01

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining of all this information has rarely been done, but it is of crucial importance for a better understanding of how CMS achieved successful operations, and for reaching an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
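
    A minimal sketch of this kind of processing, written with PySpark rather than raw MapReduce and using a hypothetical HDFS path and record schema (one JSON record per dataset replica), might look as follows:

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("cms-replica-monitoring").getOrCreate()

        # Hypothetical monitoring dump: one JSON record per dataset replica with
        # fields 'dataset', 'site' and 'size_bytes'.
        df = spark.read.json("hdfs:///project/monitoring/cms/replicas/2016/*")

        summary = (df.groupBy("dataset")
                     .agg(F.countDistinct("site").alias("n_sites"),
                          F.sum("size_bytes").alias("total_bytes"))
                     .orderBy(F.desc("total_bytes")))

        summary.show(20, truncate=False)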

  3. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  4. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  5. Exploiting Analytics Techniques in CMS Computing Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, D.; Kuznetsov, V.; Magini, N.

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining of all this information has rarely been done, but it is of crucial importance for a better understanding of how CMS achieved successful operations, and for reaching an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.

  6. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Caballero, J.; Ernst, M.; Guan, W.; Hover, J.; Lesny, D.; Maeno, T.; Nilsson, P.; Tsulaia, V.; van Gemmeren, P.; Vaniachine, A.; Wang, F.; Wenaus, T.; ATLAS Collaboration

    2016-10-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  7. Benchmarking worker nodes using LHCb productions and comparing with HEPSpec06

    NASA Astrophysics Data System (ADS)

    Charpentier, P.

    2017-10-01

    In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know its “power” with rather good precision. This allows, for example, pilot jobs to match a task for which the required CPU-work is known, or to define the number of events to be processed knowing the CPU-work per event. Otherwise one always runs the risk that the task is aborted because it exceeds the CPU capabilities of the resource. It also allows a better accounting of the consumed resources. The traditional way the CPU power has been estimated in WLCG since 2007 is the HEP-Spec06 benchmark (HS06) suite, which was verified at the time to scale properly with a set of typical HEP applications. However, the hardware architecture of processors has evolved, and all WLCG experiments have moved to 64-bit applications and use different compilation flags from those advertised for running HS06. It is therefore interesting to check the scaling of HS06 with the HEP applications. For this purpose, we have been using CPU-intensive massive simulation productions from the LHCb experiment and compared their event throughput to the HS06 rating of the worker nodes. We also compared it with a much faster benchmark script that is used by the DIRAC framework, used by LHCb for evaluating at run time the performance of the worker nodes. This contribution reports on the findings of these comparisons: the main observation is that the scaling with HS06 is no longer fulfilled, while the fast benchmarks scale better but are less precise. One can also clearly see that some hardware or software features, when enabled on the worker nodes, may enhance their performance beyond expectation from either benchmark, depending on external factors.
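
    The matching logic sketched in the abstract can be made concrete with a small worked example: given a slot's benchmark rating and wall-time limit, and the known CPU-work per event, the number of events that fit is a simple division (all numbers below are hypothetical):

        def events_that_fit(hs06_per_core, wall_limit_s, hs06_s_per_event, safety=0.8):
            """Events that fit in a slot, keeping a safety margin on the available CPU-work."""
            available_work = hs06_per_core * wall_limit_s * safety   # HS06.seconds
            return int(available_work // hs06_s_per_event)

        # Hypothetical numbers: a 10 HS06 core, a 48-hour queue limit and
        # 500 HS06.s of CPU-work per simulated event.
        print(events_that_fit(10.0, 48 * 3600, 500.0))   # -> 2764 events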

  8. Analysis of empty ATLAS pilot jobs

    NASA Astrophysics Data System (ADS)

    Love, P. A.; Alef, M.; Dal Pra, S.; Di Girolamo, A.; Forti, A.; Templon, J.; Vamvakopoulos, E.; ATLAS Collaboration

    2017-10-01

    In this analysis we quantify the wallclock time used by short empty pilot jobs on a number of WLCG compute resources. Pilot factory logs and site batch logs are used to provide independent accounts of the usage. Results show a wide variation of wallclock time used by short jobs depending on the site and queue, and changing with time. For a reference dataset of all jobs in August 2016, the fraction of wallclock time used by empty jobs per studied site ranged from 0.1% to 0.8%. Aside from the wall time used by empty pilots, we also looked at how many pilots were empty as a fraction of all pilots sent. Binning the August dataset into days, empty fractions between 2% and 90% were observed. The higher fractions correlate well with periods of few actual payloads being sent to the site.
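
    The accounting described above reduces to a simple aggregation over batch-log records; a toy sketch with made-up (walltime, had_payload) tuples is shown below.

        # Hypothetical batch-log records: (wallclock seconds, whether the pilot ran a payload).
        records = [
            (120, False), (35000, True), (90, False), (41000, True), (60, False),
        ]

        empty_wall = sum(w for w, had_payload in records if not had_payload)
        total_wall = sum(w for w, _ in records)
        empty_count = sum(1 for _, had_payload in records if not had_payload)

        print(f"wallclock fraction used by empty pilots: {100.0 * empty_wall / total_wall:.2f}%")
        print(f"fraction of pilots that were empty:      {100.0 * empty_count / len(records):.1f}%")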

  9. Commissioning of a CERN Production and Analysis Facility Based on xrootd

    NASA Astrophysics Data System (ADS)

    Campana, Simone; van der Ster, Daniel C.; Di Girolamo, Alessandro; Peters, Andreas J.; Duellmann, Dirk; Coelho Dos Santos, Miguel; Iven, Jan; Bell, Tim

    2011-12-01

    The CERN facility hosts the Tier-0 of the four LHC experiments, but as part of WLCG it also offers a platform for production activities and user analysis. The CERN CASTOR storage technology has been extensively tested and utilized for LHC data recording and for exporting data to external sites according to the experiments' computing models. On the other hand, to accommodate Grid data processing activities and, more importantly, chaotic user analysis, it was realized that additional functionality was needed, including a different throttling mechanism for file access. This paper will describe the xroot-based CERN production and analysis facility for the ATLAS experiment and in particular the experiment use case and data access scenario, the xrootd redirector setup on top of the CASTOR storage system, the commissioning of the system and real-life experience with data processing and data analysis.

  10. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when the service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error, and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  11. Deployment of IPv6-only CPU resources at WLCG sites

    NASA Astrophysics Data System (ADS)

    Babik, M.; Chudoba, J.; Dewhurst, A.; Finnern, T.; Froy, T.; Grigoras, C.; Hafeez, K.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Martelli, E.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Traynor, D.

    2017-10-01

    The fraction of Internet traffic carried over IPv6 continues to grow rapidly. IPv6 support from network hardware vendors and carriers is pervasive and becoming mature. A network infrastructure upgrade often offers sites an excellent window of opportunity to configure and enable IPv6. There is a significant overhead when setting up and maintaining dual-stack machines, so where possible sites would like to upgrade their services directly to IPv6 only. In doing so, they are also expediting the transition process towards its desired completion. While the LHC experiments accept there is a need to move to IPv6, it is currently not directly affecting their work. Sites are unwilling to upgrade if they will be unable to run LHC experiment workflows. This has resulted in a very slow uptake of IPv6 from WLCG sites. For several years the HEPiX IPv6 Working Group has been testing a range of WLCG services to ensure they are IPv6 compliant. Several sites are now running many of their services as dual-stack. The working group, driven by the requirements of the LHC VOs to be able to use IPv6-only opportunistic resources, continues to encourage wider deployment of dual-stack services to make the use of such IPv6-only clients viable. This paper presents the working group’s plan and progress so far to allow sites to deploy IPv6-only CPU resources. This includes making experiment central services dual-stack as well as a number of storage services. The monitoring, accounting and information services that are used by jobs also need to be upgraded. Finally the VO testing that has taken place on hosts connected via IPv6-only is reported.
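
    A basic readiness check that an IPv6-only worker node can perform against a service is to resolve its AAAA record and open a TCP connection over IPv6; a minimal sketch (with a hypothetical host and port) follows.

        import socket

        host, port = "se.example-t2.org", 2880   # hypothetical storage endpoint

        # Restrict resolution to IPv6 (AAAA) addresses, as an IPv6-only client would.
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, socket.AF_INET6, socket.SOCK_STREAM):
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(5)
                s.connect(sockaddr)
                print(f"IPv6 TCP connection to {sockaddr[0]} port {port} succeeded")
                break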

  12. Optimising LAN access to grid enabled storage elements

    NASA Astrophysics Data System (ADS)

    Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.

    2008-07-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.

  13. Challenging data and workload management in CMS Computing with network-aware systems

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Wildish, T.

    2014-06-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess areas of possible improvement and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth on demand concepts. In this paper, we will review the work done in CMS on this, and the next steps.

  14. A multipurpose computing center with distributed resources

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Centre of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 centre for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other groups of users directly use the local batch system. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET - the Czech National Grid Initiative representative. Those resources are located in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.

  15. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing system. LHCb is using its specific DIRAC extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs, based on the idea of a Cloud Site. We report on operational experience of using several institutional Cloud resources in production, which are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  16. ALICE HLT Cluster operation during ALICE Run 2

    NASA Astrophysics Data System (ADS)

    Lehrbach, J.; Krzewicki, M.; Rohr, D.; Engel, H.; Gomez Ramirez, A.; Lindenstruth, V.; Berzano, D.; ALICE Collaboration

    2017-10-01

    ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs events and compresses the data in real time. The data compression by the HLT is a vital part of data taking, especially during the heavy-ion runs, in order to be able to store the data; this implies that the reliability of the whole cluster is an important matter. To guarantee a consistent state among all compute nodes of the HLT cluster we have automated the operation as much as possible. For automatic deployment of the nodes we use Foreman with locally mirrored repositories, and for configuration management of the nodes we use Puppet. Important parameters of the nodes, such as temperatures, network traffic and CPU load, are monitored with Zabbix. During periods without beam the HLT cluster is used for tests and as one of the WLCG Grid sites to compute offline jobs, in order to maximize the usage of our cluster. To prevent interference with normal HLT operations we separate the virtual machines running the Grid jobs from the normal HLT operation via virtual networks (VLANs). In this paper we give an overview of the ALICE HLT operation in 2016.

  17. Self managing experiment resources

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Ubeda, M.; Tsaregorodtsev, A.; Romanovskiy, V.; Roiser, S.; Charpentier, P.; Graciani, R.

    2014-06-01

    In this paper we present an autonomic computing resources management system, used by LHCb for assessing the status of its Grid resources. Virtual Organization Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has led to the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for the computing systems of running HEP experiments as well as for sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resource metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resource topology, independently of the resource types. The Resource Status System applies data-mining techniques to all available information sources and assesses the status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We will demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.

  18. Investigating the Efficacy of Web-Based Transfer Training on Independent Wheelchair Transfers Through Randomized Controlled Trials.

    PubMed

    Worobey, Lynn A; Rigot, Stephanie K; Hogaboom, Nathan S; Venus, Chris; Boninger, Michael L

    2018-01-01

    To determine the efficacy of a web-based transfer training module at improving transfer technique across 3 groups: web-based training, in-person training (current standard of practice), and a waitlist control group (WLCG); and secondarily, to determine subject factors that can be used to predict improvements in transfer ability after training. Randomized controlled trials. Summer and winter sporting events for disabled veterans. A convenience sample (N=71) of manual and power wheelchair users who could transfer independently. An individualized, in-person transfer training session or a web-based transfer training module. The WLCG received the web training at their follow-up visit. Transfer Assessment Instrument (TAI) part 1 score was used to assess transfers at baseline, skill acquisition immediately posttraining, and skill retention after a 1- to 2-day follow-up period. The in-person and web-based training groups improved their median (interquartile range) TAI scores from 7.98 (7.18-8.46) to 9.13 (8.57-9.58; P<.01), and from 7.14 (6.15-7.86) to 9.23 (8.46-9.82; P<.01), respectively, compared with the WLCG that had a median score of 7.69 for both assessments (baseline, 6.15-8.46; follow-up control, 5.83-8.46). Participants retained improvements at follow-up (P>.05). A lower initial TAI score was found to be the only significant predictor of a larger percent change in TAI score after receiving training. Transfer training can improve technique with changes retained within a short follow-up window, even among experienced wheelchair users. Web-based transfer training demonstrated comparable improvements to in-person training. With almost half of the United States population consulting online resources before a health care professional, web-based training may be an effective method to increase knowledge translation. Copyright © 2017 American Congress of Rehabilitation Medicine. All rights reserved.

  19. First Experiences with CMS Data Storage on the GEMSS System at the INFN-CNAF Tier-1

    NASA Astrophysics Data System (ADS)

    Andreotti, D.; Bonacorsi, D.; Cavalli, A.; Dal Pra, S.; Dell'Agnello, L.; Forti, A.; Grandi, C.; Gregori, D.; Li Gioi, L.; Martelli, B.; Prosperini, A.; Ricci, P. P.; Ronchieri, E.; Sapunenko, V.; Sartirana, A.; Vagnoni, V.; Zappi, R.

    A brand new Mass Storage System solution called the "Grid-Enabled Mass Storage System" (GEMSS), based on the Storage Resource Manager (StoRM) developed by INFN, on the General Parallel File System by IBM and on the Tivoli Storage Manager by IBM, has been tested and deployed at the INFN-CNAF Tier-1 Computing Centre in Italy. After a successful stress-test phase, the solution is now being used in production for the data custodiality of the CMS experiment at CNAF. All data previously recorded on the CASTOR system have been transferred to GEMSS. As a final validation of the GEMSS system, some of the computing tests done in the context of the WLCG "Scale Test for the Experiment Program" (STEP'09) challenge were repeated in September-October 2009 and compared with the results previously obtained with CASTOR in June 2009. In this paper, the GEMSS system basics, the stress-test activity and the deployment phase, as well as the reliability and performance of the system, are overviewed. The experiences in the use of GEMSS at CNAF in preparing for the first months of data taking of the CMS experiment at the Large Hadron Collider are also presented.

  20. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. The studies include the development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations, such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.

  1. Extending the farm on external sites: the INFN Tier-1 experience

    NASA Astrophysics Data System (ADS)

    Boccali, T.; Cavalli, A.; Chiarelli, L.; Chierici, A.; Cesini, D.; Ciaschini, V.; Dal Pra, S.; dell'Agnello, L.; De Girolamo, D.; Falabella, A.; Fattibene, E.; Maron, G.; Prosperini, A.; Sapunenko, V.; Virgilio, S.; Zani, S.

    2017-10-01

    The Tier-1 at CNAF is the main INFN computing facility, offering computing and storage resources to more than 30 different scientific collaborations, including the 4 experiments at the LHC. A huge increase in computing needs is also foreseen in the following years, mainly driven by the experiments at the LHC (especially starting with Run 3 from 2021) but also by other upcoming experiments such as CTA [1]. While we are considering the upgrade of the infrastructure of our data centre, we are also evaluating the possibility of using CPU resources available in other data centres or even leased from commercial cloud providers. Hence, at the INFN Tier-1, besides participating in the EU project HNSciCloud, we have also pledged a small amount of computing resources (~2000 cores) located at the Bari ReCaS [2] data centre to the WLCG experiments for 2016, and we are testing the use of resources provided by a commercial cloud provider. While the Bari ReCaS data centre is directly connected to the GARR network [3], with the obvious advantage of a low-latency and high-bandwidth connection, in the case of the commercial provider we rely only on the General Purpose Network. In this paper we describe the set-up phase and the first results of these installations, started in the last quarter of 2015, focusing on the issues that we have had to cope with and discussing the measured results in terms of efficiency.

  2. Enabling IPv6 at FZU - WLCG Tier2 in Prague

    NASA Astrophysics Data System (ADS)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek

    2014-06-01

    The usage of the new IPv6 protocol in production is becoming a reality in the HEP community, and the Computing Centre of the Institute of Physics in Prague participates in many IPv6-related activities. Our contribution presents experience with monitoring in the HEPiX distributed IPv6 testbed, which includes 11 remote sites. We use Nagios to check the availability of services and Smokeping for monitoring the network latency. Since it is not always trivial to set up DNS properly in a dual-stack environment, we developed a Nagios plugin that checks whether a domain name is resolvable when using only IP protocol version 6 and only version 4; a minimal sketch of such a check is shown below. We will also present local area network monitoring and tuning related to IPv6 performance. One of the most important pieces of software for a grid site is the batch system for job execution. We will present our experience with configuring and running the Torque batch system in a dual-stack environment. We also discuss the steps needed to run VO-specific jobs in our IPv6 testbed.
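
    A minimal sketch of such a dual-protocol resolvability check, assuming the usual Nagios exit-code convention (0 = OK, 2 = CRITICAL); the hostname is an example and this is not the plugin actually deployed at FZU.

    ```python
    #!/usr/bin/env python3
    """Check whether a hostname resolves over IPv6 only and over IPv4 only."""
    import socket
    import sys

    def resolvable(hostname: str, family: int) -> bool:
        """Return True if the resolver yields at least one address of the given family."""
        try:
            return len(socket.getaddrinfo(hostname, None, family)) > 0
        except socket.gaierror:
            return False

    if __name__ == "__main__":
        host = sys.argv[1] if len(sys.argv) > 1 else "www.example.org"
        ok_v6 = resolvable(host, socket.AF_INET6)   # AAAA records
        ok_v4 = resolvable(host, socket.AF_INET)    # A records
        if ok_v6 and ok_v4:
            print(f"OK - {host} resolves over both IPv6 and IPv4")
            sys.exit(0)
        print(f"CRITICAL - {host}: IPv6={ok_v6} IPv4={ok_v4}")
        sys.exit(2)
    ```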

  3. Pooling the resources of the CMS Tier-1 sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apyan, A.; Badillo, J.; Cruz, J. Diaz

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  4. Contributing opportunistic resources to the grid with HTCondor-CE-Bosco

    NASA Astrophysics Data System (ADS)

    Weitzel, Derek; Bockelman, Brian

    2017-10-01

    The HTCondor-CE [1] is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, for smaller WLCG Tier-3 sites or opportunistic clusters it can be a difficult task to install, configure, and maintain the HTCondor-CE. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node, updating several configuration files, and implementing grid authentication mechanisms. On the other hand, accessing remote clusters from personal computers has been dramatically improved with Bosco: site admins only need to set up SSH public key authentication and appropriate accounts on a login host. In this paper, we take a new approach with the HTCondor-CE-Bosco, a CE which combines the flexibility and reliability of the HTCondor-CE with the easy-to-install Bosco. The administrators of the opportunistic resource are not required to install any software: only SSH access and a user account are required from the host site. The OSG can then run the grid-specific portions from a central location. This provides a new, more centralized model for running grid services, which complements the traditional distributed model. We will show the architecture of a HTCondor-CE-Bosco enabled site, as well as feedback from multiple sites that have deployed it.

  5. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    NASA Astrophysics Data System (ADS)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this has contributed to its success. The system has fulfilled the demanding requirements of ATLAS, consolidating worldwide up to 1 PB of data per day and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and to scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and the ATLAS Grid Information System (AGIS), together with the development and commissioning of next-generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event-level (rather than file- or dataset-level) workload engines.

  6. Opportunistic Resource Usage in CMS

    NASA Astrophysics Data System (ADS)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D.; Gutsche, O.; Tadel, M.; Sfiligoi, I.; Letts, J.; Wuerthwein, F.; McCrea, A.; Bockelman, B.; Fajardo, E.; Linares, L.; Wagner, R.; Konstantinov, P.; Blumenfeld, B.; Bradley, D.; Cms Collaboration

    2014-06-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and are preparing them especially for CMS to run the experiment's applications. But there are more resources available opportunistically both on the GRID and in local university and research clusters which can be used for CMS applications. We will present CMS' strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the GRID, through EC2 compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideIn WMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot to mount the software distribution via CVMFS and xrootd for access to data and simulation samples via the WAN are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.

  7. Pooling the resources of the CMS Tier-1 sites

    DOE PAGES

    Apyan, A.; Badillo, J.; Cruz, J. Diaz; ...

    2015-12-23

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  8. Pooling the resources of the CMS Tier-1 sites

    NASA Astrophysics Data System (ADS)

    Apyan, A.; Badillo, J.; Diaz Cruz, J.; Gadrat, S.; Gutsche, O.; Holzman, B.; Lahiff, A.; Magini, N.; Mason, D.; Perez, A.; Stober, F.; Taneja, S.; Taze, M.; Wissing, C.

    2015-12-01

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Finally, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  9. DIRAC3 - the new generation of the LHCb grid software

    NASA Astrophysics Data System (ADS)

    Tsaregorodtsev, A.; Brook, N.; Casajus Ramo, A.; Charpentier, Ph; Closier, J.; Cowan, G.; Graciani Diaz, R.; Lanciotti, E.; Mathe, Z.; Nandakumar, R.; Paterson, S.; Romanovsky, V.; Santinelli, R.; Sapunov, M.; Smith, A. C.; Seco Miguelez, M.; Zhelezov, A.

    2010-04-01

    DIRAC, the LHCb community Grid solution, was considerably re-engineered in order to meet all the requirements for processing the data coming from the LHCb experiment. It covers all the tasks starting with raw data transportation from the experiment area to the grid storage, through data processing, and up to the final user analysis. The re-engineered DIRAC3 version of the system includes a fully grid-security-compliant framework for building service-oriented distributed systems; a complete Pilot Job framework for creating efficient workload management systems; and several subsystems to manage high-level operations like data production and distribution management. The user interfaces of the DIRAC3 system, providing rich command-line and scripting tools, are complemented by a full-featured Web portal giving users secure access to all the details of the system status and ongoing activities. We will present an overview of the DIRAC3 architecture, new innovative features and the achieved performance. Extending DIRAC3 to manage computing resources beyond the WLCG grid will be discussed. Experience with using DIRAC3 by user communities other than LHCb and in application domains other than High Energy Physics will be shown to demonstrate the general-purpose nature of the system.
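
    The Pilot Job idea can be illustrated with a minimal, generic sketch that is not DIRAC code: a pilot starts on a worker node, probes its environment, and only then pulls a matching payload from a central task queue. The endpoint URL, the capability fields and the task format are all hypothetical.

    ```python
    import json
    import subprocess
    import urllib.request

    MATCHER_URL = "https://example.org/matcher/next-task"  # hypothetical endpoint

    def run_pilot() -> None:
        """Generic pilot-job loop: describe the local slot, then request a payload
        that fits it from the central workload management system."""
        capabilities = {"cores": 1, "platform": "x86_64"}   # simplified environment probe
        request = urllib.request.Request(
            MATCHER_URL, data=json.dumps(capabilities).encode(), method="POST")
        with urllib.request.urlopen(request) as response:
            task = json.load(response)
        if task:
            # Execute the payload command shipped by the central task queue.
            subprocess.run(task["command"], shell=True, check=False)
    ```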

  10. Opportunistic Resource Usage in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D.

    2014-01-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and are preparing them especially for CMS to run the experiment's applications. But there are more resources available opportunistically both on the GRID and in local university and research clusters which can be used for CMS applications. We will present CMS' strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the GRID, through EC2 compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideIn WMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot to mount the software distribution via CVMFS and xrootd for access to data and simulation samples via the WAN are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.

  11. CMS results in the Combined Computing Readiness Challenge CCRC'08

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Bauerdick, L.; CMS Collaboration

    2009-12-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests, called the Computing, Software and Analysis challenge (CSA'08), as well as CMS cosmic runs, were also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data-taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model with very comprehensive tests. During May 2008, CMS moved more than 3.6 petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centres: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at a sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of the more than 200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The results achieved in CCRC'08, focusing on the distributed workflows, are presented and discussed.

  12. The vacuum platform

    NASA Astrophysics Data System (ADS)

    McNab, A.

    2017-10-01

    This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.

  13. Advanced technologies for scalable ATLAS conditions database access on the grid

    NASA Astrophysics Data System (ADS)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating a realistic workflow, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed a precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads which can be much higher than the average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This has been achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach to database peak-load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends a pilot query to the database server first.
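
    A minimal sketch of the pilot-query idea, assuming a generic DB-API 2.0 connection (for example to Oracle); the probe statement, the threshold, the back-off window and the schema are illustrative assumptions, not the ATLAS utility library itself.

    ```python
    import random
    import time

    SLOW_THRESHOLD = 0.5   # seconds; assumed tuning parameter
    MAX_BACKOFF = 300      # seconds; assumed upper bound on the random wait

    def fetch_conditions(connection, run_number: int):
        """Before issuing the expensive conditions query, send a cheap pilot query
        and back off (with jitter) while the server looks overloaded."""
        while True:
            cursor = connection.cursor()
            start = time.time()
            cursor.execute("SELECT 1 FROM DUAL")          # lightweight pilot query
            cursor.fetchall()
            if time.time() - start < SLOW_THRESHOLD:
                break                                      # server responsive: proceed
            time.sleep(random.uniform(0, MAX_BACKOFF))     # overloaded: wait and retry
        cursor.execute(
            "SELECT payload FROM conditions WHERE run = :run",  # hypothetical schema
            {"run": run_number})
        return cursor.fetchall()
    ```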

  14. Monitoring of computing resource use of active software releases at ATLAS

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage, in terms of CPU and RAM, in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows, from Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multi-process mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processes in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is, however, preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse at the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC, and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.
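
    The shared-memory measurement can be illustrated with a small sketch based on the proportional set size (PSS) exposed by Linux in /proc (smaps_rollup, available in recent kernels); this is only the underlying idea, not the ATLAS MemoryMonitor tool.

    ```python
    def pss_kilobytes(pid: int) -> int:
        """Proportional set size of one process, read from /proc. PSS shares each
        page among the processes mapping it, which is what matters for
        multi-process jobs."""
        with open(f"/proc/{pid}/smaps_rollup") as f:
            for line in f:
                if line.startswith("Pss:"):
                    return int(line.split()[1])   # value is reported in kB
        return 0

    def job_pss_kilobytes(pids: list[int]) -> int:
        """Summing PSS over all worker processes avoids double-counting the
        memory they share, unlike summing RSS."""
        return sum(pss_kilobytes(pid) for pid in pids)
    ```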

  15. Data federation strategies for ATLAS using XRootD

    NASA Astrophysics Data System (ADS)

    Gardner, Robert; Campana, Simone; Duckeck, Guenter; Elmsheuser, Johannes; Hanushevsky, Andrew; Hönig, Friedrich G.; Iven, Jan; Legger, Federica; Vukotic, Ilija; Yang, Wei; Atlas Collaboration

    2014-06-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
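
    The cost-of-data-access idea can be sketched with a toy model; the cost formula, the site names and the numbers are assumptions for illustration, not the ATLAS brokering code.

    ```python
    def access_cost(file_size_gb: float, throughput_mbps: float,
                    site_load_factor: float) -> float:
        """Toy cost model: expected transfer time scaled by how busy the serving
        site is. Inputs and weighting are illustrative assumptions."""
        transfer_seconds = file_size_gb * 8000.0 / throughput_mbps
        return transfer_seconds * site_load_factor

    # Cost matrix: rows are analysis sites, columns are candidate data sources.
    sites = ["SITE_A", "SITE_B"]
    sources = ["SOURCE_1", "SOURCE_2"]
    throughput = {("SITE_A", "SOURCE_1"): 800, ("SITE_A", "SOURCE_2"): 120,
                  ("SITE_B", "SOURCE_1"): 200, ("SITE_B", "SOURCE_2"): 950}
    load = {"SOURCE_1": 1.2, "SOURCE_2": 1.0}

    cost = {(s, d): access_cost(2.0, throughput[(s, d)], load[d])
            for s in sites for d in sources}
    # Broker each analysis site to its cheapest source at this point in time.
    best = {s: min(sources, key=lambda d: cost[(s, d)]) for s in sites}
    ```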

  16. The consistency service of the ATLAS Distributed Data Management system

    NASA Astrophysics Data System (ADS)

    Serfon, Cédric; Garonne, Vincent; ATLAS Collaboration

    2011-12-01

    With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data loss due to software and hardware failures is increasing. In order to ensure the consistency of all data produced by ATLAS, a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools and DQ2 site services, or by site administrators that report corrupted or lost files. It automatically corrects the errors reported and informs the users in case of irrecoverable file loss.

  17. Monitoring of services with non-relational databases and map-reduce framework

    NASA Astrophysics Data System (ADS)

    Babik, M.; Souto, F.

    2012-12-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
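
    A map-reduce style availability computation of the kind discussed above can be sketched as follows; the record fields and the binary OK/FAIL model are simplifying assumptions.

    ```python
    from collections import defaultdict

    # Input: one record per probe result, as SAM/SWAT-style jobs might report them.
    results = [
        {"site": "SITE_A", "service": "CE", "timestamp": 1, "status": "OK"},
        {"site": "SITE_A", "service": "CE", "timestamp": 2, "status": "FAIL"},
        {"site": "SITE_B", "service": "SE", "timestamp": 1, "status": "OK"},
    ]

    def map_phase(record):
        # Emit key/value pairs: the key identifies the service, the value is (ok, total).
        yield (record["site"], record["service"]), (1 if record["status"] == "OK" else 0, 1)

    def reduce_phase(pairs):
        # Sum the per-record counts and turn them into an availability fraction.
        totals = defaultdict(lambda: [0, 0])
        for key, (ok, total) in pairs:
            totals[key][0] += ok
            totals[key][1] += total
        return {key: ok / total for key, (ok, total) in totals.items()}

    availability = reduce_phase(kv for r in results for kv in map_phase(r))
    # {('SITE_A', 'CE'): 0.5, ('SITE_B', 'SE'): 1.0}
    ```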

  18. Networks in ATLAS

    NASA Astrophysics Data System (ADS)

    McKee, Shawn; ATLAS Collaboration

    2017-10-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started, based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks. We will report on a number of networking initiatives in ATLAS including participation in the global perfSONAR network monitoring and measuring efforts of WLCG and OSG, the collaboration with the LHCOPN/LHCONE effort, the integration of network awareness into PanDA, the use of the evolving ATLAS analytics framework to better understand our networks and the changes in our DDM system to allow remote access to data. We will also discuss new efforts underway that are exploring the inclusion and use of software defined networks (SDN) and how ATLAS might benefit from:
    • Orchestration and optimization of distributed data access and data movement.
    • Better control of workflows, end to end.
    • Enabling prioritization of time-critical vs normal tasks.
    • Improvements in the efficiency of resource usage.

  19. Towards a Global Service Registry for the World-Wide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro

    2014-06-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems: from central portals, such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of the information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way, and hence to simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the current situation and how it can support the evolution of information systems.

  20. Efficient provisioning for multi-core applications with LSF

    NASA Astrophysics Data System (ADS)

    Dal Pra, Stefano

    2015-12-01

    Tier-1 sites providing computing power for HEP experiments are usually tightly designed for high-throughput performance. This is pursued by reducing the variety of supported use cases and by tuning for performance the most important ones, which so far have been single-core jobs. Moreover, the usual workload is saturation: each available core in the farm is in use and there are queued jobs waiting for their turn to run. Enabling multi-core jobs thus requires dedicating a number of hosts on which to run them, and waiting for them to free the needed number of cores. This drain time introduces a loss of computing power driven by the number of unusable empty cores. As an increasing demand for multi-core capable resources has emerged, a Task Force has been constituted in WLCG, with the goal of defining a simple and efficient multi-core resource provisioning model. This paper details the work done at the INFN Tier-1 to enable multi-core support for the LSF batch system, with the intent of reducing to a minimum the average number of unused cores. The adopted strategy has been to dedicate to multi-core jobs a dynamic set of nodes, whose size is mainly driven by the number of pending multi-core requests and the fair-share priority of the submitting user. The node status transition, from single-core to multi-core and vice versa, is driven by a finite state machine implemented in a custom multi-core director script running in the cluster; a simplified sketch of such a transition function is shown below. After describing and motivating both the implementation and the details specific to the LSF batch system, results about performance are reported. Factors having a positive or negative impact on the overall efficiency are discussed and solutions to reduce the negative ones as much as possible are proposed.
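
    A simplified sketch of such a state machine, assuming only three node states and two observable inputs; the states, inputs and transition rules are illustrative, not the CNAF director script.

    ```python
    from enum import Enum

    class NodeState(Enum):
        SINGLE_CORE = "single-core"   # node accepts ordinary single-core jobs
        DRAINING = "draining"         # no new single-core jobs; waiting to empty
        MULTI_CORE = "multi-core"     # node reserved for multi-core jobs

    def next_state(state: NodeState, running_single_core_jobs: int,
                   pending_multi_core_demand: int) -> NodeState:
        """One step of a simplified director: grow the multi-core pool while
        requests are pending, shrink it when the demand disappears."""
        if state is NodeState.SINGLE_CORE and pending_multi_core_demand > 0:
            return NodeState.DRAINING
        if state is NodeState.DRAINING and running_single_core_jobs == 0:
            return NodeState.MULTI_CORE
        if state is NodeState.MULTI_CORE and pending_multi_core_demand == 0:
            return NodeState.SINGLE_CORE
        return state
    ```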

  1. LeaRN: A Collaborative Learning-Research Network for a WLCG Tier-3 Centre

    NASA Astrophysics Data System (ADS)

    Pérez Calle, Elio

    2011-12-01

    The Department of Modern Physics of the University of Science and Technology of China is hosting a Tier-3 centre for the ATLAS experiment. An interdisciplinary team of researchers, engineers and students is devoted to the task of receiving, storing and analysing the scientific data produced by the LHC. In order to achieve the highest performance and to develop a knowledge base shared by all members of the team, the research activities and their coordination are supported by an array of computing systems. These systems have been designed to foster communication, collaboration and coordination among the members of the team, both face-to-face and remotely, and both in synchronous and asynchronous ways. The result is a collaborative learning-research network whose main objectives are awareness (to get shared knowledge about others' activities and therefore obtain synergies), articulation (to allow a project to be divided, work units to be assigned and then reintegrated) and adaptation (to adapt information technologies to the needs of the group). The main technologies involved are communication tools such as web publishing, revision control and wikis; conferencing tools such as forums, instant messaging and video conferencing; and coordination tools such as time management, project management and social networks. The software toolkit has been deployed by the members of the team and is based on free and open source software.

  2. Monitoring of large-scale federated data storage: XRootD and beyond

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Diguez Arias, D.; Giordano, D.; Oleynik, D.; Petrosyan, A.; Saiz, P.; Tadel, M.; Tuckett, D.; Vukotic, I.

    2014-06-01

    The computing models of the LHC experiments are gradually moving from hierarchical data models with centrally managed data pre-placement towards federated storage, which provides seamless access to data files independently of their location and dramatically improves recovery thanks to fail-over mechanisms. Construction of the data federations and understanding the impact of the new approach to data management on user analysis require complete and detailed monitoring. The monitoring functionality should cover the status of all components of the federated storage, measuring data traffic and data access performance, as well as being able to detect any kind of inefficiency and to provide hints for resource optimization and an effective data distribution policy. Data mining of the collected monitoring data provides a deep insight into new usage patterns. In the WLCG context, there are several federations currently based on the XRootD technology. This paper will focus on monitoring for the ATLAS and CMS XRootD federations implemented in the Experiment Dashboard monitoring framework. Both federations consist of many dozens of sites accessed by many hundreds of clients and they continue to grow in size. Handling of the monitoring flow generated by these systems has to be well optimized in order to achieve the required performance. Furthermore, this paper demonstrates that the XRootD monitoring architecture is sufficiently generic to be easily adapted for other technologies, such as HTTP/WebDAV dynamic federations.

  3. Storage element performance optimization for CMS analysis jobs

    NASA Astrophysics Data System (ADS)

    Behrmann, G.; Dahlblom, J.; Guldmyr, J.; Happonen, K.; Lindén, T.

    2012-12-01

    Tier-2 computing sites in the Worldwide LHC Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data that needs to be processed from the Large Hadron Collider (LHC) experiments requires good and efficient use of the available resources. Having a good CPU efficiency for the end users' analysis jobs requires that the performance of the storage system is able to scale with the I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on the work on improving the SE performance at the Helsinki Institute of Physics (HIP) Tier-2 used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows for easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS uses the JobRobot, which sends 100 analysis jobs to each site every four hours. CMS also uses the HammerCloud tool for site monitoring and stress testing, and it has replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the performance due to site configuration changes, since the analysis workflow is kept the same for all sites and for months at a time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50% to more than 90%, by tuning the SE and through improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier sites, and on average the CPU efficiency for CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I/O performance is good.

  4. Adjusting the fairshare policy to prevent computing power loss

    NASA Astrophysics Data System (ADS)

    Dal Pra, Stefano

    2017-10-01

    On a typical WLCG site providing batch access to computing resources according to a fairshare policy, the idle time lapse after a job ends and before a new one begins on a given slot is negligible compared to the duration of typical jobs. The overall amount of these intervals over a time window increases with the size of the cluster and with the inverse of the job duration, and can be considered equivalent to an average number of unavailable slots over that time window. This value has been investigated for the Tier-1 at CNAF and observed to occasionally grow to more than 10% of the roughly 20,000 available computing slots. Analysis reveals that this happens when a sustained rate of short jobs is submitted to the cluster and dispatched by the batch system. Because of how the default fairshare policy works, it increases the dynamic priority of those users mostly submitting short jobs, since they are not accumulating runtime, and will dispatch more of their jobs at the next round, thus worsening the situation until the submission flow ends. To address this problem the default behaviour of the fairshare has been altered by adding a correcting term to the default formula for the dynamic priority. The LSF batch system, currently adopted at CNAF, provides a way to define its value by invoking a C function, which returns it for each user in the cluster. The correcting term works by rounding the most recently completed jobs up to a minimum defined runtime; a simplified sketch of the idea is shown below. Doing so, each short job looks almost like a regular one and the dynamic priority settles to a proper value. The net effect is a reduction of the dispatch rate of short jobs and, consequently, the average number of available slots greatly improves. Furthermore, a potential starvation problem, actually observed at least once, is also prevented. After describing short jobs and reporting on their impact on the cluster, possible workarounds are discussed and the selected solution is motivated. Details of the most critical aspects of the implementation are explained and the observed results are presented.
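
    A toy illustration of the correcting term, assuming a schematic fairshare formula (the real LSF formula differs) and an arbitrary minimum runtime of one hour.

    ```python
    MINIMUM_RUNTIME = 3600.0   # seconds; assumed floor applied to recent jobs

    def corrected_used_runtime(recent_job_runtimes: list[float]) -> float:
        """Round every recently finished job up to a minimum runtime before it
        enters the fairshare bookkeeping, so a burst of very short jobs no longer
        looks like unused share."""
        return sum(max(runtime, MINIMUM_RUNTIME) for runtime in recent_job_runtimes)

    def dynamic_priority(share: float, used_runtime: float, running_jobs: int,
                         cpu_factor: float = 1.0, run_job_factor: float = 1.0) -> float:
        """Schematic fairshare formula: priority falls as accounted usage grows.
        This only illustrates the direction of the fix, not the exact LSF formula."""
        return share / (1.0 + cpu_factor * used_runtime + run_job_factor * running_jobs)

    # A user who just finished 1000 five-second jobs is no longer rewarded:
    naive = dynamic_priority(share=10.0, used_runtime=1000 * 5.0, running_jobs=0)
    fixed = dynamic_priority(share=10.0,
                             used_runtime=corrected_used_runtime([5.0] * 1000),
                             running_jobs=0)
    assert fixed < naive
    ```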

  5. Integration of Grid and Local Batch Resources at DESY

    NASA Astrophysics Data System (ADS)

    Beyer, Christoph; Finnern, Thomas; Gellrich, Andreas; Hartmann, Thomas; Kemp, Yves; Lewendel, Birgit

    2017-10-01

    As one of the largest resource centres, DESY has to support the differing workflows of users from various scientific backgrounds. Users include the HEP experiments in the WLCG or Belle II, local HEP users, and physicists from other fields such as photon science or accelerator development. By abandoning specific worker node setups in favour of generic flat nodes with middleware resources provided via CVMFS, we gain the flexibility to subsume different use cases in a homogeneous environment. Grid jobs and the local batch system are managed in an HTCondor-based setup, accepting pilot, user and containerized jobs. The unified setup allows dynamic re-assignment of resources between the different use cases. Monitoring is implemented on global batch system metrics as well as on a per-job level utilizing the corresponding cgroup information.

  6. Use of DAGMan in CRAB3 to Improve the Splitting of CMS User Jobs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, M.; Mascheroni, M.; Woodard, A.

    CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.

  7. Use of DAGMan in CRAB3 to improve the splitting of CMS user jobs

    NASA Astrophysics Data System (ADS)

    Wolf, M.; Mascheroni, M.; Woodard, A.; Belforte, S.; Bockelman, B.; Hernandez, J. M.; Vaandering, E.

    2017-10-01

    CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.
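
    A minimal sketch of the time-based splitting and resubmission logic described above, assuming a fixed per-event time estimate from a probe job; the function names, the halved wall-time for tail jobs, and the numbers are illustrative assumptions, not the actual CRAB3/DAGMan implementation.

      # Illustrative time-based job splitting (not the CRAB3 code).
      def events_per_job(sec_per_event, target_walltime):
          """How many events fit into one job of the given wall-time budget."""
          return max(1, int(target_walltime / sec_per_event))

      def split_range(first, last, per_job):
          """Split the half-open event range [first, last) into consecutive jobs."""
          return [(i, min(i + per_job, last)) for i in range(first, last, per_job)]

      # Initial splitting: e.g. one million events, ~0.8 s/event, 8 h wall-time target.
      jobs = split_range(0, 1_000_000, events_per_job(0.8, 8 * 3600))

      def resplit(job, events_done, sec_per_event, target_walltime):
          """Regroup the events a job left unprocessed at the time limit into
          smaller tail jobs (here: half the original wall-time budget)."""
          first, last = job
          return split_range(first + events_done, last,
                             events_per_job(sec_per_event, target_walltime / 2))

      tail_jobs = resplit(jobs[0], events_done=20_000, sec_per_event=0.8,
                          target_walltime=8 * 3600)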

  8. DPM: Future Proof Storage

    NASA Astrophysics Data System (ADS)

    Alvarez, Alejandro; Beche, Alexandre; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo

    2012-12-01

    The Disk Pool Manager (DPM) is a lightweight solution for grid-enabled disk storage management. Operated at more than 240 sites, it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, high-performance data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols - namely the additional work required to provide multi-stream transfers for high-performance wide area access, support for third-party copies, credential delegation or the required changes in the experiment and fabric management frameworks and tools. We describe new functionality that has been added to ease system administration, such as different filesystem weights and a faster disk drain, and new configuration and monitoring solutions based on the industry standards Puppet and Nagios. Finally, we explain some of the internal changes we had to make in the DPM architecture to better handle the additional load from the analysis use cases.
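
    As an illustration of the kind of standard-protocol access such an HTTP/WebDAV frontend enables, the sketch below reads a file over HTTPS with an X.509 proxy credential; the endpoint URL and credential paths are placeholders, and the requests library is used here only as a generic HTTPS client, not as tooling described in this record.

      # Illustrative HTTP/WebDAV read with a grid proxy certificate (placeholder paths).
      import requests

      url = "https://dpm.example.org/dpm/example.org/home/myvo/data/file.root"
      resp = requests.get(
          url,
          cert=("/tmp/x509up_u1000", "/tmp/x509up_u1000"),  # proxy used as cert and key
          verify="/etc/grid-security/certificates",          # CA bundle/directory
          stream=True,
      )
      resp.raise_for_status()
      with open("file.root", "wb") as out:
          for chunk in resp.iter_content(chunk_size=1 << 20):
              out.write(chunk)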

  9. Optimizing CMS build infrastructure via Apache Mesos

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad

    2015-12-01

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  10. dCache on Steroids - Delegated Storage Solutions

    DOE PAGES

    Mkrtchyan, Tigran; Adeyemi, F.; Ashish, A.; ...

    2017-11-23

    For over a decade, dCache.org has delivered robust software used at more than 80 universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC-based all-in-one Raspberry Pi up to hundreds of nodes in a multi-petabyte installation. Due to the lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product-specific protocols and the lack of a namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry-standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. As a result, we will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  11. dCache on Steroids - Delegated Storage Solutions

    NASA Astrophysics Data System (ADS)

    Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.

    2017-10-01

    For over a decade, dCache.org has delivered robust software used at more than 80 universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC-based all-in-one Raspberry Pi up to hundreds of nodes in a multi-petabyte installation. Due to the lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product-specific protocols and the lack of a namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry-standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.
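
    A minimal sketch of object-level access to a Ceph pool through the python-rados bindings, illustrating the kind of low-level storage functionality that would be delegated to CEPH; the pool name and configuration path are placeholders, and this is not how dCache integrates with CEPH internally.

      # Illustrative use of Ceph's librados Python bindings (placeholder pool/config).
      import rados

      cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
      cluster.connect()
      try:
          ioctx = cluster.open_ioctx("dcache-pool")  # assumed pool name
          try:
              ioctx.write_full("example-object", b"payload bytes")  # store one object
              print(ioctx.read("example-object"))                   # read it back
              print(ioctx.get_stats())                              # pool usage statistics
          finally:
              ioctx.close()
      finally:
          cluster.shutdown()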

  12. dCache on Steroids - Delegated Storage Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkrtchyan, Tigran; Adeyemi, F.; Ashish, A.

    For over a decade, dCache.org has delivered robust software used at more than 80 universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC-based all-in-one Raspberry Pi up to hundreds of nodes in a multi-petabyte installation. Due to the lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product-specific protocols and the lack of a namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry-standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. As a result, we will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  13. Towards more stable operation of the Tokyo Tier2 center

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Mashimo, T.; Matsui, N.; Sakamoto, H.; Ueda, I.

    2014-06-01

    The Tokyo Tier2 center, which is located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation within the WLCG started in 2007, after several years of development beginning in 2002. In December 2012, we replaced almost all hardware as the third system upgrade, to cope with analysis of the ever-growing data of the ATLAS experiment. The number of CPU cores was doubled (9984 cores in total), and the performance of an individual CPU core improved by 20% according to the HEPSPEC06 benchmark in 32-bit compile mode. The score is estimated as 18.03 (SL6) per core on an Intel Xeon E5-2680 at 2.70 GHz. Since all worker nodes have a 16-core configuration, we deployed 624 blade servers in total. They are connected to a 6.7 PB disk storage system through a non-blocking 10 Gbps internal network backbone built on two central network switches (NetIron MLXe-32). The disk storage consists of 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0) served by an equal number of 1U file servers with 8G-FC connections, to maximize the file-transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage had already been deployed for the WLCG. Currently, the remaining non-grid resources, both CPUs and disk storage, are used as dedicated resources for data analysis by the ATLAS Japan collaborators. Since all hardware in the non-grid resources has the same architecture as the Tier-2 resources, it can be migrated as extra Tier-2 capacity on demand of the ATLAS experiment in the future. In addition to the upgrade of computing resources, we expect improved wide-area network connectivity. Thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the two existing 10 Gbps lines (Tokyo to New York and Tokyo to Los Angeles). The new line will be connected to LHCONE to further improve connectivity. In this context, we are working towards further stable operation. For instance, we have newly introduced GPFS (IBM) for the non-grid disk storage, while the Disk Pool Manager (DPM) continues to be used for the Tier-2 disk storage, as in the previous system. Since the number of files stored in a DPM pool grows with the total amount of data, the development of a stable database configuration is one of the crucial issues, as is scalability. We have started studies on the performance of asynchronous database replication so that we can take daily full backups. In this report, we introduce several improvements in the performance and stability of our new system and the possibility of further improving local I/O performance on multi-core worker nodes. We also present the status of wide-area network connectivity from Japan to the US and EU via LHCONE.

  14. ATLAS Distributed Computing Experience and Performance During the LHC Run-2

    NASA Astrophysics Data System (ADS)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. All stable ATLAS sites are now able to store unique or primary copies of the datasets. ATLAS Distributed Computing is further evolving to speed up request processing by introducing network awareness, using machine learning and optimisation of the latencies during the execution of the full chain of tasks. The Event Service, a new workflow and job execution engine, is designed around check-pointing at the level of event processing to use opportunistic resources more efficiently. ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results. The exploitation of opportunistic resources was at an early stage throughout 2015, at the level of 10% of the total ATLAS computing power, but in the next few years it is expected to deliver much more. In addition, demonstrating the ability to use an opportunistic resource can lead to securing ATLAS allocations on the facility, hence the importance of this work goes beyond merely the initial CPU cycles gained. In this paper, we give an overview and compare the performance, development effort, flexibility and robustness of the various approaches.

  15. Disk storage at CERN

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Cano, E.; Chan, B.; Espinal, X.; Fiorot, A.; González Labrador, H.; Iven, J.; Lamanna, M.; Lo Presti, G.; Mościcki, JT; Peters, AJ; Ponce, S.; Rousseau, H.; van der Ster, D.

    2015-12-01

    CERN IT DSS operates the main storage resources for data taking and physics analysis mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses the two CERN Tier-0 centres (Meyrin and Wigner) with a 50:50 ratio. IT DSS also provides sizeable on-demand resources for IT services, most notably OpenStack and NFS-based clients: this is provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We will describe our operational experience and recent changes to these systems, with special emphasis on the present usage for LHC data taking and the convergence to commodity hardware (nodes with 200 TB each, with optional SSDs) shared across all services. We also describe our experience in coupling commodity and home-grown solutions (e.g. CERNBox integration in EOS, Ceph disk pools for AFS, CASTOR and NFS) and finally the future evolution of these systems for the WLCG and beyond.
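
    For orientation, the sketch below works out the rough per-system breakdown implied by the quoted numbers, under the assumption that the 1:20:120 ratios refer to the AFS : CASTOR : EOS shares of the ~100 PB total (our reading of the record, not a figure stated explicitly).

      # Rough breakdown of ~100 PB of user-visible disk, assuming the 1:20:120
      # ratios map to AFS : CASTOR : EOS (an interpretation, not a stated figure).
      total_pb = 100.0
      ratios = {"AFS": 1, "CASTOR": 20, "EOS": 120}
      denominator = sum(ratios.values())
      for system, r in ratios.items():
          print(f"{system}: ~{total_pb * r / denominator:.1f} PB")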

  16. Optimizing CMS build infrastructure via Apache Mesos

    DOE PAGES

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; ...

    2015-12-23

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Lastly, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  17. Optimizing CMS build infrastructure via Apache Mesos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Lastly, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  18. 26 CFR 1.584-3 - Computation of common trust fund income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Computation of common trust fund income. 1.584-3 Section 1.584-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Banking Institutions § 1.584-3 Computation of common trust...

  19. Real-time complex event processing for cloud resources

    NASA Astrophysics Data System (ADS)

    Adam, M.; Cordeiro, C.; Field, L.; Giordano, D.; Magnoni, L.

    2017-10-01

    The ongoing integration of clouds into the WLCG raises the need for detailed health and performance monitoring of the virtual resources in order to prevent problems of degraded service and interruptions due to undetected failures. When working at scale, the existing monitoring diversity can lead to a metric overflow whereby the operators need to manually collect and correlate data from several monitoring tools and frameworks, resulting in tens of different metrics to be constantly interpreted and analyzed per virtual machine. In this paper we present an ESPER-based standalone application which is able to process complex monitoring events coming from various sources and automatically interpret the data in order to issue alarms on the resources’ status, without interfering with the actual resources and data sources. We will describe how this application has been used with both commercial and non-commercial cloud activities, allowing the operators to be alerted quickly and react to misbehaving VMs and LHC experiments’ workflows. We will present the pattern analysis mechanisms being used, as well as the surrounding Elastic and REST API interfaces where the alarms are collected and served to users.
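
    The sketch below is a conceptual analogue, in plain Python, of the kind of pattern rule such a complex-event-processing application might apply (in Esper this would normally be an EPL statement); the metric name, threshold and window size are assumptions for illustration only.

      # Conceptual analogue of a CEP rule: alarm when a VM reports high CPU steal
      # time for N consecutive samples. Field names and thresholds are assumed.
      from collections import defaultdict, deque

      WINDOW = 5               # consecutive samples required
      STEAL_THRESHOLD = 30.0   # percent

      history = defaultdict(lambda: deque(maxlen=WINDOW))

      def on_metric_event(event):
          """event: dict like {'vm': 'vm-123', 'cpu_steal': 42.0}."""
          samples = history[event["vm"]]
          samples.append(event["cpu_steal"])
          if len(samples) == WINDOW and all(s > STEAL_THRESHOLD for s in samples):
              print(f"ALARM: {event['vm']} degraded "
                    f"(cpu_steal > {STEAL_THRESHOLD}% for {WINDOW} samples)")

      for _ in range(7):
          on_metric_event({"vm": "vm-123", "cpu_steal": 45.0})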

  20. LHCb experience with running jobs in virtual machines

    NASA Astrophysics Data System (ADS)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.

  1. Key Lessons in Building "Data Commons": The Open Science Data Cloud Ecosystem

    NASA Astrophysics Data System (ADS)

    Patterson, M.; Grossman, R.; Heath, A.; Murphy, M.; Wells, W.

    2015-12-01

    Cloud computing technology has created a shift around data and data analysis by allowing researchers to push computation to data as opposed to having to pull data to an individual researcher's computer. Subsequently, cloud-based resources can provide unique opportunities to capture computing environments used both to access raw data in its original form and also to create analysis products which may be the source of data for tables and figures presented in research publications. Since 2008, the Open Cloud Consortium (OCC) has operated the Open Science Data Cloud (OSDC), which provides scientific researchers with computational resources for storing, sharing, and analyzing large (terabyte and petabyte-scale) scientific datasets. OSDC has provided compute and storage services to over 750 researchers in a wide variety of data intensive disciplines. Recently, internal users have logged about 2 million core hours each month. The OSDC also serves the research community by colocating these resources with access to nearly a petabyte of public scientific datasets in a variety of fields also accessible for download externally by the public. In our experience operating these resources, researchers are well served by "data commons," meaning cyberinfrastructure that colocates data archives, computing, and storage infrastructure and supports essential tools and services for working with scientific data. In addition to the OSDC public data commons, the OCC operates a data commons in collaboration with NASA and is developing a data commons for NOAA datasets. As cloud-based infrastructures for distributing and computing over data become more pervasive, we ask, "What does it mean to publish data in a data commons?" Here we present the OSDC perspective and discuss several services that are key in architecting data commons, including digital identifier services.

  2. Incorporation of CAD/CAM Restoration Into Navy Dentistry

    DTIC Science & Technology

    2017-09-26

    Abbreviations: CAD/CAM (computer-aided design / computer-assisted manufacturing), CDT (Common Dental Terminology), DENCAS (Dental Common Access System), DTF (Dental...). ...to reduce avoidable dental emergencies for deployed sailors and marines. Dental computer-aided design / computer-assisted manufacturing (CAD/CAM...). ...report will review and evaluate the placement rate by Navy dentists of digitally fabricated in-office ceramic restorations compared to traditional direct...

  3. COMCAN: a computer program for common cause analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, G.R.; Marshall, N.H.; Wilson, J.R.

    1976-05-01

    The computer program, COMCAN, searches the fault tree minimal cut sets for shared susceptibility to various secondary events (common causes) and common links between components. In the case of common causes, a location check may also be performed by COMCAN to determine whether barriers to the common cause exist between components. The program can locate common manufacturers of components having events in the same minimal cut set. A relative ranking scheme for secondary event susceptibility is included in the program.
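
    A minimal sketch of the kind of search described above, assuming the minimal cut sets and per-component susceptibility and manufacturer data are given as plain Python structures; this is an illustration of the idea, not the COMCAN program.

      # Illustrative common-cause search over minimal cut sets (not COMCAN itself).
      susceptibility = {                 # component -> secondary events it is susceptible to
          "pump_A": {"fire", "flood"},
          "pump_B": {"fire"},
          "valve_C": {"flood"},
      }
      manufacturer = {"pump_A": "AcmeCo", "pump_B": "AcmeCo", "valve_C": "ValcoInc"}
      minimal_cut_sets = [{"pump_A", "pump_B"}, {"pump_A", "valve_C"}]

      for cut_set in minimal_cut_sets:
          causes = set.intersection(*(susceptibility[c] for c in cut_set))
          makers = {manufacturer[c] for c in cut_set}
          if causes:   # every component in the cut set shares a secondary-event susceptibility
              print(f"{sorted(cut_set)}: shared susceptibility to {sorted(causes)}")
          if len(makers) == 1:   # common manufacturer link
              print(f"{sorted(cut_set)}: common manufacturer {makers.pop()}")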

  4. Test Anxiety, Computer-Adaptive Testing and the Common Core

    ERIC Educational Resources Information Center

    Colwell, Nicole Makas

    2013-01-01

    This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…

  5. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and uses of the DIALOG executive system are described.
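
    A minimal sketch of the executive idea described above: otherwise independent programs are executed in a controlled sequence and communicate only through a dynamically maintained data base of common information; the module names and quantities are invented for illustration and are not taken from DIALOG's actual program library.

      # Illustrative executive loop (not DIALOG itself): each "program" is an
      # independent function that reads from and writes to a shared data base,
      # and the executive only controls the sequence of execution.
      def aero_module(db):
          db["lift"] = 0.5 * db["rho"] * db["v"] ** 2 * db["S"] * db["CL"]

      def performance_module(db):
          db["load_factor"] = db["lift"] / (db["mass"] * 9.81)

      common_data_base = {"rho": 1.225, "v": 60.0, "S": 16.0, "CL": 1.1, "mass": 1000.0}

      for program in (aero_module, performance_module):   # sequence set by the executive
          program(common_data_base)

      print(common_data_base["load_factor"])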

  6. Practical Algorithms for the Longest Common Extension Problem

    NASA Astrophysics Data System (ADS)

    Ilie, Lucian; Tinta, Liviu

    The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on the average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
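
    A minimal implementation of the first, preprocessing-free approach (direct character comparison); the function name is ours, and the sketch deliberately omits the second variant that adds an RMQ over the Longest Common Prefix array.

      # Direct-comparison Longest Common Extension: no preprocessing, just scan.
      # Fast on average because random pairs (i, j) tend to mismatch quickly.
      def lce(s, i, j):
          """Length of the longest substring of s starting at both i and j."""
          k, n = 0, len(s)
          while i + k < n and j + k < n and s[i + k] == s[j + k]:
              k += 1
          return k

      assert lce("abracadabra", 0, 7) == 4   # "abra"
      assert lce("abracadabra", 1, 2) == 0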

  7. The ATLAS Tier-3 in Geneva and the Trigger Development Facility

    NASA Astrophysics Data System (ADS)

    Gadomski, S.; Meunier, Y.; Pasche, P.; Baud, J.-P.; ATLAS Collaboration

    2011-12-01

    The ATLAS Tier-3 farm at the University of Geneva provides storage and processing power for analysis of ATLAS data. In addition the facility is used for development, validation and commissioning of the High Level Trigger of ATLAS [1]. The latter purpose leads to additional requirements on the availability of latest software and data, which will be presented. The farm is also a part of the WLCG [2], and is available to all members of the ATLAS Virtual Organization. The farm currently provides 268 CPU cores and 177 TB of storage space. A grid Storage Element, implemented with the Disk Pool Manager software [3], is available and integrated with the ATLAS Distributed Data Management system [4]. The batch system can be used directly by local users, or with a grid interface provided by NorduGrid ARC middleware [5]. In this article we will present the use cases that we support, as well as the experience with the software and the hardware we are using. Results of I/O benchmarking tests, which were done for our DPM Storage Element and for the NFS servers we are using, will also be presented.

  8. CephFS: a new generation storage platform for Australian high energy physics

    NASA Astrophysics Data System (ADS)

    Borges, G.; Crosby, S.; Boland, L.

    2017-10-01

    This paper presents an implementation of a Ceph file system (CephFS) use case at the ARC Center of Excellence for Particle Physics at the Terascale (CoEPP). CoEPP’s CephFS provides a posix-like file system on top of a Ceph RADOS object store, deployed on commodity hardware and without single points of failure. By delivering a unique file system namespace at the different CoEPP centres spread across Australia, local HEP researchers can store, process and share data independently of their geographical locations. CephFS is also used as the back-end file system for a WLCG ATLAS user area at the Australian Tier-2. Dedicated SRM and XROOTD services, deployed on top of CoEPP’s CephFS, integrate it into ATLAS distributed data operations. This setup, while allowing Australian HEP researchers to trigger data movement via ATLAS grid tools, also enables local posix-like read access, giving scientists greater control over their data flows. In this article we will present details on CoEPP’s Ceph/CephFS implementation and report performance I/O metrics collected during the testing/tuning phase of the system.

  9. Common data buffer system. [communication with computational equipment utilized in spacecraft operations

    NASA Technical Reports Server (NTRS)

    Byrne, F. (Inventor)

    1981-01-01

    A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.

  10. Early In-Theater Management of Combat-Related Traumatic Brain Injury: A Prospective, Observational Study to Identify Opportunities for Performance Improvement

    DTIC Science & Technology

    2015-05-18

    Head computed tomographic scans most commonly found skull fracture (68.9%), subdural hematoma (54.1%), and cerebral contusion (51.4%). Hypertonic saline... were common on presentation. ...The most commonly reported finding was skull fracture, occurring in 68.9% of patients. The most common type of intracranial hemorrhage was subdural hematoma (54.1%). Multiple...

  11. Computer Use and Computer Anxiety in Older Korean Americans.

    PubMed

    Yoon, Hyunwoo; Jang, Yuri; Xie, Bo

    2016-09-01

    Responding to the limited literature on computer use in ethnic minority older populations, the present study examined predictors of computer use and computer anxiety in older Korean Americans. Separate regression models were estimated for computer use and computer anxiety with the common sets of predictors: (a) demographic variables (age, gender, marital status, and education), (b) physical health indicators (chronic conditions, functional disability, and self-rated health), and (c) sociocultural factors (acculturation and attitudes toward aging). Approximately 60% of the participants were computer-users, and they had significantly lower levels of computer anxiety than non-users. A higher likelihood of computer use and lower levels of computer anxiety were commonly observed among individuals with younger age, male gender, advanced education, more positive ratings of health, and higher levels of acculturation. In addition, positive attitudes toward aging were found to reduce computer anxiety. Findings provide implications for developing computer training and education programs for the target population. © The Author(s) 2015.
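
    The sketch below mirrors the analysis design described above on synthetic data (not the study's dataset): a logistic model for computer use and an ordinary least squares model for computer anxiety, each with a common set of predictors; the variable names and distributions are invented for illustration.

      # Illustrative parallel models with a common predictor set (synthetic data).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 200
      predictors = sm.add_constant(np.column_stack([
          rng.normal(73, 6, n),      # age
          rng.integers(0, 2, n),     # gender (1 = male)
          rng.normal(12, 3, n),      # years of education
          rng.normal(0, 1, n),       # acculturation score
      ]))
      uses_computer = rng.integers(0, 2, n)       # binary outcome
      anxiety_score = rng.normal(3.0, 1.0, n)     # continuous outcome

      use_model = sm.Logit(uses_computer, predictors).fit(disp=0)
      anxiety_model = sm.OLS(anxiety_score, predictors).fit()
      print(use_model.params)
      print(anxiety_model.params)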

  12. Computed Tomography (CT) - Spine

    MedlinePlus

    Computed tomography (CT) of the spine is ... What is CT Scanning of the Spine? Computed tomography, more commonly ...

  13. Utilizing Android and the Cloud Computing Environment to Increase Situational Awareness for a Mobile Distributed Response

    DTIC Science & Technology

    2012-03-01

    ...by using a common communication technology there is no need to develop a complicated communications plan and generate an ad-hoc communications... Maintaining an accurate Common Operational Picture (COP) is a strategic requirement for... Subject terms: Android programming, cloud computing, common operating picture, web programming.

  14. Protocols for Handling Messages Between Simulation Computers

    NASA Technical Reports Server (NTRS)

    Balcerowski, John P.; Dunnam, Milton

    2006-01-01

    Practical Simulator Network (PSimNet) is a set of data-communication protocols designed especially for use in handling messages between computers that are engaging cooperatively in real-time or nearly-real-time training simulations. In a typical application, computers that provide individualized training at widely dispersed locations would communicate, by use of PSimNet, with a central host computer that would provide a common computational- simulation environment and common data. Originally intended for use in supporting interfaces between training computers and computers that simulate the responses of spacecraft scientific payloads, PSimNet could be especially well suited for a variety of other applications -- for example, group automobile-driver training in a classroom. Another potential application might lie in networking of automobile-diagnostic computers at repair facilities to a central computer that would compile the expertise of numerous technicians and engineers and act as an expert consulting technician.

  15. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked. Each program maintains its individual identity and as such is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System are described at Johnson Space Center.

  16. COMPUTER-ASSISTED MOTION ANALYSIS OF SPERM FROM THE COMMON CARP

    EPA Science Inventory

    Computer-assisted semen analysis (CASA) technology was applied to the measurement of sperm motility parameters in the common carp Cyprinus carpio. Activated sperm were videotaped at 200 frames s-1 and analysed with the CellTrak/S CASA research system. The percentage of motile cel...

  17. Near real-time traffic routing

    NASA Technical Reports Server (NTRS)

    Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)

    2012-01-01

    A near real-time physical transportation network routing system comprising: a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data includes static network characteristics, an origin-destination data table, dynamic traffic information data and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.

  18. Perceptions and performance using computer-based testing: One institution's experience.

    PubMed

    Bloom, Timothy J; Rich, Wesley D; Olson, Stephanie M; Adams, Michael L

    2018-02-01

    The purpose of this study was to evaluate student and faculty perceptions of the transition to a required computer-based testing format and to identify any impact of this transition on student exam performance. Separate questionnaires sent to students and faculty asked about perceptions of and problems with computer-based testing. Exam results from program-required courses for two years prior to and two years following the adoption of computer-based testing were compared to determine if this testing format impacted student performance. Responses to Likert-type questions about perceived ease of use showed no difference between students with one and three semesters experience with computer-based testing. Of 223 student-reported problems, 23% related to faculty training with the testing software. Students most commonly reported improved feedback (46% of responses) and ease of exam-taking (17% of responses) as benefits to computer-based testing. Faculty-reported difficulties were most commonly related to problems with student computers during an exam (38% of responses) while the most commonly identified benefit was collecting assessment data (32% of responses). Neither faculty nor students perceived an impact on exam performance due to computer-based testing. An analysis of exam grades confirmed there was no consistent performance difference between the paper and computer-based formats. Both faculty and students rapidly adapted to using computer-based testing. There was no evidence that switching to computer-based testing had any impact on student exam performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. MONTHLY VARIATION IN SPERM MOTILITY IN COMMON CARP ASSESSED USING COMPUTER-ASSISTED SPERM ANALYSIS (CASA)

    EPA Science Inventory

    Sperm motility variables from the milt of the common carp Cyprinus carpio were assessed using a computer-assisted sperm analysis (CASA) system across several months (March-August 1992) known to encompass the natural spawning period. Two-year-old pond-raised males obtained each mo...

  20. Understanding of the Cyber Security and the Development of CAPTCHA

    NASA Astrophysics Data System (ADS)

    Yang, Yu

    2018-04-01

    CAPTCHA is the abbreviation of "Completely Automated Public Turing Test to Tell Computers and Humans Apart", a program algorithm for distinguishing between computers and humans. It is able to generate and evaluate tests that are easy for humans to pass yet not possible for computers to pass. Common CAPTCHAs generally contain symbols, text, pictures, and even videos, and are mainly used for human-computer verification. With the popularization of the Internet and its related applications, many malicious attacks against websites, systems and servers have gradually appeared. Therefore, research on CAPTCHA is especially important. This article briefly summarizes and introduces the existing CAPTCHA technology, and summarizes the common problems of network attacks and information security. After listing the common types of CAPTCHA, it finally proposes feasible suggestions for the development of CAPTCHA.

  1. Computer Viruses. Technology Update.

    ERIC Educational Resources Information Center

    Ponder, Tim, Comp.; Ropog, Marty, Comp.; Keating, Joseph, Comp.

    This document provides general information on computer viruses, how to help protect a computer network from them, and measures to take if a computer becomes infected. Highlights include the origins of computer viruses; virus contraction; a description of some common virus types (File Virus, Boot Sector/Partition Table Viruses, Trojan Horses, and…

  2. Debugging embedded computer programs. [tactical missile computers

    NASA Technical Reports Server (NTRS)

    Kemp, G. H.

    1980-01-01

    Every embedded computer program must complete its debugging cycle using some system that will allow real time debugging. Many of the common items addressed during debugging are listed. Seven approaches to debugging are analyzed to evaluate how well they treat those items. Cost evaluations are also included in the comparison. The results indicate that the best collection of capabilities to cover the common items present in the debugging task occurs in the approach where a minicomputer handles the environment simulation with an emulation of some kind representing the embedded computer. This approach can be taken at a reasonable cost. The case study chosen is an embedded computer in a tactical missile. Several choices of computer for the environment simulation are discussed as well as different approaches to the embedded emulator.

  3. Translational Bounds for Factorial n and the Factorial Polynomial

    ERIC Educational Resources Information Center

    Mahmood, Munir; Edwards, Phillip

    2009-01-01

    During the period 1729-1826 Bernoulli, Euler, Goldbach and Legendre developed expressions for defining and evaluating "n"! and the related gamma function. Expressions related to "n"! and the gamma function are a common feature in computer science and engineering applications. In the modern computer age people live in now, two common tests to…

  4. Information Commons to Go

    ERIC Educational Resources Information Center

    Bayer, Marc Dewey

    2008-01-01

    Since 2004, Buffalo State College's E. H. Butler Library has used the Information Commons (IC) model to assist its 8,500 students with library research and computer applications. Campus Technology Services (CTS) plays a very active role in its IC, with a centrally located Computer Help Desk and a newly created Application Support Desk right in the…

  5. A Computer-Based Instrument That Identifies Common Science Misconceptions

    ERIC Educational Resources Information Center

    Larrabee, Timothy G.; Stein, Mary; Barman, Charles

    2006-01-01

    This article describes the rationale for and development of a computer-based instrument that helps identify commonly held science misconceptions. The instrument, known as the Science Beliefs Test, is a 47-item instrument that targets topics in chemistry, physics, biology, earth science, and astronomy. The use of an online data collection system…

  6. An Examination of Sampling Characteristics of Some Analytic Factor Transformation Techniques.

    ERIC Educational Resources Information Center

    Skakun, Ernest N.; Hakstian, A. Ralph

    Two population raw data matrices were constructed by computer simulation techniques. Each consisted of 10,000 subjects and 12 variables, and each was constructed according to an underlying factorial model consisting of four major common factors, eight minor common factors, and 12 unique factors. The computer simulation techniques were employed to…

  7. Student Achievement in Computer Programming: Lecture vs Computer-Aided Instruction

    ERIC Educational Resources Information Center

    Tsai, San-Yun W.; Pohl, Norval F.

    1978-01-01

    This paper discusses a study of the differences in student learning achievement, as measured by four different types of common performance evaluation techniques, in a college-level computer programming course under three teaching/learning environments: lecture, computer-aided instruction, and lecture supplemented with computer-aided instruction.…

  8. Motivating Contributions for Home Computer Security

    ERIC Educational Resources Information Center

    Wash, Richard L.

    2009-01-01

    Recently, malicious computer users have been compromising computers en masse and combining them to form coordinated botnets. The rise of botnets has brought the problem of home computers to the forefront of security. Home computer users commonly have insecure systems; these users do not have the knowledge, experience, and skills necessary to…

  9. Computed tomographic angiography in stroke imaging: fundamental principles, pathologic findings, and common pitfalls.

    PubMed

    Gupta, Rajiv; Jones, Stephen E; Mooyaart, Eline A Q; Pomerantz, Stuart R

    2006-06-01

    The development of multidetector row computed tomography (MDCT) now permits visualization of the entire vascular tree that is relevant for the management of stroke within 15 seconds. Advances in MDCT have brought computed tomography angiography (CTA) to the frontline in evaluation of stroke. CTA is a rapid and noninvasive modality for evaluating the neurovasculature. This article describes the role of CTA in the management of stroke. Fundamentals of contrast delivery, common pathologic findings, artifacts, and pitfalls in CTA interpretation are discussed.

  10. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  11. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  12. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  13. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  14. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  15. Support System Effects on the NASA Common Research Model

    NASA Technical Reports Server (NTRS)

    Rivers, S. Melissa B.; Hunter, Craig A.

    2012-01-01

    An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and NASA Ames 11-Foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations was collected, a large difference in moment values was seen between the experimental and the computational data from the 4th Drag Prediction Workshop. This difference led to the present work. In this study, a computational assessment has been undertaken to investigate model support system interference effects on the Common Research Model. The configurations computed during this investigation were the wing/body/tail=0deg without the support system and the wing/body/tail=0deg with the support system. The results from this investigation confirm that the addition of the support system to the computational cases does shift the pitching moment in the direction of the experimental results.

  16. Body CT (CAT Scan)

    MedlinePlus

    Computed tomography (CT) of the body uses ... What is CT Scanning of the Body? Computed tomography, more commonly ...

  17. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    NASA Astrophysics Data System (ADS)

    Dodig, H.

    2017-11-01

    This contribution presents the boundary integral formulation for numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for a near-to-far-field transformation (NTFFT), which is a common step in RCS computations. By the end of the paper it is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method has demonstrated accuracy even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.
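
    For reference, the far-field quantity such a formulation ultimately evaluates is the standard bistatic radar cross section; the definition below is the textbook formula, not notation taken from this record:

      \sigma(\theta,\phi) \;=\; \lim_{r \to \infty} 4\pi r^{2}\,
      \frac{\lvert \mathbf{E}^{\mathrm{s}}(\theta,\phi) \rvert^{2}}{\lvert \mathbf{E}^{\mathrm{i}} \rvert^{2}},

    where E^s is the scattered far field in the observation direction and E^i is the incident field illuminating the target.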

  18. Computational intelligence in bioinformatics: SNP/haplotype data in genetic association study for common diseases.

    PubMed

    Kelemen, Arpad; Vasilakos, Athanasios V; Liang, Yulan

    2009-09-01

    Comprehensive evaluation of common genetic variations through association of single-nucleotide polymorphism (SNP) structure with common complex disease in the genome-wide scale is currently a hot area in human genome research due to the recent development of the Human Genome Project and HapMap Project. Computational science, which includes computational intelligence (CI), has recently become the third method of scientific enquiry besides theory and experimentation. There have been fast growing interests in developing and applying CI in disease mapping using SNP and haplotype data. Some of the recent studies have demonstrated the promise and importance of CI for common complex diseases in genomic association study using SNP/haplotype data, especially for tackling challenges, such as gene-gene and gene-environment interactions, and the notorious "curse of dimensionality" problem. This review provides coverage of recent developments of CI approaches for complex diseases in genetic association study with SNP/haplotype data.

  19. Ten quick tips for machine learning in computational biology.

    PubMed

    Chicco, Davide

    2017-01-01

Machine learning has become a pivotal tool for many projects in computational biology, bioinformatics, and health informatics. Nevertheless, beginners and biomedical researchers often do not have enough experience to run a data mining project effectively, and therefore can follow incorrect practices that may lead to common mistakes or over-optimistic results. With this review, we present ten quick tips to take advantage of machine learning in any computational biology context, by avoiding some common errors that we observed hundreds of times in multiple bioinformatics projects. We believe our ten suggestions can strongly help any machine learning practitioner to carry out a successful project in computational biology and related sciences.

  20. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

... RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica ...

  1. Cultural Commonalities and Differences in Spatial Problem-Solving: A Computational Analysis

    ERIC Educational Resources Information Center

    Lovett, Andrew; Forbus, Kenneth

    2011-01-01

    A fundamental question in human cognition is how people reason about space. We use a computational model to explore cross-cultural commonalities and differences in spatial cognition. Our model is based upon two hypotheses: (1) the structure-mapping model of analogy can explain the visual comparisons used in spatial reasoning; and (2) qualitative,…

  2. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency with which computer executes routines in each language. Available for any computer equipped with validated Ada compiler and/or Common Lisp system.

  3. Computer Languages: A Practical Guide to the Chief Programming Languages.

    ERIC Educational Resources Information Center

    Sanderson, Peter C.

    All the most commonly-used high-level computer languages are discussed in this book. An introductory discussion provides an overview of the basic components of a digital computer, the general planning of a computer programing problem, and the various types of computer languages. Each chapter is self-contained, emphasizes those features of a…

  4. Distributed processor allocation for launching applications in a massively connected processors complex

    DOEpatents

    Pedretti, Kevin

    2008-11-18

    A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.

  5. A Case for Data Commons

    PubMed Central

    Grossman, Robert L.; Heath, Allison; Murphy, Mark; Patterson, Maria; Wells, Walt

    2017-01-01

    Data commons collocate data, storage, and computing infrastructure with core services and commonly used tools and applications for managing, analyzing, and sharing data to create an interoperable resource for the research community. An architecture for data commons is described, as well as some lessons learned from operating several large-scale data commons. PMID:29033693

  6. Common Accounting System for Monitoring the ATLAS Distributed Computing Resources

    NASA Astrophysics Data System (ADS)

    Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration

    2014-06-01

This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  7. 5 CFR Appendix A to Subpart H of... - Information on Computing Certain Common Deductions From Back Pay Awards

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Deductions From Back Pay Awards A Appendix A to Subpart H of Part 550 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Back Pay Pt. 550, Subpt. H, App. A Appendix A to Subpart H of Part 550—Information on Computing Certain Common Deductions From Back...

  8. Teaching Reductive Thinking

    ERIC Educational Resources Information Center

    Armoni, Michal; Gal-Ezer, Judith

    2005-01-01

    When dealing with a complex problem, solving it by reduction to simpler problems, or problems for which the solution is already known, is a common method in mathematics and other scientific disciplines, as in computer science and, specifically, in the field of computability. However, when teaching computational models (as part of computability)…

  9. Extreme hydronephrosis due to ureteropelvic junction obstruction in an infant (case report).

    PubMed

    Krzemień, Grażyna; Szmigielska, Agnieszka; Bombiński, Przemysław; Barczuk, Marzena; Biejat, Agnieszka; Warchoł, Stanisław; Dudek-Warchoł, Teresa

    2016-01-01

Hydronephrosis is one of the most common congenital abnormalities of the urinary tract. The left kidney is more commonly affected than the right, and the condition is more common in males. The aim was to determine the role of ultrasonography, renal dynamic scintigraphy and lower-dose computed tomography urography in the preoperative diagnostic workup of an infant with extreme hydronephrosis. We present a boy with antenatally diagnosed hydronephrosis. On serial postnatal ultrasonography, renal scintigraphy and computed tomography urography we observed slightly declining function in the dilated kidney and increasing pelvic dilatation. Pyeloplasty was performed at the age of four months with a good result. Results of ultrasonography and renal dynamic scintigraphy in a child with extreme hydronephrosis can be difficult to assess; therefore, a lower-dose computed tomography urography should be performed before the surgical procedure.

  10. The structure of common-envelope remnants

    NASA Astrophysics Data System (ADS)

    Hall, Philip D.

    2015-05-01

We investigate the structure and evolution of the remnants of common-envelope evolution in binary star systems. In a common-envelope phase, two stars become engulfed in a gaseous envelope and, under the influence of drag forces, spiral to smaller separations. They may merge to form a single star or the envelope may be ejected to leave the stars in a shorter-period orbit. This process explains the short orbital periods of many observed binary systems, such as cataclysmic variables and low-mass X-ray binary systems. Despite the importance of these systems, and of common-envelope evolution to their formation, the process remains poorly understood. Specifically, we are unable to confidently predict the outcome of a common-envelope phase from the properties at its onset. After presenting a review of work on stellar evolution, binary systems, common-envelope evolution and the computer programs used, we describe the results of three computational projects on common-envelope evolution. Our work specifically relates to the methods and prescriptions which are used for predicting the outcome. We use the Cambridge stellar-evolution code STARS to produce detailed models of the structure and evolution of remnants of common-envelope evolution. We compare different assumptions about the uncertain end-of-common-envelope structure and envelope mass of remnants which successfully eject their common envelopes. In the first project, we use detailed remnant models to investigate whether planetary nebulae are predicted after common-envelope phases initiated by low-mass red giants. We focus on the requirement that a remnant evolves rapidly enough to photoionize the nebula and compare the predictions for different ideas about the structure at the end of a common-envelope phase. We find that planetary nebulae are possible for some prescriptions for the end-of-common-envelope structure. In our second contribution, we compute a large set of single-star models and fit new formulae to the core radii of evolved stars. These formulae can be used to better compute the outcome of common-envelope evolution with rapid evolution codes. We find that the new formulae are necessary for accurate predictions of the properties of post-common-envelope systems. Finally, we use detailed remnant models of massive stars to investigate whether hydrogen may be retained after a common-envelope phase to the point of core collapse and so be observable in supernovae. We find that this is possible and thus common-envelope evolution may contribute to the formation of Type IIb supernovae.

  11. Hepatic Kaposi Sarcoma Revisited: An Important but Less Commonly Seen Neoplasm in Patients With Acquired Immunodeficiency Syndrome.

    PubMed

    Chen, Frank; Gulati, Mittul; Tchelepi, Hisham

    2017-03-01

    Hepatic Kaposi sarcoma (KS) is the most commonly seen hepatic neoplasm in patients with acquired immunodeficiency syndrome (AIDS), found in 34% of patients in an autopsy series. However, the incidence of hepatic KS has significantly declined since the advent of highly active antiretroviral therapy and is not as commonly seen on imaging. We present a case of hepatic KS in a patient with AIDS, which was initially mistaken for hepatic abscesses on computed tomography. We discuss the computed tomography, grayscale ultrasound, and contrast-enhanced ultrasound appearance of hepatic KS and how to distinguish this hepatic neoplasm from other common hepatic lesions seen in patients with AIDS.

  12. Advanced Transport Operating System (ATOPS) utility library software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  13. Medical Knowledge Bases.

    ERIC Educational Resources Information Center

    Miller, Randolph A.; Giuse, Nunzia B.

    1991-01-01

    Few commonly available, successful computer-based tools exist in medical informatics. Faculty expertise can be included in computer-based medical information systems. Computers allow dynamic recombination of knowledge to answer questions unanswerable with print textbooks. Such systems can also create stronger ties between academic and clinical…

  14. The Materials Commons: A Collaboration Platform and Information Repository for the Global Materials Community

    NASA Astrophysics Data System (ADS)

    Puchala, Brian; Tarcea, Glenn; Marquis, Emmanuelle. A.; Hedstrom, Margaret; Jagadish, H. V.; Allison, John E.

    2016-08-01

    Accelerating the pace of materials discovery and development requires new approaches and means of collaborating and sharing information. To address this need, we are developing the Materials Commons, a collaboration platform and information repository for use by the structural materials community. The Materials Commons has been designed to be a continuous, seamless part of the scientific workflow process. Researchers upload the results of experiments and computations as they are performed, automatically where possible, along with the provenance information describing the experimental and computational processes. The Materials Commons website provides an easy-to-use interface for uploading and downloading data and data provenance, as well as for searching and sharing data. This paper provides an overview of the Materials Commons. Concepts are also outlined for integrating the Materials Commons with the broader Materials Information Infrastructure that is evolving to support the Materials Genome Initiative.

  15. Hardware architecture design of image restoration based on time-frequency domain computation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm generality, hardware realizability and high efficiency.
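
    The record above names the two-dimensional FFT/IFFT and complex arithmetic as the costly steps. As a purely illustrative software sketch (not the paper's hardware design), the following NumPy snippet shows a frequency-domain restoration of that kind, a Wiener-style deconvolution; the function name and parameter values are our own assumptions.

```python
# Illustrative sketch only: Wiener-style restoration in the frequency domain,
# showing the 2-D FFT/IFFT and complex arithmetic that dominate such pipelines.
import numpy as np

def wiener_restore(blurred, psf, noise_to_signal=1e-2):
    """Restore an image given its point-spread function (PSF)."""
    # Embed the PSF in an image-sized array and centre it at the origin so
    # that multiplication in frequency space equals circular convolution.
    kernel = np.zeros_like(blurred, dtype=float)
    kernel[:psf.shape[0], :psf.shape[1]] = psf
    kernel = np.roll(kernel, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)                               # transfer function (complex)
    G = np.fft.fft2(blurred)                              # observed image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))                   # back to the spatial domain

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    psf = np.ones((5, 5)) / 25.0                          # simple box blur
    # Blur the test image by circular convolution with the same centred kernel.
    kernel = np.zeros_like(image)
    kernel[:5, :5] = psf
    kernel = np.roll(kernel, (-2, -2), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
    print("mean abs error:", float(np.abs(wiener_restore(blurred, psf) - image).mean()))
```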

  16. Safe Computing: An Overview of Viruses.

    ERIC Educational Resources Information Center

    Wodarz, Nan

    2001-01-01

    A computer virus is a program that replicates itself, in conjunction with an additional program that can harm a computer system. Common viruses include boot-sector, macro, companion, overwriting, and multipartite. Viruses can be fast, slow, stealthy, and polymorphic. Anti-virus products are described. (MLH)

  17. Common Sense Conversion: Can You Read Me?

    ERIC Educational Resources Information Center

    Crawford, Walt

    1988-01-01

    Discusses basic approaches and available software for five categories of file conversion: (1) converting between different computers with different operating systems; (2) converting between different computers with the same operating systems; (3) converting between different applications on the same computer; (4) converting between different…

  18. Computing the Moore-Penrose Inverse of a Matrix with a Computer Algebra System

    ERIC Educational Resources Information Center

    Schmidt, Karsten

    2008-01-01

    In this paper "Derive" functions are provided for the computation of the Moore-Penrose inverse of a matrix, as well as for solving systems of linear equations by means of the Moore-Penrose inverse. Making it possible to compute the Moore-Penrose inverse easily with one of the most commonly used Computer Algebra Systems--and to have the blueprint…

  19. The feasibility of using UML to compare the impact of different brands of computer system on the clinical consultation.

    PubMed

    Kumarapeli, Pushpa; de Lusignan, Simon; Koczan, Phil; Jones, Beryl; Sheeler, Ian

    2007-01-01

UK general practice is universally computerised, with computers used in the consulting room at the point of care. Practices use a range of different brands of computer system, which have developed organically to meet the needs of general practitioners and health service managers. Unified Modelling Language (UML) is a standard modelling and specification notation widely used in software engineering. The aim was to examine the feasibility of using UML notation to compare the impact of different brands of general practice computer system on the clinical consultation. Multi-channel video recordings of simulated consultation sessions were made on three different clinical computer systems in common use (EMIS, iSOFT Synergy and IPS Vision). User action recorder software logged keyboard and mouse use, and pattern recognition software captured non-verbal communication. The outputs of these were used to create UML class and sequence diagrams for each consultation. We compared 'definition of the presenting problem' and 'prescribing', as these tasks were present in all the consultations analysed. Class diagrams identified the entities involved in the clinical consultation. Sequence diagrams identified common elements of the consultation (such as prescribing) and enabled comparisons to be made between the different brands of computer system. The clinician-computer interaction varied greatly between the different brands. UML sequence diagrams are useful in identifying common tasks in the clinical consultation, and for contrasting the impact of the different brands of computer system on the clinical consultation. Further research is needed to see if the patterns demonstrated in this pilot study are consistently displayed.

  20. Elementary software for the hand lens identification of some common iranian woods

    Treesearch

    Vahidreza Safdari; Margaret S. Devall

    2009-01-01

    A computer program, “Hyrcania”, has been developed for identifying some common woods (26 hardwoods and 6 softwoods) from the Hyrcanian forest type of Iran. The program has been written in JavaScript and is usable with computers as well as mobile phones. The databases use anatomical characteristics (visible with a hand lens) and wood colour, and can be searched in...

  1. Computer conferencing: Choices and strategies

    NASA Technical Reports Server (NTRS)

    Smith, Jill Y.

    1991-01-01

Computer conferencing permits meeting through the computer while sharing a common file. The primary advantages of computer conferencing are that participants may (1) meet simultaneously or nonsimultaneously, and (2) contribute across geographic distances and time zones. Because of these features, computer conferencing offers a viable meeting option for distributed business teams. Past research and practice are summarized, noting practical uses of computer conferencing as well as types of meeting activities ill-suited to the medium. Additionally, effective team strategies are outlined that maximize the benefits of computer conferencing.

  2. Where the Cloud Meets the Commons

    ERIC Educational Resources Information Center

    Ipri, Tom

    2011-01-01

    Changes presented by cloud computing--shared computing services, applications, and storage available to end users via the Internet--have the potential to seriously alter how libraries provide services, not only remotely, but also within the physical library, specifically concerning challenges facing the typical desktop computing experience.…

  3. Integrating Computational Chemistry into a Course in Classical Thermodynamics

    ERIC Educational Resources Information Center

    Martini, Sheridan R.; Hartzell, Cynthia J.

    2015-01-01

    Computational chemistry is commonly addressed in the quantum mechanics course of undergraduate physical chemistry curricula. Since quantum mechanics traditionally follows the thermodynamics course, there is a lack of curricula relating computational chemistry to thermodynamics. A method integrating molecular modeling software into a semester long…

  4. Computer-Assisted College Administration. Final Report.

    ERIC Educational Resources Information Center

    Punga, V.

    Rensselaer Polytechnic Institute of Connecticut offered a part-time training program "Computer-Assisted-College-Administration" during the academic year 1969-70. Participants were trained in the utilization of computer-assisted methods in dealing with the common tasks of college administration, the problems of college development and…

  5. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

Objective: Develop a software application utilizing high-performance computing techniques, including general-purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general-purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.
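
    As a toy illustration of the data parallelism described above (not taken from the report), the snippet below applies the same per-pixel operation independently across a small stack of invented thermographic frames; NumPy vectorization stands in here for what a GPU kernel would do across many threads.

```python
# Toy illustration of data parallelism (not from the report): the same
# per-element operation applied independently across a stack of hypothetical
# thermographic frames. NumPy vectorization stands in for a GPU kernel.
import numpy as np

rng = np.random.default_rng(1)
frames = rng.random((50, 256, 256), dtype=np.float32)   # invented IR frame stack

# Per-pixel normalization over time: each output value depends only on one
# pixel's own time series, so every (row, col) position could run in parallel.
mean = frames.mean(axis=0)
std = frames.std(axis=0) + 1e-6
normalized = (frames - mean) / std

print(normalized.shape, float(normalized.mean()))
```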

  6. Deska: Tool for Central Administration of a Grid Site

    NASA Astrophysics Data System (ADS)

    Kundrát, Jan; Krejčová, Martina; Hubík, Tomáš; Kerpl, Lukáš

    2011-12-01

    Running a typical Tier-2 site requires mastering quite a few tools for fabric management. Keeping an inventory of installed HW machines, their roles and detailed information, from IP addresses to rack locations, is typically done using various in-house applications ranging from simple spreadsheets to web applications. Such solutions, whose documentation usually leaves much to be desired, typically do not prevent a significant duplication of information, and therefore the data therein quickly become obsolete. After having deployed Cfengine as one of a few sites in the WLCG environment, the Prague Tier-2 site set forth to further automate the fabric management, developing the Deska project. The aim of the system is to provide a central place to perform changes, from adding new machines or moving them between racks to changing their assigned service roles and additional metadata. The database provides an authoritative source of information from which all other systems and services (like DHCP servers, Ethernet switches or the Cfengine system) pull their data, using newly developed configuration adaptors. An easy-to-use command line interface modelled after the Cisco IOS-based switches was developed, enabling the data center administrators to easily change any information in an intuitive way. We provide an overview of the current status of the implementation and describe our design choices aimed at further reducing the system engineers' workload.

  7. Instructional support and implementation structure during elementary teachers' science education simulation use

    NASA Astrophysics Data System (ADS)

    Gonczi, Amanda L.; Chiu, Jennifer L.; Maeng, Jennifer L.; Bell, Randy L.

    2016-07-01

    This investigation sought to identify patterns in elementary science teachers' computer simulation use, particularly implementation structures and instructional supports commonly employed by teachers. Data included video-recorded science lessons of 96 elementary teachers who used computer simulations in one or more science lessons. Results indicated teachers used a one-to-one student-to-computer ratio most often either during class-wide individual computer use or during a rotating station structure. Worksheets, general support, and peer collaboration were the most common forms of instructional support. The least common instructional support forms included lesson pacing, initial play, and a closure discussion. Students' simulation use was supported in the fewest ways during a rotating station structure. Results suggest that simulation professional development with elementary teachers needs to explicitly focus on implementation structures and instructional support to enhance participants' pedagogical knowledge and improve instructional simulation use. In addition, research is needed to provide theoretical explanations for the observed patterns that should subsequently be addressed in supporting teachers' instructional simulation use during professional development or in teacher preparation programs.

  8. Further Investigation of the Support System Effects and Wing Twist on the NASA Common Research Model

    NASA Technical Reports Server (NTRS)

    Rivers, Melissa B.; Hunter, Craig A.; Campbell, Richard L.

    2012-01-01

    An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and NASA Ames 11-foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations was collected, a large difference in moment values was seen between the experiment and computational data from the 4th Drag Prediction Workshop. This difference led to a computational assessment to investigate model support system interference effects on the Common Research Model. The results from this investigation showed that the addition of the support system to the computational cases did increase the pitching moment so that it more closely matched the experimental results, but there was still a large discrepancy in pitching moment. This large discrepancy led to an investigation into the shape of the as-built model, which in turn led to a change in the computational grids and re-running of all the previous support system cases. The results of these cases are the focus of this paper.

  9. Computer-Generated Phase Diagrams for Binary Mixtures.

    ERIC Educational Resources Information Center

    Jolls, Kenneth R.; And Others

    1983-01-01

    Computer programs that generate projections of thermodynamic phase surfaces through computer graphics were used to produce diagrams representing properties of water and steam and the pressure-volume-temperature behavior of most of the common equations of state. The program, program options emphasizing thermodynamic features of interest, and…

  10. Get SUNREL | Buildings | NREL

    Science.gov Websites

... means (a) copies of the computer program commonly known as SUNREL, and all of the contents of files ... accordance with the Documentation. 1.2 "Computer" means an electronic device that accepts computer ... 1.4 "Licensee" means the Individual Licensee. 1.5 "Licensed Single Site" ...

  11. AIR FORCE CYBER MISSION ASSURANCE SOURCES OF MISSION UNCERTAINTY

    DTIC Science & Technology

    2017-04-06

... Army’s Command and General Staff College at Fort Leavenworth, Kansas. Lt Col Herwick holds a bachelor of science degree in Computer Science from the ... United States Air Force Academy and a master’s degree in Computer Resources and Information Management from Webster University. ... vocabulary and while it is common to use conversationally, that usage is not always based on specific definitions. As a result, it finds common usage in ...

  12. Simple, inexpensive computerized rodent activity meters.

    PubMed

    Horton, R M; Karachunski, P I; Kellermann, S A; Conti-Fine, B M

    1995-10-01

    We describe two approaches for using obsolescent computers, either an IBM PC XT or an Apple Macintosh Plus, to accurately quantify spontaneous rodent activity, as revealed by continuous monitoring of the spontaneous usage of running activity wheels. Because such computers can commonly be obtained at little or no expense, and other commonly available materials and inexpensive parts can be used, these meters can be built quite economically. Construction of these meters requires no specialized electronics expertise, and their software requirements are simple. The computer interfaces are potentially of general interest, as they could also be used for monitoring a variety of events in a research setting.

  13. A characterization of workflow management systems for extreme-scale applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia

The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  14. A characterization of workflow management systems for extreme-scale applications

    DOE PAGES

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...

    2017-02-16

The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  15. Preschool Cookbook of Computer Programming Topics

    ERIC Educational Resources Information Center

    Morgado, Leonel; Cruz, Maria; Kahn, Ken

    2010-01-01

    A common problem in computer programming use for education in general, not simply as a technical skill, is that children and teachers find themselves constrained by what is possible through limited expertise in computer programming techniques. This is particularly noticeable at the preliterate level, where constructs tend to be limited to…

  16. Computer Vision Syndrome: Implications for the Occupational Health Nurse.

    PubMed

    Lurati, Ann Regina

    2018-02-01

    Computers and other digital devices are commonly used both in the workplace and during leisure time. Computer vision syndrome (CVS) is a new health-related condition that negatively affects workers. This article reviews the pathology of and interventions for CVS with implications for the occupational health nurse.

  17. Collaboration, Collusion and Plagiarism in Computer Science Coursework

    ERIC Educational Resources Information Center

    Fraser, Robert

    2014-01-01

    We present an overview of the nature of academic dishonesty with respect to computer science coursework. We discuss the efficacy of various policies for collaboration with regard to student education, and we consider a number of strategies for mitigating dishonest behaviour on computer science coursework by addressing some common causes. Computer…

  18. Unaffiliated Users' Access to Academic Libraries: A Survey.

    ERIC Educational Resources Information Center

    Courtney, Nancy

    2003-01-01

    Most of 814 academic libraries surveyed allow onsite access to unaffiliated users, and many give borrowing privileges to certain categories of users. Use of library computers to access library resources and other computer applications is commonly allowed although authentication on library computers is increasing. Five tables show statistics.…

  19. Browsing Software of the Visible Korean Data Used for Teaching Sectional Anatomy

    ERIC Educational Resources Information Center

    Shin, Dong Sun; Chung, Min Suk; Park, Hyo Seok; Park, Jin Seo; Hwang, Sung Bae

    2011-01-01

    The interpretation of computed tomographs (CTs) and magnetic resonance images (MRIs) to diagnose clinical conditions requires basic knowledge of sectional anatomy. Sectional anatomy has traditionally been taught using sectioned cadavers, atlases, and/or computer software. The computer software commonly used for this subject is practical and…

  20. If Minicomputers Are the Answer, What Was the Question?

    ERIC Educational Resources Information Center

    GRI Computer Corp., Newton, MA.

    The availability of low-cost minicomputers in the last few years has opened up many new control and special purpose applications for computers. However, using general purpose computers for these specialized applications often leads to inefficiencies in programing and operation. GRI Computer Corporation has developed a common-sense approach called…

  1. Common Sense Planning for a Computer, or, What's It Worth to You?

    ERIC Educational Resources Information Center

    Crawford, Walt

    1984-01-01

    Suggests factors to be considered in planning for the purchase of a microcomputer, including budgets, benefits, costs, and decisions. Major uses of a personal computer are described--word processing, financial analysis, file and database management, programming and computer literacy, education, entertainment, and thrill of high technology. (EJS)

  2. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account...

  3. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2013-10-01 2013-10-01 false Computers and data processing equipment (account...

  4. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2011-10-01 2011-10-01 false Computers and data processing equipment (account...

  5. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2014-10-01 2014-10-01 false Computers and data processing equipment (account...

  6. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account...

  7. Computer hardware for radiologists: Part 2

    PubMed Central

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future. PMID:21423895

  8. Computer hardware for radiologists: Part 2.

    PubMed

    Indrajit, Ik; Alam, A

    2010-11-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.
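
    The capacity arithmetic mentioned in both versions of this article can be made concrete with a small worked example; the geometry numbers below are made up purely for illustration.

```python
# Worked example of the capacity arithmetic described above; the geometry
# values are hypothetical and chosen only to illustrate the multiplication.
sides = 4                     # recording surfaces (platter sides)
tracks_per_side = 16383       # hypothetical tracks on each surface
sectors_per_track = 63        # hypothetical sectors on each track
bytes_per_sector = 512        # classic sector size

capacity_bytes = sides * tracks_per_side * sectors_per_track * bytes_per_sector
print(f"{capacity_bytes / 1e9:.2f} GB")   # decimal gigabytes, as vendors quote capacity
```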

  9. 47 CFR 69.152 - End user common line for price cap local exchange carriers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false End user common line for price cap local...) COMMON CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges for Price Cap Local Exchange Carriers § 69.152 End user common line for price cap local exchange carriers. (a) A charge that is...

  10. CyVerse Data Commons: lessons learned in cyberinfrastructure management and data hosting from the Life Sciences

    NASA Astrophysics Data System (ADS)

    Swetnam, T. L.; Walls, R.; Merchant, N.

    2017-12-01

CyVerse is a US National Science Foundation funded initiative "to design, deploy, and expand a national cyberinfrastructure for life sciences research, and to train scientists in its use," supporting and enabling cross-disciplinary collaborations across institutions. CyVerse's free, open-source cyberinfrastructure is being adopted in biogeoscience and space sciences research. CyVerse's data-science-agnostic platforms provide shared data storage, high performance computing, and cloud computing that allow analysis of very large data sets (including incomplete or work-in-progress data sets). Part of CyVerse's success has been in addressing the handling of data through its entire lifecycle, from creation to final publication in a digital data repository to reuse in new analyses. CyVerse developers and user communities have learned many lessons that are germane to Earth and Environmental Science. We present an overview of the tools and services available through CyVerse, including: interactive computing with the Discovery Environment (https://de.cyverse.org/), an interactive data science workbench featuring data storage and transfer via the Data Store; cloud computing with Atmosphere (https://atmo.cyverse.org); and access to HPC via the Agave API (https://agaveapi.co/). Each CyVerse service emphasizes access to long-term data storage, including our own Data Commons (http://datacommons.cyverse.org), as well as external repositories. The Data Commons service manages, organizes, preserves, publishes, and allows for discovery and reuse of data. All data published to CyVerse's Curated Data receive a permanent identifier (PID) in the form of a DOI (Digital Object Identifier) or ARK (Archival Resource Key). Data that are more fluid can also be published in the Data Commons through Community Collaborated data. The Data Commons provides landing pages, permanent DOIs or ARKs, and supports data reuse and citation through features such as open data licenses and downloadable citations. The ability to access and compute on data within the CyVerse framework, or with external compute resources when necessary, has proven highly beneficial to our user community, which has continuously grown since the inception of CyVerse nine years ago.

  11. Gender Differences in Computer and Information Literacy: An Exploration of the Performances of Girls and Boys in ICILS 2013

    ERIC Educational Resources Information Center

    Punter, R. Annemiek; Meelissen, Martina R. M.; Glas, Cees A. W.

    2017-01-01

    IEA's International Computer and Information Literacy Study (ICILS) 2013 showed that in the majority of the participating countries, 14-year-old girls outperformed boys in computer and information literacy (CIL): results that seem to contrast with the common view of boys having better computer skills. This study used the ICILS data to explore…

  12. An attempt to obtain a detailed declination chart from the United States magnetic anomaly map

    USGS Publications Warehouse

    Alldredge, L.R.

    1989-01-01

    Modern declination charts of the United States show almost no details. It was hoped that declination details could be derived from the information contained in the existing magnetic anomaly map of the United States. This could be realized only if all of the survey data were corrected to a common epoch, at which time a main-field vector model was known, before the anomaly values were computed. Because this was not done, accurate declination values cannot be determined. In spite of this conclusion, declination values were computed using a common main-field model for the entire United States to see how well they compared with observed values. The computed detailed declination values were found to compare less favourably with observed values of declination than declination values computed from the IGRF 1985 model itself. -from Author

  13. Parallel and serial computing tools for testing single-locus and epistatic SNP effects of quantitative traits in genome-wide association studies

    PubMed Central

    Ma, Li; Runesha, H Birali; Dvorkin, Daniel; Garbe, John R; Da, Yang

    2008-01-01

Background: Genome-wide association studies (GWAS) using single nucleotide polymorphism (SNP) markers provide opportunities to detect epistatic SNPs associated with quantitative traits and to detect the exact mode of an epistasis effect. Computational difficulty is the main bottleneck for epistasis testing in large-scale GWAS. Results: The EPISNPmpi and EPISNP computer programs were developed for testing single-locus and epistatic SNP effects on quantitative traits in GWAS, including tests of three single-locus effects for each SNP (SNP genotypic effect, additive and dominance effects) and five epistasis effects for each pair of SNPs (two-locus interaction, additive × additive, additive × dominance, dominance × additive, and dominance × dominance) based on the extended Kempthorne model. EPISNPmpi is the parallel computing program for epistasis testing in large-scale GWAS and achieved excellent scalability for large-scale analysis and portability across various parallel computing platforms. EPISNP is the serial computing program based on the EPISNPmpi code for epistasis testing in small-scale GWAS using commonly available operating systems and computer hardware. Three serial computing utility programs were developed for graphical viewing of test results and epistasis networks, and for estimating CPU time and disk space requirements. Conclusion: The EPISNPmpi parallel computing program provides an effective computing tool for epistasis testing in large-scale GWAS, and the EPISNP serial computing programs are convenient tools for epistasis analysis in small-scale GWAS using commonly available computer hardware. PMID:18644146
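
    As a minimal sketch of the simplest kind of test listed above, and not the extended Kempthorne model used by EPISNP, an additive-by-additive interaction between two SNPs can be tested with ordinary least squares on simulated genotypes; all effect sizes and sample sizes below are invented for illustration.

```python
# Minimal sketch, not the extended Kempthorne model used by EPISNP: an
# additive-by-additive SNP-SNP interaction tested with ordinary least squares.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 2000
g1 = rng.integers(0, 3, size=n)           # genotypes coded 0/1/2 minor alleles
g2 = rng.integers(0, 3, size=n)
# Simulated quantitative trait with a small additive-by-additive effect.
y = 0.2 * g1 + 0.1 * g2 + 0.15 * g1 * g2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), g1, g2, g1 * g2])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)          # covariance of the OLS estimates
t_int = beta[3] / np.sqrt(cov[3, 3])           # t-statistic for the g1*g2 term
p_int = 2 * stats.t.sf(abs(t_int), dof)        # two-sided p-value

print(f"interaction beta={beta[3]:.3f}  t={t_int:.2f}  p={p_int:.2e}")
```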

  14. SNS programming environment user's guide

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.

    1992-01-01

The computing environment is briefly described for the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex of NASA Langley. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software common to all of these computers is described, including the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described are file management, validation, SNS configuration, documentation, and customer services.

  15. A Series of Molecular Dynamics and Homology Modeling Computer Labs for an Undergraduate Molecular Modeling Course

    ERIC Educational Resources Information Center

    Elmore, Donald E.; Guayasamin, Ryann C.; Kieffer, Madeleine E.

    2010-01-01

    As computational modeling plays an increasingly central role in biochemical research, it is important to provide students with exposure to common modeling methods in their undergraduate curriculum. This article describes a series of computer labs designed to introduce undergraduate students to energy minimization, molecular dynamics simulations,…
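
    As a hypothetical flavour of the energy-minimization exercise such a lab might include (not taken from the course materials), the snippet below minimizes a Lennard-Jones pair potential with SciPy; the parameter values are rough, argon-like placeholders.

```python
# Hypothetical exercise (not from the course): minimize a Lennard-Jones pair
# potential to find the equilibrium separation of a dimer.
from scipy.optimize import minimize_scalar

EPS, SIGMA = 0.238, 3.4   # rough argon-like parameters (kcal/mol, angstroms)

def lennard_jones(r):
    return 4 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)

result = minimize_scalar(lennard_jones, bounds=(2.5, 8.0), method="bounded")
print(f"r_min = {result.x:.3f} A  (analytic value: {2 ** (1 / 6) * SIGMA:.3f} A)")
```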

  16. Establishing the Content Validity of a Basic Computer Literacy Course.

    ERIC Educational Resources Information Center

    Clements, James; Carifio, James

    1995-01-01

    Content analysis of 13 textbooks and 2 Department of Education documents was conducted to ascertain common word processing, database, and spreadsheet software skills in order to determine which specific skills should be taught in a high school computer literacy course. Aspects of a basic computer course, created from this analysis, are described.…

  17. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs, and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
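
    A software stand-in for one of the window-based operations such a module library targets (not the FPGA implementation itself) is a 3x3 mean filter computed per pixel, sketched below in NumPy.

```python
# Software stand-in (not the FPGA implementation) for a window-based operation:
# a 3x3 mean filter accumulated from shifted views of the padded image.
import numpy as np

def mean_filter_3x3(image):
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            # Each shifted view contributes one window position for every pixel.
            out += padded[1 + dr:1 + dr + image.shape[0],
                          1 + dc:1 + dc + image.shape[1]]
    return out / 9.0

if __name__ == "__main__":
    img = np.arange(25, dtype=float).reshape(5, 5)
    print(mean_filter_3x3(img))
```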

  18. Cardiology office computer use: primer, pointers, pitfalls.

    PubMed

    Shepard, R B; Blum, R I

    1986-10-01

    An office computer is a utility, like an automobile, with benefits and costs that are both direct and hidden and potential for disaster. For the cardiologist or cardiovascular surgeon, the increasing power and decreasing costs of computer hardware and the availability of software make use of an office computer system an increasingly attractive possibility. Management of office business functions is common; handling and scientific analysis of practice medical information are less common. The cardiologist can also access national medical information systems for literature searches and for interactive further education. Selection and testing of programs and the entire computer system before purchase of computer hardware will reduce the chances of disappointment or serious problems. Personnel pretraining and planning for office information flow and medical information security are necessary. Some cardiologists design their own office systems, buy hardware and software as needed, write programs for themselves and carry out the implementation themselves. For most cardiologists, the better course will be to take advantage of the professional experience of expert advisors. This article provides a starting point from which the practicing cardiologist can approach considering, specifying or implementing an office computer system for business functions and for scientific analysis of practice results.

  19. Effects on Training Using Illumination in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Novak, M. S. Jennifer; Mueller, Kristian

    1999-01-01

Camera-based tasks are commonly performed during orbital operations, and orbital lighting conditions, such as high-contrast shadowing and glare, are a factor in performance. Computer-based training using virtual environments is a common tool used to make and keep CTW members proficient. If computer-based training included some of these harsh lighting conditions, would the crew increase their proficiency? The project goal was to determine whether computer-based training increases proficiency when one trains for a camera-based task using computer-generated virtual environments with enhanced lighting conditions, such as shadows and glare, rather than the color-shaded computer images normally used in simulators. Previous experiments were conducted using a two-degree-of-freedom docking system. Test subjects had to align a boresight camera using a hand controller spanning the system's two degrees of freedom. Two sets of subjects were trained on two computer simulations using computer-generated virtual environments, one with simulated lighting and one without. Results revealed that when subjects were constrained by time and accuracy, those who trained with simulated lighting conditions performed significantly better than those who did not. To reinforce these results for speed and accuracy, the task complexity was increased.

  20. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
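
    One concrete instance of the inflated mixture family discussed above is the zero-inflated Poisson model; the EM sketch below (not the authors' implementation) alternates responsibilities for structural zeros with closed-form updates of the mixing weight and Poisson mean.

```python
# Minimal EM sketch (not the authors' code) for a zero-inflated Poisson model:
# with probability pi an observation is a structural zero, otherwise Poisson(lam).
import numpy as np

def fit_zip_em(y, n_iter=200, tol=1e-8):
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-6)           # crude starting values
    for _ in range(n_iter):
        # E-step: responsibility that each observed zero is a structural zero.
        p_zero = pi + (1 - pi) * np.exp(-lam)
        resp = np.where(y == 0, pi / p_zero, 0.0)
        # M-step: closed-form updates of the mixing weight and the Poisson mean.
        pi_new = resp.mean()
        lam_new = y.sum() / (len(y) - resp.sum())
        if abs(pi_new - pi) < tol and abs(lam_new - lam) < tol:
            pi, lam = pi_new, lam_new
            break
        pi, lam = pi_new, lam_new
    return pi, lam

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n, true_pi, true_lam = 5000, 0.3, 2.5
    structural = rng.random(n) < true_pi
    y = np.where(structural, 0, rng.poisson(true_lam, size=n))
    print(fit_zip_em(y))          # estimates should land near (0.3, 2.5)
```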

  1. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhaoyuan Liu; Kord Smith; Benoit Forget

    2016-05-01

    A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied "out-scatter" transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite hydrogen medium. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
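
    For orientation, the snippet below evaluates the commonly applied "out-scatter" transport correction that the paper examines (transport cross section = total cross section minus the average scattering cosine times the scattering cross section) and the resulting diffusion coefficient; the one-group cross-section values are illustrative assumptions, not results from the study.

        # Illustrative one-group hydrogen-like cross sections (assumed values, in 1/cm).
        sigma_t = 1.10   # total cross section
        sigma_s = 1.05   # scattering cross section
        mu_bar = 0.66    # average scattering cosine, roughly 2/(3A) for light nuclides

        # Standard "out-scatter" transport correction and diffusion coefficient.
        sigma_tr = sigma_t - mu_bar * sigma_s
        diff_coeff = 1.0 / (3.0 * sigma_tr)

        print(f"transport cross section = {sigma_tr:.4f} 1/cm")
        print(f"diffusion coefficient   = {diff_coeff:.4f} cm")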

  2. 47 CFR 69.154 - Per-minute carrier common line charge.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Per-minute carrier common line charge. 69.154 Section 69.154 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges for Price Cap Local Exchange Carriers § 69.154 Per-minute...

  3. The Tragedy of the Commons

    ERIC Educational Resources Information Center

    Short, Daniel

    2016-01-01

    The tragedy of the commons is one of the principal tenets of ecology. Recent developments in experiential computer-based simulation of the tragedy of the commons are described. A virtual learning environment is developed using the popular video game "Minecraft". The virtual learning environment is used to experience first-hand depletion…

  4. CERN's Common Unix and X Terminal Environment

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    The Desktop Infrastructure Group of CERN's Computing and Networks Division has developed a Common Unix and X Terminal Environment to ease the migration to Unix-based interactive computing. The CUTE architecture relies on a distributed filesystem (currently Transarc's AFS) to enable essentially interchangeable client workstations to access both "home directory" and program files transparently. Additionally, we provide a suite of programs to configure workstations for CUTE and to ensure continued compatibility. This paper describes the different components and the development of the CUTE architecture.

  5. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
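
    A minimal sketch of the Monte Carlo reliability idea, assuming a 2-out-of-3 system with Weibull-distributed component lifetimes; the parameters and system structure are illustrative, and neither HARP's fault/error-handling models nor the variance reduction techniques of MC-HARP are included.

        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed Weibull lifetime parameters (shape, scale in hours) and mission time.
        shape, scale, mission = 1.5, 10_000.0, 1_000.0
        n_trials, n_components = 100_000, 3

        # Sample component failure times: scale * Weibull(shape).
        failure_times = scale * rng.weibull(shape, size=(n_trials, n_components))

        # The system survives the mission if at least 2 of its 3 components survive.
        survivors = (failure_times > mission).sum(axis=1)
        reliability = (survivors >= 2).mean()

        print(f"estimated mission reliability = {reliability:.4f}")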

  6. Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective

    PubMed Central

    Mattout, Jérémie

    2012-01-01

    A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment. PMID:22675291

  7. Taming Crowded Visual Scenes

    DTIC Science & Technology

    2014-08-12

    Nolan Warner, Mubarak Shah. Tracking in Dense Crowds Using Prominence and Neighborhood Motion Concurrence, IEEE Transactions on Pattern Analysis... of computer vision, computer graphics and evacuation dynamics by providing a common platform, and provides... areas that include Computer Vision, Computer Graphics, and Pedestrian Evacuation Dynamics. Despite the

  8. CAD/CAE Integration Enhanced by New CAD Services Standard

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.

    2002-01-01

    A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.

  9. Electrode channel selection based on backtracking search optimization in motor imagery brain-computer interfaces.

    PubMed

    Dai, Shengfa; Wei, Qingguo

    2017-01-01

    The common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, use of a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the whole channel set to save computational time and improve the classification accuracy. In this paper, a novel method named the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels compared to standard common spatial pattern with all channels.
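
    To make the objective function concrete, the sketch below encodes a channel subset as a binary vector and minimizes a combination of error rate and relative channel count with a simple mutate-and-keep-improvements loop; this stands in for the backtracking search optimization algorithm and for real CSP-based error estimation, and the channel count, weights and synthetic error function are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        n_channels = 22   # assumed electrode count
        lam = 0.2         # assumed weight on the channel-count penalty

        def error_rate(mask):
            # Stand-in for CSP + classifier cross-validation error on the selected
            # channels; synthetic, so the sketch runs without real EEG data.
            informative = np.zeros(n_channels)
            informative[[3, 7, 11, 15]] = 1.0
            hit = (mask * informative).sum() / informative.sum()
            return 0.5 * (1.0 - hit) + 0.02 * mask.sum() / n_channels

        def fitness(mask):
            # Objective from the abstract: error rate combined with the relative
            # number of selected channels (lower is better).
            if mask.sum() == 0:
                return 1.0
            return error_rate(mask) + lam * mask.sum() / n_channels

        # Simplified binary search loop (not the full backtracking search
        # optimization algorithm): flip one bit at a time and keep improvements.
        best = (rng.random(n_channels) < 0.5).astype(int)
        for _ in range(2000):
            cand = best.copy()
            cand[rng.integers(n_channels)] ^= 1
            if fitness(cand) < fitness(best):
                best = cand

        print("selected channels:", np.flatnonzero(best))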

  10. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  11. Basic instincts

    NASA Astrophysics Data System (ADS)

    Hutson, Matthew

    2018-05-01

    In their adaptability, young children demonstrate common sense, a kind of intelligence that, so far, computer scientists have struggled to reproduce. Gary Marcus, a developmental cognitive scientist at New York University in New York City, believes the field of artificial intelligence (AI) would do well to learn lessons from young thinkers. Researchers in machine learning argue that computers trained on mountains of data can learn just about anything—including common sense—with few, if any, programmed rules. But Marcus says computer scientists are ignoring decades of work in the cognitive sciences and developmental psychology showing that humans have innate abilities—programmed instincts that appear at birth or in early childhood—that help us think abstractly and flexibly. He believes AI researchers ought to include such instincts in their programs. Yet many computer scientists, riding high on the successes of machine learning, are eagerly exploring the limits of what a naïve AI can do. Computer scientists appreciate simplicity and have an aversion to debugging complex code. Furthermore, big companies such as Facebook and Google are pushing AI in this direction. These companies are most interested in narrowly defined, near-term problems, such as web search and facial recognition, in which blank-slate AI systems can be trained on vast data sets and work remarkably well. But in the longer term, computer scientists expect AIs to take on much tougher tasks that require flexibility and common sense. They want to create chatbots that explain the news, autonomous taxis that can handle chaotic city traffic, and robots that nurse the elderly. Some computer scientists are already trying. Such efforts, researchers hope, will result in AIs that sit somewhere between pure machine learning and pure instinct. They will boot up following some embedded rules, but will also learn as they go.

  12. Systematic Applications of Metabolomics in Metabolic Engineering

    PubMed Central

    Dromms, Robert A.; Styczynski, Mark P.

    2012-01-01

    The goals of metabolic engineering are well-served by the biological information provided by metabolomics: information on how the cell is currently using its biochemical resources is perhaps one of the best ways to inform strategies to engineer a cell to produce a target compound. Using the analysis of extracellular or intracellular levels of the target compound (or a few closely related molecules) to drive metabolic engineering is quite common. However, there is surprisingly little systematic use of metabolomics datasets, which simultaneously measure hundreds of metabolites rather than just a few, for that same purpose. Here, we review the most common systematic approaches to integrating metabolite data with metabolic engineering, with emphasis on existing efforts to use whole-metabolome datasets. We then review some of the most common approaches for computational modeling of cell-wide metabolism, including constraint-based models, and discuss current computational approaches that explicitly use metabolomics data. We conclude with discussion of the broader potential of computational approaches that systematically use metabolomics data to drive metabolic engineering. PMID:24957776
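
    To make the constraint-based modelling idea concrete, here is a hedged sketch of a tiny flux balance analysis problem solved as a linear program with SciPy; the three-reaction network, bounds and objective are invented for illustration and are not taken from the review.

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometric matrix S (rows: metabolites A and B; columns: reactions
        # uptake->A, A->B, B->biomass). Steady state requires S v = 0.
        S = np.array([[ 1, -1,  0],
                      [ 0,  1, -1]])
        bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds for each reaction

        # Maximize flux through the last ("biomass") reaction; linprog minimizes,
        # so the objective coefficient is negated.
        c = np.array([0.0, 0.0, -1.0])
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)

        print("optimal fluxes:", res.x)   # expected: [10, 10, 10]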

  13. UFO (UnFold Operator) computer program abstract

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kissel, L.; Biggs, F.

    UFO (UnFold Operator) is an interactive user-oriented computer program designed to solve a wide range of problems commonly encountered in physical measurements. This document provides a summary of the capabilities of version 3A of UFO.

  14. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, a growing effort among scientists and engineers to utilize the GPU in a more general purpose fashion has been delivering supercomputer-level results at individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to achieve throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.
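
    To illustrate the data parallelism described, the sketch below computes per-pixel composites over a stack of frames with NumPy; the array sizes are placeholders, and the commented-out CuPy import only indicates how the same array code could be moved to a GPU. It is not the authors' application.

        import numpy as np
        # import cupy as np   # assumption: with CuPy installed, the same code runs on a GPU

        # A synthetic stack of "thermographic" frames (random placeholder values).
        frames = np.random.rand(64, 240, 320).astype(np.float32)   # (frame, rows, cols)

        # Data-parallel composites: every pixel is processed independently of all
        # other pixels, so these reductions map naturally onto GPU threads.
        composite_max = frames.max(axis=0)
        composite_mean = frames.mean(axis=0)

        print(composite_max.shape, composite_mean.shape)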

  15. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  16. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…

  17. An eLearning Standard Approach for Supporting PBL in Computer Engineering

    ERIC Educational Resources Information Center

    Garcia-Robles, R.; Diaz-del-Rio, F.; Vicente-Diaz, S.; Linares-Barranco, A.

    2009-01-01

    Problem-based learning (PBL) has proved to be a highly successful pedagogical model in many fields, although it is not that common in computer engineering. PBL goes beyond the typical teaching methodology by promoting student interaction. This paper presents a PBL trial applied to a course in a computer engineering degree at the University of…

  18. Using Free Computational Resources to Illustrate the Drug Design Process in an Undergraduate Medicinal Chemistry Course

    ERIC Educational Resources Information Center

    Rodrigues, Ricardo P.; Andrade, Saulo F.; Mantoani, Susimaire P.; Eifler-Lima, Vera L.; Silva, Vinicius B.; Kawano, Daniel F.

    2015-01-01

    Advances in, and dissemination of, computer technologies in the field of drug research now enable the use of molecular modeling tools to teach important concepts of drug design to chemistry and pharmacy students. A series of computer laboratories is described to introduce undergraduate students to commonly adopted "in silico" drug design…

  19. Supporting Undergraduate Computer Architecture Students Using a Visual MIPS64 CPU Simulator

    ERIC Educational Resources Information Center

    Patti, D.; Spadaccini, A.; Palesi, M.; Fazzino, F.; Catania, V.

    2012-01-01

    The topics of computer architecture are always taught using an Assembly dialect as an example. The most commonly used textbooks in this field use the MIPS64 Instruction Set Architecture (ISA) to help students in learning the fundamentals of computer architecture because of its orthogonality and its suitability for real-world applications. This…

  20. The Advantages and Disadvantages of Five Common Computer Assisted Instruction Modes.

    ERIC Educational Resources Information Center

    Davidson, Robert L.; Traylor, Karen

    1987-01-01

    This article reviews five modes of computer-assisted software so that teachers will be more aware of them and use computers more in their classrooms. The five modes are the following: (1) drill and practice; (2) tutorial; (3) simulation; (4) demonstration; and (5) instructional games. Teachers should review softwares and choose those that meet…

  1. Evaluation of Two Different Teaching Concepts in Dentistry Using Computer Technology

    ERIC Educational Resources Information Center

    Reich, Sven; Simon, James F.; Ruedinger, Dirk; Shortall, Adrian; Wichmann, Manfred; Frankenberger, Roland

    2007-01-01

    The common teaching goal of two different phantom head courses was to enable the students to provide an all-ceramic restoration by means of computer technology. The aim of this study was to compare these two courses with regard to the different educational methods using identical computer software. Undergraduate dental students from a single…

  2. 2D Mesh Manipulation

    DTIC Science & Technology

    2011-11-01

    the Poisson form of the equations can also be generated by manipulating the computational space, so forcing functions become superfluous. The... Unstructured methods for region discretization have become common in computational fluid dynamics (CFD) analysis because of certain benefits... application of Winslow elliptic smoothing equations to unstructured meshes. It has been shown that it is not necessary for the computational space of

  3. The Social Organization of the Computer Underground

    DTIC Science & Technology

    1989-08-01

    colleagues, with only small groups approaching peer relationships. ... between individuals involved in a common activity (pp. 13-14). Assessing the degree and manner in which the... criminalized over the past several years. Hackers, and the "danger" that they present in our computer dependent

  4. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    ERIC Educational Resources Information Center

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  5. Dwindling Numbers of Female Computer Students: What Are We Missing?

    ERIC Educational Resources Information Center

    Saulsberry, Donna

    2012-01-01

    There is common agreement among researchers that women are under-represented in both 2-year and 4-year collegiate computer study programs. This leads to women being under-represented in the computer industry which may be limiting the progress of technology developments that will benefit mankind. It may also be depriving women of the opportunity to…

  6. Pharmacist Computer Skills and Needs Assessment Survey

    PubMed Central

    Jewesson, Peter J

    2004-01-01

    Background To use technology effectively for the advancement of patient care, pharmacists must possess a variety of computer skills. We recently introduced a novel applied informatics program in this Canadian hospital clinical service unit to enhance the informatics skills of our members. Objective This study was conducted to gain a better understanding of the baseline computer skills and needs of our hospital pharmacists immediately prior to the implementation of an applied informatics program. Methods In May 2001, an 84-question written survey was distributed by mail to 106 practicing hospital pharmacists in our multi-site, 1500-bed, acute-adult-tertiary care Canadian teaching hospital in Vancouver, British Columbia. Results Fifty-eight surveys (55% of total) were returned within the two-week study period. The survey responses reflected the opinions of licensed BSc and PharmD hospital pharmacists with a broad range of pharmacy practice experience. Most respondents had home access to personal computers, and regularly used computers in the work environment for drug distribution, information management, and communication purposes. Few respondents reported experience with handheld computers. Software use experience varied according to application. Although patient-care information software and e-mail were commonly used, experience with spreadsheet, statistical, and presentation software was negligible. The respondents were familiar with Internet search engines, and these were reported to be the most common method of seeking clinical information online. Although many respondents rated themselves as being generally computer literate and not particularly anxious about using computers, the majority believed they required more training to reach their desired level of computer literacy. Lack of familiarity with computer-related terms was prevalent. Self-reported basic computer skill was typically at a moderate level, and varied depending on the task. Specifically, respondents rated their ability to manipulate files, use software help features, and install software as low, but rated their ability to access and navigate the Internet as high. Respondents were generally aware of what online resources were available to them and Clinical Pharmacology was the most commonly employed reference. In terms of anticipated needs, most pharmacists believed they needed to upgrade their computer skills. Medical database and Internet searching skills were identified as those in greatest need of improvement. Conclusions Most pharmacists believed they needed to upgrade their computer skills. Medical database and Internet searching skills were identified as those in greatest need of improvement for the purposes of improving practice effectiveness. PMID:15111277

  7. Common Sense Initiative’s Recommendation on Cathode Ray Tube (CRT) Glass-to-Glass

    EPA Pesticide Factsheets

    From 1994 through 1998, EPA’s Common Sense Initiative (CSI) Computers and Electronics Subcommittee (CES) formed a workgroup to examine regulatory barriers to pollution prevention and electronic waste recycling.

  8. CALL and Less Commonly Taught Languages--Still a Way to Go

    ERIC Educational Resources Information Center

    Ward, Monica

    2016-01-01

    Many Computer Assisted Language Learning (CALL) innovations mainly apply to the Most Commonly Taught Languages (MCTLs), especially English. Recent manifestations of CALL for MCTLs such as corpora, Mobile Assisted Language Learning (MALL) and Massively Open Online Courses (MOOCs) are found less frequently in the world of Less Commonly Taught…

  9. Developing a Science Commons for Geosciences

    NASA Astrophysics Data System (ADS)

    Lenhardt, W. C.; Lander, H.

    2016-12-01

    Many scientific communities, recognizing the research possibilities inherent in data sets, have created domain specific archives such as the Incorporated Research Institutions for Seismology (iris.edu) and ClinicalTrials.gov. Though this is an important step forward, most scientists, including geoscientists, also use a variety of software tools and at least some amount of computation to conduct their research. While the archives make it simpler for scientists to locate the required data, provisioning disk space, compute resources, and network bandwidth can still require significant efforts. This challenge exists despite the wealth of resources available to researchers, namely lab IT resources, institutional IT resources, national compute resources (XSEDE, OSG), private clouds, public clouds, and the development of cyberinfrastructure technologies meant to facilitate use of those resources. Further tasks include obtaining and installing required tools for analysis and visualization. If the research effort is a collaboration or involves certain types of data, then the partners may well have additional non-scientific tasks such as securing the data and developing secure sharing methods for the data. These requirements motivate our investigations into the "Science Commons". This paper will present a working definition of a science commons, compare and contrast examples of existing science commons, and describe a project based at RENCI to implement a science commons for risk analytics. We will then explore what a similar tool might look like for the geosciences.

  10. High-performance conjugate-gradient benchmark: A new metric for ranking high-performance computing systems

    DOE PAGES

    Dongarra, Jack; Heroux, Michael A.; Luszczek, Piotr

    2015-08-17

    Here, we describe a new high-performance conjugate-gradient (HPCG) benchmark. HPCG is composed of computations and data-access patterns commonly found in scientific applications. HPCG strives for a better correlation to existing codes from the computational science domain and to be representative of their performance. Furthermore, HPCG is meant to help drive the computer system design and implementation in directions that will better impact future performance improvement.
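
    For context, a minimal unpreconditioned conjugate-gradient loop on a sparse 1-D Poisson matrix is sketched below; HPCG itself uses a 3-D problem with multigrid-style symmetric Gauss-Seidel preconditioning, so this is only an illustration of the computational pattern, with assumed problem size and tolerance.

        import numpy as np
        from scipy.sparse import diags

        # Sparse SPD test system: the 1-D Poisson matrix, a simple stand-in for
        # the kind of sparse systems the HPCG benchmark exercises.
        n = 1000
        A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Plain (unpreconditioned) conjugate gradient iteration.
        x = np.zeros(n)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for i in range(10 * n):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-6:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new

        print(f"stopped after {i + 1} iterations, residual norm = {np.sqrt(rs_new):.2e}")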

  11. Automated drug identification system

    NASA Technical Reports Server (NTRS)

    Campen, C. F., Jr.

    1974-01-01

    System speeds up analysis of blood and urine and is capable of identifying 100 commonly abused drugs. System includes computer that controls entire analytical process by ordering various steps in specific sequences. Computer processes data output and has readout of identified drugs.

  12. Computer program for discounted cash flow/rate of return evaluations

    NASA Technical Reports Server (NTRS)

    Robson, W. D.

    1971-01-01

    Technique, incorporated into set of three computer programs, provides economic methodology for reducing all parameters to financially sound common denominator of present worth, and calculates resultant rate of return on new equipment, processes, or systems investments.

  13. Prediction of common epitopes on hemagglutinin of the influenza A virus (H1 subtype).

    PubMed

    Guo, Chunyan; Xie, Xin; Li, Huijin; Zhao, Penghua; Zhao, Xiangrong; Sun, Jingying; Wang, Haifang; Liu, Yang; Li, Yan; Hu, Qiaoxia; Hu, Jun; Li, Yuan

    2015-02-01

    Influenza A virus infection is a persistent threat to public health worldwide due to hemagglutinin (HA) variation. Current vaccines against influenza A virus provide immunity to viral isolates similar to vaccine strains. Antibodies against common epitopes provide immunity to diverse influenza virus strains and protect against future pandemic influenza. Therefore, it is vital to analyze common HA antigenic epitopes of influenza virus. In this study, 14 strains of monoclonal antibodies with high sensitivity to common epitopes of influenza virus antigens identified in our previous study were selected as the tool to predict common HA epitopes. The common HA antigenic epitopes were divided into four categories by ELISA blocking experiments, and separately, into three categories according to the preliminary results of computer simulation. Comparison between the results of computer simulations and ELISA blocking experiments indicated that at least two classes of common epitopes are present in influenza virus HA. This study provides experimental data for improving the prediction of HA epitopes of influenza virus (H1 subtype) and the development of a potential universal vaccine as well as a novel approach for the prediction of epitopes on other pathogenic microorganisms. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Neck Pain

    MedlinePlus

    Neck pain Overview Neck pain is a common complaint. Neck muscles can be strained from poor posture — whether it's leaning over your computer or ... workbench. Osteoarthritis also is a common cause of neck pain. Rarely, neck pain can be a symptom of ...

  15. The Army/Air Force RAMCAD (Reliability and Maintainability Computer-Aided Design) Program Progress Report Through September 1989

    DTIC Science & Technology

    1990-01-01

    deal with platforms and their weapons. Two approaches emerged from this effort. The first plan was to address the benefits and the problems involved in... demonstrated the benefits to be gained in time and training cost of a common method for operating different computer programs. It is this common mode of

  16. The Development of Educational and/or Training Computer Games for Students with Disabilities

    ERIC Educational Resources Information Center

    Kwon, Jungmin

    2012-01-01

    Computer and video games have much in common with the strategies used in special education. Free resources for game development are becoming more widely available, so lay computer users, such as teachers and other practitioners, now have the capacity to develop games using a low budget and a little self-teaching. This article provides a guideline…

  17. L'Utilisation de l'ordinateur en lexicometrie (The Use of the Computer in Lexicometry). Series B-1.

    ERIC Educational Resources Information Center

    Savard, Jean-Guy

    This report treats some of the technical difficulties encountered in lexicological studies that were undertaken in order to establish a basic vocabulary. Its purpose is to show that the computer can overcome some of these difficulties, and specifically that computer programming can serve to establish a vocabulary common to scientific and technical…

  18. High School Students Learning University Level Computer Science on the Web: A Case Study of the "DASK"-Model

    ERIC Educational Resources Information Center

    Grandell, Linda

    2005-01-01

    Computer science is becoming increasingly important in our society. Meta skills, such as problem solving and logical and algorithmic thinking, are emphasized in every field, not only in the natural sciences. Still, largely due to gaps in tuition, common misunderstandings exist about the true nature of computer science. These are especially…

  19. Model-Based Knowing: How Do Students Ground Their Understanding About Climate Systems in Agent-Based Computer Models?

    NASA Astrophysics Data System (ADS)

    Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.

    2017-12-01

    This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.

  20. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The symptom characteristic of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
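
    A hedged back-of-the-envelope check of the balance the authors discuss, in the spirit of a roofline estimate; the peak throughput, memory bandwidth and kernel figures are assumptions chosen purely for illustration.

        # Assumed machine parameters.
        peak_gflops = 200.0        # peak compute throughput, GFLOP/s
        mem_bandwidth_gbs = 25.0   # sustained memory bandwidth, GB/s

        # Assumed kernel: a 3x3 convolution on 4-byte pixels, roughly 18 flops
        # per ~8 bytes moved.
        arithmetic_intensity = 18.0 / 8.0   # FLOPs per byte

        attainable = min(peak_gflops, mem_bandwidth_gbs * arithmetic_intensity)
        bound = "memory" if attainable < peak_gflops else "compute"
        print(f"attainable throughput ~ {attainable:.1f} GFLOP/s ({bound} bound)")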

  1. Extending DIRAC File Management with Erasure-Coding for efficient storage.

    NASA Astrophysics Data System (ADS)

    Cadellin Skipsey, Samuel; Todev, Paulin; Britton, David; Crooks, David; Roy, Gareth

    2015-12-01

    The state of the art in Grid style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP[1], extending the DIRAC File Catalogue and file management interface to allow the placement of erasure-coded files: each file is distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can still be reconstructed. The tools developed are transparent to the user and, as well as allowing uploading and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. We expect this approach to be of most interest to smaller VOs, who have tighter bounds on the storage available to them, but larger (WLCG) VOs may be interested as their total data increases during Run 2. We provide an analysis of the costs and benefits of the approach, along with future development and implementation plans in this area. At present, the overhead of multiple file transfers is the largest obstacle to the competitiveness of this approach.
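
    As a simplified illustration of the chunk-plus-redundancy idea (not the DIRAC implementation, which uses general erasure codes tolerating M lost chunks), the sketch below stripes data into chunks with a single XOR parity chunk and rebuilds one lost chunk; a production setup would use a Reed-Solomon style library instead.

        def encode(data: bytes, n_chunks: int):
            """Split data into n_chunks equal slices plus one XOR parity chunk."""
            size = -(-len(data) // n_chunks)                       # ceiling division
            padded = data.ljust(size * n_chunks, b"\0")
            chunks = [bytearray(padded[i * size:(i + 1) * size]) for i in range(n_chunks)]
            parity = bytearray(size)
            for chunk in chunks:
                for i, byte in enumerate(chunk):
                    parity[i] ^= byte
            return chunks, parity

        def rebuild(chunks, parity, lost_index):
            """XOR of parity with all surviving chunks restores the lost chunk."""
            restored = bytearray(parity)
            for j, chunk in enumerate(chunks):
                if j == lost_index:
                    continue
                for i, byte in enumerate(chunk):
                    restored[i] ^= byte
            return restored

        chunks, parity = encode(b"erasure-coded grid storage example", 4)
        assert rebuild(chunks, parity, 2) == chunks[2]
        print("lost chunk reconstructed from the surviving chunks plus parity")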

  2. Managing virtual machines with Vac and Vcycle

    NASA Astrophysics Data System (ADS)

    McNab, A.; Love, P.; MacMahon, E.

    2015-12-01

    We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines needed to achieve the desired target shares. Both systems allow unused shares of one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.

  3. Storageless and caching Tier-2 models in the UK context

    NASA Astrophysics Data System (ADS)

    Cadellin Skipsey, Samuel; Dewhurst, Alastair; Crooks, David; MacMahon, Ewan; Roy, Gareth; Smith, Oliver; Mohammed, Kashif; Brew, Chris; Britton, David

    2017-10-01

    Operational and other pressures have led to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where "fat" Tier-2s ("T2Ds") and "thin" Tier-2s ("T2Cs") provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, we present the results of testing T2C storage models, where the "thin" nature is expressed by the site having either no local data storage or only a thin caching layer; data is streamed or copied from a "nearby" T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites; but the network topology and capacity in the USA are significantly different from those in the UK (and much of Europe). We present the results of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of "Rucio Cache" integration with ARC's caching layer.

  4. Learning Commons in Academic Libraries: Discussing Themes in the Literature from 2001 to the Present

    ERIC Educational Resources Information Center

    Blummer, Barbara; Kenton, Jeffrey M.

    2017-01-01

    Although the term lacks a standard definition, learning commons represent academic library spaces that provide computer and library resources as well as a range of academic services that support learners and learning. Learning commons have been equated to a laboratory for creating knowledge and staffed with librarians that serve as facilitators of…

  5. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    PubMed

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of computer vision syndrome. Information was collected from Medline, Embase and the National Library of Medicine covering the last 30 years up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with computer vision syndrome present to a variety of different specialists, including general practitioners, neurologists, stroke physicians and ophthalmologists. While the condition is common, awareness of it among the public and health professionals is poor. Recognising this condition in the clinic or in emergency settings, such as the TIA clinic, is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of computer vision syndrome and education of health professionals are vital. Preventive strategies should routinely form part of workplace ergonomics. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  6. Within-grade quality differences for 1 and 2A common lumber affect processsing and yields when gang-ripping red oak lumber

    Treesearch

    Charles J. Gatchell; R. Edward Thomas; R. Edward Thomas

    1997-01-01

    Using a new computer grading program (UGRS), 1 and 2A Common lumber was sorted based on the percentage of board surface measure found in the maximum allowable number of grading cuttings. For 1 Common, 46.0 percent of 921 boards had percentages in the FAS range (83-1/3% and above). For 2A Common, 54.7 percent of 825 boards had percentages in the 1 Common range. When...

  7. How medical students use the computer and Internet at a Turkish military medical school.

    PubMed

    Kir, Tayfun; Ogur, Recai; Kilic, Selim; Tekbas, Omer Faruk; Hasde, Metin

    2004-12-01

    The aim of this study was to determine how medical students use the computer and World Wide Web at a Turkish military medical school and to discuss characteristics related to this computer use. The study was conducted in 2003 in the Department of Public Health at the Gulhane Military Medical School in Ankara, Turkey. A survey developed by the authors was distributed to 508 students after a pretest. Responses were analyzed statistically by computer. Most of the students (86.4%) could access a computer and the Internet, all of the computers used by students had Internet connections, and a small group (8.9%) owned their own computers. One-half of the students used notes provided by attending staff and textbooks as supplementary resources for their studies. The most common use of computers was connecting to the Internet (91.9%), and the most common use of the Internet was e-mail communication (81.6%). The most preferred site category for daily visits was newspaper sites (62.8%). Approximately 44.1% of students visited medical sites when they were surfing. Also, there was a negative correlation between school performance and the time spent on computer and Internet use (-0.056 and -0.034, respectively). It was observed that medical students used the computer and Internet essentially for nonmedical purposes. To encourage students to use the computer and Internet for medical purposes, tutors should use the computer and Internet during their teaching activities, and software companies should produce assistant applications for medical students. Also, medical schools should build interactive World Wide Web sites, e-mail groups, discussion boards, and study areas for medical students.

  8. Computer Simulation of Human Service Program Evaluations.

    ERIC Educational Resources Information Center

    Trochim, William M. K.; Davis, James E.

    1985-01-01

    Describes uses of computer simulations in the context of human service program evaluation. Presents simple mathematical models for the most commonly used human service outcome evaluation designs (pretest-posttest randomized experiment, pretest-posttest nonequivalent groups design, and regression-discontinuity design). Translates models into single…
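
    As a hedged illustration of the simplest design mentioned, the pretest-posttest randomized experiment, the sketch below simulates such an evaluation and recovers the treatment effect by least squares; all parameter values are assumptions for demonstration and are not taken from the article.

        import numpy as np

        rng = np.random.default_rng(3)

        # Simulated pretest-posttest randomized experiment:
        # posttest = pretest carryover + treatment effect + noise.
        n, true_effect = 200, 5.0
        pretest = rng.normal(50, 10, n)
        treated = rng.integers(0, 2, n)                # random assignment to groups
        posttest = 0.8 * pretest + true_effect * treated + rng.normal(0, 5, n)

        # Estimate the treatment effect by ordinary least squares on [1, pretest, treated].
        X = np.column_stack([np.ones(n), pretest, treated])
        beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)
        print(f"estimated treatment effect = {beta[2]:.2f} (true value = {true_effect})")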

  9. Cyber War: The Next Frontier for NATO

    DTIC Science & Technology

    2015-03-01

    cyber-attacks as a way to advance their agenda. Common examples of cyber-attacks include computer viruses, worms, malware, and distributed denial of...take advantage of security holes and cause damage to computer systems, steal financial data, or acquire sensitive secrets. As technology becomes

  10. 40 CFR 63.1517 - Records

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operating parameter value and corrective action taken. (6) For each continuous monitoring system, records... operator may retain records on microfilm, computer disks, magnetic tape, or microfiche; and (3) The owner or operator may report required information on paper or on a labeled computer disk using commonly...

  11. 10 CFR 2.4 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and safety or the common defense and security; security measures for the physical protection and... computer that contains the participant's name, e-mail address, and participant's digital signature, proves... inspection. It is also the place where NRC makes computer terminals available to access the Publicly...

  12. 10 CFR 2.4 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and safety or the common defense and security; security measures for the physical protection and... computer that contains the participant's name, e-mail address, and participant's digital signature, proves... inspection. It is also the place where NRC makes computer terminals available to access the Publicly...

  13. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    Here, stencils are commonly used to implement efficient on-the-fly computations of linear operators arising from partial differential equations. At the same time, the term "stencil" is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features of stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods, such as inf-sup-stable Stokes discretizations and mixed finite element discretizations.
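
    A minimal sketch of what such a stencil computation looks like in practice, assuming a 5-point Laplacian applied as a Jacobi sweep on a regular 2-D grid; the toy problem below is for illustration and is not drawn from hypre or DUNE.

        import numpy as np

        def jacobi_sweep(u, f, h):
            """One Jacobi sweep of the 5-point stencil for the Poisson equation lap(u) = f."""
            v = u.copy()
            v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:] -
                                    h * h * f[1:-1, 1:-1])
            return v

        n, h = 64, 1.0 / 65
        u = np.zeros((n + 2, n + 2))   # interior grid plus a boundary layer of zeros
        f = np.ones((n + 2, n + 2))    # constant right-hand side
        for _ in range(100):
            u = jacobi_sweep(u, f, h)
        print("interior mean after 100 sweeps:", u[1:-1, 1:-1].mean())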

  14. Multiphysics Simulations: Challenges and Opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, David; McInnes, Lois C.; Woodward, Carol

    2013-02-12

    We consider multiphysics applications from algorithmic and architectural perspectives, where "algorithmic" includes both mathematical analysis and computational complexity, and "architectural" includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.

  15. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE PAGES

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    2017-02-24

    Here, stencils are commonly used to implement efficient on-the-fly computations of linear operators arising from partial differential equations. At the same time, the term "stencil" is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features of stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods, such as inf-sup-stable Stokes discretizations and mixed finite element discretizations.

  16. Dynamic Extension of a Virtualized Cluster by using Cloud Resources

    NASA Astrophysics Data System (ADS)

    Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter

    2012-12-01

    The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.

  17. Open environments to support systems engineering tool integration: A study using the Portable Common Tool Environment (PCTE)

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Jipping, Michael J.; Wild, Chris J.; Zeil, Steven J.; Roberts, Cathy C.

    1993-01-01

    A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented. Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. The project and the features of PCTE used are described, experience with the use of Emeraude environment over the project time frame is summarized, and several related areas for future research are summarized.

  18. No. 1 and No. 2 Common red oak yields: similar part sizes when gang-ripping is used to process boards with crook

    Treesearch

    Charles J. Gatchell; Charles J. Gatchell

    1990-01-01

    Computer simulation was used to gang rip No. 1 and No. 2 Common red oak boards before and after removal of crook. While No. 1 Common produced slightly more total yield, the part yields were very similar. No. 1 Common was superior only in yielding 75-inch-long pieces. Either grade is an excellent choice for the furniture and cabinet industries.

  19. Math and science technology access and use in South Dakota public schools grades three through five

    NASA Astrophysics Data System (ADS)

    Schwietert, Debra L.

    The development of K-12 technology standards, soon to be added to state testing of technology proficiency, and the increasing presence of computers in homes and classrooms reflects the growing importance of technology in current society. This study examined math and science teachers' responses on a survey of technology use in grades three through five in South Dakota. A researcher-developed survey instrument was used to collect data from a random sample of 100 public schools throughout the South Dakota. Forced choice and open-ended responses were recorded. Most teachers have access to computers, but they lack resources to purchase software for their content areas, especially in science areas. Three-fourths of teachers in this study reported multiple computers in their classrooms and 67% reported access to labs in other areas of the school building. These numbers are lower than the national average of 84% of teachers with computers in their classrooms and 95% with access to computers elsewhere in the building (USDOE, 2000). Almost eight out of 10 teachers noted time as a barrier to learning more about educational software. Additional barriers included lack of school funds (38%), access to relevant training (32%), personal funds (30%), and poor quality of training (7%). Teachers most often use math and science software as supplemental, with practice tutorials cited as another common use. The most common interest for software was math for both boys and girls. The second most common choice for boys was science and for girls, language arts. Teachers reported that there was no preference for either individual or group work on computers for girls or boys. Most teachers do not systematically evaluate software for gender preferences, but review software over subjectively.

  20. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  1. A Tutorial for SPSS/PC+ Studentware. Study Guide for the Doctor of Arts in Computer-Based Learning.

    ERIC Educational Resources Information Center

    MacFarland, Thomas W.; Hou, Cheng-I

    The purpose of this tutorial is to provide the basic information needed for success with SPSS/PC+ Studentware, a student version of the statistical analysis software offered by SPSS, Inc., for the IBM PC+ and compatible computers. It is intended as a convenient summary of how to organize and conduct the most common computer-based statistical…

  2. Launch Processing System. [for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Byrne, F.; Doolittle, G. V.; Hockenberger, R. W.

    1976-01-01

    This paper presents a functional description of the Launch Processing System, which provides automatic ground checkout and control of the Space Shuttle launch site and airborne systems, with emphasis placed on the Checkout, Control, and Monitor Subsystem. Hardware and software modular design concepts for the distributed computer system are reviewed relative to performing system tests, launch operations control, and status monitoring during ground operations. The communication network design, which uses a Common Data Buffer interface to all computers to allow computer-to-computer communication, is discussed in detail.

  3. Social and monetary reward learning engage overlapping neural substrates.

    PubMed

    Lin, Alice; Adolphs, Ralph; Rangel, Antonio

    2012-03-01

    Learning to make choices that yield rewarding outcomes requires the computation of three distinct signals: stimulus values that are used to guide choices at the time of decision making, experienced utility signals that are used to evaluate the outcomes of those decisions and prediction errors that are used to update the values assigned to stimuli during reward learning. Here we investigated whether monetary and social rewards involve overlapping neural substrates during these computations. Subjects engaged in two probabilistic reward learning tasks that were identical except that rewards were either social (pictures of smiling or angry people) or monetary (gaining or losing money). We found substantial overlap between the two types of rewards for all components of the learning process: a common area of ventromedial prefrontal cortex (vmPFC) correlated with stimulus value at the time of choice and another common area of vmPFC correlated with reward magnitude and common areas in the striatum correlated with prediction errors. Taken together, the findings support the hypothesis that shared anatomical substrates are involved in the computation of both monetary and social rewards. © The Author (2011). Published by Oxford University Press.
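
    A minimal Rescorla-Wagner-style sketch of the three signals the abstract distinguishes (stimulus value at choice, reward outcome, and prediction error used for updating); the learning rate, exploration rate and reward probabilities are illustrative assumptions, not values from the study.

        import random

        random.seed(0)
        alpha = 0.1                        # learning rate
        value = {"A": 0.0, "B": 0.0}       # stimulus values that guide choices
        p_reward = {"A": 0.8, "B": 0.2}    # probabilistic reward contingencies

        for _ in range(500):
            # Epsilon-greedy choice based on current stimulus values.
            if random.random() < 0.1:
                choice = random.choice(["A", "B"])
            else:
                choice = max(value, key=value.get)
            reward = 1.0 if random.random() < p_reward[choice] else 0.0
            prediction_error = reward - value[choice]    # outcome minus expectation
            value[choice] += alpha * prediction_error    # reward learning update

        # Values of the options that are sampled approach their reward probabilities.
        print(value)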

  4. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an on-going study that initiates the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can be used as an alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a set of specific indication classifications to prioritize applicability. The results of the extraction process reveal 17 common components for consideration. Later, based on scientific justifications, 16 of them were selected as the proposed components for the model.

  5. Comparison of magnetic resonance imaging and computed tomography in suspected lesions in the posterior cranial fossa.

    PubMed Central

    Teasdale, G. M.; Hadley, D. M.; Lawrence, A.; Bone, I.; Burton, H.; Grant, R.; Condon, B.; Macpherson, P.; Rowan, J.

    1989-01-01

    OBJECTIVE--To compare computed tomography and magnetic resonance imaging in investigating patients suspected of having a lesion in the posterior cranial fossa. DESIGN--Randomised allocation of newly referred patients to undergo either computed tomography or magnetic resonance imaging; the alternative investigation was performed subsequently only in response to a request from the referring doctor. SETTING--A regional neuroscience centre serving 2.7 million. PATIENTS--1020 Patients recruited between April 1986 and December 1987, all suspected by neurologists, neurosurgeons, or other specialists of having a lesion in the posterior fossa and referred for neuroradiology. The groups allocated to undergo computed tomography or magnetic resonance imaging were well matched in distributions of age, sex, specialty of referring doctor, investigation as an inpatient or an outpatient, suspected site of lesion, and presumed disease process; the referring doctor's confidence in the initial clinical diagnosis was also similar. INTERVENTIONS--After the patients had been imaged by either computed tomography or magnetic resonance (using a resistive magnet of 0.15 T) doctors were given the radiologist's report and a form asking if they considered that imaging with the alternative technique was necessary and, if so, why; it also asked for their current diagnoses and their confidence in them. MAIN OUTCOME MEASURES--Number of requests for the alternative method of investigation. Assessment of characteristics of patients for whom further imaging was requested and lesions that were suspected initially and how the results of the second imaging affected clinicians' and radiologists' opinions. RESULTS--Ninety three of the 501 patients who initially underwent computed tomography were referred subsequently for magnetic resonance imaging whereas only 28 of the 493 patients who initially underwent magnetic resonance imaging were referred subsequently for computed tomography. Over the study the number of patients referred for magnetic resonance imaging after computed tomography increased but requests for computed tomography after magnetic resonance imaging decreased. The reason that clinicians gave most commonly for requesting further imaging by magnetic resonance was that the results of the initial computed tomography failed to exclude their suspected diagnosis (64 patients). This was less common in patients investigated initially by magnetic resonance imaging (eight patients). Management of 28 patients (6%) imaged initially with computed tomography and 12 patients (2%) imaged initially with magnetic resonance was changed on the basis of the results of the alternative imaging. CONCLUSIONS--Magnetic resonance imaging provided doctors with the information required to manage patients suspected of having a lesion in the posterior fossa more commonly than computed tomography, but computed tomography alone was satisfactory in 80% of cases... PMID:2506965

  6. Common families across test series—how many do we need?

    Treesearch

    G.R. Johnson

    2004-01-01

    In order to compare families that are planted on different sites, many forest tree breeding programs include common families in their different series of trials. Computer simulation was used to examine how many common families were needed in each series of progeny trials in order to reliably compare families across series. Average gain and its associated variation...

  7. West Virginia yellow-poplar lumber defect database

    Treesearch

    Lawrence E. Osborn; Charles J. Gatchell; Curt C. Hassler; Curt C. Hassler

    1992-01-01

    Describes the data collection methods and the format of the new West Virginia yellow-poplar lumber defect database that was developed for use with computer simulation programs. The database contains descriptions of 627 boards, totaling approximately 3,800 board feet, collected in West Virginia in grades FAS, FAS1F, No. 1 Common, No. 2A Common, and No. 2B Common. The...

  8. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  9. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  10. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  11. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  12. 47 CFR 69.307 - General support facilities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...

  13. Computational Embryology and Predictive Toxicology of Hypospadias (SOT)

    EPA Science Inventory

    Hypospadias, one of the most common birth defects in human male infants, is a condition in which the urethral opening is misplaced along the ventral aspect of the penis. We developed an Adverse Outcome Pathway (AOP) framework and computer simulation that describes the pathogenesis of...

  14. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods needed to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.
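
    As an illustration of the data parallelism described above, where each output element depends only on its own input element, here is a small Python/NumPy sketch; the frame sizes and the normalization step are assumptions, and NumPy vectorization merely stands in for a GPU kernel.

        # Hypothetical element-wise, data-parallel operation over a stack of frames.
        import numpy as np

        def normalize_frames(frames):
            """Scale each thermographic frame to [0, 1]; every pixel is independent."""
            lo = frames.min(axis=(1, 2), keepdims=True)
            hi = frames.max(axis=(1, 2), keepdims=True)
            return (frames - lo) / (hi - lo + 1e-12)

        frames = np.random.rand(100, 256, 256).astype(np.float32)   # synthetic data set
        result = normalize_frames(frames)    # same computation applied to every element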

  15. Production of Referring Expressions for an Unknown Audience: A Computational Model of Communal Common Ground

    PubMed Central

    Kutlak, Roman; van Deemter, Kees; Mellish, Chris

    2016-01-01

    This article presents a computational model of the production of referring expressions under uncertainty over the hearer's knowledge. Although situations where the hearer's knowledge is uncertain have seldom been addressed in the computational literature, they are common in ordinary communication, for example when a writer addresses an unknown audience, or when a speaker addresses a stranger. We propose a computational model composed of three complementary heuristics based on, respectively, an estimation of the recipient's knowledge, an estimation of the extent to which a property is unexpected, and the question of what is the optimum number of properties in a given situation. The model was tested in an experiment with human readers, in which it was compared against the Incremental Algorithm and human-produced descriptions. The results suggest that the new model outperforms the Incremental Algorithm in terms of the proportion of correctly identified entities and in terms of the perceived quality of the generated descriptions. PMID:27630592

  16. Cloud Computing for Geosciences--GeoCloud for standardized geospatial service platforms (Invited)

    NASA Astrophysics Data System (ADS)

    Nebert, D. D.; Huang, Q.; Yang, C.

    2013-12-01

    Geoscience in the 21st century faces the challenges of Big Data, spikes in computing requirements (e.g., when a natural disaster happens), and sharing resources through cyberinfrastructure across different organizations (Yang et al., 2011). With the flexibility and cost-efficiency of computing resources a primary concern, cloud computing emerges as a promising solution to provide core capabilities to address these challenges. Many governmental and federal agencies are adopting cloud technologies to cut costs and to make federal IT operations more efficient (Huang et al., 2010). However, it is still difficult for geoscientists to take advantage of the benefits of cloud computing to facilitate scientific research and discovery. This presentation uses GeoCloud to illustrate the process and strategies used in building a common platform for geoscience communities to enable the sharing and integration of geospatial data, information, and knowledge across different domains. GeoCloud is an annual incubator project coordinated by the Federal Geographic Data Committee (FGDC) in collaboration with the U.S. General Services Administration (GSA) and the Department of Health and Human Services. It is designed as a staging environment to test and document the deployment of a common GeoCloud community platform that can be implemented by multiple agencies. With these standardized virtual geospatial servers, a variety of government geospatial applications can be quickly migrated to the cloud. In order to achieve this objective, multiple projects are nominated each year by federal agencies as existing public-facing geospatial data services. From the initial candidate projects, a set of common operating system and software requirements was identified as the baseline for platform-as-a-service (PaaS) packages. Based on these common platform packages, each project deploys and monitors its web application, develops best practices, and documents cost and performance information. This paper presents the background, architectural design, and activities of GeoCloud in support of the Geospatial Platform Initiative. System security strategies and approval processes for migrating federal geospatial data, information, and applications into the cloud, and cost estimation for cloud operations, are covered. Finally, some lessons learned from the GeoCloud project are discussed for geoscientists to consider in the adoption of cloud computing.

  17. Prevalence of spinal disorders and their relationships with age and gender

    PubMed Central

    Alshami, Ali M.

    2015-01-01

    Objectives: To establish the period prevalence of spinal disorders referred to physical therapy in a university hospital over a 3-year period, and to determine the relationships of common spinal disorders with patients' age and gender. Methods: This retrospective study was conducted in the Physical Therapy Department, King Fahd Hospital of the University, Dammam, Saudi Arabia. Computer data of all new electronic referrals from January 2011 to December 2013 were retrieved and reviewed. The computer data included demographic information, referring facility, and diagnosis/disorder. Results: One thousand six hundred and sixty-nine (28.1%) of all referred patients (5929) had spinal disorders. The most common disorders affected the lumbar spine (53.1%) and cervical spine (27.1%), and pain was the most common disorder. Neck pain (60.5%) was more common in patients <30 years old (p<0.001). Cervical spondylosis was common (~30%) in the >30 age groups. Spondylosis and low back pain were more prevalent in women (7.8% and 76.2%) than in men (3.3% and 73.9%). Conclusion: Spinal disorders were common compared with other disorders. Low back pain and neck pain were the most common spinal disorders. Age and gender were weakly related to some of the disorders that affected the lumbar and cervical spine. PMID:25987116

  18. [Musculoskeletal disorders among university student computer users].

    PubMed

    Lorusso, A; Bruno, S; L'Abbate, N

    2009-01-01

    Musculoskeletal disorders are a common problem among computer users. Many epidemiological studies have shown that ergonomic factors and aspects of work organization play an important role in the development of these disorders. We carried out a cross-sectional survey to estimate the prevalence of musculoskeletal symptoms among university students using personal computers and to investigate the features of occupational exposure and the prevalence of symptoms throughout the study course. Another objective was to assess the students' level of knowledge of computer ergonomics and the relevant health risks. A questionnaire was distributed to 183 students attending the lectures for the second- and fourth-year courses of the Faculty of Architecture. Data concerning personal characteristics, ergonomic and organizational aspects of computer use, and the presence of musculoskeletal symptoms in the neck and upper limbs were collected. Exposure to risk factors such as daily duration of computer use, time spent at the computer without breaks, duration of mouse use, and poor workstation ergonomics was significantly higher among students of the fourth-year course. Neck pain was the most commonly reported symptom (69%), followed by hand/wrist (53%), shoulder (49%), and arm (8%) pain. The prevalence of symptoms in the neck and hand/wrist area was significantly higher in the students of the fourth-year course. In our survey we found a high prevalence of musculoskeletal symptoms among university students using computers for long periods on a daily basis. Exposure to computer-related ergonomic and organizational risk factors and the prevalence of musculoskeletal symptoms both seem to increase significantly throughout the study course. Furthermore, we found that the level of perception of computer-related health risks among the students was low. Our findings suggest the need for preventive intervention consisting of education in computer ergonomics.

  19. Common Postmortem Computed Tomography Findings Following Atraumatic Death: Differentiation between Normal Postmortem Changes and Pathologic Lesions

    PubMed Central

    Gonoi, Wataru; Okuma, Hidemi; Shirota, Go; Shintani, Yukako; Abe, Hiroyuki; Takazawa, Yutaka; Fukayama, Masashi; Ohtomo, Kuni

    2015-01-01

    Computed tomography (CT) is widely used in postmortem investigations as an adjunct to the traditional autopsy in forensic medicine. To date, several studies have described postmortem CT findings caused by normal postmortem changes. However, on interpretation, postmortem CT findings that initially appear to be due to normal postmortem changes may not be mere postmortem artifacts. In this pictorial essay, we describe the common postmortem CT findings in cases of atraumatic in-hospital death and describe the diagnostic pitfalls of normal postmortem changes that can mimic real pathologic lesions. PMID:26175579

  20. Ceftriaxone-associated pancreatitis captured on serial computed tomography scans.

    PubMed

    Nakagawa, Nozomu; Ochi, Nobuaki; Yamane, Hiromichi; Honda, Yoshihiro; Nagasaki, Yasunari; Urata, Noriyo; Nakanishi, Hidekazu; Kawamoto, Hirofumi; Takigawa, Nagio

    2018-02-01

    A 74-year-old man was treated with ceftriaxone for 5 days and subsequently experienced epigastric pain. Computed tomography (CT) was performed 7 and 3 days before epigastralgia. Although the first CT image revealed no radiographic signs in his biliary system, the second CT image revealed dense radiopaque material in the gallbladder lumen. The third CT image, taken at symptom onset, showed high density in the common bile duct and enlargement of the pancreatic head. This is a very rare case of pseudolithiasis involving the common bile duct, as captured on a series of CT images.

  1. Computer Aided Design of Computer Generated Holograms for electron beam fabrication

    NASA Technical Reports Server (NTRS)

    Urquhart, Kristopher S.; Lee, Sing H.; Guest, Clark C.; Feldman, Michael R.; Farhoosh, Hamid

    1989-01-01

    Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process.

  2. Trapping of a micro-bubble by non-paraxial Gaussian beam: computation using the FDTD method.

    PubMed

    Sung, Seung-Yong; Lee, Yong-Gu

    2008-03-03

    Optical forces on a micro-bubble were computed using the Finite Difference Time Domain (FDTD) method. A non-paraxial Gaussian beam equation was used to represent the incident laser with high numerical aperture, common in optical tweezers. The electromagnetic field distribution around the micro-bubble was computed using the FDTD method, and the electromagnetic stress tensor on the surface of the micro-bubble was used to compute the optical forces. Analysis of the computational results revealed interesting relations between the radius of the circular trapping ring and the corresponding stability of the trap.
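
    For orientation, the core of any FDTD computation is a leapfrog update of staggered electric and magnetic fields. The Python sketch below is a one-dimensional, normalized-units toy version assumed only for illustration; the paper's three-dimensional grid, non-paraxial Gaussian source, and stress-tensor force integration are omitted.

        # Minimal 1-D FDTD leapfrog loop (illustrative; normalized units, Courant number 1).
        import numpy as np

        nz, nt = 400, 800
        ez = np.zeros(nz)                  # electric field samples
        hy = np.zeros(nz - 1)              # magnetic field, staggered half a cell
        for t in range(nt):
            hy += np.diff(ez)                               # update H from the curl of E
            ez[1:-1] += np.diff(hy)                         # update E from the curl of H
            ez[nz // 4] += np.exp(-((t - 60) / 20.0) ** 2)  # soft Gaussian pulse source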

  3. The prevalence of computer and Internet addiction among pupils.

    PubMed

    Zboralski, Krzysztof; Orzechowska, Agata; Talarowska, Monika; Darmosz, Anna; Janiak, Aneta; Janiak, Marcin; Florkowski, Antoni; Gałecki, Piotr

    2009-02-02

    Media have an influence on the human psyche similar to the addictive actions of psychoactive substances or gambling. Computer overuse is claimed to be a cause of psychiatric disturbances such as computer and Internet addiction. It has not yet been recognized as a disease, but it evokes increasing controversy and results in mental disorders commonly defined as computer and Internet addiction. This study was based on a diagnostic survey in which 120 subjects participated. The participants were pupils of three kinds of schools: primary, middle, and secondary school (high school). Information for this study was obtained from a questionnaire prepared by the authors as well as the State-Trait Anxiety Inventory (STAI) and the Psychological Inventory of Aggression Syndrome (IPSA-II). The results confirmed that every fourth pupil was addicted to the Internet. Internet addiction was very common among the youngest users of computers and the Internet, especially those who had no brothers and sisters or came from families with some kind of problems. Moreover, more frequent use of the computer and the Internet was connected with higher levels of aggression and anxiety. Because computer and Internet addiction already constitute a real danger, it is worth considering preventive activities to treat this phenomenon. It is also necessary to make the youth and their parents aware of the dangers of uncontrolled Internet use and pay attention to behavior connected with Internet addiction.

  4. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  5. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  6. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background: Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, few gene order computing methods are available, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), and their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods: Using different distance formulas (Pearson distance, Euclidean distance, and the squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results: Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with either the GA or ACO methods for AD microarray data. Conclusion: Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by the GA and ACO methods. PMID:23369541
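
    The three distance formulas compared in the study are simple to state; the Python sketch below, using made-up expression profiles, is included only to make the comparison concrete and is not the study's code.

        # The three distances compared above, applied to two toy expression profiles.
        import numpy as np

        def euclidean(x, y):
            return float(np.sqrt(np.sum((x - y) ** 2)))

        def squared_euclidean(x, y):
            return float(np.sum((x - y) ** 2))

        def pearson_distance(x, y):
            return 1.0 - float(np.corrcoef(x, y)[0, 1])

        x = np.array([2.1, 0.5, 3.3, 1.0])
        y = np.array([1.9, 0.7, 2.8, 1.4])
        print(euclidean(x, y), squared_euclidean(x, y), pearson_distance(x, y))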

  7. Computational pan-genomics: status, promises and challenges.

    PubMed

    2018-01-01

    Many disciplines, from human genetics and oncology to plant breeding, microbiology and virology, commonly face the challenge of analyzing rapidly increasing numbers of genomes. In the case of Homo sapiens, the number of sequenced genomes will approach hundreds of thousands in the next few years. Simply scaling up established bioinformatics pipelines will not be sufficient for leveraging the full potential of such rich genomic data sets. Instead, novel, qualitatively different computational methods and paradigms are needed. We will witness the rapid extension of computational pan-genomics, a new sub-area of research in computational biology. In this article, we generalize existing definitions and understand a pan-genome as any collection of genomic sequences to be analyzed jointly or to be used as a reference. We examine already available approaches to construct and use pan-genomes, discuss the potential benefits of future technologies and methodologies, and review open challenges from the vantage point of the above-mentioned biological disciplines. As a prominent example of a computational paradigm shift, we particularly highlight the transition from the representation of reference genomes as strings to representations as graphs. We outline how this and other challenges from different application domains translate into common computational problems, point out relevant bioinformatics techniques, and identify open problems in computer science. With this review, we aim to increase awareness that a joint approach to computational pan-genomics can help address many of the problems currently faced in various domains. © The Author 2016. Published by Oxford University Press.
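
    To make the string-to-graph transition concrete, here is a minimal, purely illustrative Python sketch of a pan-genome stored as a graph of sequence segments, with each genome represented as a path through the graph; the node contents and sample names are invented and do not correspond to any existing tool.

        # Toy pan-genome graph: shared segments are stored once, genomes are paths.
        nodes = {1: "ACGT", 2: "T", 3: "C", 4: "GGA"}   # segment id -> sequence
        edges = {1: [2, 3], 2: [4], 3: [4]}             # nodes 2 and 3 form a variant site
        genomes = {"sample_A": [1, 2, 4], "sample_B": [1, 3, 4]}

        def sequence(path):
            """Reconstruct one genome's linear sequence from its path."""
            return "".join(nodes[n] for n in path)

        print(sequence(genomes["sample_A"]))   # ACGTTGGA
        print(sequence(genomes["sample_B"]))   # ACGTCGGA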

  8. Perceptions of University Students regarding Computer Assisted Assessment

    ERIC Educational Resources Information Center

    Jamil, Mubashrah

    2012-01-01

    Computer assisted assessment (CAA) is a common technique of assessment in higher educational institutions in Western countries, but a relatively new concept for students and teachers in Pakistan. It was therefore interesting to investigate students' perceptions about CAA practices from different universities of Pakistan. Information was collected…

  9. Wage Premiums for On-the-Job Computer Use: A Metro and Nonmetro Analysis. Rural Development Research Report.

    ERIC Educational Resources Information Center

    Kusmin, Lorin D.

    By 1997, almost half of all U.S. workers used computers on the job, and such workers generally received higher wages than non-users. However, on-the-job use was less common in nonmetro areas than in metro areas, and wages for nonmetro workers were generally lower. But is computer use instrumental in explaining the metro-nonmetro wage gap? A survey…

  10. Bacterial contamination of computer touch screens.

    PubMed

    Gerba, Charles P; Wuollet, Adam L; Raisanen, Peter; Lopez, Gerardo U

    2016-03-01

    The goal of this study was to determine the occurrence of opportunistic bacterial pathogens on the surfaces of computer touch screens used in hospitals and grocery stores. Opportunistic pathogenic bacteria were isolated from touch screens in hospitals (Clostridium difficile and vancomycin-resistant Enterococcus) and in grocery stores (methicillin-resistant Staphylococcus aureus). Enteric bacteria were more common on grocery store touch screens than on hospital computer touch screens. Published by Elsevier Inc.

  11. Automatic and accurate reconstruction of distal humerus contours through B-Spline fitting based on control polygon deformation.

    PubMed

    Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A

    2014-12-01

    The strong advent of computer-assisted technologies in modern orthopedic surgery calls for the expansion of computationally efficient techniques built on the broad base of readily available computer-aided engineering tools. However, one of the common challenges of the current developmental phase remains the lack of reliable frameworks that allow a fast and precise conversion of the anatomical information acquired through computed tomography to a format acceptable to computer-aided engineering software. To address this, this study proposes an integrated and automatic framework capable of extracting and then postprocessing the original imaging data into a common planar and closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computed tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm with respect to alternate representations of the bone geometry obtained through different contact-based data acquisition or data processing methods. © IMechE 2014.
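
    As a point of reference, the Python sketch below shows a standard least-squares fit of a closed planar B-Spline to a noisy contour using SciPy; it is not the paper's two-step control-polygon deformation method, and the synthetic elliptical contour is an assumption made for illustration.

        # Standard closed B-Spline fit to a synthetic planar contour (illustrative only).
        import numpy as np
        from scipy import interpolate

        theta = np.linspace(0, 2 * np.pi, 80, endpoint=False)
        x = 20 * np.cos(theta) + np.random.normal(0, 0.2, theta.size)   # noisy contour
        y = 12 * np.sin(theta) + np.random.normal(0, 0.2, theta.size)   # sample points

        tck, _ = interpolate.splprep([x, y], s=1.0, per=True)    # periodic (closed) fit
        xs, ys = interpolate.splev(np.linspace(0, 1, 400), tck)  # smooth resampled contour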

  12. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper is to reinforce the ability of the fruit fly optimization algorithm (FOA) to search for the optimal solution and to avoid becoming trapped in local extrema. Evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the algorithms' ability to compute the extreme values of three mathematical functions, their execution speed, and the forecast ability of the forecasting model built using the optimized general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA in the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization in algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
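
    For readers unfamiliar with the method, the loop below is a minimal Python sketch of the baseline fruit fly optimization algorithm applied to a toy one-dimensional objective; it is illustrative only and does not include the paper's modifications (MFOA), and the swarm size and objective function are assumptions.

        # Baseline FOA loop: flies search randomly around the swarm, the best "smell"
        # (fitness) wins, and the swarm moves to that location.
        import random

        def fitness(s):
            return (s - 3.0) ** 2                        # toy objective, minimum at s = 3

        random.seed(1)
        x_axis, y_axis = random.random(), random.random()
        best = float("inf")
        for _ in range(200):
            flies = []
            for _ in range(20):                          # swarm of 20 flies
                x = x_axis + random.uniform(-1, 1)
                y = y_axis + random.uniform(-1, 1)
                d = (x * x + y * y) ** 0.5 or 1e-12      # distance to the origin
                s = 1.0 / d                              # smell concentration judgment
                flies.append((fitness(s), x, y))
            f, bx, by = min(flies)                       # best fly in this generation
            if f < best:
                best, x_axis, y_axis = f, bx, by         # swarm flies to the best spot
        print(best)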

  13. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but must often be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but, due to the number of instances that must be run, become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where one would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams-Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is used to automate the noise computation for a large number of observers.
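
    The pattern the paper describes, many small and completely independent runs farmed out to parallel workers, can be sketched in a few lines of Python; the problem size, the use of multiprocessing, and the angle-of-attack parameterization below are assumptions for illustration and are not the paper's implementation.

        # Many independent serial solves (one per angle of attack), self-scheduled
        # across a pool of workers.
        import numpy as np
        from multiprocessing import Pool

        def solve_case(alpha):
            """One independent job: build and solve a small linear system."""
            rng = np.random.default_rng(int(alpha * 10))
            a = rng.random((200, 200)) + 200 * np.eye(200)   # well-conditioned system
            b = np.full(200, alpha)
            return alpha, float(np.linalg.solve(a, b).sum())

        if __name__ == "__main__":
            angles = [0.5 * a for a in range(32)]            # 32 angles of attack
            with Pool() as pool:                             # workers grab jobs as they finish
                results = list(pool.imap_unordered(solve_case, angles))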

  14. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  15. Primary pulmonary spindle cell tumour (haemangiopericytoma) in a dog.

    PubMed

    Vignoli, M; Buchholz, J; Morandi, F; Laddaga, E; Brunetti, B; Rossi, F; Terragni, R; Sarli, G

    2008-10-01

    Haemangiopericytoma is a soft tissue sarcoma believed to originate from pericytes. These tumours are commonly located on the skin and subcutaneous tissue of dogs and are most commonly found on the limbs. To the authors' knowledge, primary lung haemangiopericytomas have not been previously described in dogs. This case report describes the diagnostic evaluation and treatment of a primary haemangiopericytoma of the lung in a 10-year-old male, neutered, Siberian husky dog. Staging of the tumour was performed using a computed tomography scan of the thorax and a computed tomography-guided fine-needle aspiration biopsy of the lesion. Treatment was a right caudal lobectomy from a right lateral approach. No regional lymph node changes were noted on computed tomography or intraoperative assessments. Histopathology confirmed a spindle cell tumour that stained positive for vimentin and negative for desmin and S-100.

  16. Single-trial detection of visual evoked potentials by common spatial patterns and wavelet filtering for brain-computer interface.

    PubMed

    Tu, Yiheng; Huang, Gan; Hung, Yeung Sam; Hu, Li; Hu, Yong; Zhang, Zhiguo

    2013-01-01

    Event-related potentials (ERPs) are widely used in brain-computer interface (BCI) systems as input signals conveying a subject's intention. A fast and reliable single-trial ERP detection method can be used to develop a BCI system with both high speed and high accuracy. However, most single-trial ERP detection methods are developed for offline EEG analysis and thus have high computational complexity and need manual operations. Therefore, they are not applicable to practical BCI systems, which require a low-complexity and automatic ERP detection method. This work presents a joint spatial-time-frequency filter that combines common spatial patterns (CSP) and wavelet filtering (WF) for improving the signal-to-noise ratio (SNR) of visual evoked potentials (VEPs), which can lead to a single-trial ERP-based BCI.
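
    A minimal Python sketch of the CSP step follows; it computes spatial filters as generalized eigenvectors of two class covariance matrices (target versus non-target epochs). The data shapes are invented, and the wavelet-filtering stage of the paper's method is omitted.

        # CSP spatial filters from two classes of EEG epochs (illustrative only).
        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(epochs_a, epochs_b):
            """epochs_*: arrays of shape (n_trials, n_channels, n_samples)."""
            def mean_cov(epochs):
                return np.mean([x @ x.T / np.trace(x @ x.T) for x in epochs], axis=0)
            ca, cb = mean_cov(epochs_a), mean_cov(epochs_b)
            evals, evecs = eigh(ca, ca + cb)          # generalized eigendecomposition
            order = np.argsort(evals)[::-1]           # most discriminative filters first
            return evecs[:, order].T                  # rows are spatial filters

        rng = np.random.default_rng(0)
        target = rng.standard_normal((40, 8, 250))     # synthetic target-ERP epochs
        nontarget = rng.standard_normal((40, 8, 250))  # synthetic non-target epochs
        w = csp_filters(target, nontarget)             # apply to an epoch with: w @ epoch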

  17. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources to other sites can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics of the workload-adaptive reconfiguration behavior.

  18. Desiderata for computable representations of electronic health records-driven phenotype algorithms

    PubMed Central

    Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A

    2015-01-01

    Background Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). Methods A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. Results We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. Conclusion A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. PMID:26342218
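
    Desideratum (4) above, set operations and relational algebra over queryable patient data, can be illustrated with a tiny, hypothetical Python example; the patient identifiers and criteria below are invented and do not follow any specific PheKB algorithm.

        # A phenotype case definition expressed as set operations (hypothetical data).
        diabetes_dx = {"p1", "p2", "p5", "p7"}   # patients with a qualifying diagnosis code
        metformin_rx = {"p2", "p3", "p5"}        # patients with a qualifying medication order
        exclusion_dx = {"p7"}                    # patients meeting an exclusion criterion

        # Case definition: diagnosis AND medication, NOT excluded.
        cases = (diabetes_dx & metformin_rx) - exclusion_dx
        print(sorted(cases))                     # ['p2', 'p5']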

  19. Data bank for short-length red oak lumber

    Treesearch

    Janice K. Wiedenbeck; Charles J. Gatchell; Elizabeth S. Walker

    1994-01-01

    This data bank for short-length lumber (less than 8 feet long) contains information on board outlines and defect size and quality for 426 4/4-inch-thick red oak boards. The Selects, 1 Common, 2A Common, and 3A Common grades are represented in the data bank. The data bank provides the kind of detailed lumber description that is required as input by computer programs...

  20. PERSONAL COMPUTER MONITORS: A SCREENING EVALUATION OF VOLATILE ORGANIC EMISSIONS FROM EXISTING PRINTED CIRCUIT BOARD LAMINATES AND POTENTIAL POLLUTION PREVENTION ALTERNATIVES

    EPA Science Inventory

    The report gives results of a screening evaluation of volatile organic emissions from printed circuit board laminates and potential pollution prevention alternatives. In the evaluation, printed circuit board laminates, without circuitry, commonly found in personal computer (PC) m...

  1. Liminal Spaces and Learning Computing

    ERIC Educational Resources Information Center

    McCartney, Robert; Boustedt, Jonas; Eckerdal, Anna; Mostrom, Jan Erik; Sanders, Kate; Thomas, Lynda; Zander, Carol

    2009-01-01

    "Threshold concepts" are concepts that, among other things, transform the way a student looks at a discipline. Although the term "threshold" might suggest that the transformation occurs at a specific point in time, an "aha" moment, it seems more common (at least in computing) that a longer time period is required.…

  2. Multimedia Instructional Tools' Impact on Student Motivation and Learning Strategies in Computer Applications Courses

    ERIC Educational Resources Information Center

    Chapman, Debra; Wang, Shuyan

    2015-01-01

    Multimedia instructional tools (MMIT) have been identified as a way to effectively and economically present instructional material. MMITs are commonly used in introductory computer applications courses because they should be effective in increasing student knowledge and positively impacting motivation and learning strategies, without increasing costs. This…

  3. Instructional Support and Implementation Structure during Elementary Teachers' Science Education Simulation Use

    ERIC Educational Resources Information Center

    Gonczi, Amanda L.; Chiu, Jennifer L.; Maeng, Jennifer L.; Bell, Randy L.

    2016-01-01

    This investigation sought to identify patterns in elementary science teachers' computer simulation use, particularly implementation structures and instructional supports commonly employed by teachers. Data included video-recorded science lessons of 96 elementary teachers who used computer simulations in one or more science lessons. Results…

  4. Dialogue-Based CALL: An Overview of Existing Research

    ERIC Educational Resources Information Center

    Bibauw, Serge; François, Thomas; Desmet, Piet

    2015-01-01

    Dialogue-based Computer-Assisted Language Learning (CALL) covers applications and systems allowing a learner to practice the target language in a meaning-focused conversational activity with an automated agent. We first present a common definition for dialogue-based CALL, based on three features: dialogue as the activity unit, computer as the…

  5. COMPUTATIONAL INVESTIGATION OF CHEMICAL REACTIVITY IN RELATION TO BIOACTIVATION AND TOXICITY ACROSS CLASSES OF HALOORGANICS: BROMINATION VS. CHLORINATION

    EPA Science Inventory

    COMPUTATIONAL INVESTIGATION OF CHEMICAL REACTIVITY IN RELATION TO BIOACTIVATION AND TOXICITY ACROSS CLASSES OF HALOORGANICS: BROMINATION VS. CHLORINATION.

    Halogenation is a common feature of many classes of environmental contaminants, and often plays a crucial role in po...

  6. Compressed Continuous Computation v. 12/20/2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorodetsky, Alex

    2017-02-17

    A library for performing numerical computation with low-rank functions. The Compressed Continuous Computation (C3) library enables continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, and integrating multidimensional functions.

  7. Multimedia Instructional Tools and Student Learning in a Computer Applications Course

    ERIC Educational Resources Information Center

    Chapman, Debra L.; Wang, Shuyan

    2015-01-01

    Advances in technology and changes in educational strategies have resulted in the integration of technology in the classroom. Multimedia instructional tools (MMIT) provide student-centered active-learning instructional activities. MMITs are common in introductory computer applications courses based on the premise that MMITs should increase student…

  8. Multimedia Instructional Tools and Student Learning in Computer Applications Courses

    ERIC Educational Resources Information Center

    Chapman, Debra Laier

    2013-01-01

    Advances in technology and changes in educational strategies have resulted in the integration of technology into the classroom. Multimedia instructional tools (MMIT) have been identified as a way to provide student-centered active-learning instructional material to students. MMITs are common in introductory computer applications courses based on…

  9. Ontology-Driven Discovery of Scientific Computational Entities

    ERIC Educational Resources Information Center

    Brazier, Pearl W.

    2010-01-01

    Many geoscientists use modern computational resources, such as software applications, Web services, scientific workflows and datasets that are readily available on the Internet, to support their research and many common tasks. These resources are often shared via human contact and sometimes stored in data portals; however, they are not necessarily…

  10. Common Sense and Computer Magazines, or, What's the Good Word, Part 1: Periodicals.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1984-01-01

    This list of 60 microcomputer magazines encountered at newsstands during September 1984 is broken down by specific computer or software coverage. Reviews for 22 magazines note number of pages, advertisements, reviews, and articles, reviewer's opinions, and recommended use. Eight magazines are recommended for most libraries. (EJS)

  11. A common stochastic accumulator with effector-dependent noise can explain eye-hand coordination

    PubMed Central

    Gopal, Atul; Viswanathan, Pooja

    2015-01-01

    The computational architecture that enables the flexible coupling between otherwise independent eye and hand effector systems is not understood. By using a drift diffusion framework, in which variability of the reaction time (RT) distribution scales with mean RT, we tested the ability of a common stochastic accumulator to explain eye-hand coordination. Using a combination of behavior, computational modeling and electromyography, we show how a single stochastic accumulator to threshold, followed by noisy effector-dependent delays, explains eye-hand RT distributions and their correlation, while an alternate independent, interactive eye and hand accumulator model does not. Interestingly, the common accumulator model did not explain the RT distributions of the same subjects when they made eye and hand movements in isolation. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning. PMID:25568161
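
    The model class described above can be sketched in a few lines of Python: one noisy accumulation to a threshold, followed by separate, noisier effector delays for the eye and the hand. All parameter values below are invented for illustration and are not the authors' fitted values.

        # Common stochastic accumulator with effector-dependent delays (illustrative).
        import numpy as np

        rng = np.random.default_rng(1)

        def trial(drift=0.6, threshold=30.0, dt=1.0, noise=1.0):
            evidence, t = 0.0, 0.0
            while evidence < threshold:                      # shared accumulation stage
                evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            eye_rt = t + rng.normal(40.0, 5.0)               # eye efferent delay
            hand_rt = t + rng.normal(90.0, 15.0)             # noisier hand efferent delay
            return eye_rt, hand_rt

        rts = np.array([trial() for _ in range(500)])
        print(np.corrcoef(rts.T)[0, 1])                      # correlated eye and hand RTs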

  12. CT Colonography with Computer-aided Detection: Recognizing the Causes of False-Positive Reader Results

    PubMed Central

    Dachman, Abraham H.; Wroblewski, Kristen; Vannier, Michael W.; Horne, John M.

    2014-01-01

    Computed tomography (CT) colonography is a screening modality used to detect colonic polyps before they progress to colorectal cancer. Computer-aided detection (CAD) is designed to decrease errors of detection by finding and displaying polyp candidates for evaluation by the reader. CT colonography CAD false-positive results are common and have numerous causes. The relative frequency of CAD false-positive results and their effect on reader performance, assessed on the basis of a 19-reader, 100-case trial, show that the vast majority of CAD false-positive results were dismissed by readers. Many CAD false-positive results are easily disregarded, including those that result from coarse mucosa, reconstruction, peristalsis, motion, streak artifacts, diverticula, rectal tubes, and lipomas. CAD false-positive results caused by haustral folds, extracolonic candidates, diminutive lesions (<6 mm), anal papillae, internal hemorrhoids, varices, extrinsic compression, and flexural pseudotumors are almost always recognized and disregarded. The ileocecal valve and tagged stool are common sources of CAD false-positive results associated with reader false-positive results. Nondismissable CAD soft-tissue polyp candidates larger than 6 mm are another common cause of reader false-positive results that may lead to further evaluation with follow-up CT colonography or optical colonoscopy. Strategies for correctly evaluating CAD polyp candidates are important to avoid pitfalls from common sources of CAD false-positive results. ©RSNA, 2014 PMID:25384290

  13. Motion Planning in a Society of Intelligent Mobile Agents

    NASA Technical Reports Server (NTRS)

    Esterline, Albert C.; Shafto, Michael (Technical Monitor)

    2002-01-01

    The majority of the work on this grant involved formal modeling of human-computer integration. We conceptualize computer resources as a multiagent system so that these resources and human collaborators may be modeled uniformly. In previous work we had used modal logic for this uniform modeling, and we had developed a process-algebraic agent abstraction. In this work, we applied this abstraction (using CSP) in uniformly modeling agents and users, which allowed us to use tools for investigating CSP models. This work revealed the power of process-algebraic handshakes in modeling face-to-face conversation. We also investigated specifications of human-computer systems in the style of algebraic specification. This involved specifying the common knowledge required for coordination and the process-algebraic patterns of communication actions intended to establish that common knowledge. We investigated the conditions for agents endowed with perception to gain common knowledge and implemented a prototype neural-network system that allows agents to detect when such conditions hold. The literature on multiagent systems conceptualizes communication actions as speech acts. We implemented a prototype system that infers the deontic effects (obligations, permissions, prohibitions) of speech acts and detects violations of these effects. A prototype distributed system was developed that allows users to collaborate in moving proxy agents; it was designed to exploit handshakes and common knowledge. Finally, in work carried over from a previous NASA ARC grant, about fifteen undergraduates developed and presented projects on multiagent motion planning.

  14. Handheld Computer Use in U.S. Family Practice Residency Programs

    PubMed Central

    Criswell, Dan F.; Parchman, Michael L.

    2002-01-01

    Objective: The purpose of the study was to evaluate the uses of handheld computers (also called personal digital assistants, or PDAs) in family practice residency programs in the United States. Study Design: In November 2000, the authors mailed a questionnaire to the program directors of all American Academy of Family Physicians (AAFP) and American College of Osteopathic Family Practice (ACOFP) residency programs in the United States. Measurements: Data and patterns of the use and non-use of handheld computers were identified. Results: Approximately 50 percent (306 of 610) of the programs responded to the survey. Two thirds of the programs reported that handheld computers were used in their residencies, and an additional 14 percent had plans for implementation within 24 months. Both the Palm and the Windows CE operating systems were used, with the Palm operating system the most common. Military programs had the highest rate of use (8 of 10 programs, 80 percent), and osteopathic programs had the lowest (23 of 55 programs, 42 percent). Of programs that reported handheld computer use, 45 percent had required handheld computer applications that are used uniformly by all users. Funding for handheld computers and related applications was non-budgeted in 76 percent of the programs in which handheld computers were used. In programs providing a budget for handheld computers, the average annual budget per user was $461.58. Interested faculty or residents, rather than computer information services personnel, performed upkeep and maintenance of handheld computers in 72 percent of the programs in which the computers are used. In addition to the installed calendar, memo pad, and address book, the most common clinical uses of handheld computers in the programs were as medication reference tools, electronic textbooks, and clinical computational or calculator-type programs. Conclusions: Handheld computers are widely used in family practice residency programs in the United States. Although handheld computers were designed as electronic organizers, in family practice residencies they are used as medication reference tools, electronic textbooks, and clinical computational programs and to track activities that were previously associated with desktop database applications. PMID:11751806

  15. Handheld computer use in U.S. family practice residency programs.

    PubMed

    Criswell, Dan F; Parchman, Michael L

    2002-01-01

    The purpose of the study was to evaluate the uses of handheld computers (also called personal digital assistants, or PDAs) in family practice residency programs in the United States. In November 2000, the authors mailed a questionnaire to the program directors of all American Academy of Family Physicians (AAFP) and American College of Osteopathic Family Practice (ACOFP) residency programs in the United States. Data and patterns of the use and non-use of handheld computers were identified. Approximately 50 percent (306 of 610) of the programs responded to the survey. Two thirds of the programs reported that handheld computers were used in their residencies, and an additional 14 percent had plans for implementation within 24 months. Both the Palm and the Windows CE operating systems were used, with the Palm operating system the most common. Military programs had the highest rate of use (8 of 10 programs, 80 percent), and osteopathic programs had the lowest (23 of 55 programs, 42 percent). Of programs that reported handheld computer use, 45 percent had required handheld computer applications that are used uniformly by all users. Funding for handheld computers and related applications was non-budgeted in 76 percent of the programs in which handheld computers were used. In programs providing a budget for handheld computers, the average annual budget per user was 461.58 dollars. Interested faculty or residents, rather than computer information services personnel, performed upkeep and maintenance of handheld computers in 72 percent of the programs in which the computers are used. In addition to the installed calendar, memo pad, and address book, the most common clinical uses of handheld computers in the programs were as medication reference tools, electronic textbooks, and clinical computational or calculator-type programs. Handheld computers are widely used in family practice residency programs in the United States. Although handheld computers were designed as electronic organizers, in family practice residencies they are used as medication reference tools, electronic textbooks, and clinical computational programs and to track activities that were previously associated with desktop database applications.

  16. Dynamic VM Provisioning for TORQUE in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.

    2014-06-01

    Cloud computing, also known as an Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
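
    The general pattern described above (watching the batch queue and elastically adding cloud workers) can be sketched in a few lines of Python. This is a hedged illustration, not the tool developed in the paper: the cloud name, image, flavor and network IDs are placeholders, and real deployments would also contextualize the VM so it joins the TORQUE cluster.

      # Illustrative sketch: boot an OpenStack worker VM when queued jobs pile up.
      import subprocess
      import openstack

      IDLE_THRESHOLD = 10           # boot a new worker when this many jobs are queued
      CLOUD = "mycloud"             # entry in clouds.yaml (assumed)
      IMAGE_ID = "worker-image-id"  # placeholder
      FLAVOR_ID = "m1.large"        # placeholder
      NETWORK_ID = "net-id"         # placeholder

      def queued_job_count() -> int:
          """Count queued (state 'Q') jobs reported by TORQUE's qstat."""
          out = subprocess.run(["qstat"], capture_output=True, text=True).stdout
          return sum(1 for line in out.splitlines() if " Q " in line)

      def boot_worker(conn, name: str):
          """Launch one additional execute node; joining the cluster would
          normally be handled by cloud-init user data."""
          return conn.compute.create_server(
              name=name,
              image_id=IMAGE_ID,
              flavor_id=FLAVOR_ID,
              networks=[{"uuid": NETWORK_ID}],
          )

      if __name__ == "__main__":
          if queued_job_count() > IDLE_THRESHOLD:
              conn = openstack.connect(cloud=CLOUD)
              server = boot_worker(conn, "torque-worker-1")
              print("requested worker:", server.id)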

  17. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.
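
    A minimal sketch of the two-stage pipeline described above follows: computer vision for segmentation and feature extraction, then machine learning for phenotype classification. The specific features and libraries are illustrative choices, not those used by the authors.

      import numpy as np
      from skimage import filters, measure
      from sklearn.ensemble import RandomForestClassifier

      def extract_cell_features(image: np.ndarray) -> np.ndarray:
          """Threshold, label connected components, and return per-cell features."""
          mask = image > filters.threshold_otsu(image)
          labels = measure.label(mask)
          regions = measure.regionprops(labels, intensity_image=image)
          return np.array([[r.area, r.eccentricity, r.mean_intensity] for r in regions])

      def train_phenotype_classifier(features: np.ndarray, phenotype_labels: np.ndarray):
          """Supervised classification of phenotypes from extracted features."""
          clf = RandomForestClassifier(n_estimators=200, random_state=0)
          clf.fit(features, phenotype_labels)
          return clf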

  18. Machine learning and computer vision approaches for phenotypic profiling

    PubMed Central

    Morris, Quaid

    2017-01-01

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. PMID:27940887

  19. Cloud computing applications for biomedical science: A perspective.

    PubMed

    Navale, Vivek; Bourne, Philip E

    2018-06-01

    Biomedical research has become a digital data-intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research.

  20. Cloud computing applications for biomedical science: A perspective

    PubMed Central

    2018-01-01

    Biomedical research has become a digital data–intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research. PMID:29902176

  1. Practical advantages of evolutionary computation

    NASA Astrophysics Data System (ADS)

    Fogel, David B.

    1997-10-01

    Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.
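
    For readers unfamiliar with the technique, the following toy Python sketch shows the kind of evolutionary algorithm the paper discusses: a simple (mu + lambda) scheme with Gaussian mutation on a continuous test function. It uses a fixed mutation step for brevity rather than the self-adaptation the paper highlights.

      import random

      def fitness(x):
          return -sum(xi * xi for xi in x)          # maximize => minimize sum of squares

      def evolve(dim=5, pop_size=30, generations=200, sigma=0.1):
          pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
          for _ in range(generations):
              # mutate every parent with Gaussian noise
              offspring = [[xi + random.gauss(0, sigma) for xi in ind] for ind in pop]
              # (mu + lambda) selection: keep the best pop_size of parents + offspring
              pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
          return pop[0]

      best = evolve()
      print("best solution:", best, "fitness:", fitness(best))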

  2. The Battle for Desktop Control: When It Comes to the Management of Classroom Computers, Educators and the Technical Staff Who Support Them Must Forge a Common Ground

    ERIC Educational Resources Information Center

    Fryer, Wesley

    2004-01-01

    There has long been a power struggle between techies and teachers over classroom computer desktops. IT personnel tend to believe allowing "inept" educators to have unfettered access to their computer's hard drive is an open invitation for trouble. Conversely, teachers often perceive tech support to be "uncaring" adversaries standing in the way of…

  3. Universal quantum computation with little entanglement.

    PubMed

    Van den Nest, Maarten

    2013-02-08

    We show that universal quantum computation can be achieved in the standard pure-state circuit model while the entanglement entropy of every bipartition is small in each step of the computation. The entanglement entropy required for large-scale quantum computation even tends to zero. Moreover we show that the same conclusion applies to many entanglement measures commonly used in the literature. This includes e.g., the geometric measure, localizable entanglement, multipartite concurrence, squashed entanglement, witness-based measures, and more generally any entanglement measure which is continuous in a certain natural sense. These results demonstrate that many entanglement measures are unsuitable tools to assess the power of quantum computers.

  4. Quantum Algorithms and Protocols

    NASA Astrophysics Data System (ADS)

    Divincenzo, David

    2001-06-01

    Quantum Computing is better than classical computing, but not just because it speeds up some computations. Some of the best known quantum algorithms, like Grover's, may well have their most interesting applications in settings that involve the combination of computation and communication. Thus, Grover speeds up the appointment scheduling problem by reducing the amount of communication needed between two parties who want to find a common free slot on their calendars. I will review various other applications of this sort that are being explored. Other distributed computing protocols are required to have other attributes like obliviousness and privacy; I will discuss our recent applications involving quantum data hiding.

  5. A survey of GPU-based acceleration techniques in MRI reconstructions

    PubMed Central

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou

    2018-01-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computational procedures. Modern graphics processing unit (GPU) platforms have been used to make high-performance parallel computations available, and attractive to common consumers, for computing massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community. PMID:29675361

  6. A survey of GPU-based acceleration techniques in MRI reconstructions.

    PubMed

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou; Liang, Dong

    2018-03-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computational procedures. Modern graphics processing unit (GPU) platforms have been used to make high-performance parallel computations available, and attractive to common consumers, for computing massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community.

  7. 1998 data bank for kiln-dried red oak lumber

    Treesearch

    Charles J. Gatchell; R. Edward Thomas; Elizabeth S. Walker

    1998-01-01

    A collection of 3,487 fully described, kiln-dried red oak boards totaling 20,021 board feet. The boards, which are straight or contain no more than ? inch of crook, are FAS, FAS ONE FACE (F1F), Selects, No. 1 Common, No. 2A Common, or No. 3A common. The boards were graded with the UGRS (Ultimate Grading and Remanufacturing System) computer program. After the grade was...

  8. Computed Tomography of the Abdomen in Eight Clinically Normal Common Marmosets (Callithrix jacchus).

    PubMed

    du Plessis, W M; Groenewald, H B; Elliott, D

    2017-08-01

    The aim of this study was to provide a detailed anatomical description of the abdomen in the clinically normal common marmoset by means of computed tomography (CT). Eight clinically healthy mature common marmosets ranging from 12 to 48 months and 235 to 365 g bodyweight were anesthetized and pre- and post-contrast CT examinations were performed using different CT settings in dorsal recumbency. Abdominal organs were identified and visibility noted. Diagnostic quality abdominal images could be obtained of the common marmoset despite its small size using a dual-slice CT scanner. Representative cross-sectional images were chosen from different animals illustrating the abdominal CT anatomy of clinically normal common marmosets. Identification or delineation of abdominal organs greatly improved with i.v. contrast. A modified high-frequency algorithm with edge enhancement added valuable information for identification of small structures such as the ureters. The Hounsfield unit (HU) of major abdominal organs differed from that of small animals (domestic dogs and cats). Due to their size and different anatomy, standard small animal CT protocols need to be critically assessed and adapted for exotics, such as the common marmoset. The established normal reference range of HU of major abdominal organs and adapted settings for a CT protocol will aid clinical assessment of the common marmoset. © 2017 Blackwell Verlag GmbH.

  9. The frequency spectrum crisis - Issues and answers

    NASA Astrophysics Data System (ADS)

    Armes, G. L.

    The frequency spectrum represents a unique resource which can be overtaxed. In the present investigation, an attempt is made to evaluate the demand for satellite and microwave services. Dimensions of increased demand are discussed, taking into account developments related to the introduction of the personal computer, the activities of the computer and communications industries in preparation for the office of the future, and electronic publishing. Attention is given to common carrier spectrum congestion, common carrier microwave, satellite communications, teleports, international implications, satellite frequency bands, satellite spectrum implications, alternatives regarding the utilization of microwave frequency bands, U.S. Government spectrum utilization, and the impact at C-band.

  10. BIRD: A general interface for sparse distributed memory simulators

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.

  11. A Novel Multilayer Correlation Maximization Model for Improving CCA-Based Frequency Recognition in SSVEP Brain-Computer Interface.

    PubMed

    Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu

    2018-05-01

    Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize the reference signals by extracting common features from multiple sets of electroencephalogram (EEG) data for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting possible noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, for further improving SSVEP recognition accuracy. MCM combines advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization processes. The first layer extracts the stimulus frequency-related information by using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting the common features with MsetCCA. The third layer re-optimizes the reference signal set by using CCA with sine-cosine reference signals again. An experimental study is conducted to validate the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
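
    The first layer mentioned above (standard CCA against sine-cosine references) can be sketched compactly in Python. The MsetCCA and MCM layers of the paper are not reproduced here, and the sampling rate and number of harmonics are assumed example values.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      FS = 250          # sampling rate in Hz (assumed)
      N_HARMONICS = 2   # number of sine/cosine harmonics in the reference

      def reference_signals(freq, n_samples):
          t = np.arange(n_samples) / FS
          refs = []
          for h in range(1, N_HARMONICS + 1):
              refs.append(np.sin(2 * np.pi * h * freq * t))
              refs.append(np.cos(2 * np.pi * h * freq * t))
          return np.column_stack(refs)              # shape: (n_samples, 2 * N_HARMONICS)

      def recognize_frequency(eeg, candidate_freqs):
          """eeg: (n_samples, n_channels) segment; returns the best-matching frequency."""
          scores = {}
          for f in candidate_freqs:
              Y = reference_signals(f, eeg.shape[0])
              cca = CCA(n_components=1)
              u, v = cca.fit_transform(eeg, Y)
              scores[f] = abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
          return max(scores, key=scores.get)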

  12. The LISS--a public database of common imaging signs of lung diseases for computer-aided detection and diagnosis research and medical education.

    PubMed

    Han, Guanghui; Liu, Xiabi; Han, Feifei; Santika, I Nyoman Tenaya; Zhao, Yanfeng; Zhao, Xinming; Zhou, Chunwu

    2015-02-01

    Lung computed tomography (CT) imaging signs play important roles in the diagnosis of lung diseases. In this paper, we review the significance of CT imaging signs in disease diagnosis and determine the inclusion criterion of CT scans and CT imaging signs of our database. We develop the software of abnormal regions annotation and design the storage scheme of CT images and annotation data. Then, we present a publicly available database of lung CT imaging signs, called LISS for short, which contains 271 CT scans and 677 abnormal regions in them. The 677 abnormal regions are divided into nine categories of common CT imaging signs of lung disease (CISLs). The ground truth of these CISLs regions and the corresponding categories are provided. Furthermore, to make the database publicly available, all private data in CT scans are eliminated or replaced with provisioned values. The main characteristic of our LISS database is that it is developed from a new perspective of CT imaging signs of lung diseases instead of commonly considered lung nodules. Thus, it is promising to apply to computer-aided detection and diagnosis research and medical education.

  13. Starting Over: Current Issues in Online Catalog User Interface Design.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1992-01-01

    Discussion of online catalogs focuses on issues in interface design. Issues addressed include understanding the user base; common user access (CUA) with personal computers; common command language (CCL); hyperlinks; screen design issues; differences from card catalogs; indexes; graphic user interfaces (GUIs); color; online help; and remote users.…

  14. 47 CFR 64.607 - Furnishing related customer premises equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... 64.607 Section 64.607 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER... premises equipment. (a) Any communications common carrier may provide, under tariff, customer premises... computers. [56 FR 36731, Aug. 1, 1991, as amended at 72 FR 43560, Aug. 6, 2007; 73 FR 21252, Apr. 21, 2008...

  15. 47 CFR 64.607 - Furnishing related customer premises equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... 64.607 Section 64.607 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER... premises equipment. (a) Any communications common carrier may provide, under tariff, customer premises... computers. [56 FR 36731, Aug. 1, 1991, as amended at 72 FR 43560, Aug. 6, 2007; 73 FR 21252, Apr. 21, 2008...

  16. 47 CFR 64.607 - Furnishing related customer premises equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... 64.607 Section 64.607 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER... premises equipment. (a) Any communications common carrier may provide, under tariff, customer premises... computers. [56 FR 36731, Aug. 1, 1991, as amended at 72 FR 43560, Aug. 6, 2007; 73 FR 21252, Apr. 21, 2008...

  17. 47 CFR 64.607 - Furnishing related customer premises equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 64.607 Section 64.607 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER... premises equipment. (a) Any communications common carrier may provide, under tariff, customer premises... computers. [56 FR 36731, Aug. 1, 1991, as amended at 72 FR 43560, Aug. 6, 2007; 73 FR 21252, Apr. 21, 2008...

  18. 47 CFR 64.607 - Furnishing related customer premises equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... 64.607 Section 64.607 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER... premises equipment. (a) Any communications common carrier may provide, under tariff, customer premises... computers. [56 FR 36731, Aug. 1, 1991, as amended at 72 FR 43560, Aug. 6, 2007; 73 FR 21252, Apr. 21, 2008...

  19. 26 CFR 7.999-1 - Computation of the international boycott factor.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... controlled group's common parent corporation. In the event that no common parent corporation exists, the... were communicated on or before November 3, 1976, to the governments or persons with which the... completed during the taxable year, but on November 1, 1976, Corporation A communicated a renunciation of the...

  20. Haptic Classification of Common Objects: Knowledge-Driven Exploration.

    ERIC Educational Resources Information Center

    Lederman, Susan J.; Klatzky, Roberta L.

    1990-01-01

    Theoretical and empirical issues relating to haptic exploration and the representation of common objects during haptic classification were investigated in 3 experiments involving a total of 112 college students. Results are discussed in terms of a computational model of human haptic object classification with implications for dextrous robot…

  1. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  2. Toward customer-centric organizational science: A common language effect size indicator for multiple linear regressions and regressions with higher-order terms.

    PubMed

    Krasikova, Dina V; Le, Huy; Bachura, Eric

    2018-06-01

    To address a long-standing concern regarding a gap between organizational science and practice, scholars called for more intuitive and meaningful ways of communicating research results to users of academic research. In this article, we develop a common language effect size index (CLβ) that can help translate research results to practice. We demonstrate how CLβ can be computed and used to interpret the effects of continuous and categorical predictors in multiple linear regression models. We also elaborate on how the proposed CLβ index is computed and used to interpret interactions and nonlinear effects in regression models. In addition, we test the robustness of the proposed index to violations of normality and provide means for computing standard errors and constructing confidence intervals around its estimates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
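
    To make the idea of a common language effect size concrete, the following hedged Python sketch estimates, by simulation, the probability that of two randomly drawn cases the one with the higher predictor value also has the higher outcome. This illustrates the general notion of a common-language index rather than the authors' exact CLβ formula.

      import numpy as np

      def common_language_effect(x, y, n_pairs=100_000, seed=0):
          rng = np.random.default_rng(seed)
          i = rng.integers(0, len(x), n_pairs)
          j = rng.integers(0, len(x), n_pairs)
          keep = x[i] != x[j]                       # only pairs that differ on the predictor
          i, j = i[keep], j[keep]
          higher_x_has_higher_y = (y[i] > y[j]) == (x[i] > x[j])
          return higher_x_has_higher_y.mean()

      # toy data: y depends positively on x plus noise
      rng = np.random.default_rng(1)
      x = rng.normal(size=2000)
      y = 0.5 * x + rng.normal(size=2000)
      print(round(common_language_effect(x, y), 3))   # roughly 0.6-0.65 expected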

  3. Iterative methods for plasma sheath calculations: Application to spherical probe

    NASA Technical Reports Server (NTRS)

    Parker, L. W.; Sullivan, E. C.

    1973-01-01

    The computer cost of a Poisson-Vlasov iteration procedure for the numerical solution of a steady-state collisionless plasma-sheath problem depends on: (1) the nature of the chosen iterative algorithm, (2) the position of the outer boundary of the grid, and (3) the nature of the boundary condition applied to simulate a condition at infinity (as in three-dimensional probe or satellite-wake problems). Two iterative algorithms, in conjunction with three types of boundary conditions, are analyzed theoretically and applied to the computation of current-voltage characteristics of a spherical electrostatic probe. The first algorithm was commonly used by physicists, and its computer costs depend primarily on the boundary conditions and are only slightly affected by the mesh interval. The second algorithm is not commonly used, and its costs depend primarily on the mesh interval and slightly on the boundary conditions.

  4. Pattern statistics on Markov chains and sensitivity to parameter estimation

    PubMed Central

    Nuel, Grégory

    2006-01-01

    Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant common words to a set of sequences,...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression of σ, the standard deviation of a pattern statistic. This result is validated using simulations and a simple pattern study is also considered. Conclusion: We establish that the use of high order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation. PMID:17044916
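
    The setting is easy to reproduce in miniature: estimate an order-1 Markov model from a sequence, compute a word's expected count under that model, and attach a binomial-approximation standard deviation. The sketch below does exactly that; the delta-method sensitivity analysis of the paper itself is not reproduced.

      from collections import Counter
      import math

      def markov1_word_stats(seq: str, word: str):
          # estimate stationary letter frequencies and transition probabilities
          letters = Counter(seq)
          pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
          def p_trans(a, b):
              return pairs[a + b] / sum(v for k, v in pairs.items() if k[0] == a)
          # probability of the word under the order-1 model
          p = letters[word[0]] / len(seq)
          for a, b in zip(word, word[1:]):
              p *= p_trans(a, b)
          n_positions = len(seq) - len(word) + 1
          expected = n_positions * p
          std = math.sqrt(n_positions * p * (1 - p))   # binomial approximation
          observed = sum(1 for i in range(n_positions) if seq[i:i + len(word)] == word)
          return observed, expected, std

      print(markov1_word_stats("ACGTACGTTTACGACGTACG", "ACG"))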

  5. Pattern statistics on Markov chains and sensitivity to parameter estimation.

    PubMed

    Nuel, Grégory

    2006-10-17

    In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant common words to a set of sequences,...). In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression of sigma, the standard deviation of a pattern statistic. This result is validated using simulations and a simple pattern study is also considered. We establish that the use of high order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.

  6. LTCP 2D Graphical User Interface. Application Description and User's Guide

    NASA Technical Reports Server (NTRS)

    Ball, Robert; Navaz, Homayun K.

    1996-01-01

    A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) two-dimensional computational fluid dynamics code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.

  7. Aerodynamically induced radial forces in a centrifugal gas compressor: Part 2 -- Computational investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flathers, M.B.; Bache, G.E.

    1999-10-01

    Radial loads and direction of a centrifugal gas compressor containing a high specific speed mixed flow impeller and a single tongue volute were determined both experimentally and computationally at both design and off-design conditions. The experimental methodology was developed in conjunction with a traditional ASME PTC-10 closed-loop test to determine radial load and direction. The experimental study is detailed in Part 1 of this paper (Moore and Flathers, 1998). The computational method employs a commercially available, fully three-dimensional viscous code to analyze the impeller and the volute interaction. An uncoupled scheme was initially used where the impeller and volute were analyzed as separate models using a common vaneless diffuser geometry. The two calculations were then repeated until the boundary conditions at a chosen location in the common vaneless diffuser were nearly the same. Subsequently, a coupled scheme was used where the entire stage geometry was analyzed in one calculation, thus eliminating the need for manual iteration of the two independent calculations. In addition to radial load and direction information, this computational procedure also provided aerodynamic stage performance. The effect of impeller front face and rear face cavities was also quantified. The paper will discuss computational procedures, including grid generation and boundary conditions, as well as comparisons of the various computational schemes to experiment. The results of this study will show the limitations and benefits of Computational Fluid Dynamics (CFD) for determination of radial load, direction, and aerodynamic stage performance.

  8. The Use of Object-Oriented Analysis Methods in Surety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, Richard L.; Funkhouser, Donald R.; Wyss, Gregory D.

    1999-05-01

    Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis methods from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.

  9. An Interaction of Screen Colour and Lesson Task in CAL

    ERIC Educational Resources Information Center

    Clariana, Roy B.

    2004-01-01

    Colour is a common feature in computer-aided learning (CAL), though the instructional effects of screen colour are not well understood. This investigation considers the effects of different CAL study tasks with feedback on posttest performance and on posttest memory of the lesson colour scheme. Graduate students (n=68) completed a computer-based…

  10. Risk in Enterprise Cloud Computing: Re-Evaluated

    ERIC Educational Resources Information Center

    Funmilayo, Bolonduro, R.

    2016-01-01

    A quantitative study was conducted to get the perspectives of IT experts about risks in enterprise cloud computing. In businesses, these IT experts are often not in positions to prioritize business needs. The business experts commonly known as business managers mostly determine an organization's business needs. Even if an IT expert classified a…

  11. Using a Cloud-Based Computing Environment to Support Teacher Training on Common Core Implementation

    ERIC Educational Resources Information Center

    Robertson, Cory

    2013-01-01

    A cloud-based computing environment, Google Apps for Education (GAFE), has provided the Anaheim City School District (ACSD) a comprehensive and collaborative avenue for creating, sharing, and editing documents, calendars, and social networking communities. With this environment, teachers and district staff at ACSD are able to utilize the deep…

  12. Educational Technology Research Journals: "Journal of Educational Computing Research," 2003-2012

    ERIC Educational Resources Information Center

    Nyland, Rob; Anderson, Noelle; Beckstrom, Tyler; Boren, Michael; Thomas, Rebecca; West, Richard E.

    2015-01-01

    This article analyzes articles published in the "Journal of Educational Computing Research" ("JECR") from 2003 to 2012. The authors analyzed the articles looking for trends in article types and methodologies, the most common topics addressed in the articles, the top-cited articles, and the top authors during the period. The…

  13. Scaffolding Argumentation about Water Quality: A Mixed-Method Study in a Rural Middle School

    ERIC Educational Resources Information Center

    Belland, Brian R.; Gu, Jiangyue; Armbrust, Sara; Cook, Brant

    2015-01-01

    A common way for students to develop scientific argumentation abilities is through argumentation about socioscientific issues, defined as scientific problems with social, ethical, and moral aspects. Computer-based scaffolding can support students in this process. In this mixed method study, we examined the use and impact of computer based…

  14. Common data buffer

    NASA Technical Reports Server (NTRS)

    Byrne, F.

    1981-01-01

    Time-shared interface speeds data processing in distributed computer network. Two-level high-speed scanning approach routes information to buffer, portion of which is reserved for series of "first-in, first-out" memory stacks. Buffer address structure and memory are protected from noise or failed components by error correcting code. System is applicable to any computer or processing language.
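
    As a toy illustration of the first-in, first-out idea described above, the Python sketch below implements a fixed-size FIFO ring buffer. The two-level scanning scheme and the error-correcting code of the NASA system are not modeled.

      from collections import deque

      class FifoBuffer:
          def __init__(self, capacity: int):
              self.slots = deque(maxlen=capacity)   # oldest entries drop off when full

          def put(self, word):
              self.slots.append(word)

          def get(self):
              return self.slots.popleft() if self.slots else None

      buf = FifoBuffer(capacity=4)
      for w in ["a", "b", "c"]:
          buf.put(w)
      print(buf.get(), buf.get())   # -> a b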

  15. The Operating System Jungle: Finding a Common Path Keeps Getting More Difficult.

    ERIC Educational Resources Information Center

    Pournelle, Jerry

    1984-01-01

    Describes the computer field before the advent of CP/M (Control Program for Microcomputers), an operating system which facilitated compatibility between different computers. CP/M's functions and flaws and the advent of Apple DOS and UCSD Pascal, two additional widely used operating systems, and the significance of their development are also…

  16. Construction of Logarithm Tables for Galois Fields

    ERIC Educational Resources Information Center

    Torres-Jimenez, Jose; Rangel-Valdez, Nelson; Gonzalez-Hernandez, Ana Loreto; Avila-George, Himer

    2011-01-01

    A branch of mathematics commonly used in cryptography is Galois Fields GF(p[superscript n]). Two basic operations performed in GF(p[superscript n]) are the addition and the multiplication. While the addition is generally easy to compute, the multiplication requires a special treatment. A well-known method to compute the multiplication is based on…
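
    The log-table method referred to above is easy to demonstrate for the prime field GF(p): pick a primitive root g, tabulate discrete logarithms, and reduce multiplication to an addition of logs modulo p - 1. The sketch below shows only the prime-field case; extension fields GF(p^n) work analogously with a primitive polynomial.

      def build_tables(p: int, g: int):
          antilog = [pow(g, k, p) for k in range(p - 1)]     # antilog[k] = g^k mod p
          log = {antilog[k]: k for k in range(p - 1)}        # log[g^k] = k
          return log, antilog

      def gf_mul(a: int, b: int, p: int, log, antilog) -> int:
          if a == 0 or b == 0:
              return 0
          return antilog[(log[a] + log[b]) % (p - 1)]

      p, g = 7, 3                                            # 3 is a primitive root mod 7
      log, antilog = build_tables(p, g)
      assert gf_mul(4, 5, p, log, antilog) == (4 * 5) % p    # 20 mod 7 = 6
      print(log, antilog)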

  17. Computer-Based Information System Cultivated To Support a College of Education.

    ERIC Educational Resources Information Center

    Smith, Gary R.

    This brief paper discusses four of the computer applications explored at Wayne State University over the past decade to provide alternative solutions to problems commonly encountered in teacher education and in providing support for the classroom teacher. These studies examined only databases that are available in the public domain; obtained…

  18. Scientific Computing for Chemists: An Undergraduate Course in Simulations, Data Processing, and Visualization

    ERIC Educational Resources Information Center

    Weiss, Charles J.

    2017-01-01

    The Scientific Computing for Chemists course taught at Wabash College teaches chemistry students to use the Python programming language, Jupyter notebooks, and a number of common Python scientific libraries to process, analyze, and visualize data. Assuming no prior programming experience, the course introduces students to basic programming and…

  19. CCSSM Challenge: Graphing Ratio and Proportion

    ERIC Educational Resources Information Center

    Kastberg, Signe E.; D'Ambrosio, Beatriz S.; Lynch-Davis, Kathleen; Mintos, Alexia; Krawczyk, Kathryn

    2013-01-01

    A renewed emphasis was placed on ratio and proportional reasoning in the middle grades in the Common Core State Standards for Mathematics (CCSSM). The expectation for students includes the ability to not only compute and then compare and interpret the results of computations in context but also interpret ratios and proportions as they are…

  20. Computer-Based Methods for Collecting Peer Nomination Data: Utility, Practice, and Empirical Support

    ERIC Educational Resources Information Center

    van den Berg, Yvonne H. M.; Gommans, Rob

    2017-01-01

    New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the…

  1. Studying Parental Decision Making with Micro-Computers: The CPSI Technique.

    ERIC Educational Resources Information Center

    Holden, George W.

    A technique for studying how parents think, make decisions, and solve childrearing problems, Computer-Presented Social Interactions (CPSI), is described. Two studies involving CPSI are presented. The first study concerns a common parental cognitive task: causal analysis of an undesired behavior. The task was to diagnose the cause of non-contingent…

  2. Artificial Intelligence, Computational Thinking, and Mathematics Education

    ERIC Educational Resources Information Center

    Gadanidis, George

    2017-01-01

    Purpose: The purpose of this paper is to examine the intersection of artificial intelligence (AI), computational thinking (CT), and mathematics education (ME) for young students (K-8). Specifically, it focuses on three key elements that are common to AI, CT and ME: agency, modeling of phenomena and abstracting concepts beyond specific instances.…

  3. Motor Abilities in Autism: A Review Using a Computational Context

    ERIC Educational Resources Information Center

    Gowen, Emma; Hamilton, Antonia

    2013-01-01

    Altered motor behaviour is commonly reported in Autism Spectrum Disorder, but the aetiology remains unclear. Here, we have taken a computational approach in order to break down motor control into different components and review the functioning of each process. Our findings suggest abnormalities in two areas--poor integration of information for…

  4. Animated Pedagogical Agents: A Review of Agent Technology Software in Electronic Learning Environments

    ERIC Educational Resources Information Center

    Govindasamy, Malliga K.

    2014-01-01

    Agent technology has become one of the dynamic and most interesting areas of computer science in recent years. The dynamism of this technology has resulted in computer generated characters, known as pedagogical agent, entering the digital learning environments in increasing numbers. Commonly deployed in implementing tutoring strategies, these…

  5. Effective Computer-Aided Assessment of Mathematics; Principles, Practice and Results

    ERIC Educational Resources Information Center

    Greenhow, Martin

    2015-01-01

    This article outlines some key issues for writing effective computer-aided assessment (CAA) questions in subjects with substantial mathematical or statistical content, especially the importance of control of random parameters and the encoding of wrong methods of solution (mal-rules) commonly used by students. The pros and cons of using CAA and…

  6. Business and Technology Concepts--Business Computations. Teacher's Guide.

    ERIC Educational Resources Information Center

    Illinois State Board of Education, Springfield. Dept. of Adult, Vocational and Technical Education.

    This Illinois State Board of Education teacher's guide on business computations is for students enrolled in the 9th or 10th grade. The course provides a foundation in arithmetic skills and their applications to common business problems for the senior high school vocational business courses. The curriculum guide includes teacher and student…

  7. Fractal Simulations of African Design in Pre-College Computing Education

    ERIC Educational Resources Information Center

    Eglash, Ron; Krishnamoorthy, Mukkai; Sanchez, Jason; Woodbridge, Andrew

    2011-01-01

    This article describes the use of fractal simulations of African design in a high school computing class. Fractal patterns--repetitions of shape at multiple scales--are a common feature in many aspects of African design. In African architecture we often see circular houses grouped in circular complexes, or rectangular houses in rectangular…

  8. Probing Student Teachers' Subject Content Knowledge in Chemistry: Case Studies Using Dynamic Computer Models

    ERIC Educational Resources Information Center

    Toplis, Rob

    2008-01-01

    This paper reports case study research into the knowledge and understanding of chemistry for six secondary science student teachers. It combines innovative student-generated computer animations, using "ChemSense" software, with interviews to probe understanding of four common chemical processes used in the secondary school curriculum. Findings…

  9. A Choice of Terminals: Spatial Patterning in Computer Laboratories

    ERIC Educational Resources Information Center

    Spennemann, Dirk; Cornforth, David; Atkinson, John

    2007-01-01

    Purpose: This paper seeks to examine the spatial patterns of student use of machines in each laboratory to determine whether there are underlying commonalities. Design/methodology/approach: The research was carried out by assessing user behaviour in 16 computer laboratories at a regional university in Australia. Findings: The study found that computers…

  10. Coupling Molecular Modeling to the Traditional "IR-ID" Exercise in the Introductory Organic Chemistry Laboratory

    ERIC Educational Resources Information Center

    Stokes-Huby, Heather; Vitale, Dale E.

    2007-01-01

    This exercise integrates the infrared unknown identification ("IR-ID") experiment common to most organic laboratory syllabi with computer molecular modeling. In this modification students are still required to identify unknown compounds from their IR spectra, but must additionally match some of the absorptions with computed frequencies they…

  11. Scalable Vector Media-processors for Embedded Systems

    DTIC Science & Technology

    2002-05-01

    …Set Architecture for Multimedia. “When you do the common things in life in an uncommon way, you will command the attention of the world.” (George …) [ABHS89] M. August, G. Brost, C. Hsiung, and C. Schiffleger. Cray X-MP: The Birth of a Supercomputer. IEEE Computer, 22(1):45–52, January …

  12. BASIC, Logo, and Pilot: A Comparison of Three Computer Languages.

    ERIC Educational Resources Information Center

    Maddux, Cleborne D.; Cummings, Rhoda E.

    1985-01-01

    Following a brief history of Logo, BASIC, and Pilot programing languages, common educational programing tasks (input from keyboard, evaluation of keyboard input, and computation) are presented in each language to illustrate how each can be used to perform the same tasks and to demonstrate each language's strengths and weaknesses. (MBR)

  13. A Computer Algebra Approach to Solving Chemical Equilibria in General Chemistry

    ERIC Educational Resources Information Center

    Kalainoff, Melinda; Lachance, Russ; Riegner, Dawn; Biaglow, Andrew

    2012-01-01

    In this article, we report on a semester-long study of the incorporation into our general chemistry course, of advanced algebraic and computer algebra techniques for solving chemical equilibrium problems. The method presented here is an alternative to the commonly used concentration table method for describing chemical equilibria in general…
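
    As a hedged illustration of the computer-algebra approach mentioned above, the SymPy sketch below solves a weak-acid equilibrium symbolically instead of working through a concentration (ICE) table by hand. The Ka and initial concentration are example values, not taken from the article.

      import sympy as sp

      x = sp.symbols("x", positive=True)   # equilibrium [H+] in mol/L
      Ka = sp.Rational(18, 1_000_000)      # 1.8e-5 (acetic acid, example value)
      C0 = sp.Rational(1, 10)              # 0.10 M initial acid

      # HA <=> H+ + A-  with  Ka = x*x / (C0 - x)
      equilibrium = sp.Eq(Ka, x * x / (C0 - x))
      solutions = sp.solve(equilibrium, x)
      h_plus = min(sol for sol in solutions if sol.is_positive and sol < C0)
      print("[H+] =", sp.N(h_plus, 4), "pH =", sp.N(-sp.log(h_plus, 10), 4))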

  14. Speech Development of Autistic Children by Interactive Computer Games

    ERIC Educational Resources Information Center

    Rahman, Mustafizur; Ferdous, S. M.; Ahmed, Syed Ishtiaque; Anwar, Anika

    2011-01-01

    Purpose: Speech disorder is one of the most common problems found with autistic children. The purpose of this paper is to investigate the introduction of computer-based interactive games along with the traditional therapies in order to help improve the speech of autistic children. Design/methodology/approach: From analysis of the works of Ivar…

  15. p88110: A Graphical Simulator for Computer Architecture and Organization Courses

    ERIC Educational Resources Information Center

    Garcia, M. I.; Rodriguez, S.; Perez, A.; Garcia, A.

    2009-01-01

    Studying fundamental Computer Architecture and Organization topics requires a significant amount of practical work if students are to acquire a good grasp of the theoretical concepts presented in classroom lectures or textbooks. The use of simulators is commonly adopted in order to reach this objective. However, as most of the available…

  16. Beach Profile Analysis System (BPAS). Volume III. BPAS User’s Guide: Analysis Module SURVY1.

    DTIC Science & Technology

    1982-06-01

    extrapolated using the two seawardmost points. Before computing volume changes, common bounds are established relative to the landward and seaward extent… Cyber 176 or equivalent computer. Such features include the 10-character, 60-bit word size, the FORTRAN-callable sort routine (interfacing with the NOS

  17. Microcomputer Based Computer-Assisted Learning System: CASTLE.

    ERIC Educational Resources Information Center

    Garraway, R. W. T.

    The purpose of this study was to investigate the extent to which a sophisticated computer assisted instruction (CAI) system could be implemented on the type of microcomputer system currently found in the schools. A method was devised for comparing CAI languages and was used to rank five common CAI languages. The highest ranked language, NATAL,…

  18. Electronic Structure at Electrode/Electrolyte Interfaces in Magnesium based Batteries

    NASA Astrophysics Data System (ADS)

    Balachandran, Janakiraman; Siegel, Donald

    2015-03-01

    Magnesium is a promising multivalent element for use in next generation electrochemical energy storage systems. However, a wide range of challenges such as low coulombic efficiency, low/varying capacity and cyclability need to be resolved in order to realize Mg based batteries. Many of these issues can be related to interfacial phenomena between the Mg anode and common electrolytes. Ab-initio based computational models of these interfaces can provide insights on the interfacial interactions that can be difficult to probe experimentally. In this work we present ab-initio computations of common electrolyte solvents (THF, DME) in contact with two model electrode surfaces, namely (i) an "SEI-free" electrode based on Mg metal and (ii) a "passivated" electrode consisting of MgO. We perform GW calculations to predict the reorganization of the molecular orbitals (HOMO/LUMO) upon contact with these surfaces and their alignment with respect to the Fermi energy of the electrodes. These computations are in turn compared with more efficient GGA (PBE) and hybrid (HSE) functional calculations. The results obtained from these computations enable us to qualitatively describe the stability of these solvent molecules at electrode-electrolyte interfaces.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D.; McInnes, L. C.; Woodward, C.

    This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.

  20. Computer and Internet usage by Canadian dentists.

    PubMed

    Flores-Mir, Carlos; Palmer, Neal G; Northcott, Herbert C; Huston, Carolyn; Major, Paul W

    2006-03-01

    To determine the frequency of computers in Canadian dental offices and to assess their use; to evaluate Internet access and use in Canadian dental offices; and to compare use of computers and the Internet by Canadian dentists, by the general public and by other dental groups. An anonymous, self-administered survey of Canadian dentists was conducted by mail. A potential mailing list of 14,052 active Canadian dentists was compiled from the 2003 records of provincial regulatory bodies. For each province, 7.8% of the general dentists were randomly selected with the help of computer software. The surveys were mailed to this stratified random sample of 1,096 dentists. The response rate was 28%. Of the 312 respondents, 4 (1%) were in full-time academic positions, 15 (5%) were not practising, and 9 (3%) provided incomplete data. Therefore, 284 survey responses were available for descriptive analysis. Two hundred and fifty-seven (90%) of the respondents had a computer in their primary practice. Computers were used mainly for administrative tasks (accounting, bookkeeping and scheduling) rather than clinical tasks. Internet access was common (185/250 or 74%), and high-speed Internet access (93/250 or 37%) was increasingly common, judging from the results of previous studies on computer use. The main reasons given for not having in-office Internet access were security or privacy concerns and no reported need for or interest in the service. Computer use was high in this sample of Canadian dentists, but a small proportion of dental offices remained without computers. Canadian dentists' use of the Internet was greater than that of American dentists, private enterprise and the North American public in general.

  1. Computer-assisted learning in critical care: from ENIAC to HAL.

    PubMed

    Tegtmeyer, K; Ibsen, L; Goldstein, B

    2001-08-01

    Computers are commonly used to serve many functions in today's modern intensive care unit. One of the most intriguing and perhaps most challenging applications of computers has been to attempt to improve medical education. With the introduction of the first computer, medical educators began looking for ways to incorporate their use into the modern curriculum. Prior limitations of cost and complexity of computers have consistently decreased since their introduction, making it increasingly feasible to incorporate computers into medical education. Simultaneously, the capabilities and capacities of computers have increased. Combining the computer with other modern digital technology has allowed the development of more intricate and realistic educational tools. The purpose of this article is to briefly describe the history and use of computers in medical education with special reference to critical care medicine. In addition, we will examine the role of computers in teaching and learning and discuss the types of interaction between the computer user and the computer.

  2. Color graphics, interactive processing, and the supercomputer

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, Rudeen

    1987-01-01

    The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.

  3. Light reflection models for computer graphics.

    PubMed

    Greenberg, D P

    1989-04-14

    During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.
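
    As a small illustration of the "direct lighting only" starting point the article describes, the Python sketch below evaluates a Lambertian diffuse term plus a Phong specular term for a single point light. The global effects discussed in the article (interreflection, shadows) would require ray tracing, radiosity, or Monte Carlo methods on top of this.

      import numpy as np

      def normalize(v):
          return v / np.linalg.norm(v)

      def shade(normal, light_dir, view_dir, kd=0.7, ks=0.3, shininess=32):
          n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
          diffuse = kd * max(np.dot(n, l), 0.0)                 # Lambertian term
          r = 2.0 * np.dot(n, l) * n - l                        # mirror reflection of l about n
          specular = ks * max(np.dot(r, v), 0.0) ** shininess   # Phong highlight
          return diffuse + specular

      print(shade(np.array([0, 0, 1.0]), np.array([1, 1, 1.0]), np.array([0, 0, 1.0])))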

  4. Desiderata for computable representations of electronic health records-driven phenotype algorithms.

    PubMed

    Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Denny, Joshua C; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A

    2015-11-01

    Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
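
    As a rough illustration of several of these desiderata (value sets, set-style criteria, and a temporal relation between events), the sketch below encodes a toy phenotype rule; the codes, field layout, and helper names are hypothetical and are not drawn from the paper or from any PheKB algorithm.

    # Illustrative-only sketch of a structured phenotype rule.
    from datetime import date

    T2DM_CODES = {"E11.9", "E11.65"}      # hypothetical diagnosis value set
    METFORMIN_CODES = {"RX860975"}        # hypothetical medication value set

    def has_phenotype(diagnoses, medications):
        """diagnoses/medications: iterables of (code, date) tuples."""
        dx_dates = [d for c, d in diagnoses if c in T2DM_CODES]
        rx_dates = [d for c, d in medications if c in METFORMIN_CODES]
        # Set-style criterion: at least one qualifying diagnosis AND one
        # qualifying medication, with the medication on or after the first
        # diagnosis (a simple temporal relation).
        return bool(dx_dates) and any(r >= min(dx_dates) for r in rx_dates)

    print(has_phenotype([("E11.9", date(2014, 3, 1))],
                        [("RX860975", date(2014, 5, 2))]))   # True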

  5. Integrative Computational Network Analysis Reveals Site-Specific Mediators of Inflammation in Alzheimer's Disease

    PubMed Central

    Ravichandran, Srikanth; Michelucci, Alessandro; del Sol, Antonio

    2018-01-01

    Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common causes of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network-based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as the tumor necrosis factor alpha (TNF-α)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases. PMID:29551980

  6. Integrative Computational Network Analysis Reveals Site-Specific Mediators of Inflammation in Alzheimer's Disease.

    PubMed

    Ravichandran, Srikanth; Michelucci, Alessandro; Del Sol, Antonio

    2018-01-01

    Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common causes of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network-based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as the tumor necrosis factor alpha (TNF-α)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases.

  7. Consolidation and development roadmap of the EMI middleware

    NASA Astrophysics Data System (ADS)

    Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.

    2012-12-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.

  8. Wildlife software: procedures for publication of computer software

    USGS Publications Warehouse

    Samuel, M.D.

    1990-01-01

    Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed most users of wildlife software learn of new programs or important changes only by word of mouth.

  9. Design and Execution of make-like, distributed Analyses based on Spotify’s Pipelining Package Luigi

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Fischer, B.; Fischer, R.; Rieger, M.

    2017-10-01

    In high-energy particle physics, workflow management systems are primarily used as tailored solutions in dedicated areas such as Monte Carlo production. However, physicists performing data analyses are usually required to steer their individual workflows manually, which is time-consuming and often leads to undocumented relations between particular workloads. We present a generic analysis design pattern that copes with the sophisticated demands of end-to-end HEP analyses and provides a make-like execution system. It is based on the open-source pipelining package Luigi, which was developed at Spotify and enables the definition of arbitrary workloads, so-called Tasks, and the dependencies between them in a lightweight and scalable structure. Further features are multi-user support, automated dependency resolution and error handling, central scheduling, and status visualization on the web. In addition to already built-in features for remote jobs and file systems like Hadoop and HDFS, we added support for WLCG infrastructure such as LSF and CREAM job submission, as well as remote file access through the Grid File Access Library. Furthermore, we implemented automated resubmission functionality, software sandboxing, and a command line interface with auto-completion for a convenient working environment. For the implementation of a $t\bar{t}H$ cross-section measurement, we created a generic Python interface that provides programmatic access to all external information such as datasets, physics processes, statistical models, and additional files and values. In summary, the setup enables the execution of the entire analysis in a parallelized and distributed fashion with a single command.
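
    A minimal sketch of the make-like Task/dependency pattern described above, using the open-source Luigi package; the task names, parameters, and file targets are illustrative assumptions and do not reproduce the authors' WLCG-specific extensions.

    import luigi

    class Selection(luigi.Task):
        dataset = luigi.Parameter()

        def output(self):
            return luigi.LocalTarget(f"selected_{self.dataset}.txt")

        def run(self):
            with self.output().open("w") as f:
                f.write(f"events selected from {self.dataset}\n")

    class Histogram(luigi.Task):
        dataset = luigi.Parameter(default="ttH_sim")

        def requires(self):              # Luigi resolves this dependency automatically
            return Selection(dataset=self.dataset)

        def output(self):
            return luigi.LocalTarget(f"hist_{self.dataset}.txt")

        def run(self):
            with self.input().open() as fin, self.output().open("w") as fout:
                fout.write("histogram of: " + fin.read())

    if __name__ == "__main__":
        luigi.build([Histogram()], local_scheduler=True)

    Asking for Histogram alone is enough: Luigi runs Selection first because requires() declares the dependency, which mirrors the automated dependency resolution the abstract describes.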

  10. VOMS/VOMRS utilization patterns and convergence plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceccanti, A.; /INFN, CNAF; Ciaschini, V.

    2010-01-01

    The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs). The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grid for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing membership information of a subject within a VO. VOMS allows users to be partitioned into groups and assigned roles and free-form attributes, which are then used to drive authorization decisions. The VOMS administrative application, VOMS-Admin, manages and populates the VOMS database with membership information. The Virtual Organization Management Registration Service (VOMRS), developed at Fermilab, extends the basic registration and management functionalities present in VOMS-Admin. It implements a registration workflow that requires VO usage policy acceptance and membership approval by administrators. VOMRS supports management of multiple grid certificates, handling of users' requests for group and role assignments, and membership status. VOMRS is capable of interfacing to local systems with personnel information (e.g. the CERN Human Resource Database) and of pulling relevant member information from them. VOMRS synchronizes the relevant subset of information with VOMS. The recent development of new features in VOMS-Admin raises the possibility of rationalizing the support and converging on a single solution by continuing and extending existing collaborations between EGEE and OSG. Such a strategy is supported by WLCG, OSG, US CMS, US Atlas, and other stakeholders worldwide. In this paper, we will analyze features in use by major experiments and the use cases for registration addressed by the mature single solution.

  11. VOMS/VOMRS utilization patterns and convergence plan

    NASA Astrophysics Data System (ADS)

    Ceccanti, A.; Ciaschini, V.; Dimou, M.; Garzoglio, G.; Levshina, T.; Traylen, S.; Venturi, V.

    2010-04-01

    The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs). The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grid for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing membership information of a subject within a VO. VOMS allows users to be partitioned into groups and assigned roles and free-form attributes, which are then used to drive authorization decisions. The VOMS administrative application, VOMS-Admin, manages and populates the VOMS database with membership information. The Virtual Organization Management Registration Service (VOMRS), developed at Fermilab, extends the basic registration and management functionalities present in VOMS-Admin. It implements a registration workflow that requires VO usage policy acceptance and membership approval by administrators. VOMRS supports management of multiple grid certificates, handling of users' requests for group and role assignments, and membership status. VOMRS is capable of interfacing to local systems with personnel information (e.g. the CERN Human Resource Database) and of pulling relevant member information from them. VOMRS synchronizes the relevant subset of information with VOMS. The recent development of new features in VOMS-Admin raises the possibility of rationalizing the support and converging on a single solution by continuing and extending existing collaborations between EGEE and OSG. Such a strategy is supported by WLCG, OSG, US CMS, US Atlas, and other stakeholders worldwide. In this paper, we will analyze features in use by major experiments and the use cases for registration addressed by the mature single solution.

  12. Approximation-based common principal component for feature extraction in multi-class brain-computer interfaces.

    PubMed

    Hoang, Tuan; Tran, Dat; Huang, Xu

    2013-01-01

    Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes based on subspace union and covariance matrix similarity do not provide a high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace resembled from original subspaces and the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a used in BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method when combining with Support Vector Machines outperforms CSP-based feature extraction methods on the experimental dataset.
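
    For orientation, the sketch below shows the standard two-class CSP feature extraction that the paper takes as its starting point, not the proposed ACPC method; the array shapes and the number of retained filter pairs are assumptions.

    import numpy as np
    from scipy.linalg import eigh

    def class_covariance(trials):
        """trials: array of shape (n_trials, n_channels, n_samples)."""
        covs = [np.cov(t) / np.trace(np.cov(t)) for t in trials]
        return np.mean(covs, axis=0)

    def csp_filters(trials_a, trials_b, n_pairs=3):
        Ca, Cb = class_covariance(trials_a), class_covariance(trials_b)
        # Generalized eigenproblem Ca w = lambda (Ca + Cb) w
        vals, vecs = eigh(Ca, Ca + Cb)
        order = np.argsort(vals)
        keep = np.concatenate([order[:n_pairs], order[-n_pairs:]])
        return vecs[:, keep].T                     # spatial filters

    def csp_features(trial, filters):
        z = filters @ trial                        # project one trial
        var = np.var(z, axis=1)
        return np.log(var / var.sum())             # log-variance features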

  13. Impact of computer use on children's vision.

    PubMed

    Kozeis, N

    2009-10-01

    Today, millions of children use computers on a daily basis. Extensive viewing of the computer screen can lead to eye discomfort, fatigue, blurred vision and headaches, dry eyes and other symptoms of eyestrain. These symptoms may be caused by poor lighting, glare, an improper work station set-up, vision problems of which the person was not previously aware, or a combination of these factors. Children can experience many of the same symptoms related to computer use as adults. However, some unique aspects of how children use computers may make them more susceptible than adults to the development of these problems. In this study, the most common eye symptoms related to computer use in childhood, the possible causes and ways to avoid them are reviewed.

  14. Computer validation in toxicology: historical review for FDA and EPA good laboratory practice.

    PubMed

    Brodish, D L

    1998-01-01

    The application of computer validation principles to Good Laboratory Practice is a fairly recent phenomenon. As automated data collection systems have become more common in toxicology facilities, the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency have begun to focus inspections in this area. This historical review documents the development of regulatory guidance on computer validation in toxicology over the past several decades. An overview of the components of a computer life cycle is presented, including the development of systems descriptions, validation plans, validation testing, system maintenance, SOPs, change control, security considerations, and system retirement. Examples are provided for implementation of computer validation principles on laboratory computer systems in a toxicology facility.

  15. Indirection and computer security.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, Michael J.

    2011-09-01

    The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

  16. Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination

    PubMed Central

    Jana, Sumitash; Gopal, Atul

    2016-01-01

    Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability and not the means of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures. NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements. PMID:27784809
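
    A toy simulation can make the "common accumulator followed by noisy effector-specific delays" architecture described above tangible; every parameter value below is an arbitrary assumption, not an estimate from the study.

    import numpy as np

    rng = np.random.default_rng(0)

    def common_accumulator_rt(n_trials=5000, drift=0.08, noise=1.0,
                              threshold=30.0, dt=1.0,
                              eye_delay=(50, 5), hand_delay=(90, 10)):
        eye_rt, hand_rt = [], []
        for _ in range(n_trials):
            x, t = 0.0, 0.0
            while x < threshold:               # single shared evidence signal
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            eye_rt.append(t + rng.normal(*eye_delay))    # noisy eye delay
            hand_rt.append(t + rng.normal(*hand_delay))  # noisy hand delay
        return np.array(eye_rt), np.array(hand_rt)

    eye, hand = common_accumulator_rt()
    print("eye mean/sd:", eye.mean(), eye.std())
    print("hand mean/sd:", hand.mean(), hand.std())
    print("eye-hand RT correlation:", np.corrcoef(eye, hand)[0, 1])

    Because both effectors share one accumulator, the simulated eye and hand RTs come out strongly correlated; an interactive or fully separate architecture would be sketched differently.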

  17. [Clinical skills and outcomes of chair-side computer aided design and computer aided manufacture system].

    PubMed

    Yu, Q

    2018-04-09

    Computer aided design and computer aided manufacture (CAD/CAM) technology is a kind of oral digital system which is applied to clinical diagnosis and treatment. It overturns the traditional pattern, and provides a solution to restore defect tooth quickly and efficiently. In this paper we mainly discuss the clinical skills of chair-side CAD/CAM system, including tooth preparation, digital impression, the three-dimensional design of prosthesis, numerical control machining, clinical bonding and so on, and review the outcomes of several common kinds of materials at the same time.

  18. Computing and information services at the Jet Propulsion Laboratory - A management approach to a diversity of needs

    NASA Technical Reports Server (NTRS)

    Felberg, F. H.

    1984-01-01

    The Jet Propulsion Laboratory, a research and development organization with about 5,000 employees, presents a complicated set of requirements for an institutional system of computing and informational services. The approach taken by JPL in meeting this challenge is one of controlled flexibility. A central communications network is provided, together with selected computing facilities for common use. At the same time, staff members are given considerable discretion in choosing the mini- and microcomputers that they believe will best serve their needs. Consultation services, computer education, and other support functions are also provided.

  19. Coal-seismic computer programs in BASIC: Part I; Store, plot, and edit array data

    USGS Publications Warehouse

    Hasbrouck, Wilfred P.

    1979-01-01

    Processing of geophysical data taken with the U.S. Geological Survey's coal-seismic system is done with a desk-top, stand-alone computer. Programs for this computer are written in an extended BASIC language specially augmented for acceptance by the Tektronix 4051 Graphic System. This report presents five computer programs used to store, plot, and edit array data for the line, cross, and triangle arrays commonly employed in our coal-seismic investigations. * Use of brand names in this report is for descriptive purposes only and does not constitute endorsement by the U.S. Geological Survey.

  20. Consistent and efficient processing of ADCP streamflow measurements

    USGS Publications Warehouse

    Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements, independent of the ADCP manufacturer, are being developed in a software program that can process moving-boat discharge measurements regardless of the ADCP used to collect the data.

  1. Cooperative combinatorial optimization: evolutionary computation case study.

    PubMed

    Burgin, Mark; Eberbach, Eugene

    2008-01-01

    This paper presents a formalization of the notion of cooperation and competition of multiple systems that work toward a common optimization goal of the population using evolutionary computation techniques. It is proved that evolutionary algorithms are more expressive than conventional recursive algorithms, such as Turing machines. Three classes of evolutionary computations are introduced and studied: bounded finite, unbounded finite, and infinite computations. Universal evolutionary algorithms are constructed. Such properties of evolutionary algorithms as completeness, optimality, and search decidability are examined. A natural extension of evolutionary Turing machine (ETM) model is proposed to properly reflect phenomena of cooperation and competition in the whole population.

  2. A computer-based physics laboratory apparatus: Signal generator software

    NASA Astrophysics Data System (ADS)

    Thanakittiviroon, Tharest; Liangrocapart, Sompong

    2005-09-01

    This paper describes a computer-based physics laboratory apparatus to replace expensive instruments such as high-precision signal generators. This apparatus uses a sound card in a common personal computer to give sinusoidal signals with an accurate frequency that can be programmed to give different frequency signals repeatedly. An experiment on standing waves on an oscillating string uses this apparatus. In conjunction with interactive lab manuals, which have been developed using personal computers in our university, we achieve a complete set of low-cost, accurate, and easy-to-use equipment for teaching a physics laboratory.
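
    In the same spirit as the apparatus described above, an ordinary sound card can be driven as a programmable sine-wave source from a few lines of code; the sketch below assumes the third-party sounddevice package and a 44.1 kHz output device, and the frequencies are arbitrary.

    import numpy as np
    import sounddevice as sd

    def play_sine(freq_hz, duration_s=2.0, samplerate=44100, amplitude=0.5):
        t = np.arange(int(duration_s * samplerate)) / samplerate
        tone = amplitude * np.sin(2 * np.pi * freq_hz * t)
        sd.play(tone.astype(np.float32), samplerate)
        sd.wait()                            # block until playback finishes

    # Step through frequencies, e.g. to find standing-wave resonances on a string.
    for f in (40, 60, 80, 100):
        play_sine(f)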

  3. Hispanic women overcoming deterrents to computer science: A phenomenological study

    NASA Astrophysics Data System (ADS)

    Herling, Lourdes

    The products of computer science are important to all aspects of society and are tools in the solution of the world's problems. It is, therefore, troubling that the United States faces a shortage of qualified graduates in computer science. The number of women and minorities in computer science is significantly lower than the percentage of the U.S. population which they represent. The overall enrollment in computer science programs has continued to decline, with the enrollment of women declining at a higher rate than that of men. This study addressed three aspects of underrepresentation about which there has been little previous research: addressing computing disciplines specifically rather than embedding them within the STEM disciplines, what attracts women and minorities to computer science, and addressing the issues of race/ethnicity and gender in conjunction rather than in isolation. Since women of underrepresented ethnicities are more severely underrepresented than women in general, it is important to consider whether race and ethnicity play a role in addition to gender, as has been suggested by previous research. Therefore, this study examined what attracted Hispanic women to computer science specifically. The study determined whether being subjected to multiple marginalizations (female and Hispanic) played a role in the experiences of Hispanic women currently in computer science. The study found five emergent themes within the experiences of Hispanic women in computer science. Encouragement and role models strongly influenced not only the participants' choice to major in the field, but also their persistence. Most of the participants experienced a negative atmosphere and feelings of not fitting in while in college and industry. The interdisciplinary nature of computer science was the most common aspect that attracted the participants to computer science. The aptitudes participants commonly believed are needed for success in computer science are the twenty-first-century skills of problem solving, creativity, and critical thinking. While not all the participants had experience with computers or programming prior to attending college, experience played a role in the self-confidence of those who did.

  4. An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

    PubMed

    Crouser, R J; Chang, R

    2012-12-01

    Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.

  5. Using NERSC High-Performance Computing (HPC) systems for high-energy nuclear physics applications with ALICE

    NASA Astrophysics Data System (ADS)

    Fasel, Markus

    2016-10-01

    High-Performance Computing Systems are powerful tools tailored to support large-scale applications that rely on low-latency inter-process communications to run efficiently. By design, these systems often impose constraints on application workflows, such as limited external network connectivity and whole node scheduling, that make more general-purpose computing tasks, such as those commonly found in high-energy nuclear physics applications, more difficult to carry out. In this work, we present a tool designed to simplify access to such complicated environments by handling the common tasks of job submission, software management, and local data management, in a framework that is easily adaptable to the specific requirements of various computing systems. The tool, initially constructed to process stand-alone ALICE simulations for detector and software development, was successfully deployed on the NERSC computing systems, Carver, Hopper and Edison, and is being configured to provide access to the next generation NERSC system, Cori. In this report, we describe the tool and discuss our experience running ALICE applications on NERSC HPC systems. The discussion will include our initial benchmarks of Cori compared to other systems and our attempts to leverage the new capabilities offered with Cori to support data-intensive applications, with a future goal of full integration of such systems into ALICE grid operations.

  6. Methane Adsorption in Zr-Based MOFs: Comparison and Critical Evaluation of Force Fields

    PubMed Central

    2017-01-01

    The search for nanoporous materials that are highly performing for gas storage and separation is one of the contemporary challenges in material design. The computational tools to aid these experimental efforts are widely available, and adsorption isotherms are routinely computed for huge sets of (hypothetical) frameworks. Clearly the computational results depend on the interactions between the adsorbed species and the adsorbent, which are commonly described using force fields. In this paper, an extensive comparison and in-depth investigation of several force fields from literature is reported for the case of methane adsorption in the Zr-based Metal–Organic Frameworks UiO-66, UiO-67, DUT-52, NU-1000, and MOF-808. Significant quantitative differences in the computed uptake are observed when comparing different force fields, but most qualitative features are common which suggests some predictive power of the simulations when it comes to these properties. More insight into the host–guest interactions is obtained by benchmarking the force fields with an extensive number of ab initio computed single molecule interaction energies. This analysis at the molecular level reveals that especially ab initio derived force fields perform well in reproducing the ab initio interaction energies. Finally, the high sensitivity of uptake predictions on the underlying potential energy surface is explored. PMID:29170687

  7. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
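
    The bootstrap construction of pseudo-predator signatures can be sketched as follows; the species names, signature dimensions, diet proportions, and bootstrap sizes are made-up placeholders, and the calibration-coefficient step of full QFASA is omitted.

    import numpy as np

    rng = np.random.default_rng(1)

    def pseudo_predator(prey_sigs, diet, n_boot):
        """prey_sigs: dict species -> array (n_samples, n_fatty_acids), rows sum to 1.
        diet: dict species -> proportion (sums to 1). n_boot: dict species -> sample size."""
        mixed = 0.0
        for sp, props in prey_sigs.items():
            idx = rng.integers(0, props.shape[0], size=n_boot[sp])   # bootstrap rows
            mixed = mixed + diet[sp] * props[idx].mean(axis=0)
        return mixed / mixed.sum()                                   # renormalize

    prey = {"seal": rng.dirichlet(np.ones(10), 40),
            "walrus": rng.dirichlet(np.ones(10), 25)}
    sig = pseudo_predator(prey, {"seal": 0.7, "walrus": 0.3},
                          {"seal": 15, "walrus": 15})
    print(sig.round(3))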

  8. Instructor Perspectives of Multiple-Choice Questions in Summative Assessment for Novice Programmers

    ERIC Educational Resources Information Center

    Shuhidan, Shuhaida; Hamilton, Margaret; D'Souza, Daryl

    2010-01-01

    Learning to program is known to be difficult for novices. High attrition and high failure rates in foundation-level programming courses undertaken at tertiary level in Computer Science programs, are commonly reported. A common approach to evaluating novice programming ability is through a combination of formative and summative assessments, with…

  9. Common Grounds for Modelling Mathematics in Educational Software

    ERIC Educational Resources Information Center

    Neuper, Walther

    2010-01-01

    Two kinds of software, CAS and DGS, are starting to work towards mutual integration. This paper envisages common grounds for such integration based on principles of computer theorem proving (CTP). Presently, the CTP community seems to lack awareness as to which of their products' features might serve mathematics education from high-school to…

  10. Space Station Common Berthing Mechanism, a multi-body simulation application

    NASA Technical Reports Server (NTRS)

    Searle, Ian

    1993-01-01

    This paper discusses an application of multi-body dynamic analysis conducted at the Boeing Company in connection with the Space Station (SS) Common Berthing Mechanism (CBM). After introducing the hardware and analytical objectives we will focus on some of the day-to-day computational issues associated with this type of analysis.

  11. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  12. Custom blending of lamp phosphors

    NASA Technical Reports Server (NTRS)

    Klemm, R. E.

    1978-01-01

    Spectral output of fluorescent lamps can be precisely adjusted by using computer-assisted analysis for custom blending lamp phosphors. With technique, spectrum of main bank of lamps is measured and stored in computer memory along with emission characteristics of commonly available phosphors. Computer then calculates ratio of green and blue intensities for each phosphor according to manufacturer's specifications and plots them as coordinates on graph. Same ratios are calculated for measured spectrum. Once proper mix is determined, it is applied as coating to fluorescent tubing.

  13. Text Exchange System

    NASA Technical Reports Server (NTRS)

    Snyder, W. V.; Hanson, R. J.

    1986-01-01

    Text Exchange System (TES) exchanges and maintains organized textual information including source code, documentation, data, and listings. System consists of two computer programs and definition of format for information storage. Comprehensive program used to create, read, and maintain TES files. TES developed to meet three goals: First, easy and efficient exchange of programs and other textual data between similar and dissimilar computer systems via magnetic tape. Second, provide transportable management system for textual information. Third, provide common user interface, over wide variety of computing systems, for all activities associated with text exchange.

  14. Color Variations in Screen Text: Effects on Proofreading.

    ERIC Educational Resources Information Center

    Szul, Linda; Berry, Louis

    As the use of computers has become more common in society, human engineering and ergonomics have lagged behind the sciences which developed the equipment. Some research has been done in the past on the effects of screen colors on computer use efficiency, but results were inconclusive. This paper describes a study of the impact of screen color…

  15. Effects of Computer Programming on Students' Cognitive Performance: A Quantitative Synthesis.

    ERIC Educational Resources Information Center

    Liao, Yuen-Kuang Cliff

    A meta-analysis was performed to synthesize existing data concerning the effects of computer programing on cognitive outcomes of students. Sixty-five studies were located from three sources, and their quantitative data were transformed into a common scale--Effect Size (ES). The analysis showed that 58 (89%) of the study-weighted ESs were positive…

  16. Computer simulation modeling of recreation use: Current status, case studies, and future directions

    Treesearch

    David N. Cole

    2005-01-01

    This report compiles information about recent progress in the application of computer simulation modeling to planning and management of recreation use, particularly in parks and wilderness. Early modeling efforts are described in a chapter that provides an historical perspective. Another chapter provides an overview of modeling options, common data input requirements,...

  17. Why Do We Need the Derivative for the Surface Area?

    ERIC Educational Resources Information Center

    Hristova, Yulia; Zeytuncu, Yunus E.

    2016-01-01

    Surface area and volume computations are the most common applications of integration in calculus books. When computing the surface area of a solid of revolution, students are usually told to use the frustum method instead of the disc method; however, a rigorous explanation is rarely provided. In this note, we provide one by using geometric…
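
    For reference, the frustum-based surface-area formula the note discusses is the one below; dropping the derivative factor is exactly the "disc-style" shortcut that fails.

    S \;=\; 2\pi \int_a^b f(x)\,\sqrt{1 + \bigl[f'(x)\bigr]^2}\;dx
    % Omitting \sqrt{1+[f'(x)]^2} gives 2\pi\int_a^b f(x)\,dx, which is wrong in general:
    % for the sphere, f(x)=\sqrt{r^2-x^2} on [-r,r], the full formula yields 4\pi r^2,
    % while the shortcut yields only \pi^2 r^2.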

  18. Usnic Acid and the Intramolecular Hydrogen Bond: A Computational Experiment for the Organic Laboratory

    ERIC Educational Resources Information Center

    Green, Thomas K.; Lane, Charles A.

    2006-01-01

    A computational experiment is described for the organic chemistry laboratory that allows students to estimate the relative strengths of the intramolecular hydrogen bonds of usnic and isousnic acids, two related lichen secondary metabolites. Students first extract and purify usnic acid from common lichens and obtain [superscript 1]H NMR and IR…

  19. The Impact of the Computer on the English Language.

    ERIC Educational Resources Information Center

    Perry, Devern

    1990-01-01

    Study analyzed 224 product announcements from 69 hardware and software companies to detail computer-related words that are in common usage and compare the words and definitions with those in the Merriam-Webster dictionary. It was found that 67.3 percent of the words were not included in the dictionary, pointing out the need for teachers to help…

  20. Information Infrastructures for Integrated Enterprises

    DTIC Science & Technology

    1993-05-01

    …companies might consider franchising some facets of indirect labor, such as selected functions of administration, finance, and human resources. Acronyms: CAFE, Corporate Average Fuel Economy; CAD, Computer-Aided Design; CAE, Computer-Aided Engineering; CAIS, Common Ada Programming Support Environment.

  1. Flexible and Secure Computer-Based Assessment Using a Single Zip Disk

    ERIC Educational Resources Information Center

    Ko, C. C.; Cheng, C. D.

    2008-01-01

    Electronic examination systems, which include Internet-based system, require extremely complicated installation, configuration and maintenance of software as well as hardware. In this paper, we present the design and development of a flexible, easy-to-use and secure examination system (e-Test), in which any commonly used computer can be used as a…

  2. minimUML: A Minimalist Approach to UML Diagramming for Early Computer Science Education

    ERIC Educational Resources Information Center

    Turner, Scott A.; Perez-Quinones, Manuel A.; Edwards, Stephen H.

    2005-01-01

    In introductory computer science courses, the Unified Modeling Language (UML) is commonly used to teach basic object-oriented design. However, there appears to be a lack of suitable software to support this task. Many of the available programs that support UML focus on developing code and not on enhancing learning. Programs designed for…

  3. A review of classification algorithms for EEG-based brain-computer interfaces.

    PubMed

    Lotte, F; Congedo, M; Lécuyer, A; Lamarche, F; Arnaldi, B

    2007-06-01

    In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.

  4. Test Anxiety Analysis of Chinese College Students in Computer-Based Spoken English Test

    ERIC Educational Resources Information Center

    Yanxia, Yang

    2017-01-01

    Test anxiety was a commonly known or assumed factor that could greatly influence performance of test takers. With the employment of designed questionnaires and computer-based spoken English test, this paper explored test anxiety manifestation of Chinese college students from both macro and micro aspects, and found out that the major anxiety in…

  5. Getting from x to y without Crashing: Computer Syntax in Mathematics Education

    ERIC Educational Resources Information Center

    Jeffrey, David J.

    2010-01-01

    When we use technology to teach mathematics, we hope to focus on the mathematics, restricting the computer software systems to providing support for our pedagogy. It is a matter of common experience, however, that students can become distracted or frustrated by the quirks of the particular software system being used. Here, experience using the…

  6. Gender and Stereotypes in Motivation to Study Computer Programming for Careers in Multimedia

    ERIC Educational Resources Information Center

    Doube, Wendy; Lang, Catherine

    2012-01-01

    A multimedia university programme with relatively equal numbers of male and female students in elective programming subjects provided a rare opportunity to investigate female motivation to study and pursue computer programming in a career. The MSLQ was used to survey 85 participants. In common with research into deterrence of females from STEM…

  7. Suggestions for Content Selection and Presentation in High School Computer Textbooks

    ERIC Educational Resources Information Center

    Lin, Janet Mei-Chuen; Wu, Cheng-Chih

    2007-01-01

    Based on the findings from reviewing 32 textbooks in the past four years for Taiwan's Ministry of Education, we have identified common problems in the reviewed textbooks and analyzed their inadequacies. Typical problems include the Wintel bias, too much coverage of software application tools and too little of computer science concepts, too many…

  8. A Peer-Assisted Learning Experience in Computer Programming Language Learning and Developing Computer Programming Skills

    ERIC Educational Resources Information Center

    Altintas, Tugba; Gunes, Ali; Sayan, Hamiyet

    2016-01-01

    Peer learning or, as commonly expressed, peer-assisted learning (PAL) involves school students who actively assist others to learn and in turn benefit from an effective learning environment. This research was designed to support students in becoming more autonomous in their learning, help them enhance their confidence level in tackling computer…

  9. A Model for Minimizing Numeric Function Generator Complexity and Delay

    DTIC Science & Technology

    2007-12-01

    Numeric function generators (NFGs) allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise...Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This...thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods.

  10. Assessment of Selected Aspects of Teaching Programming in SK and CZ

    ERIC Educational Resources Information Center

    Záhorec, Jan; Hašková, Alena; Munk, Michal

    2014-01-01

    Authors of this paper carried out a broader international research aimed at assessing the computer science education at upper secondary level of education--ISCED 3A. The assessed school subjects were informatics and programming as the most common school subjects taught at secondary schools within computer sciences. The assessment was based on the…

  11. Beach Profile Analysis System (BPAS). Volume VII. BPAS User’s Guide: Analysis Module ELVDIS.

    DTIC Science & Technology

    1982-06-01

    changes, common bonds are established relative to the landward and seaward extent of the surveys on each profile line. The computed area under each pro...Cyber 176 or equivalent computer. Such features include the 10-character, 60-bit word size, the FORTRAN-callable sort routine (interfacing with the NOS

  12. CROMAX : a crosscut-first computer simulation program to determine cutting yield

    Treesearch

    Pamela J. Giese; Jeanne D. Danielson

    1983-01-01

    CROMAX simulates crosscut-first, then rip operations as commonly practiced in furniture manufacture. This program calculates cutting yields from individual boards based on board size and defect location. Such information can be useful in predicting yield from various grades and grade mixes thereby allowing for better management decisions in the rough mill. The computer...

  13. BIOCOMPUTATION: some history and prospects.

    PubMed

    Cull, Paul

    2013-06-01

    At first glance, biology and computer science are diametrically opposed sciences. Biology deals with carbon based life forms shaped by evolution and natural selection. Computer Science deals with electronic machines designed by engineers and guided by mathematical algorithms. In this brief paper, we review biologically inspired computing. We discuss several models of computation which have arisen from various biological studies. We show what these have in common, and conjecture how biology can still suggest answers and models for the next generation of computing problems. We discuss computation and argue that these biologically inspired models do not extend the theoretical limits on computation. We suggest that, in practice, biological models may give more succinct representations of various problems, and we mention a few cases in which biological models have proved useful. We also discuss the reciprocal impact of computer science on biology and cite a few significant contributions to biological science. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. Rotation, Reflection, and Frame Changes; Orthogonal tensors in computational engineering mechanics

    NASA Astrophysics Data System (ADS)

    Brannon, R. M.

    2018-04-01

    Whilst vast literature is available for the most common rotation-related tasks such as coordinate changes, most reference books tend to cover one or two methods, and resources for less-common tasks are scarce. Specialized research applications can be found in disparate journal articles, but a self-contained comprehensive review that covers both elementary and advanced concepts in a manner comprehensible to engineers is rare. Rotation, Reflection, and Frame Changes surveys a refreshingly broad range of rotation-related research that is routinely needed in engineering practice. By illustrating key concepts in computer source code, this book stands out as an unusually accessible guide for engineers and scientists in engineering mechanics.
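
    One recurring distinction in this material, an active rotation of a vector versus a passive change of frame, can be shown in a few lines; the sketch below is a generic numpy example, not code from the book.

    import numpy as np

    theta = np.deg2rad(30.0)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # proper orthogonal, det = +1

    v = np.array([1.0, 0.0])

    v_rotated = R @ v          # active: the vector itself is rotated by +30 degrees
    v_in_new_frame = R.T @ v   # passive: same vector, expressed in a frame rotated by +30 degrees

    print(np.round(v_rotated, 3), np.round(v_in_new_frame, 3))
    print("orthogonality check:", np.allclose(R.T @ R, np.eye(2)))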

  15. ATLAS, an integrated structural analysis and design system. Volume 3: User's manual, input and execution data

    NASA Technical Reports Server (NTRS)

    Dreisbach, R. L. (Editor)

    1979-01-01

    The input data and execution control statements for the ATLAS integrated structural analysis and design system are described. It is operational on the Control Data Corporation (CDC) 6600/CYBER computers in a batch mode or in a time-shared mode via interactive graphic or text terminals. ATLAS is a modular system of computer codes with common executive and data base management components. The system provides an extensive set of general-purpose technical programs with analytical capabilities including stiffness, stress, loads, mass, substructuring, strength design, unsteady aerodynamics, vibration, and flutter analyses. The sequence and mode of execution of selected program modules are controlled via a common user-oriented language.

  16. Neurobiological roots of language in primate audition: common computational properties.

    PubMed

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L; Rauschecker, Josef P

    2015-03-01

    Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. EMGAN: A computer program for time and frequency domain reduction of electromyographic data

    NASA Technical Reports Server (NTRS)

    Hursta, W. N.

    1975-01-01

    An experiment in electromyography utilizing surface electrode techniques was developed for the Apollo-Soyuz test project. This report describes the computer program, EMGAN, which was written to provide first order data reduction for the experiment. EMG signals are produced by the membrane depolarization of muscle fibers during a muscle contraction. Surface electrodes detect a spatially summated signal from a large number of muscle fibers commonly called an interference pattern. An interference pattern is usually so complex that analysis through signal morphology is extremely difficult if not impossible. It has become common to process EMG interference patterns in the frequency domain. Muscle fatigue and certain myopathic conditions are recognized through changes in muscle frequency spectra.
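
    The frequency-domain treatment mentioned above typically amounts to a power spectrum plus a summary statistic such as the median frequency, which shifts downward with fatigue; the sketch below uses a synthetic signal and an assumed 1 kHz sampling rate, and is not the EMGAN implementation.

    import numpy as np

    fs = 1000.0                                   # Hz, assumed sampling rate
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(2)
    emg = rng.standard_normal(t.size) * np.sin(2 * np.pi * 1.5 * t) ** 2  # toy interference pattern

    spectrum = np.abs(np.fft.rfft(emg * np.hanning(emg.size))) ** 2
    freqs = np.fft.rfftfreq(emg.size, d=1.0 / fs)

    cumulative = np.cumsum(spectrum)
    median_freq = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
    print(f"median frequency: {median_freq:.1f} Hz")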

  18. Computational Social Creativity.

    PubMed

    Saunders, Rob; Bown, Oliver

    2015-01-01

    This article reviews the development of computational models of creativity where social interactions are central. We refer to this area as computational social creativity. Its context is described, including the broader study of creativity, the computational modeling of other social phenomena, and computational models of individual creativity. Computational modeling has been applied to a number of areas of social creativity and has the potential to contribute to our understanding of creativity. A number of requirements for computational models of social creativity are common in artificial life and computational social science simulations. Three key themes are identified: (1) computational social creativity research has a critical role to play in understanding creativity as a social phenomenon and advancing computational creativity by making clear epistemological contributions in ways that would be challenging for other approaches; (2) the methodologies developed in artificial life and computational social science carry over directly to computational social creativity; and (3) the combination of computational social creativity with individual models of creativity presents significant opportunities and poses interesting challenges for the development of integrated models of creativity that have yet to be realized.

  19. Establishing a communications link between two different, incompatible, personal computers: with practical examples and illustrations and program code.

    PubMed

    Davidson, R W

    1985-01-01

    The increasing need to communicate to exchange data can be handled by personal microcomputers. The necessity for the transference of information stored in one type of personal computer to another type of personal computer is often encountered in the process of integrating multiple sources of information stored in different and incompatible computers in Medical Research and Practice. A practical example is demonstrated with two relatively inexpensive commonly used computers, the IBM PC jr. and the Apple IIe. The basic input/output (I/O) interface chip for serial communication for each computer are joined together using a Null connector and cable to form a communications link. Using BASIC (Beginner's All-purpose Symbolic Instruction Code) Computer Language and the Disk Operating System (DOS) the communications handshaking protocol and file transfer is established between the two computers. The BASIC programming languages used are Applesoft (Apple Personal Computer) and PC BASIC (IBM Personal computer).
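
    As a rough modern analogue of the null-modem transfer described above (a sketch only, not the article's BASIC handshaking protocol), a simple ACK-based block transfer over a serial link can be written with pySerial. The port name, baud rate, block size, and control bytes below are illustrative assumptions.

      # Hedged modern analogue of a null-modem file transfer with an ACK handshake.
      import serial  # pip install pyserial

      PORT = "/dev/ttyUSB0"   # hypothetical null-modem port
      BAUD = 9600

      def send_file(path: str) -> None:
          with serial.Serial(PORT, BAUD, timeout=2) as link, open(path, "rb") as fh:
              for chunk in iter(lambda: fh.read(64), b""):
                  link.write(chunk)            # send a 64-byte block
                  if link.read(1) != b"\x06":  # wait for ACK (0x06) from the receiver
                      raise IOError("receiver did not acknowledge block")
              link.write(b"\x04")              # EOT marks end of transfer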

  20. When technology became language: the origins of the linguistic conception of computer programming, 1950-1960.

    PubMed

    Nofre, David; Priestley, Mark; Alberts, Gerard

    2014-01-01

    Language is one of the central metaphors around which the discipline of computer science has been built. The language metaphor entered modern computing as part of a cybernetic discourse, but during the second half of the 1950s acquired a more abstract meaning, closely related to the formal languages of logic and linguistics. The article argues that this transformation was related to the appearance of the commercial computer in the mid-1950s. Managers of computing installations and specialists on computer programming in academic computer centers, confronted with an increasing variety of machines, called for the creation of "common" or "universal languages" to enable the migration of computer code from machine to machine. Finally, the article shows how the idea of a universal language was a decisive step in the emergence of programming languages, in the recognition of computer programming as a proper field of knowledge, and eventually in the way we think of the computer.

  1. Screen Time at Home and School among Low-Income Children Attending Head Start

    PubMed Central

    Fletcher, Erica N.; Whitaker, Robert C.; Marino, Alexis J.; Anderson, Sarah E.

    2013-01-01

    Objective: To describe the patterns of screen viewing at home and school among low-income preschool-aged children attending Head Start and identify factors associated with high home screen time in this population. Few studies have examined both home and classroom screen time, or included computer use as a component of screen viewing. Methods: Participants were 2221 low-income preschool-aged children in the United States studied in the Head Start Family and Child Experiences Survey (FACES) in spring 2007. For 5 categories of screen viewing (television, video/DVD, video games, computer games, other computer use), we assessed children’s typical weekday home (parent-reported) and classroom (teacher-reported) screen viewing in relation to having a television in the child’s bedroom and sociodemographic factors. Results: Over half of children (55.7%) had a television in their bedroom, and 12.5% had high home screen time (>4 hours/weekday). Television was the most common category of home screen time, but 56.6% of children had access to a computer at home and 37.5% had used it on the last typical weekday. After adjusting for sociodemographic characteristics, children with a television in their bedroom were more likely to have high home screen time [odds ratio=2.57 (95% confidence interval: 1.80–3.68)]. Classroom screen time consisted almost entirely of computer use; 49.4% of children used a classroom computer for ≥1 hour/week, and 14.2% played computer games at school ≥5 hours/week. Conclusions: In 2007, one in eight low-income children attending Head Start had >4 hours/weekday of home screen time, which was associated with having a television in the bedroom. In the Head Start classroom, television and video viewing were uncommon but computer use was common. PMID:24891924

  2. Individual and work-related risk factors for musculoskeletal pain: a cross-sectional study among Estonian computer users.

    PubMed

    Oha, Kristel; Animägi, Liina; Pääsuke, Mati; Coggon, David; Merisalu, Eda

    2014-05-28

    Occupational use of computers has increased rapidly over recent decades, and has been linked with various musculoskeletal disorders, which are now the most commonly diagnosed occupational diseases in Estonia. The aim of this study was to assess the prevalence of musculoskeletal pain (MSP) by anatomical region during the past 12 months and to investigate its association with personal characteristics and work-related risk factors among Estonian office workers using computers. In a cross-sectional survey, the questionnaires were sent to the 415 computer users. Data were collected by self-administered questionnaire from 202 computer users at two universities in Estonia. The questionnaire asked about MSP at different anatomical sites, and potential individual and work related risk factors. Associations with risk factors were assessed by logistic regression. Most respondents (77%) reported MSP in at least one anatomical region during the past 12 months. Most prevalent was pain in the neck (51%), followed by low back pain (42%), wrist/hand pain (35%) and shoulder pain (30%). Older age, right-handedness, not currently smoking, emotional exhaustion, belief that musculoskeletal problems are commonly caused by work, and low job security were the statistically significant risk factors for MSP in different anatomical sites. A high prevalence of MSP in the neck, low back, wrist/arm and shoulder was observed among Estonian computer users. Psychosocial risk factors were broadly consistent with those reported from elsewhere. While computer users should be aware of ergonomic techniques that can make their work easier and more comfortable, presenting computer use as a serious health hazard may modify health beliefs in a way that is unhelpful.

  3. Spectrum of tablet computer use by medical students and residents at an academic medical center.

    PubMed

    Robinson, Robert

    2015-01-01

    Introduction. The value of tablet computer use in medical education is an area of considerable interest, with preliminary investigations showing that the majority of medical trainees feel that tablet computers added value to the curriculum. This study investigated potential differences in tablet computer use between medical students and resident physicians. Materials & Methods. Data collection for this survey was accomplished with an anonymous online questionnaire shared with the medical students and residents at Southern Illinois University School of Medicine (SIU-SOM) in July and August of 2012. Results. There were 76 medical student responses (26% response rate) and 66 resident/fellow responses to this survey (21% response rate). Residents/fellows were more likely to use tablet computers several times daily than medical students (32% vs. 20%, p = 0.035). The most common reported uses were for accessing medical reference applications (46%), e-Books (45%), and board study (32%). Residents were more likely than students to use a tablet computer to access an electronic medical record (41% vs. 21%, p = 0.010), review radiology images (27% vs. 12%, p = 0.019), and enter patient care orders (26% vs. 3%, p < 0.001). Discussion. This study shows a high prevalence and frequency of tablet computer use among physicians in training at this academic medical center. Most residents and students use tablet computers to access medical references, e-Books, and to study for board exams. Residents were more likely to use tablet computers to complete clinical tasks. Conclusions. Tablet computer use among medical students and resident physicians was common in this survey. All learners used tablet computers for point of care references and board study. Resident physicians were more likely to use tablet computers to access the EMR, enter patient care orders, and review radiology studies. This difference is likely due to the differing educational and professional demands placed on resident physicians. Further study is needed to better understand how tablet computers and other mobile devices may assist in medical education and patient care.

  4. Spectrum of tablet computer use by medical students and residents at an academic medical center

    PubMed Central

    2015-01-01

    Introduction. The value of tablet computer use in medical education is an area of considerable interest, with preliminary investigations showing that the majority of medical trainees feel that tablet computers added value to the curriculum. This study investigated potential differences in tablet computer use between medical students and resident physicians. Materials & Methods. Data collection for this survey was accomplished with an anonymous online questionnaire shared with the medical students and residents at Southern Illinois University School of Medicine (SIU-SOM) in July and August of 2012. Results. There were 76 medical student responses (26% response rate) and 66 resident/fellow responses to this survey (21% response rate). Residents/fellows were more likely to use tablet computers several times daily than medical students (32% vs. 20%, p = 0.035). The most common reported uses were for accessing medical reference applications (46%), e-Books (45%), and board study (32%). Residents were more likely than students to use a tablet computer to access an electronic medical record (41% vs. 21%, p = 0.010), review radiology images (27% vs. 12%, p = 0.019), and enter patient care orders (26% vs. 3%, p < 0.001). Discussion. This study shows a high prevalence and frequency of tablet computer use among physicians in training at this academic medical center. Most residents and students use tablet computers to access medical references, e-Books, and to study for board exams. Residents were more likely to use tablet computers to complete clinical tasks. Conclusions. Tablet computer use among medical students and resident physicians was common in this survey. All learners used tablet computers for point of care references and board study. Resident physicians were more likely to use tablet computers to access the EMR, enter patient care orders, and review radiology studies. This difference is likely due to the differing educational and professional demands placed on resident physicians. Further study is needed to better understand how tablet computers and other mobile devices may assist in medical education and patient care. PMID:26246973

  5. Computer Graphic Representation of Remote Environment Using Position Tactile Sensors.

    DTIC Science & Technology

    1981-08-01

    [Abstract not available: the record text consists of fragmented FORTRAN source listings (COMMON block declarations and scattered statements) carried over from the scanned report.]

  6. An efficient, large-scale, non-lattice-detection algorithm for exhaustive structural auditing of biomedical ontologies.

    PubMed

    Zhang, Guo-Qiang; Xing, Guangming; Cui, Licong

    2018-04-01

    One of the basic challenges in developing structural methods for systematic auditing of the quality of biomedical ontologies is the computational cost usually involved in exhaustive sub-graph analysis. We introduce ANT-LCA, a new algorithm for computing all non-trivial lowest common ancestors (LCA) of each pair of concepts in the hierarchical order induced by an ontology. The computation of LCA is a fundamental step in the non-lattice approach to ontology quality assurance. Distinct from existing approaches, ANT-LCA only computes LCAs for non-trivial pairs, those having at least one common ancestor. To skip all trivial pairs that may be of no practical interest, ANT-LCA employs a simple but innovative algorithmic strategy combining topological order and dynamic programming to keep track of non-trivial pairs. We provide correctness proofs and demonstrate a substantial reduction in computational time for the two largest biomedical ontologies: SNOMED CT and Gene Ontology (GO). ANT-LCA achieved an average computation time of 30 and 3 sec per version for SNOMED CT and GO, respectively, about 2 orders of magnitude faster than the best known approaches. Our algorithm overcomes a fundamental computational barrier in sub-graph based structural analysis of large ontological systems. It enables the implementation of a new breed of structural auditing methods that not only identify potential problematic areas, but also automatically suggest changes to fix the issues. Such structural auditing methods can lead to more effective tools supporting ontology quality assurance work. Copyright © 2018 Elsevier Inc. All rights reserved.
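
    To make the definitions concrete, the following is a naive, brute-force sketch of the task ANT-LCA optimizes: build ancestor sets in topological order, skip trivial pairs (those sharing no ancestor), and report the lowest common ancestors as the maximal elements of the common-ancestor set. The toy hierarchy is invented, and this sketch does not reproduce ANT-LCA's performance or its dynamic-programming bookkeeping.

      # Naive non-trivial LCA enumeration on a toy is-a hierarchy (illustrative only).
      from itertools import combinations
      from graphlib import TopologicalSorter

      # Toy hierarchy: concept -> set of parents.
      parents = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}, "e": {"c"}}

      # Ancestor sets filled in topological order (parents before children).
      order = TopologicalSorter({n: parents[n] for n in parents}).static_order()
      anc = {}
      for n in order:
          anc[n] = set().union(*(anc[p] | {p} for p in parents[n])) if parents[n] else set()

      def lcas(x, y):
          """Maximal elements of the common ancestor-or-self set of x and y."""
          common = (anc[x] | {x}) & (anc[y] | {y})
          return {c for c in common if not any(c in anc[d] for d in common)}

      for x, y in combinations(parents, 2):
          if (anc[x] | {x}) & (anc[y] | {y}):      # skip trivial pairs
              print(x, y, "->", sorted(lcas(x, y)))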

  7. Graduate Training at the Interface of Computational and Experimental Biology: An Outcome Report from a Partnership of Volunteers between a University and a National Laboratory.

    PubMed

    von Arnim, Albrecht G; Missra, Anamika

    2017-01-01

    Leading voices in the biological sciences have called for a transformation in graduate education leading to the PhD degree. One area commonly singled out for growth and innovation is cross-training in computational science. In 1998, the University of Tennessee (UT) founded an intercollegiate graduate program called the UT-ORNL Graduate School of Genome Science and Technology in partnership with the nearby Oak Ridge National Laboratory. Here, we report outcome data that attest to the program's effectiveness in graduating computationally enabled biologists for diverse careers. Among 77 PhD graduates since 2003, the majority came with traditional degrees in the biological sciences, yet two-thirds moved into computational or hybrid (computational-experimental) positions. We describe the curriculum of the program and how it has changed. We also summarize how the program seeks to establish cohesion between computational and experimental biologists. This type of program can respond flexibly and dynamically to unmet training needs. In conclusion, this study from a flagship, state-supported university may serve as a reference point for creating a stable, degree-granting, interdepartmental graduate program in computational biology and allied areas. © 2017 A. G. von Arnim and A. Missra. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  8. Interactive planning of miniplates

    NASA Astrophysics Data System (ADS)

    Gall, Markus; Reinbacher, Knut; Wallner, Jürgen; Stanzel, Jan; Chen, Xiaojun; Schwenzer-Zimmerer, Katja; Schmalstieg, Dieter; Egger, Jan

    2017-03-01

    In this contribution, a novel method for computer-aided surgery planning of facial defects using models of purchasable MedArtis Modus 2.0 miniplates is proposed. Implants of this kind, a form of osteosynthesis material, are commonly used for treating defects in the facial area. Placed perpendicular to the defect and bent to follow the surface, the miniplates are fixed on the healthy bone to stabilize the defective area. Our software is able to fit a selection of the most common implant models to the surgeon's desired position in a 3D computer model. The fitting respects the local surface curvature and adjusts direction and position in any desired way. Conventional methods use Computed Tomography (CT) scans to generate STereoLithography (STL) models serving as bending templates for the implants, or use a bending tool during the surgery to readjust the implant several times. Both approaches lead to undesirable losses of time. With our visual planning tool, surgeons are able to pre-plan the final implant within just a few minutes. The resulting model can be stored in STL format, which is the commonly used format for 3D printing. With this technology, surgeons are able to print the implant just in time or use it for generating a bending tool, both leading to an exactly bent miniplate.
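
    The following is an illustrative sketch only, not the planning software from the record: it snaps planned screw-hole positions onto the closest vertices of a bone-surface mesh and writes the result out as a small ASCII STL marker mesh for 3D printing. The point cloud and hole positions are made-up data, and a real pipeline would fit the full plate geometry along the surface normals rather than single points.

      # Snap planned hole positions to a surface and export an ASCII STL (sketch).
      import numpy as np

      vertices = np.random.default_rng(1).normal(size=(500, 3))   # stand-in bone surface
      planned_holes = np.array([[0.1, 0.2, 0.0], [0.4, 0.2, 0.1], [0.7, 0.3, 0.1]])

      # Fit: move each planned hole to the nearest surface vertex.
      d = np.linalg.norm(vertices[None, :, :] - planned_holes[:, None, :], axis=2)
      fitted = vertices[d.argmin(axis=1)]

      def write_ascii_stl(path, triangles):
          """Write an (n, 3, 3) array of triangles as an ASCII STL file."""
          with open(path, "w") as fh:
              fh.write("solid plate_markers\n")
              for tri in triangles:
                  n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
                  n = n / (np.linalg.norm(n) or 1.0)
                  fh.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
                  for v in tri:
                      fh.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
                  fh.write("    endloop\n  endfacet\n")
              fh.write("endsolid plate_markers\n")

      # Emit one small triangle per fitted hole as a printable marker.
      tris = np.stack([fitted, fitted + [0.05, 0, 0], fitted + [0, 0.05, 0]], axis=1)
      write_ascii_stl("plate_markers.stl", tris)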

  9. Parallelisation study of a three-dimensional environmental flow model

    NASA Astrophysics Data System (ADS)

    O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank

    2014-03-01

    There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based, blade computing system we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly, and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The conjugate gradient solver posed particular challenges because its implicit non-local communication hinders standard domain partitioning schemes; a number of techniques for addressing this in a feasible, computationally efficient manner are discussed. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology to rejuvenate legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
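
    A generic sketch of the pattern described above, not the EFDC port itself: split a 1D field across MPI ranks and exchange one-cell halos with neighbouring ranks each step. The field size, update rule, and step count are illustrative assumptions; the actual port also handles 2D, water/land-aware partitioning and an implicit solver.

      # 1D domain decomposition with halo exchange using mpi4py (illustrative).
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 100                       # interior cells owned by this rank
      field = np.zeros(n_local + 2)       # +2 ghost cells at the ends
      field[1:-1] = rank                  # dummy initial condition

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      for step in range(10):
          # Exchange halos: send my edge cells, receive neighbours' edge cells.
          comm.Sendrecv(field[1:2], dest=left, recvbuf=field[-1:], source=right)
          comm.Sendrecv(field[-2:-1], dest=right, recvbuf=field[0:1], source=left)
          # Simple diffusion-like update on interior cells.
          field[1:-1] = 0.5 * field[1:-1] + 0.25 * (field[:-2] + field[2:])

    A script like this would be launched with, for example, mpiexec -n 4 python halo_demo.py.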

  10. ScoreRel CI: An Excel Program for Computing Confidence Intervals for Commonly Used Score Reliability Coefficients

    ERIC Educational Resources Information Center

    Barnette, J. Jackson

    2005-01-01

    An Excel program developed to assist researchers in the determination and presentation of confidence intervals around commonly used score reliability coefficients is described. The software includes programs to determine confidence intervals for Cronbach's alpha, Pearson r-based coefficients such as those used in test-retest and alternate forms…
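
    The ScoreRel CI program itself uses closed-form intervals; as a hedged alternative illustration of the quantity involved, the sketch below computes Cronbach's alpha and a percentile-bootstrap confidence interval on made-up item scores. It is not the Excel program's method.

      # Cronbach's alpha with a percentile-bootstrap confidence interval (sketch).
      import numpy as np

      def cronbach_alpha(items: np.ndarray) -> float:
          """items: (n_persons, k_items) score matrix."""
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

      def alpha_ci(items, n_boot=2000, level=0.95, seed=0):
          rng = np.random.default_rng(seed)
          n = items.shape[0]
          boots = [cronbach_alpha(items[rng.integers(0, n, n)]) for _ in range(n_boot)]
          lo, hi = np.percentile(boots, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
          return cronbach_alpha(items), (lo, hi)

      # Example with made-up data: 200 examinees, 10 items sharing a true score.
      rng = np.random.default_rng(1)
      true_score = rng.normal(size=(200, 1))
      data = true_score + rng.normal(scale=1.0, size=(200, 10))
      print(alpha_ci(data))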

  11. Promoting Access to Common Core Mathematics for Students with Severe Disabilities through Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Spooner, Fred; Saunders, Alicia; Root, Jenny; Brosh, Chelsi

    2017-01-01

    There is a need to teach the pivotal skill of mathematical problem solving to students with severe disabilities, moving beyond basic skills like computation to higher level thinking skills. Problem solving is emphasized as a Standard for Mathematical Practice in the Common Core State Standards across grade levels. This article describes a…

  12. A Comparison of Vector and Raster GIS Methods for Calculating Landscape Metrics Used in Environmental Assessments

    Treesearch

    Timothy G. Wade; James D. Wickham; Maliha S. Nash; Anne C. Neale; Kurt H. Riitters; K. Bruce Jones

    2003-01-01

    GIS-based measurements that combine native raster and native vector data are commonly used in environmental assessments. Most of these measurements can be calculated using either raster or vector data formats and processing methods. Raster processes are more commonly used because they can be significantly faster computationally...
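
    As a tiny illustration of the raster style of processing discussed above (not the paper's raster-versus-vector comparison), two simple landscape metrics, percent cover and forest/non-forest edge count, can be computed directly on a grid with numpy. The land-cover grid below is random stand-in data.

      # Raster landscape metrics on a synthetic land-cover grid (illustrative).
      import numpy as np

      rng = np.random.default_rng(2)
      landcover = (rng.random((200, 200)) > 0.6).astype(int)   # 1 = forest, 0 = other

      percent_forest = 100.0 * landcover.mean()

      # Edges: count cell faces where a forest cell touches a non-forest cell.
      horiz_edges = np.sum(landcover[:, 1:] != landcover[:, :-1])
      vert_edges = np.sum(landcover[1:, :] != landcover[:-1, :])
      edge_count = horiz_edges + vert_edges

      print(f"forest cover: {percent_forest:.1f}%  edges: {edge_count}")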

  13. In Spite of Indeterminacy Many Common Factor Score Estimates Yield an Identical Reproduced Covariance Matrix

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2007-01-01

    It was investigated whether commonly used factor score estimates lead to the same reproduced covariance matrix of observed variables. This was achieved by means of Schonemann and Steiger's (1976) regression component analysis, since it is possible to compute the reproduced covariance matrices of the regression components corresponding to different…
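
    A small numpy sketch of the quantities discussed, for the orthogonal-factor case: the reproduced (model-implied) covariance matrix built from a loading matrix plus uniquenesses, and the Thurstone regression-method factor-score weights. The loadings are invented, and this is illustrative only, not Schonemann and Steiger's regression component analysis.

      # Reproduced covariance and regression factor-score weights (sketch).
      import numpy as np

      Lambda = np.array([[0.8, 0.0],     # loadings: 4 observed variables, 2 factors
                         [0.7, 0.1],
                         [0.1, 0.6],
                         [0.0, 0.7]])
      Psi = np.diag([0.36, 0.50, 0.63, 0.51])     # unique variances

      # Reproduced (model-implied) covariance of the observed variables.
      Sigma = Lambda @ Lambda.T + Psi

      # Regression-method factor score weights: W = Sigma^{-1} Lambda,
      # so estimated scores are F_hat = Z @ W for standardized data Z.
      W = np.linalg.solve(Sigma, Lambda)
      print(np.round(Sigma, 3))
      print(np.round(W, 3))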

  14. Common Sense and the Uncommon Bacterium--Is "Life" Patentable?

    ERIC Educational Resources Information Center

    Kiley, Thomas D.

    1978-01-01

    The Supreme Court is faced with some difficult issues with a common origin in disagreement between the Patent and Trademark Office and the Court of Customs and Patent Appeals over the code that defines what things are and are not patentable. The patent concerns of the computer software and molecular biology fields are addressed. (JMD)

  15. Worldwide Regulations of Standard Values of Pesticides for Human Health Risk Control: A Review.

    PubMed

    Li, Zijian; Jennings, Aaron

    2017-07-22

    The impact of pesticide residues on human health is a worldwide problem, as human exposure to pesticides can occur through ingestion, inhalation, and dermal contact. Regulatory jurisdictions have promulgated the standard values for pesticides in residential soil, air, drinking water, and agricultural commodity for years. Until now, more than 19,400 pesticide soil regulatory guidance values (RGVs) and 5400 pesticide drinking water maximum concentration levels (MCLs) have been regulated by 54 and 102 nations, respectively. Over 90 nations have provided pesticide agricultural commodity maximum residue limits (MRLs) for at least one of the 12 most commonly consumed agricultural foods. A total of 22 pesticides have been regulated with more than 100 soil RGVs, and 25 pesticides have more than 100 drinking water MCLs. This research indicates that those RGVs and MCLs for an individual pesticide could vary over seven (DDT drinking water MCLs), eight (Lindane soil RGVs), or even nine (Dieldrin soil RGVs) orders of magnitude. Human health risk uncertainty bounds and the implied total exposure mass burden model were applied to analyze the most commonly regulated and used pesticides for human health risk control. For the top 27 commonly regulated pesticides in soil, there are at least 300 RGVs (8% of the total) that are above all of the computed upper bounds for human health risk uncertainty. For the top 29 most-commonly regulated pesticides in drinking water, at least 172 drinking water MCLs (5% of the total) exceed the computed upper bounds for human health risk uncertainty; while for the 14 most widely used pesticides, there are at least 310 computed implied dose limits (28.0% of the total) that are above the acceptable daily intake values. The results show that some worldwide standard values were not derived conservatively enough to avoid human health risk by the pesticides, and that some values were not computed comprehensively by considering all major human exposure pathways.

  16. Worldwide Regulations of Standard Values of Pesticides for Human Health Risk Control: A Review

    PubMed Central

    Jennings, Aaron

    2017-01-01

    The impact of pesticide residues on human health is a worldwide problem, as human exposure to pesticides can occur through ingestion, inhalation, and dermal contact. Regulatory jurisdictions have promulgated the standard values for pesticides in residential soil, air, drinking water, and agricultural commodity for years. Until now, more than 19,400 pesticide soil regulatory guidance values (RGVs) and 5400 pesticide drinking water maximum concentration levels (MCLs) have been regulated by 54 and 102 nations, respectively. Over 90 nations have provided pesticide agricultural commodity maximum residue limits (MRLs) for at least one of the 12 most commonly consumed agricultural foods. A total of 22 pesticides have been regulated with more than 100 soil RGVs, and 25 pesticides have more than 100 drinking water MCLs. This research indicates that those RGVs and MCLs for an individual pesticide could vary over seven (DDT drinking water MCLs), eight (Lindane soil RGVs), or even nine (Dieldrin soil RGVs) orders of magnitude. Human health risk uncertainty bounds and the implied total exposure mass burden model were applied to analyze the most commonly regulated and used pesticides for human health risk control. For the top 27 commonly regulated pesticides in soil, there are at least 300 RGVs (8% of the total) that are above all of the computed upper bounds for human health risk uncertainty. For the top 29 most-commonly regulated pesticides in drinking water, at least 172 drinking water MCLs (5% of the total) exceed the computed upper bounds for human health risk uncertainty; while for the 14 most widely used pesticides, there are at least 310 computed implied dose limits (28.0% of the total) that are above the acceptable daily intake values. The results show that some worldwide standard values were not derived conservatively enough to avoid human health risk by the pesticides, and that some values were not computed comprehensively by considering all major human exposure pathways. PMID:28737697

  17. Metamodels for Computer-Based Engineering Design: Survey and Recommendations

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
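
    A minimal response-surface metamodel sketch in the spirit of the techniques surveyed: fit a full quadratic model to a handful of samples of a placeholder "expensive" analysis and use it as a cheap surrogate. The function, design of experiments, and bounds are stand-in assumptions, not any specific engineering code from the survey.

      # Quadratic response-surface surrogate fit by least squares (illustrative).
      import numpy as np

      def expensive_analysis(x1, x2):
          """Placeholder for a costly simulation."""
          return np.sin(x1) + 0.5 * x2**2 + 0.1 * x1 * x2

      # Small random design of experiments over [-2, 2]^2.
      rng = np.random.default_rng(3)
      X = rng.uniform(-2, 2, size=(25, 2))
      y = expensive_analysis(X[:, 0], X[:, 1])

      # Quadratic basis: 1, x1, x2, x1^2, x2^2, x1*x2.
      def basis(X):
          x1, x2 = X[:, 0], X[:, 1]
          return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

      coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

      # The metamodel can now stand in for the expensive analysis.
      X_new = np.array([[0.5, -1.0]])
      print("surrogate:", basis(X_new) @ coef, " true:", expensive_analysis(0.5, -1.0))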

  18. Computing aggregate properties of preimages for 2D cellular automata.

    PubMed

    Beer, Randall D

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm-incremental aggregation-that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.
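
    The paper's algorithm targets 2D configurations; as a much simpler illustration of counting precursors by aggregating over local overlaps, the sketch below counts preimages of a cyclic 1D configuration under an elementary cellular automaton using transfer matrices over adjacent cell pairs. This is a classic de Bruijn-style construction, not the paper's incremental aggregation algorithm, and the rule and configuration are arbitrary examples.

      # Preimage counting for a 1D elementary CA via transfer matrices (illustrative).
      import numpy as np

      def rule_table(rule_number):
          return {(a, b, c): (rule_number >> (a * 4 + b * 2 + c)) & 1
                  for a in (0, 1) for b in (0, 1) for c in (0, 1)}

      def preimage_count(config, rule_number):
          """Count preimages of a cyclic 1D configuration under an elementary CA."""
          rule = rule_table(rule_number)
          n = len(config)
          # State = pair (x[i-1], x[i]); a transition to (x[i], x[i+1]) is allowed
          # when the rule maps (x[i-1], x[i], x[i+1]) to the target cell config[i].
          total = np.eye(4, dtype=int)
          for i in range(n):
              M = np.zeros((4, 4), dtype=int)
              for a in (0, 1):
                  for b in (0, 1):
                      for c in (0, 1):
                          if rule[(a, b, c)] == config[i]:
                              M[a * 2 + b, b * 2 + c] = 1
              total = total @ M
          return int(np.trace(total))   # closed cycles = valid periodic preimages

      print(preimage_count([0, 1, 1, 0, 1, 0], rule_number=110))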

  19. Fermilab computing at the Intensity Frontier

    DOE PAGES

    Group, Craig; Fuess, S.; Gutsche, O.; ...

    2015-12-23

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  20. Java and its future in biomedical computing.

    PubMed Central

    Rodgers, R P

    1996-01-01

    Java, a new object-oriented computing language related to C++, is receiving considerable attention due to its use in creating network-sharable, platform-independent software modules (known as "applets") that can be used with the World Wide Web. The Web has rapidly become the most commonly used information-retrieval tool associated with the global computer network known as the Internet, and Java has the potential to further accelerate the Web's application to medical problems. Java's potentially wide acceptance due to its Web association and its own technical merits also suggests that it may become a popular language for non-Web-based, object-oriented computing. PMID:8880677

  1. Computing aggregate properties of preimages for 2D cellular automata

    NASA Astrophysics Data System (ADS)

    Beer, Randall D.

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm—incremental aggregation—that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.

  2. Learning, Interactional, and Motivational Outcomes in One-to-One Synchronous Computer-Mediated versus Face-to-Face Tutoring

    ERIC Educational Resources Information Center

    Siler, Stephanie Ann; VanLehn, Kurt

    2009-01-01

    Face-to-face (FTF) human-human tutoring has ranked among the most effective forms of instruction. However, because computer-mediated (CM) tutoring is becoming increasingly common, it is instructive to evaluate its effectiveness relative to face-to-face tutoring. Does the lack of spoken, face-to-face interaction affect learning gains and…

  3. Probabilistic simulation of concurrent engineering of propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Technology readiness and the available infrastructure are assessed for timely computational simulation of concurrent engineering for propulsion systems. Results for initial coupled multidisciplinary, fabrication-process, and system simulators are presented, including uncertainties inherent in various facets of engineering processes. An approach is outlined for computationally formalizing the concurrent engineering process from cradle to grave via discipline-dedicated workstations linked with a common database.

  4. What Cognitive Science May Learn from Instructional Design: A Case Study in Introductory Computer Programming.

    ERIC Educational Resources Information Center

    van Merrienboer, Jeroen J. G.

    The contributions of instructional design to cognitive science are discussed. It is argued that both sciences have their own object of study, but share a common interest in human cognition and performance as part of instructional systems. From a case study based on experience in teaching introductory computer programming, it is concluded that both…

  5. Learning Density in Vanuatu High School with Computer Simulation: Influence of Different Levels of Guidance

    ERIC Educational Resources Information Center

    Moli, Lemuel; Delserieys, Alice Pedregosa; Impedovo, Maria Antonietta; Castera, Jeremy

    2017-01-01

    This paper presents a study on discovery learning of scientific concepts with the support of computer simulation. In particular, the paper will focus on the effect of the levels of guidance on students with a low degree of experience in informatics and educational technology. The first stage of this study was to identify the common misconceptions…

  6. Learning Computer Programming: Implementing a Fractal in a Turing Machine

    ERIC Educational Resources Information Center

    Pereira, Hernane B. de B.; Zebende, Gilney F.; Moret, Marcelo A.

    2010-01-01

    It is common to start a course on computer programming logic by teaching the algorithm concept from the point of view of natural languages, but in a schematic way. In this sense we note that students have difficulties in understanding and implementing the problems proposed by the teacher. The main idea of this paper is to show that the…

  7. Magnetic resonance and computed tomographic features of 4 cases of canine congenital thoracic vertebral anomalies

    PubMed Central

    Berlanda, Michele; Zotti, Alessandro; Brandazza, Giada; Poser, Helen; Calò, Pietro; Bernardini, Marco

    2011-01-01

    Magnetic resonance and computed tomography features of 4 cases of canine congenital vertebral anomalies (CVAs) are discussed. Two of the cases represent unusual presentations for such anomalies that commonly affect screw-tail or toy breeds. Moreover, the combination of CVAs and a congenital peritoneo-pericardial diaphragmatic hernia has never before been imaged. PMID:22654139

  8. Frontiers in Educational Computing. Association for Educational Data Systems Annual Convention Proceedings (21st, Portland, Oregon, May 9-13, 1983).

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    The 98 papers in this collection examine a wide variety of topics related to the latest technological developments as they apply to the educational process. Papers are grouped to reflect common, broad areas of interest, representing the instructional, administrative, and computer science divisions of the Association for Educational Data Systems…

  9. What to Use for Mathematics in High School: PC, Tablet or Graphing Calculator?

    ERIC Educational Resources Information Center

    Korenova, Lilla

    2015-01-01

    Digital technologies have made their way not only into our everyday lives, but nowadays they are also commonly used in schools. Computers, tablets and smartphones are now part of the lives of this new generation of students, so it's only natural that they are used for educational purposes as well. Besides the interactive whiteboards, computers and…

  10. catcher: A Software Program to Detect Answer Copying in Multiple-Choice Tests Based on Nominal Response Model

    ERIC Educational Resources Information Center

    Kalender, Ilker

    2012-01-01

    catcher is a software program designed to compute the ω index, a common statistical index for the identification of collusions (cheating) among examinees taking an educational or psychological test. It requires (a) responses and (b) ability estimations of individuals, and (c) item parameters to make computations and outputs the results of…
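
    As a hedged sketch of the idea behind the ω index (not the catcher program itself): compare the observed number of answer matches between a suspected copier and a source examinee with the number expected under a fitted item response model, standardized by the model's variance. The per-item probabilities below would normally come from a nominal response model; here they are made-up inputs.

      # Omega-style answer-copying index from model-based match probabilities (sketch).
      import numpy as np
      from scipy.stats import norm

      def omega_index(copier_resp, source_resp, p_copier_picks):
          """
          copier_resp, source_resp : chosen option per item (integer arrays)
          p_copier_picks           : P(copier selects the source's option on item i)
                                     under the fitted response model
          """
          matches = np.sum(copier_resp == source_resp)
          expected = np.sum(p_copier_picks)
          var = np.sum(p_copier_picks * (1.0 - p_copier_picks))
          omega = (matches - expected) / np.sqrt(var)
          return omega, 1.0 - norm.cdf(omega)      # one-sided p-value

      copier = np.array([1, 2, 2, 0, 3, 1, 2, 2])
      source = np.array([1, 2, 3, 0, 3, 1, 0, 2])
      p = np.array([0.4, 0.3, 0.2, 0.5, 0.25, 0.45, 0.2, 0.35])
      print(omega_index(copier, source, p))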

  11. Knowledge-based geographic information systems on the Macintosh computer: a component of the GypsES project

    Treesearch

    Gregory Elmes; Thomas Millette; Charles B. Yuill

    1991-01-01

    GypsES, a decision-support and expert system for the management of Gypsy Moth addresses five related research problems in a modular, computer-based project. The modules are hazard rating, monitoring, prediction, treatment decision and treatment implementation. One common component is a geographic information system designed to function intelligently. We refer to this...

  12. The Cognitive Cost of Chatting While Attending a Lecture: A Temporal Analysis

    ERIC Educational Resources Information Center

    Bigenho, Chris; Lin, Lin; Gold, Caroline; Gupta, Arjun; Rawitscher, Lindsay

    2013-01-01

    It is common to see students multitasking or switching between different tasks on the computer while also listening to the teacher lecture in the front of a classroom. In today's classrooms, students have much greater control over how they use their time, with the classroom integration of computers and mobile devices combined with social media and…

  13. The Effect of Interactive, Three Dimensional, High Speed Simulations on High School Science Students' Conceptions of the Molecular Structure of Water.

    ERIC Educational Resources Information Center

    Hakerem, Gita; And Others

    The Water and Molecular Networks (WAMNet) Project uses graduate student written Reduced Instruction Set Computing (RISC) computer simulations of the molecular structure of water to assist high school students learn about the nature of water. This study examined: (1) preconceptions concerning the molecular structure of water common among high…

  14. Easing the Transition from Paper to Screen: An Evaluatory Framework for CAA Migration

    ERIC Educational Resources Information Center

    McAlpine, Mhairi

    2004-01-01

    Computer assisted assessment is becoming more and more common through further and higher education. There is some debate about how easy it will be to migrate current assessment practice to a computer enhanced format and how items which are currently re-used for formative purposes may be adapted to be presented online. This paper proposes an…

  15. Commentary: Tablet PCs--Lightweights with a Teaching Punch

    ERIC Educational Resources Information Center

    Parslow, Graham R.

    2010-01-01

    Tablet (or slate) computers are a group of small portable computers that have two features in common, a touch screen and wireless connectivity to the web. At the 2010 Consumer Electronics show held in January in Las Vegas, this category of product caused the greatest interest ahead of the release of the Apple iPad (www.cesweb.org). The tablet PC…

  16. Behavior-Based Fault Monitoring

    DTIC Science & Technology

    1990-12-03

    [Fragments recoverable from the scanned report: the monitored processor is targeted for avionics and space applications; the signature monitoring technique may be extended to detect computer viruses; the most common fault-tolerance approach is structural duplication, which, although effective, is too expensive for all but a few applications. The remaining record text is fragmented citation material (e.g., "Signature Monitoring and Encryption," Int. Conf. on Dependable Computing for Critical Applications, August 1989; K. D. Wilken and J. P. Shen).]

  17. A Conceptual Review of Research on the Pathological Use of Computers, Video Games, and the Internet

    ERIC Educational Resources Information Center

    Sim, Timothy; Gentile, Douglas A.; Bricolo, Francesco; Serpelloni, Giovanni; Gulamoydeen, Farah

    2012-01-01

    Preliminary research studies suggest that some people who use computer, video games, and the Internet heavily develop dysfunctional symptoms, often referred to in the popular press as an "addiction." Although several studies have measured various facets of this issue, there has been no common framework within which to view these studies. This…

  18. Computer-aided position planning of miniplates to treat facial bone defects

    PubMed Central

    Wallner, Jürgen; Gall, Markus; Chen, Xiaojun; Schwenzer-Zimmerer, Katja; Reinbacher, Knut; Schmalstieg, Dieter

    2017-01-01

    In this contribution, a software system for computer-aided position planning of miniplates to treat facial bone defects is proposed. The intra-operatively used bone plates have to be passively adapted on the underlying bone contours for adequate bone fragment stabilization. However, this procedure can lead to frequent intra-operatively performed material readjustments, especially in complex surgical cases. Our approach is able to fit a selection of common implant models on the surgeon’s desired position in a 3D computer model. This happens with respect to the surrounding anatomical structures, always including the possibility of adjusting both the direction and the position of the used osteosynthesis material. By using the proposed software, surgeons are able to pre-plan the resulting implant's form and morphology with the aid of a computer-visualized model within a few minutes. Further, the resulting model can be stored in STL file format, the commonly used format for 3D printing. Using this technology, surgeons are able to print the virtually generated implant, or create an individually designed bending tool. This method yields osteosynthesis material adapted to the surrounding anatomy and, furthermore, requires only a minimal amount of money and time. PMID:28817607

  19. Computer-aided position planning of miniplates to treat facial bone defects.

    PubMed

    Egger, Jan; Wallner, Jürgen; Gall, Markus; Chen, Xiaojun; Schwenzer-Zimmerer, Katja; Reinbacher, Knut; Schmalstieg, Dieter

    2017-01-01

    In this contribution, a software system for computer-aided position planning of miniplates to treat facial bone defects is proposed. The intra-operatively used bone plates have to be passively adapted on the underlying bone contours for adequate bone fragment stabilization. However, this procedure can lead to frequent intra-operatively performed material readjustments, especially in complex surgical cases. Our approach is able to fit a selection of common implant models on the surgeon's desired position in a 3D computer model. This happens with respect to the surrounding anatomical structures, always including the possibility of adjusting both the direction and the position of the used osteosynthesis material. By using the proposed software, surgeons are able to pre-plan the resulting implant's form and morphology with the aid of a computer-visualized model within a few minutes. Further, the resulting model can be stored in STL file format, the commonly used format for 3D printing. Using this technology, surgeons are able to print the virtually generated implant, or create an individually designed bending tool. This method yields osteosynthesis material adapted to the surrounding anatomy and, furthermore, requires only a minimal amount of money and time.

  20. An approach to quantum-computational hydrologic inverse analysis

    DOE PAGES

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
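
    Quantum annealers minimize quadratic unconstrained binary optimization (QUBO) objectives. As a hedged toy sketch of how an inverse problem can be cast in that form (not the authors' D-Wave formulation): let each cell's permeability be a single binary choice between a low and a high value, use a linear surrogate mapping permeabilities to heads, and expand the data misfit into a QUBO matrix. For this tiny size the QUBO is minimized by brute force in place of the annealer; all sizes and values below are invented.

      # Toy QUBO formulation of a binary permeability inverse problem (illustrative).
      import itertools
      import numpy as np

      rng = np.random.default_rng(4)
      n = 6                                   # number of cells / binary variables
      A = rng.normal(size=(8, n))             # linear surrogate: heads = A @ k
      k_lo, k_hi = 1.0, 3.0

      q_true = rng.integers(0, 2, n)          # "true" low/high pattern
      h_obs = A @ (k_lo + (k_hi - k_lo) * q_true)

      # Misfit ||A(k_lo + d*q) - h_obs||^2 = q^T B^T B q - 2 r^T B q + const,
      # with B = d*A and r = h_obs - A*k_lo. Because q_i^2 = q_i for binary q,
      # the linear term folds onto the diagonal of the QUBO matrix Q.
      d = k_hi - k_lo
      B = d * A
      r = h_obs - A @ np.full(n, k_lo)
      Q = B.T @ B
      Q[np.diag_indices(n)] += -2.0 * (B.T @ r)

      def qubo_energy(q):
          return q @ Q @ q

      best = min((np.array(bits) for bits in itertools.product((0, 1), repeat=n)),
                 key=qubo_energy)
      print("recovered:", best, " true:", q_true)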
