Sample records for huge computing resources

  1. An Overview of Cloud Computing in Distributed Systems

    NASA Astrophysics Data System (ADS)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. It evolved from grid computing and distributed computing. The cloud plays an important role in large organizations by maintaining huge volumes of data with limited resources. It also enables resource sharing through virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and some of the basic security issues pertaining to the cloud.

  2. Dynamic virtual machine allocation policy in cloud computing complying with service level agreement using CloudSim

    NASA Astrophysics Data System (ADS)

    Aneri, Parikh; Sumathy, S.

    2017-11-01

    Cloud computing provides services over the internet, delivering application resources and data to users on demand. It is based on a consumer-provider model: the cloud provider offers resources that consumers access in order to build their applications according to their demand. A cloud data center is a large pool of shared resources for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On one hand there is a huge number of resources, and on the other hand a huge number of requests has to be served effectively. Therefore, the resource allocation policy and the scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy based on the Hungarian algorithm. The Hungarian algorithm provides dynamic load balancing together with a monitor component, which helps increase cloud resource utilization by monitoring the algorithm's state and altering it based on artificial intelligence. CloudSim, the extensible toolkit used in this proposal, simulates the cloud computing environment.
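
    As a rough illustration of how an assignment-based load balancer of this kind works (this is not the authors' CloudSim implementation; the task lengths and VM speeds below are invented), the Hungarian algorithm can map a batch of cloudlets to VMs so that the total estimated completion time is minimal:

    ```python
    # Illustrative sketch: assigning tasks to VMs with the Hungarian algorithm,
    # using a cost matrix of estimated run times (hypothetical numbers).
    import numpy as np
    from scipy.optimize import linear_sum_assignment  # Hungarian / Kuhn-Munkres

    # cost[i][j] = estimated completion time of task i on VM j,
    # e.g. task length (million instructions) divided by VM speed (MIPS).
    task_lengths = np.array([4000.0, 12000.0, 7000.0, 2500.0])
    vm_mips      = np.array([1000.0, 2500.0, 500.0, 2000.0])
    cost = task_lengths[:, None] / vm_mips[None, :]   # 4x4 cost matrix

    rows, cols = linear_sum_assignment(cost)          # optimal one-to-one assignment
    for t, v in zip(rows, cols):
        print(f"task {t} -> VM {v} (est. {cost[t, v]:.1f} s)")
    print("total cost:", cost[rows, cols].sum())
    ```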

  3. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, huge IT requirements arise from the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role in research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters with very fast network interconnects within grid infrastructures now enables efficient parallel high-performance grid computing, and thus combines the benefits of dedicated super-computing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond that, advanced and dedicated i) interaction with users, ii) job management, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage, but more importantly also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for, e.g., the life-science and health-care sectors as well as grid infrastructures by reaching a higher level of resource efficiency.

  4. Task Assignment Heuristics for Distributed CFD Applications

    NASA Technical Reports Server (NTRS)

    Lopez-Benitez, N.; Djomehri, M. J.; Biswas, R.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    CFD applications require high-performance computational platforms: 1. Complex physics and domain configuration demand strongly coupled solutions; 2. Applications are CPU and memory intensive; and 3. Huge resource requirements can only be satisfied by teraflop-scale machines or distributed computing.

  5. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    NASA Astrophysics Data System (ADS)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
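
    A minimal sketch of the hierarchical idea, assuming a top-level dispatcher that forwards each session request to the least-loaded cluster and per-cluster managers that do a simple first-fit allocation; the class names, capacities and demands are illustrative only, not taken from the paper:

    ```python
    # Toy hierarchical resource manager: dispatcher -> cluster managers -> cores.
    class Cluster:
        def __init__(self, name, core_capacities_mops):
            self.name = name
            self.total = sum(core_capacities_mops)
            self.free = list(core_capacities_mops)    # remaining MOPS per core

        def occupation(self):
            return 1.0 - sum(self.free) / self.total

        def allocate(self, demand_mops):
            """First-fit: place the whole transceiver chain on one core if it fits."""
            for i, cap in enumerate(self.free):
                if cap >= demand_mops:
                    self.free[i] -= demand_mops
                    return i
            return None                               # cluster cannot host the session

    def dispatch(clusters, demand_mops):
        """Top-level manager: try clusters from least to most loaded."""
        for c in sorted(clusters, key=Cluster.occupation):
            core = c.allocate(demand_mops)
            if core is not None:
                return c.name, core
        return None                                   # request blocked

    clusters = [Cluster("A", [1000] * 8), Cluster("B", [1500] * 4)]
    for demand in [300, 900, 1200, 700]:              # per-session MOPS demands
        print(demand, "->", dispatch(clusters, demand))
    ```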

  6. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  7. Cloud Computing for radiologists

    PubMed Central

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560

  8. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require only a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  9. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    NASA Astrophysics Data System (ADS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible, on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs that process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges; they are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required, and currently there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring, via a Machine Learning approach, to detect intrusions even when new attack methods or zero-day vulnerabilities are exploited. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.
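
    The following is a minimal sketch of behaviour-based payload monitoring; the concrete model (an Isolation Forest) and the per-process syscall-count features are assumptions chosen for illustration, not the framework's actual design:

    ```python
    # Flag job payloads whose behaviour deviates from the bulk of normal jobs.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Rows = monitored processes, columns = counts of [open, connect, exec, write].
    normal_jobs = rng.poisson(lam=[50, 2, 1, 200], size=(500, 4))
    suspicious  = np.array([[40, 120, 15, 30]])       # heavy network / exec activity

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_jobs)
    print(model.predict(suspicious))                  # -1 means flagged as anomalous
    print(model.predict(normal_jobs[:5]))             # mostly +1 (normal behaviour)
    ```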

  10. Sesquinaries, Magnetics and Atmospheres: Studies of the Terrestrial Moons and Exoplanets

    DTIC Science & Technology

    2016-12-01

    support provided by Red Sky Research, LLC. Computational support was provided by the NASA Ames Mission Design Division (Code RD) for research...Systems Branch (Code SST), NASA Ames Research Center, provided supercomputer access and computational resources for the work in Chapter 5. I owe a...huge debt of gratitude to Dr. Pete Worden, Dr. Steve Zornetzer, Dr. Alan Weston ( NASA ), and Col. Carol Welsch, Lt. Col Joe Nance and Lt. Col Brian

  11. Supercomputing resources empowering superstack with interactive and integrated systems

    NASA Astrophysics Data System (ADS)

    Rückemann, Claus-Peter

    2012-09-01

    This paper presents the results from the development and implementation of Superstack algorithms to be used dynamically with integrated systems and supercomputing resources. Processing of geophysical data, here called geoprocessing, is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments introduced with the processing of seismic data on mainframes, leading within the last years to high-end scientific computing applications. Several stacking algorithms are known, but for seismic data with a low signal-to-noise ratio the use of iterative algorithms like the Superstack can support analysis and interpretation. The new Superstack algorithms are in use with wave theory and optical phenomena on highly performant computing resources for huge data sets as well as for sophisticated application scenarios in geosciences and archaeology.
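
    As a generic illustration of why iterative stacking raises the signal-to-noise ratio (this toy coherence-weighted re-stack is not the Superstack algorithm itself), consider stacking noisy copies of a common wavelet:

    ```python
    # Plain stack followed by a few iterations of coherence-weighted re-stacking.
    import numpy as np

    rng = np.random.default_rng(1)
    n_traces, n_samples = 64, 500
    t = np.linspace(0, 1, n_samples)
    signal = np.exp(-((t - 0.5) ** 2) / 0.002)              # common wavelet
    traces = signal + rng.normal(scale=2.0, size=(n_traces, n_samples))

    stack = traces.mean(axis=0)                              # plain stack
    for _ in range(5):                                       # iterative re-weighting
        # weight each trace by its correlation with the current stack estimate
        w = np.array([np.corrcoef(tr, stack)[0, 1] for tr in traces]).clip(min=0)
        stack = (w[:, None] * traces).sum(axis=0) / w.sum()

    def snr(x):  # crude SNR: peak amplitude over off-peak standard deviation
        return x[240:260].max() / x[:200].std()

    print("single trace SNR:   ", snr(traces[0]))
    print("iterative stack SNR:", snr(stack))
    ```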

  12. Collaborative Working Architecture for IoT-Based Applications.

    PubMed

    Mora, Higinio; Signes-Pont, María Teresa; Gil, David; Johnsson, Magnus

    2018-05-23

    The new sensing applications need enhanced computing capabilities to handle the requirements of complex and huge data processing. The Internet of Things (IoT) concept brings processing and communication features to devices. In addition, the Cloud Computing paradigm provides resources and infrastructures for performing the computations and outsourcing the work from the IoT devices. This scenario opens new opportunities for designing advanced IoT-based applications; however, there is still much research to be done to properly gear all the systems for working together. This work proposes a collaborative model and an architecture to take advantage of the available computing resources. The resulting architecture involves a novel network design with different levels which combines sensing and processing capabilities based on the Mobile Cloud Computing (MCC) paradigm. An experiment is included to demonstrate that this approach can be used in diverse real applications. The results show the flexibility of the architecture to perform complex computational tasks of advanced applications.

  13. Implementation of DFT application on ternary optical computer

    NASA Astrophysics Data System (ADS)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Owing to its huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which need a lot of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of a ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
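
    The parallelism being exploited can be sketched in software: every DFT coefficient is an independent inner product, so all of them ("full parallel") or chunks of them ("partial parallel") can be computed simultaneously. The NumPy sketch below only illustrates this structure; the optical-processor implementation is of course very different:

    ```python
    # DFT as a matrix of independent inner products X_k = sum_n x_n e^{-2*pi*i*k*n/N}.
    import numpy as np

    N = 64
    x = np.random.default_rng(2).normal(size=N)
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N)    # DFT matrix, row k gives X_k

    X_full = F @ x                                   # "full parallel": all k at once
    X_part = np.concatenate([F[k0:k0 + 16] @ x       # "partial parallel": 4 chunks of k
                             for k0 in range(0, N, 16)])

    assert np.allclose(X_full, np.fft.fft(x))        # both agree with the FFT result
    assert np.allclose(X_part, np.fft.fft(x))
    ```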

  14. Using Cloud Computing Services in e-Learning Process: Benefits and Challenges

    ERIC Educational Resources Information Center

    El Mhouti, Abderrahim; Erradi, Mohamed; Nasseh, Azeddine

    2018-01-01

    During the recent years, Information and Communication Technologies (ICT) play a significant role in the field of education and e-learning has become a very popular trend of the education technology. However, with the huge growth of the number of users, data and educational resources generated, e-learning systems have become more and more…

  15. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.

    PubMed

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.

  16. Security model for VM in cloud

    NASA Astrophysics Data System (ADS)

    Kanaparti, Venkataramana; Naveen K., R.; Rajani, S.; Padmvathamma, M.; Anitha, C.

    2013-03-01

    Cloud computing is a new approach that emerged to meet the ever-increasing demand for computing resources and to reduce operational costs and capital expenditure for IT services. As this new way of computation allows data and applications to be stored away from the corporate server, it brings more security issues, such as virtualization security, distributed computing, application security, identity management, access control and authentication. Even though virtualization forms the basis of cloud computing, it poses many threats to securing the cloud. As most security threats lie at the virtualization layer in the cloud, we propose a new Security Model for Virtual Machines in Cloud (SMVC), in which every process is authenticated by a Trusted Agent (TA) in the hypervisor as well as in the VM. Our proposed model is designed to withstand attacks by unauthorized processes that pose a threat to applications related to data mining, OLAP systems, and image processing, which require huge resources in the cloud deployed on one or more VMs.

  17. Parallel computing method for simulating hydrological processesof large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the proverbial global environmental problems in the world. It has altered the distribution of watershed hydrological processes in time and space, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of calculation, especially for large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. The current parallel methods mostly parallelize the computation in the space and time dimensions: they calculate the natural features of the distributed hydrological model in order, by grid (unit or sub-basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, which means it can make full use of the available computing and storage resources even when computing resources are limited, and the computing efficiency improves linearly with the increase of computing resources. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.
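
    A toy sketch of the parallel structure described above, assuming sub-basins at the same routing level are independent and can be simulated concurrently while levels proceed from upstream to downstream; the topology and the per-basin computation are placeholders, not the authors' model:

    ```python
    # Parallel within a routing level, sequential across levels (upstream -> downstream).
    from multiprocessing import Pool

    # Hypothetical topology: level-0 basins drain into level 1, which drains into the outlet.
    levels = [["b1", "b2", "b3", "b4"], ["b5", "b6"], ["outlet"]]

    def simulate_subbasin(name):
        # placeholder for the distributed-model runoff computation of one unit
        return name, sum(i * i for i in range(200_000)) % 97

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            for level in levels:                               # sequential over levels
                results = pool.map(simulate_subbasin, level)   # parallel within a level
                print(dict(results))
    ```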

  18. Now and Next-Generation Sequencing Techniques: Future of Sequence Analysis Using Cloud Computing

    PubMed Central

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed “cloud computing”) has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows. PMID:23248640

  19. Design and deployment of an elastic network test-bed in IHEP data center based on SDN

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Qi, Fazhi; Chen, Gang

    2017-10-01

    High energy physics experiments produce huge amounts of raw data, but because of the shared nature of network resources, there is no guarantee of available bandwidth for each experiment, which may cause link congestion problems. On the other hand, with the development of cloud computing technologies, IHEP has established a cloud platform based on OpenStack which ensures the flexibility of the computing and storage resources, and more and more computing applications have been deployed on virtual machines created by OpenStack. However, under the traditional network architecture, network capacity cannot be provisioned elastically, which becomes the bottleneck restricting the flexible application of cloud computing. In order to solve the above problems, we propose an elastic cloud data center network architecture based on SDN, and we also design a high performance controller cluster based on OpenDaylight. Finally, we present our current test results.

  20. Science-Driven Computing: NERSC's Plan for 2006-2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Horst D.; Kramer, William T.C.; Bailey, David H.

    NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems--computing, storage, networking, visualization and analysis--coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.

  1. Optimization of over-provisioned clouds

    NASA Astrophysics Data System (ADS)

    Balashov, N.; Baranov, A.; Korenkov, V.

    2016-09-01

    The functioning of modern applications in cloud centers is characterized by a huge variety of generated computational workloads. This causes uneven workload distribution and, as a result, leads to ineffective utilization of cloud centers' hardware. This article addresses possible ways to solve this issue and demonstrates the necessity of optimizing cloud centers' hardware utilization. As one possible way to solve the problem of inefficient resource utilization in heterogeneous cloud environments, an algorithm for dynamic re-allocation of virtual resources is suggested.
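
    A simple illustrative heuristic in this spirit (an assumption for illustration, not the authors' algorithm) migrates VMs from the most loaded host to the least loaded one until the utilisation spread falls below a threshold:

    ```python
    # Greedy re-balancing of VM loads between hosts.
    def rebalance(hosts, threshold=0.15):
        """hosts: dict host -> list of VM loads (fractions of one host's capacity)."""
        util = lambda h: sum(hosts[h])
        moves = []
        while True:
            hot, cold = max(hosts, key=util), min(hosts, key=util)
            gap = util(hot) - util(cold)
            if gap <= threshold:
                break
            # migrate the largest VM that still shrinks the imbalance
            candidates = [v for v in hosts[hot] if v <= gap / 2]
            if not candidates:
                break
            vm = max(candidates)
            hosts[hot].remove(vm)
            hosts[cold].append(vm)
            moves.append((vm, hot, cold))
        return moves

    hosts = {"h1": [0.4, 0.3, 0.2], "h2": [0.1], "h3": [0.5, 0.35]}
    print(rebalance(hosts))
    print({h: round(sum(v), 2) for h, v in hosts.items()})
    ```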

  2. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently takes the biggest part of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2 and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  3. Experimental Realization of a Quantum Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Li, Zhaokai; Liu, Xiaomei; Xu, Nanyang; Du, Jiangfeng

    2015-04-01

    The fundamental principle of artificial intelligence is the ability of machines to learn from previous experience and do future work accordingly. In the age of big data, classical learning machines often require huge computational resources in many practical cases. Quantum machine learning algorithms, on the other hand, could be exponentially faster than their classical counterparts by utilizing quantum parallelism. Here, we demonstrate a quantum machine learning algorithm to implement handwriting recognition on a four-qubit NMR test bench. The quantum machine learns standard character fonts and then recognizes handwritten characters from a set with two candidates. Because of the widespread importance of artificial intelligence and its tremendous consumption of computational resources, quantum speedup would be extremely attractive against the challenges of big data.
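
    For reference, the classical counterpart that such a quantum algorithm accelerates is an ordinary support vector machine deciding between two candidate characters; a scikit-learn sketch on two digit classes (an illustrative stand-in, not the paper's data set) looks like this:

    ```python
    # Classical two-class SVM distinguishing two candidate characters ('6' vs '9').
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    digits = load_digits()
    mask = (digits.target == 6) | (digits.target == 9)       # two candidate classes
    X, y = digits.data[mask], digits.target[mask]
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = SVC(kernel="linear").fit(Xtr, ytr)                  # classical kernel machine
    print("test accuracy:", clf.score(Xte, yte))
    ```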

  4. Geo-spatial Service and Application based on National E-government Network Platform and Cloud

    NASA Astrophysics Data System (ADS)

    Meng, X.; Deng, Y.; Li, H.; Yao, L.; Shi, J.

    2014-04-01

    With the acceleration of China's informatization process, the party and government have taken a substantive stride in advancing the development and application of digital technology, which promotes the evolution of e-government and its informatization. Meanwhile, as a service mode based on innovative resources, cloud computing can connect huge resource pools to provide a variety of IT services, and has become a relatively mature technical pattern with further studies and massive practical applications. Based on cloud computing technology and the national e-government network platform, the "National Natural Resources and Geospatial Database (NRGD)" project integrated and transformed natural resources and geospatial information dispersed across various sectors and regions, established a logically unified and physically dispersed fundamental database, and developed a national integrated information database system supporting the main e-government applications. Cross-sector e-government applications and services are realized to provide long-term, stable and standardized natural resources and geospatial fundamental information products and services for national e-government and public users.

  5. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    NASA Astrophysics Data System (ADS)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. In recent HEP experiments, grid middleware has been used to organize the services and the resources; however, it relies heavily on X.509 authentication, which is contradictory to the untrusted nature of volunteer computing resources. Therefore, one big challenge in utilizing volunteer computing resources is how to integrate them into the grid middleware in a secure way. The DIRAC interware, commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge to this paradox, as its pilot is more closely coupled with operations requiring X.509 authentication than the pilot implementations of its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach which detaches the payload from the Belle II DIRAC pilot, a customized pilot that pulls and processes jobs from the Belle II distributed computing platform, so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service running on a trusted server which handles all the operations requiring X.509 authentication. So far, we have developed and deployed the prototype of BelleII@home and tested its full workflow, which proves the feasibility of this approach. This approach can also be applied to HPC systems whose worker nodes do not have the outbound connectivity generally needed to interact with the DIRAC system.

  6. A new technique in reference based DNA sequence compression algorithm: Enabling partial decompression

    NASA Astrophysics Data System (ADS)

    Banerjee, Kakoli; Prasad, R. A.

    2014-10-01

    The volume of genetic data is increasing exponentially. The human genome in its base format occupies almost thirty terabytes of data and doubles in size every two and a half years. It is well known that computational resources are limited. The most important resource that genetic data requires for its collection, storage and retrieval is storage space, and storage is limited. Computational performance also depends on storage and execution time, and transmission capabilities are directly dependent on the size of the data. Hence, data compression techniques become an issue of utmost importance when we are confronted with the task of handling gigantic databases like GenBank. Decompression is also an issue when such huge databases are being handled. This paper is intended not only to provide genetic data compression but also to enable partial decompression of the genetic sequences.
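
    The general idea can be sketched as follows, assuming a toy substitution-only model (the paper's actual algorithm handles real genomic data and is more involved): store only the differences against a reference and decode any requested slice without reconstructing the whole sequence:

    ```python
    # Reference-based compression with partial decompression of a slice.
    def compress(reference, target):
        assert len(reference) == len(target)          # toy model: ignore indels
        return [(i, b) for i, (r, b) in enumerate(zip(reference, target)) if r != b]

    def partial_decompress(reference, diffs, start, end):
        window = list(reference[start:end])
        for pos, base in diffs:
            if start <= pos < end:
                window[pos - start] = base
        return "".join(window)

    ref = "ACGTACGTACGTACGTACGT"
    tgt = "ACGTACCTACGTTCGTACGT"                      # two substitutions
    diffs = compress(ref, tgt)                        # [(6, 'C'), (12, 'T')]
    print(diffs)
    print(partial_decompress(ref, diffs, 10, 16))     # decode bases 10..15 only
    print(partial_decompress(ref, diffs, 10, 16) == tgt[10:16])
    ```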

  7. A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.

    PubMed

    Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo

    2015-01-01

    The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, Cloud Computing arises as an extensive, low-cost, and reconfigurable resource. However, medical images contain patient information that cannot be made available in a public cloud. Therefore, a mechanism to anonymize these images is needed. This poster presents a solution for this issue by taking digital images from PACS, converting the information contained in each image file to a NoSQL database, and using cloud computing to store the digital images.
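
    A minimal sketch of the ingestion step, assuming pydicom for reading, a plain Python dict standing in for the NoSQL document, and a hypothetical file path; the anonymisation shown is deliberately simplistic:

    ```python
    # Convert one DICOM file into a document suitable for a NoSQL store.
    import pydicom

    def to_document(path):
        ds = pydicom.dcmread(path)
        ds.PatientName = "ANONYMIZED"                 # strip identifying fields before
        ds.PatientID = ""                             # anything leaves the PACS
        return {
            "_id": ds.SOPInstanceUID,                 # natural key for the NoSQL store
            "study": ds.StudyInstanceUID,
            "modality": ds.Modality,
            "rows": int(ds.Rows),
            "columns": int(ds.Columns),
            "pixel_data": ds.PixelData,               # bytes; would go to cloud storage
        }

    doc = to_document("/data/pacs/exam0001.dcm")      # hypothetical path
    print(doc["_id"], doc["modality"], doc["rows"], doc["columns"])
    ```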

  8. FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    PubMed Central

    Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and therefore are not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
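
    One of the low-level primitives mentioned, image moments, is easy to illustrate in software; the sketch below computes plain raw moments M_pq = sum_x sum_y x^p y^q I(x, y) rather than the orthogonal variant moments used in the actual hardware:

    ```python
    # Raw image moments and the derived centroid of a bright patch.
    import numpy as np

    def raw_moment(img, p, q):
        y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        return float((x ** p * y ** q * img).sum())

    img = np.zeros((64, 64))
    img[20:30, 40:50] = 1.0                            # a bright 10x10 patch

    m00 = raw_moment(img, 0, 0)                        # total mass
    cx, cy = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    print("area:", m00, "centroid:", (cx, cy))         # centroid ~ (44.5, 24.5)
    ```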

  9. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    PubMed

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and therefore are not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.

  10. CMS Connect

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores and another 50K to 100K CPU cores from opportunistic resources for these kinds of tasks. Even though production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting final-stage, condor-like analysis jobs, familiar to Tier-3 or local computing facility users, into these distributed resources in a friendly way that is integrated with other CMS services. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on this kind of condor analysis job. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS-specific needs, including specific site submission, accounting of jobs and automated reporting to standard CMS monitoring resources in an effortless way for its users.

  11. Use of Multiple GPUs to Speedup the Execution of a Three-Dimensional Computational Model of the Innate Immune System

    NASA Astrophysics Data System (ADS)

    Xavier, M. P.; do Nascimento, T. M.; dos Santos, R. W.; Lobosco, M.

    2014-03-01

    The development of computational systems that mimic the physiological response of organs or even the entire body is a complex task. One of the issues that makes this task extremely complex is the huge amount of computational resources needed to execute the simulations. For this reason, the use of parallel computing is mandatory. In this work, we focus on the simulation of the temporal and spatial behaviour of some human innate immune system cells and molecules in a small three-dimensional section of a tissue. To perform this simulation, we use multiple Graphics Processing Units (GPUs) in a shared-memory environment. Despite the high initialization and communication costs imposed by the use of GPUs, the techniques used to implement the HIS simulator have proven very effective in achieving this purpose.

  12. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in developing a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  13. Seismic waveform modeling over cloud

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Friederich, Wolfgang

    2016-04-01

    With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved huge success. Obtaining synthetic waveforms through numerical simulation receives an increasing amount of attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data processing skills. Training users to use the numerical packages and to correctly access and utilize the computational resources is a troublesome task. In addition, access to HPC is also a common difficulty for many users. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC and a dedicated pipeline for it form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing professional access to the computational code through its interfaces and delivering our computational resources to users over the cloud, the platform lets users customize simulations at expert level and submit and run jobs through it.

  14. Extending the farm on external sites: the INFN Tier-1 experience

    NASA Astrophysics Data System (ADS)

    Boccali, T.; Cavalli, A.; Chiarelli, L.; Chierici, A.; Cesini, D.; Ciaschini, V.; Dal Pra, S.; dell'Agnello, L.; De Girolamo, D.; Falabella, A.; Fattibene, E.; Maron, G.; Prosperini, A.; Sapunenko, V.; Virgilio, S.; Zani, S.

    2017-10-01

    The Tier-1 at CNAF is the main INFN computing facility, offering computing and storage resources to more than 30 different scientific collaborations including the 4 experiments at the LHC. A huge increase in computing needs is also foreseen in the following years, mainly driven by the experiments at the LHC (especially starting with Run 3 from 2021) but also by other upcoming experiments such as CTA [1]. While we are considering the upgrade of the infrastructure of our data center, we are also evaluating the possibility of using CPU resources available in other data centres or even leased from commercial cloud providers. Hence, at INFN Tier-1, besides participating in the EU project HNSciCloud, we have also pledged a small amount of computing resources (˜ 2000 cores) located at the Bari ReCaS [2] data center for the WLCG experiments for 2016, and we are testing the use of resources provided by a commercial cloud provider. While the Bari ReCaS data center is directly connected to the GARR network [3], with the obvious advantage of a low-latency and high-bandwidth connection, in the case of the commercial provider we rely only on the General Purpose Network. In this paper we describe the set-up phase and the first results of these installations, started in the last quarter of 2015, focusing on the issues that we have had to cope with and discussing the measured results in terms of efficiency.

  15. Developing cloud-based Business Process Management (BPM): a survey

    NASA Astrophysics Data System (ADS)

    Mercia; Gunawan, W.; Fajar, A. N.; Alianto, H.; Inayatulloh

    2018-03-01

    In today’s highly competitive business environment, modern enterprises face difficulties in cutting unnecessary costs, eliminating waste, and delivering huge benefits for the organization. Companies are increasingly turning to a more flexible IT environment to help them realize this goal. For this reason, this article applies cloud-based Business Process Management (BPM), which enables a focus on modeling, monitoring and process management. Cloud-based BPM consists of business processes, business information and IT resources, which help build real-time intelligence systems based on business management and cloud technology. Cloud computing is a paradigm that involves procuring dynamically measurable resources over the internet as an IT resource service. A cloud-based BPM service addresses common problems faced by traditional BPM, especially in promoting flexible, event-driven business processes to exploit opportunities in the marketplace.

  16. Analysis of the cylinder’s movement characteristics after entering water based on CFD

    NASA Astrophysics Data System (ADS)

    Liu, Xianlong

    2017-10-01

    After a cylinder vertically enters the water, its motion proceeds at variable speed. Dynamic mesh approaches mostly use unstructured grids; their calculation results are not ideal and they consume huge computing resources. Instead, a CFD method is used to calculate the resistance of the cylinder at different velocities, and cubic spline interpolation is used to obtain the resistance at fixed speeds. The finite difference method is then used to solve the equation of motion, yielding the acceleration, velocity, displacement and other physical quantities after the cylinder enters the water.
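
    A worked sketch of the two numerical steps named above, with made-up CFD data points: a cubic spline interpolates drag force versus speed, and an explicit finite-difference scheme integrates the one-dimensional equation of motion m dv/dt = mg - B - F(v), where the mass, the constant buoyancy B and the drag samples are all assumptions:

    ```python
    # Cubic-spline drag model plus forward-difference time integration.
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Hypothetical CFD results: drag force (N) sampled at fixed speeds (m/s).
    speeds = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    drag   = np.array([0.0, 12.0, 46.0, 101.0, 178.0, 276.0])
    F = CubicSpline(speeds, drag)

    m, g, B = 20.0, 9.81, 60.0             # mass, gravity, constant buoyancy (assumed)
    dt, v, z = 1e-3, 4.0, 0.0              # time step, entry speed, depth
    for _ in range(2000):                  # 2 s of simulated motion
        a = (m * g - B - float(F(v))) / m  # acceleration from the force balance
        v += a * dt                        # forward-difference update
        z += v * dt
    print(f"after 2 s: v = {v:.2f} m/s, depth = {z:.2f} m")
    ```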

  17. Little ice bodies, huge ice lands, and the up-going of the big water body

    NASA Astrophysics Data System (ADS)

    Ultee, E.; Bassis, J. N.

    2017-12-01

    Ice moving out of the huge ice lands causes the big water body to go up. That can cause bad things to happen in places close to the big water body - the land might even disappear! If that happens, people living close to the big water body might lose their homes. Knowing how much ice will come out of the huge ice lands, and when, can help the world plan for the up-going of the big water body. We study the huge ice land closest to us. All around the edge of that huge ice land, there are smaller ice bodies that control how much ice makes it into the big water body. Most ways of studying the huge ice land with computers struggle to tell the computer about those little ice bodies, but we have found a new way. We will talk about our way of studying little ice bodies and how their moving brings about up-going of the big water.

  18. A Scheduling Algorithm for Computational Grids that Minimizes Centralized Processing in Genome Assembly of Next-Generation Sequencing Data

    PubMed Central

    Lima, Jakelyne; Cerdeira, Louise Teixeira; Bol, Erick; Schneider, Maria Paula Cruz; Silva, Artur; Azevedo, Vasco; Abelém, Antônio Jorge Gomes

    2012-01-01

    Improvements in genome sequencing techniques have resulted in the generation of huge volumes of data. As a consequence of this progress, the genome assembly stage demands even more computational power, since the incoming sequence files contain large amounts of data. To speed up the process, it is often necessary to distribute the workload among a group of machines; however, this requires hardware and software solutions specially configured for this purpose. Grid computing tries to simplify this process of aggregating resources, but does not always offer the best possible performance due to the heterogeneity and decentralized management of its resources. Thus, it is necessary to develop software that takes these peculiarities into account. To achieve this purpose, we developed an algorithm aimed at optimizing the operation of the de novo assembly software ABySS in grids. We ran ABySS with and without the algorithm we developed in the grid simulator SimGrid. Tests showed that our algorithm is viable, flexible, and scalable even in a heterogeneous environment, and it improved the genome assembly time in computational grids without changing its quality. PMID:22461785

  19. Large Scale Computing and Storage Requirements for High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  20. BlueSky Cloud Framework: An E-Learning Framework Embracing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Zheng, Qinghua; Qiao, Mu; Shu, Jian; Yang, Jie

    Currently, E-Learning has grown into a widely accepted way of learning. With the huge growth of users, services, education contents and resources, E-Learning systems are facing challenges in optimizing resource allocations, dealing with dynamic concurrency demands, handling rapid storage growth requirements and controlling costs. In this paper, an E-Learning framework based on cloud computing is presented, namely the BlueSky cloud framework. In particular, the architecture and core components of the BlueSky cloud framework are introduced. In the BlueSky cloud framework, physical machines are virtualized and allocated on demand for E-Learning systems. Moreover, the BlueSky cloud framework combines traditional middleware functions (such as load balancing and data caching) to serve E-Learning systems as a general architecture. It delivers reliable, scalable and cost-efficient services to E-Learning systems, and E-Learning organizations can establish systems through these services in a simple way. The BlueSky cloud framework solves the challenges faced by E-Learning and improves the performance, availability and scalability of E-Learning systems.

  1. Multi-GPGPU Tsunami simulation at Toyama-bay

    NASA Astrophysics Data System (ADS)

    Furuyama, Shoichi; Ueda, Yuki

    2017-07-01

    Accelerated multi General Purpose Graphics Processing Unit (GPGPU) calculation for tsunami run-up simulation over a wide area (the whole of Toyama Bay in Japan) was achieved by a faster computation technique. Toyama Bay has active faults on the seabed, so there is a high possibility of earthquakes and tsunami waves in the case of a huge earthquake; predicting the tsunami run-up area is therefore important for reducing damage to residents from the disaster. However, the simulation is a very hard task because of the computer resources it requires. A high-resolution calculation on the order of several meters is required for the run-up tsunami simulation because artificial structures on the ground such as roads, buildings, and houses are very small, while at the same time a huge area must be simulated; in the Toyama Bay case the area is 42 km × 15 km. When 5 m × 5 m computational cells are used for the simulation, over 26,000,000 computational cells are generated. A normal CPU desktop computer took about 10 hours for the calculation. Reducing this calculation time is an important problem for an immediate tsunami run-up prediction system, which in turn will contribute to protecting the many residents of the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA TESLA K20Xs, with InfiniBand network connections between computer nodes through the MVAPICH library. As a result, the calculation was 5.16 times faster on six GPUs than on one GPU, corresponding to 86% parallel efficiency relative to linear speed-up.
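
    A back-of-the-envelope check of the figures quoted above:

    ```python
    # Rough check of the cell count and the parallel efficiency quoted in the abstract.
    domain_x, domain_y, cell = 42_000.0, 15_000.0, 5.0     # metres
    cells = (domain_x / cell) * (domain_y / cell)
    # 25,200,000 cells for the nominal 42 km x 15 km box; the paper reports over
    # 26,000,000, presumably because the computational domain carries some margin.
    print(f"cells at 5 m resolution: {cells:,.0f}")

    speedup, gpus = 5.16, 6
    print(f"parallel efficiency: {speedup / gpus:.0%}")    # 5.16 / 6 = 0.86 -> 86%
    ```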

  2. Development of hybrid computer plasma models for different pressure regimes

    NASA Astrophysics Data System (ADS)

    Hromadka, Jakub; Ibehej, Tomas; Hrach, Rudolf

    2016-09-01

    With the increased performance of contemporary computers during the last decades, numerical simulations have become a very powerful tool applicable also in plasma physics research. A plasma is generally an ensemble of mutually interacting particles that is out of thermodynamic equilibrium, and for this reason fluid computer plasma models give results with only limited accuracy. On the other hand, the much more precise particle models are often limited to 2D problems because of their huge demands on computer resources. Our contribution is devoted to hybrid modelling techniques, particularly their so-called iterative version, which combine the advantages of both modelling approaches mentioned above. The study focuses on the mutual relations between fluid and particle models, demonstrated by calculations of the sheath structure of low-temperature argon plasma near a cylindrical Langmuir probe for medium and higher pressures. Results of a simple iterative hybrid plasma computer model are also given. The authors acknowledge the support of the Grant Agency of Charles University in Prague (project 220215).

  3. Evolutionary Computation with Spatial Receding Horizon Control to Minimize Network Coding Resources

    PubMed Central

    Leeson, Mark S.

    2014-01-01

    The minimization of network coding resources, such as coding nodes and links, is a challenging task, not only because it is an NP-hard problem, but also because the problem scale is huge; for example, networks in the real world may have thousands or even millions of nodes and links. Genetic algorithms (GAs) have good potential for resolving NP-hard problems like the network coding problem (NCP), but as population-based algorithms they often confront serious scalability and applicability problems when applied to large- or huge-scale systems. Inspired by temporal receding horizon control in control engineering, this paper proposes a novel spatial receding horizon control (SRHC) strategy as a network partitioning technology, and then designs an efficient GA to tackle the NCP. Traditional network partitioning methods can be viewed as a special case of the proposed SRHC, that is, one-step-wide SRHC, whilst the method in this paper is a generalized N-step-wide SRHC, which can make better use of global information about network topologies. Besides the SRHC strategy, some useful designs are also reported in this paper. The advantages of the proposed SRHC and GA for the NCP are illustrated by extensive experiments, and they have good potential for being extended to other large-scale complex problems. PMID:24883371
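
    A minimal GA skeleton in the spirit of the paper is sketched below; the SRHC partitioning and the real network-coding feasibility test are omitted, and the fitness function uses a toy placeholder constraint purely to keep the example self-contained:

    ```python
    # Binary-chromosome GA: gene i = 1 means merging node i performs coding.
    import random
    random.seed(3)

    N_MERGING = 20            # candidate coding nodes
    REQUIRED = {2, 5, 11}     # toy constraint: these nodes must code for feasibility

    def fitness(ind):
        infeasible = any(ind[i] == 0 for i in REQUIRED)
        return sum(ind) + (1000 if infeasible else 0)      # minimise coding nodes

    def tournament(pop):
        return min(random.sample(pop, 3), key=fitness)

    pop = [[random.randint(0, 1) for _ in range(N_MERGING)] for _ in range(50)]
    for gen in range(100):
        nxt = []
        while len(nxt) < len(pop):
            a, b = tournament(pop), tournament(pop)
            cut = random.randrange(1, N_MERGING)           # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_MERGING)                # single-bit mutation
            child[i] ^= 1
            nxt.append(child)
        pop = nxt

    best = min(pop, key=fitness)
    print("coding nodes used:", sum(best), "feasible:", fitness(best) < 1000)
    ```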

  4. Cloud access to interoperable IVOA-compliant VOSpace storage

    NASA Astrophysics Data System (ADS)

    Bertocco, S.; Dowler, P.; Gaudet, S.; Major, B.; Pasian, F.; Taffoni, G.

    2018-07-01

    Handling, processing and archiving the huge amount of data produced by the new generation of experiments and instruments in Astronomy and Astrophysics are among the more exciting challenges to address in designing the future data management infrastructures and computing services. We investigated the feasibility of a data management and computation infrastructure, available world-wide, with the aim of merging the FAIR data management provided by IVOA standards with the efficiency and reliability of a cloud approach. Our work involved the Canadian Advanced Network for Astronomy Research (CANFAR) infrastructure and the European EGI federated cloud (EFC). We designed and deployed a pilot data management and computation infrastructure that provides IVOA-compliant VOSpace storage resources and wide access to interoperable federated clouds. In this paper, we detail the main user requirements covered, the technical choices and the implemented solutions and we describe the resulting Hybrid cloud Worldwide infrastructure, its benefits and limitations.

  5. Real-time high-level video understanding using data warehouse

    NASA Astrophysics Data System (ADS)

    Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois

    2006-02-01

    High-level video content analysis such as video surveillance is often limited by the computational aspects of automatic image understanding, i.e. it requires huge computing resources for reasoning processes like categorization and huge amounts of data to represent knowledge of objects, scenarios and other models. This article explains how to design and develop a "near real-time adaptive image datamart", used first as a decision-support system for vision algorithms and then as a mass storage system. Using the RDF specification as the storage format for the vision algorithms' metadata, we can optimise data warehouse concepts for video analysis, add processes able to adapt the current model, and pre-process data to speed up queries. In this way, when new data are sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, they are processed and the in-memory data model is updated. After some processing, possible interpretations of the data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally, we show how this system becomes a high-semantic data container for external data mining.
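
    To illustrate the kind of RDF metadata store the abstract describes, the following minimal Python sketch (using the open-source rdflib package; the example.org namespace, the property names and the detection event are invented for illustration, not the authors' schema) records one detection event as triples and queries it back with SPARQL:

      # Store vision-algorithm metadata as RDF triples and query them with SPARQL.
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF, XSD

      VS = Namespace("http://example.org/video-surveillance#")   # hypothetical vocabulary

      g = Graph()
      event = URIRef("http://example.org/events/42")
      g.add((event, RDF.type, VS.DetectionEvent))
      g.add((event, VS.camera, Literal("cam-03")))
      g.add((event, VS.label, Literal("person")))
      g.add((event, VS.timestamp, Literal("2006-02-01T12:00:00", datatype=XSD.dateTime)))

      # Retrieve all detections reported by a given camera.
      results = g.query(
          """
          SELECT ?event ?label WHERE {
              ?event a vs:DetectionEvent ;
                     vs:camera "cam-03" ;
                     vs:label ?label .
          }
          """,
          initNs={"vs": VS},
      )
      for row in results:
          print(row.event, row.label)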

  6. Huge ascending aortic aneurysm with an intraluminal thrombus in an embolic event-free patient

    PubMed Central

    Parato, Vito Maurizio; Pezzuoli, Franco; Labanti, Benedetto; Baboci, Arben

    2015-01-01

    We present a case of an 87-year-old male patient with a huge ascending aortic aneurysm, filled by a huge thrombus most probably due to previous dissection. This finding was detected by two-dimensional transthoracic echocardiography and contrast-enhanced computed tomography (CT) angiography scan. The patient refused surgical treatment and was medically treated. Despite the huge and mobile intraluminal thrombus, the patient remained embolic event-free up to 6 years later, and this makes the case unique. PMID:25838924

  7. Userscripts for the life sciences.

    PubMed

    Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J

    2007-12-21

    The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources are extracted using open Application Programming Interfaces, while common Universal Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity.

  8. Userscripts for the Life Sciences

    PubMed Central

    Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J

    2007-01-01

    Background The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Results Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources are extracted using open Application Programming Interfaces, while common Universal Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. Conclusion This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity. PMID:18154664

  9. Cloud4Psi: cloud computing for 3D protein structure similarity searching.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-10-01

    Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. © The Author 2014. Published by Oxford University Press.

  10. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  11. Cloud4Psi: cloud computing for 3D protein structure similarity searching

    PubMed Central

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-01-01

    Summary: Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Availability and implementation: Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. Contact: dariusz.mrozek@polsl.pl PMID:24930141

  12. Stability and Scalability of the CMS Global Pool: Pushing HTCondor and GlideinWMS to New Limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcas, J.; Bockelman, B.; Hufnagel, D.

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  13. Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Aftab Khan, F.; Larson, K.; Letts, J.; Marra da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  14. Coarse Grid CFD for underresolved simulation

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.

    2010-11-01

    CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly huge computational resources so that this crude power approach has not been pursued yet. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all the physics, requiring additional volumetric source terms modelling viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S.R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf

  15. Accelerating electron tomography reconstruction algorithm ICON with GPU.

    PubMed

    Chen, Yu; Wang, Zihao; Zhang, Jingrong; Li, Lun; Wan, Xiaohua; Sun, Fei; Zhang, Fa

    2017-01-01

    Electron tomography (ET) plays an important role in studying in situ cell ultrastructure in three-dimensional space. Due to limited tilt angles, ET reconstruction always suffers from the "missing wedge" problem. With a validation procedure, iterative compressed-sensing optimized NUFFT reconstruction (ICON) demonstrates its power in the restoration of validated missing information for low SNR biological ET dataset. However, the huge computational demand has become a major problem for the application of ICON. In this work, we analyzed the framework of ICON and classified the operations of major steps of ICON reconstruction into three types. Accordingly, we designed parallel strategies and implemented them on graphics processing units (GPU) to generate a parallel program ICON-GPU. With high accuracy, ICON-GPU has a great acceleration compared to its CPU version, up to 83.7×, greatly relieving ICON's dependence on computing resource.

  16. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop workstation; however, because they must be invoked for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within a parallel supercomputing environment. The Message Passing Interface (MPI) is used to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. mpiWrapper can launch any conventional Linux application without modification of its original source code and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper.
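
    As a rough illustration of the master/worker pattern such a wrapper implements, here is a minimal Python sketch using mpi4py and subprocess (the echo commands, tags and queue length are placeholders; this is not the actual C++ mpiWrapper code, which additionally supports resubmission on node failure):

      # Rank 0 hands ordinary command lines to the other ranks, which run them as
      # plain (non-parallel) subprocesses and report back for more work.
      import subprocess
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TAG_TASK, TAG_RESULT, TAG_STOP = 1, 2, 3

      if rank == 0:
          tasks = [["echo", f"subtask {i}"] for i in range(20)]   # placeholder subtasks
          status = MPI.Status()
          active = size - 1
          for w in range(1, size):                # seed every worker with one task
              if tasks:
                  comm.send(tasks.pop(), dest=w, tag=TAG_TASK)
              else:
                  comm.send(None, dest=w, tag=TAG_STOP)
                  active -= 1
          while active:                           # feed workers as results come back
              comm.recv(source=MPI.ANY_SOURCE, tag=TAG_RESULT, status=status)
              w = status.Get_source()
              if tasks:
                  comm.send(tasks.pop(), dest=w, tag=TAG_TASK)
              else:
                  comm.send(None, dest=w, tag=TAG_STOP)
                  active -= 1
      else:
          while True:                             # worker loop
              status = MPI.Status()
              cmd = comm.recv(source=0, status=status)
              if status.Get_tag() == TAG_STOP:
                  break
              returncode = subprocess.run(cmd).returncode
              comm.send(returncode, dest=0, tag=TAG_RESULT)

    Launched with, for example, mpirun -np 8 python wrapper_sketch.py (a hypothetical file name), ranks 1-7 would work through the command queue in parallel while rank 0 only coordinates.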

  17. Giant solitary fibrous tumor of the diaphragm: a case report and review of literature

    PubMed Central

    Ge, Wei; Yu, De-Cai; Jiang, Chun-Ping; Ding, Yi-Tao

    2014-01-01

    A young gentleman presented with difficulty in breathing. Computed tomography (CT) scan showed a huge mass located between the heart and stomach, which might have rooted in the diaphragm. Magnetic resonance imaging (MRI) with enhanced three dimensional construction showed a lobulated, heterogeneous soft tissue mass with short T1 weighted imaging signal and flake long T2-weighted imaging (T2WI). Tumor-enhanced scanning demonstrated heterogeneous contrast enhancement. The preliminary diagnosis was intra-abdominal huge mass and considering sarcoma. Resection was conducted where the base of the tumor was located in the diaphragm oppressing the left liver lobe and heart. The base of the tumor, together with partial surrounding of the diaphragm, pericardium base, and the left lateral hepatic segment, was resected. The defect in the diaphragm and pericardium was repaired by patching, and thoracic close drainage and abdominal drainage were placed following the surgical operation. The pathological report showed giant solitary fibrous tumor (SFT). This case report may provide a reference resource for the diagnosis and treatment of SFT located in the diaphragm. PMID:25674285

  18. ICON-MIC: Implementing a CPU/MIC Collaboration Parallel Framework for ICON on Tianhe-2 Supercomputer.

    PubMed

    Wang, Zihao; Chen, Yu; Zhang, Jingrong; Li, Lun; Wan, Xiaohua; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2018-03-01

    Electron tomography (ET) is an important technique for studying the three-dimensional structures of the biological ultrastructure. Recently, ET has reached sub-nanometer resolution for investigating the native and conformational dynamics of macromolecular complexes by combining with the sub-tomogram averaging approach. Due to the limited sampling angles, ET reconstruction typically suffers from the "missing wedge" problem. Using a validation procedure, iterative compressed-sensing optimized nonuniform fast Fourier transform (NUFFT) reconstruction (ICON) demonstrates its power in restoring validated missing information for a low-signal-to-noise ratio biological ET dataset. However, the huge computational demand has become a bottleneck for the application of ICON. In this work, we implemented a parallel acceleration technology ICON-many integrated core (MIC) on Xeon Phi cards to address the huge computational demand of ICON. During this step, we parallelize the element-wise matrix operations and use the efficient summation of a matrix to reduce the cost of matrix computation. We also developed parallel versions of NUFFT on MIC to achieve a high acceleration of ICON by using more efficient fast Fourier transform (FFT) calculation. We then proposed a hybrid task allocation strategy (two-level load balancing) to improve the overall performance of ICON-MIC by making full use of the idle resources on Tianhe-2 supercomputer. Experimental results using two different datasets show that ICON-MIC has high accuracy in biological specimens under different noise levels and a significant acceleration, up to 13.3 × , compared with the CPU version. Further, ICON-MIC has good scalability efficiency and overall performance on Tianhe-2 supercomputer.

  19. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement for computational resources, particularly when an explicit solvent model is used. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive while still giving an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD can greatly increase computational efficiency and thus expand the application of REMD simulation to larger protein systems.
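
    For readers unfamiliar with the exchange step at the heart of REMD, the following minimal Python sketch shows the standard Metropolis criterion for swapping two temperature replicas (generic REMD bookkeeping, not the authors' velocity-scaling hybrid solvent scheme; the energies and temperatures are invented illustrative values in kcal/mol and K):

      # Standard REMD exchange test between two temperature replicas.
      import math
      import random

      K_B = 0.0019872041   # Boltzmann constant, kcal/(mol*K)

      def exchange_accepted(E_i, T_i, E_j, T_j, rng=random.random):
          """Accept the swap with probability min(1, exp(delta))."""
          delta = (1.0 / (K_B * T_i) - 1.0 / (K_B * T_j)) * (E_i - E_j)
          return delta >= 0.0 or rng() < math.exp(delta)

      # Example: two neighbouring replicas with made-up potential energies.
      print(exchange_accepted(E_i=-1200.0, T_i=300.0, E_j=-1185.0, T_j=310.0))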

  20. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    NASA Astrophysics Data System (ADS)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool able to put numbers on, i.e. to quantify, future scenarios. This places a huge responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  1. Mobile Clinical Decision Support Systems in Our Hands - Great Potential but also a Concern.

    PubMed

    Masic, Izet; Begic, Edin

    2016-01-01

    Thanks to powerful computing resources and the availability of today's mobile devices, a specialised field of mobile systems for clinical decision support in medicine has developed. The benefits of these applications (systems) are the availability of the necessary hardware (mobile phones, tablets and phablets are widespread and can be purchased at a relatively affordable price), the availability of mobile applications (free or for a "small" amount of money), and the fact that mobile applications are tailored for easy use and save clinicians' time in their daily work. These systems hold huge potential, and certainly great economic benefit, so the issue must be approached in a multidisciplinary way.

  2. Modeling of biological intelligence for SCM system optimization.

    PubMed

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for the modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune systems, and other biological-intelligence-related methods. An SCM system is adaptive, dynamic, open and self-organizing, maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes recent related methods for the design and optimization of SCM systems, covering the most widely used genetic algorithms and other evolutionary algorithms.
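
    To make the genetic-algorithm idea concrete, here is a minimal Python sketch of a GA minimizing a toy supplier-allocation cost (the unit costs, capacities, penalties and GA parameters are all invented for illustration and are not taken from the paper):

      # Toy GA: split a 100-unit demand across 4 suppliers so that purchase cost plus
      # penalties for exceeding capacity or missing demand is minimized.
      import random

      UNIT_COST = [4.0, 5.0, 3.5, 6.0]   # illustrative per-unit costs
      CAPACITY = [40, 30, 25, 50]        # illustrative supplier capacities
      DEMAND = 100

      def cost(alloc):
          c = sum(a * u for a, u in zip(alloc, UNIT_COST))
          c += 50.0 * sum(max(0, a - cap) for a, cap in zip(alloc, CAPACITY))  # over-capacity penalty
          c += 10.0 * abs(sum(alloc) - DEMAND)                                 # demand-mismatch penalty
          return c

      def random_individual():
          return [random.randint(0, DEMAND) for _ in UNIT_COST]

      def crossover(a, b):
          point = random.randrange(1, len(a))
          return a[:point] + b[point:]

      def mutate(ind, rate=0.2):
          return [min(DEMAND, max(0, g + random.randint(-5, 5))) if random.random() < rate else g
                  for g in ind]

      population = [random_individual() for _ in range(60)]
      for _ in range(200):
          population.sort(key=cost)
          parents = population[:20]      # simple truncation selection
          children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(40)]
          population = parents + children

      best = min(population, key=cost)
      print("best allocation:", best, "cost:", round(cost(best), 2))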

  3. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    PubMed

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
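
    For context on what "FFT-based docking" computes, the following minimal Python sketch shows the classic grid-correlation step (Katchalski-Katzir-style shape scoring on tiny random 3-D grids; the grid size and occupancy values are placeholders, and this is not MEGADOCK's actual scoring function or code):

      # Correlate a receptor grid with a ligand grid over all translations at once via FFTs.
      import numpy as np

      N = 32                                                    # grid size (illustrative)
      rng = np.random.default_rng(0)
      receptor = (rng.random((N, N, N)) > 0.97).astype(float)   # toy occupancy grids
      ligand = (rng.random((N, N, N)) > 0.97).astype(float)

      # Correlation theorem: score(t) = sum_x R(x) * L(x - t), evaluated for every
      # shift t simultaneously.
      score = np.real(np.fft.ifftn(np.fft.fftn(receptor) * np.conj(np.fft.fftn(ligand))))

      best = np.unravel_index(np.argmax(score), score.shape)
      print("best translation (voxels):", best, "score:", score[best])

    A real docking code repeats this correlation for many ligand rotations and uses physics-based grid values rather than simple occupancy overlap.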

  4. Modeling of Biological Intelligence for SCM System Optimization

    PubMed Central

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for the modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune systems, and other biological-intelligence-related methods. An SCM system is adaptive, dynamic, open and self-organizing, maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes recent related methods for the design and optimization of SCM systems, covering the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724

  5. UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.

    PubMed

    Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L

    2012-03-01

    The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays in Internet represents a big challenge for biologists -for what concerns their management and visualization- and for bioinformaticians -for what concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory has imperatively to tackle -and possibly to handle in a transparent and uniform way- aspects concerning physical distribution, semantic heterogeneity, co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The framework UBioLab has been just designed and developed as a prototype following the above objective. Several architectural features -as those ones of being fully Web-based and of combining domain ontologies, Semantic Web and workflow techniques- give evidence of an effort in such a direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists providing (i) a flexible environment for visualizing, organizing and inferring any (semantics and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.

  6. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

    PubMed

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to a 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made these former results hardly reproducible. Further, we extend those previous experiments modeling unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that a LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s) proving that with as little as 0.5s an accuracy of over 50% can be achieved.
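
    As a generic illustration of the model family discussed here, the following minimal Python/PyTorch sketch builds a small LSTM sequence classifier over acoustic feature frames (the feature dimension, hidden size, 8-language output, frame rate and random inputs are placeholders, not the authors' open-source configuration):

      # Tiny LSTM language-ID classifier: a sequence of feature frames -> language scores.
      import torch
      import torch.nn as nn

      class LstmLid(nn.Module):
          def __init__(self, n_features=20, hidden=64, n_languages=8):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
              self.out = nn.Linear(hidden, n_languages)

          def forward(self, frames):              # frames: (batch, time, n_features)
              _, (h_last, _) = self.lstm(frames)  # h_last: (1, batch, hidden)
              return self.out(h_last.squeeze(0))  # unnormalized language scores

      model = LstmLid()
      utterances = torch.randn(4, 300, 20)        # four fake ~3 s utterances of 10 ms frames
      scores = model(utterances)
      print(scores.shape)                         # torch.Size([4, 8])
      predicted = scores.argmax(dim=1)            # most likely language per utterance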

  7. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks

    PubMed Central

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; T. Toledano, Doroteo; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to a 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made these former results hardly reproducible. Further, we extend those previous experiments modeling unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that a LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s) proving that with as little as 0.5s an accuracy of over 50% can be achieved. PMID:26824467

  8. The Interdependence of Computers, Robots, and People.

    ERIC Educational Resources Information Center

    Ludden, Laverne; And Others

    Computers and robots are becoming increasingly more advanced, with smaller and cheaper computers now doing jobs once reserved for huge multimillion dollar computers and with robots performing feats such as painting cars and using television cameras to simulate vision as they perform factory tasks. Technicians expect computers to become even more…

  9. Virtual Resources Centers and Their Role in Small Rural Schools.

    ERIC Educational Resources Information Center

    Freitas, Candido Varela de; Silva, Antonio Pedro da

    Virtual resources centers have been considered a pedagogical tool since the increasing development of electronic means allowed for the storage of huge amounts of information and its easy retrieval. Bearing in mind the need for enhancing the appearance of those centers, a discipline of "Management of Resources Centers" was included in a…

  10. Democratizing Computer Science

    ERIC Educational Resources Information Center

    Margolis, Jane; Goode, Joanna; Ryoo, Jean J.

    2015-01-01

    Computer science programs are too often identified with a narrow stratum of the student population, often white or Asian boys who have access to computers at home. But because computers play such a huge role in our world today, all students can benefit from the study of computer science and the opportunity to build skills related to computing. The…

  11. A Computational Method for Enabling Teaching-Learning Process in Huge Online Courses and Communities

    ERIC Educational Resources Information Center

    Mora, Higinio; Ferrández, Antonio; Gil, David; Peral, Jesús

    2017-01-01

    Massive Open Online Courses and e-learning represent the future of the teaching-learning processes through the development of Information and Communication Technologies. They are the response to the new education needs of society. However, this future also presents many challenges such as the processing of online forums when a huge number of…

  12. GBOOST: a GPU-based tool for detecting gene-gene interactions in genome-wide case control studies.

    PubMed

    Yung, Ling Sing; Yang, Can; Wan, Xiang; Yu, Weichuan

    2011-05-01

    Collecting millions of genetic variations is feasible with the advanced genotyping technology. With a huge amount of genetic variations data in hand, developing efficient algorithms to carry out the gene-gene interaction analysis in a timely manner has become one of the key problems in genome-wide association studies (GWAS). Boolean operation-based screening and testing (BOOST), a recent work in GWAS, completes gene-gene interaction analysis in 2.5 days on a desktop computer. Compared with central processing units (CPUs), graphic processing units (GPUs) are highly parallel hardware and provide massive computing resources. We are, therefore, motivated to use GPUs to further speed up the analysis of gene-gene interactions. We implement the BOOST method based on a GPU framework and name it GBOOST. GBOOST achieves a 40-fold speedup compared with BOOST. It completes the analysis of Wellcome Trust Case Control Consortium Type 2 Diabetes (WTCCC T2D) genome data within 1.34 h on a desktop computer equipped with Nvidia GeForce GTX 285 display card. GBOOST code is available at http://bioinformatics.ust.hk/BOOST.html#GBOOST.
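
    To illustrate the kind of exhaustive pairwise bookkeeping that BOOST/GBOOST accelerates, the following minimal Python sketch builds the 3 × 3 × 2 genotype-by-phenotype contingency table for a single SNP pair with vectorized boolean masks (the genotypes and phenotypes are random placeholders, and the actual BOOST screening statistic and GPU kernels are not reproduced here):

      # Contingency table (genotype of SNP A x genotype of SNP B x case/control) for one pair.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 2000
      snp_a = rng.integers(0, 3, size=n)         # genotypes coded 0/1/2
      snp_b = rng.integers(0, 3, size=n)
      phenotype = rng.integers(0, 2, size=n)     # 0 = control, 1 = case

      table = np.zeros((3, 3, 2), dtype=np.int64)
      for ga in range(3):
          mask_a = snp_a == ga
          for gb in range(3):
              mask_ab = mask_a & (snp_b == gb)
              table[ga, gb, 1] = np.count_nonzero(mask_ab & (phenotype == 1))
              table[ga, gb, 0] = np.count_nonzero(mask_ab) - table[ga, gb, 1]

      print(table.sum(), "individuals tabulated")   # equals n
      # An interaction statistic (e.g. a log-likelihood ratio test) would then be computed
      # from this table, repeated for every one of the ~p*(p-1)/2 SNP pairs.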

  13. Integrated Sustainable Planning for Industrial Region Using Geospatial Technology

    NASA Astrophysics Data System (ADS)

    Tiwari, Manish K.; Saxena, Aruna; Katare, Vivek

    2012-07-01

    Geospatial techniques and their scope of application have undergone an order-of-magnitude change since their advent, and they are now universally accepted as a most important and modern tool for mapping and monitoring various natural resources as well as amenities and infrastructure. The huge and voluminous spatial database generated from various Remote Sensing platforms needs proper management, such as storage, retrieval, manipulation and analysis, to extract the desired information, which is beyond the capability of the human brain. This is where computer-aided GIS technology comes in. A GIS with major input from Remote Sensing satellites for natural resource management applications must be able to handle spatiotemporal data, supporting spatiotemporal queries and other spatial operations. Software and computer-based tools are designed to make things easier for the user and to improve the efficiency and quality of information processing tasks. Natural resources are a common heritage, shared with past generations, and future generations will inherit these resources from us. Our greed for resources and our tremendous technological capacity to exploit them on a much larger scale have created a situation in which we have started drawing down future stocks. The Bhopal capital region has attracted the attention of planners since the beginning of the five-year-plan strategy for industrial development. A number of projects were carried out in the individual districts (Bhopal, Rajgarh, Shajapur, Raisen, Sehore) and gave fruitful results, but no serious effort has been made to involve the entire region. No use has been made of the latest geospatial techniques (Remote Sensing, GIS, GPS) to prepare a well-structured computerized database, without which it is very difficult to retrieve, analyze and compare the data for monitoring as well as for planning future development activities.

  14. UBioLab: a web-laboratory for ubiquitous in-silico experiments.

    PubMed

    Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo

    2012-07-09

    The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays in Internet represents a big challenge for biologists –for what concerns their management and visualization– and for bioinformaticians –for what concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory has imperatively to tackle –and possibly to handle in a transparent and uniform way– aspects concerning physical distribution, semantic heterogeneity, co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The framework UBioLab has been just designed and developed as a prototype following the above objective. Several architectural features –as those ones of being fully Web-based and of combining domain ontologies, Semantic Web and workflow techniques– give evidence of an effort in such a direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists providing (i) a flexible environment for visualizing, organizing and inferring any (semantics and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.

  15. @neurIST: infrastructure for advanced disease management through integration of heterogeneous data, computing, and complex processing services.

    PubMed

    Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven

    2010-11-01

    The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.

  16. A data management system to enable urgent natural disaster computing

    NASA Astrophysics Data System (ADS)

    Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton

    2014-05-01

    Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision making process. Getting the data to the required resources is a critical requirement for enabling the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to carry out data activities effectively within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resource. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to changes, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-hard deadline: missing a soft-hard deadline renders the computation less useful, resulting in a cost that can have severe consequences. Hard deadline: missing a hard deadline renders the computation useless and results in fully catastrophic consequences. A prototype of this system has a REST-based service manager. The REST-based implementation provides a uniform interface that is easy to use. New and upcoming file transfer protocols can easily be added and accessed via the service manager. The service manager interacts with the other four managers to coordinate the data activities so that the fundamental natural disaster urgent computing requirement, i.e. the deadline, can be fulfilled in a reliable manner. A data activity can include data staging, data archiving and data storage. Reliability is ensured by the choice of a network-of-managers organisation model [1], the configuration manager and the fault tolerance manager. With this proposed design, an easy-to-use, resource-independent data management system that can support and fulfil the computation of a natural disaster prediction within stipulated deadlines can thus be realised. References [1] H. G. Hegering, S. Abeck, and B. Neumair, Integrated management of networked systems - concepts, architectures, and their operational application, Morgan Kaufmann Publishers, 340 Pine Street, Sixth Floor, San Francisco, CA 94104-3205, USA, 1999. [2] H. Kopetz, Real-time systems design principles for distributed embedded applications, second edition, Springer, LLC, 233 Spring Street, New York, NY 10013, USA, 2011. [3] S. H. Leong, A. Frank, and D. Kranzlmüller, Leveraging e-infrastructures for urgent computing, Procedia Computer Science 18 (2013), no. 0, 2177-2186, 2013 International Conference on Computational Science. [4] N. Trebon, Enabling urgent computing within the existing distributed computing infrastructure, Ph.D. thesis, University of Chicago, August 2011, http://people.cs.uchicago.edu/~ntrebon/docs/dissertation.pdf.
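
    A minimal Python sketch of the deadline bookkeeping described above (the two deadline classes follow the abstract, but the activity, duration estimate and safety margin are invented; this is not the prototype's REST-based implementation):

      # Decide whether a data activity (e.g. staging input data) can still meet its deadlines.
      from dataclasses import dataclass
      import time

      @dataclass
      class UrgentActivity:
          name: str
          estimated_seconds: float     # predicted duration on the chosen resource
          soft_hard_deadline: float    # epoch seconds; missing it degrades usefulness
          hard_deadline: float         # epoch seconds; missing it makes the result useless

      def classify(activity, now=None, safety_margin=1.2):
          """Return 'ok', 'degraded' or 'useless' for the given activity."""
          now = time.time() if now is None else now
          finish = now + activity.estimated_seconds * safety_margin
          if finish <= activity.soft_hard_deadline:
              return "ok"
          if finish <= activity.hard_deadline:
              return "degraded"        # still worth running, but at a cost
          return "useless"             # choose a faster resource or transfer protocol instead

      staging = UrgentActivity("stage bathymetry data", 600.0,
                               soft_hard_deadline=time.time() + 900,
                               hard_deadline=time.time() + 1800)
      print(classify(staging))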

  17. Hardware-software face detection system based on multi-block local binary patterns

    NASA Astrophysics Data System (ADS)

    Acasandrei, Laurentiu; Barriga, Angel

    2015-03-01

    Face detection is an important aspect of biometrics, video surveillance and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP face detection algorithm targeting low-frequency, low-memory and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP-based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses deterministic bandwidth, has a low area profile, and its power consumption is ~95 mW on a Virtex5 XC5VLX50T. The overall acceleration gain is between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
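
    For context, the following minimal Python sketch computes the Multi-Block LBP feature itself: the mean intensity of each of the eight surrounding blocks in a 3 × 3 block grid is compared with the central block to form an 8-bit code (the window position, block size, test image and the clockwise bit ordering are illustrative assumptions, and this is not the paper's hardware kernel):

      # Multi-Block LBP code for one 3x3 grid of blocks in a grayscale image.
      import numpy as np

      def mb_lbp(image, x, y, block_w, block_h):
          """MB-LBP code for the 3x3 block grid whose top-left pixel is (x, y)."""
          means = np.empty((3, 3))
          for by in range(3):
              for bx in range(3):
                  block = image[y + by * block_h : y + (by + 1) * block_h,
                                x + bx * block_w : x + (bx + 1) * block_w]
                  means[by, bx] = block.mean()
          center = means[1, 1]
          # One common convention: clockwise neighbours starting at the top-left block.
          order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
          code = 0
          for bit, (by, bx) in enumerate(order):
              if means[by, bx] >= center:
                  code |= 1 << bit
          return code

      img = np.random.default_rng(1).integers(0, 256, size=(120, 160))   # fake grayscale frame
      print(mb_lbp(img, x=40, y=30, block_w=6, block_h=6))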

  18. Embedded ubiquitous services on hospital information systems.

    PubMed

    Kuroda, Tomohiro; Sasaki, Hiroshi; Suenaga, Takatoshi; Masuda, Yasushi; Yasumuro, Yoshihiro; Hori, Kenta; Ohboshi, Naoki; Takemura, Tadamasa; Chihara, Kunihiro; Yoshihara, Hiroyuki

    2012-11-01

    Hospital Information Systems (HIS) have turned the hospital into a gigantic computer with huge computational power, huge storage and a wired/wireless local area network. On the other hand, a modern medical device, such as an echograph, is a computer system with several functional units connected by an internal network called a bus. Therefore, we can embed such a medical device into the HIS by simply replacing the bus with the local area network. This paper describes the design and development of two such embedded systems: a ubiquitous echograph system and a networked digital camera. Evaluations of the developed systems clearly show that the proposed approach, embedding existing clinical systems into the HIS, drastically changes productivity in the clinical field. Once a clinical system becomes a pluggable unit of the gigantic computer system that is the HIS, the combination of multiple embedded systems with application software designed with deep consideration of clinical processes may lead to the emergence of disruptive innovation in the clinical field.

  19. WRF4SG: A Scientific Gateway for climate experiment workflows

    NASA Astrophysics Data System (ADS)

    Blanco, Carlos; Cofino, Antonio S.; Fernandez-Quiruelas, Valvanuz

    2013-04-01

    The Weather Research and Forecasting model (WRF) is a community-driven and public domain model widely used by the weather and climate communities. As opposite to other application-oriented models, WRF provides a flexible and computationally-efficient framework which allows solving a variety of problems for different time-scales, from weather forecast to climate change projection. Furthermore, WRF is also widely used as a research tool in modeling physics, dynamics, and data assimilation by the research community. Climate experiment workflows based on Weather Research and Forecasting (WRF) are nowadays among the one of the most cutting-edge applications. These workflows are complex due to both large storage and the huge number of simulations executed. In order to manage that, we have developed a scientific gateway (SG) called WRF for Scientific Gateway (WRF4SG) based on WS-PGRADE/gUSE and WRF4G frameworks to ease achieve WRF users needs (see [1] and [2]). WRF4SG provides services for different use cases that describe the different interactions between WRF users and the WRF4SG interface in order to show how to run a climate experiment. As WS-PGRADE/gUSE uses portlets (see [1]) to interact with users, its portlets will support these use cases. A typical experiment to be carried on by a WRF user will consist on a high-resolution regional re-forecast. These re-forecasts are common experiments used as input data form wind power energy and natural hazards (wind and precipitation fields). In the cases below, the user is able to access to different resources such as Grid due to the fact that WRF needs a huge amount of computing resources in order to generate useful simulations: * Resource configuration and user authentication: The first step is to authenticate on users' Grid resources by virtual organizations. After login, the user is able to select which virtual organization is going to be used by the experiment. * Data assimilation: In order to assimilate the data sources, the user has to select them browsing through LFC Portlet. * Design Experiment workflow: In order to configure the experiment, the user will define the type of experiment (i.e. re-forecast), and its attributes to simulate. In this case the main attributes are: the field of interest (wind, precipitation, ...), the start and end date simulation and the requirements of the experiment. * Monitor workflow: In order to monitor the experiment the user will receive notification messages based on events and also the gateway will display the progress of the experiment. * Data storage: Like Data assimilation case, the user is able to browse and view the output data simulations using LFC Portlet. The objectives of WRF4SG can be described by considering two goals. The first goal is to show how WRF4SG facilitates to execute, monitor and manage climate workflows based on the WRF4G framework. And the second goal of WRF4SG is to help WRF users to execute their experiment workflows concurrently using heterogeneous computing resources such as HPC and Grid. [1] Kacsuk, P.: P-GRADE portal family for grid infrastructures. Concurrency and Computation: Practice and Experience. 23, 235-245 (2011). [2] http://www.meteo.unican.es/software/wrf4g

  20. Genome and proteome annotation: organization, interpretation and integration

    PubMed Central

    Reeves, Gabrielle A.; Talavera, David; Thornton, Janet M.

    2008-01-01

    Recent years have seen a huge increase in the generation of genomic and proteomic data. This has been due to improvements in current biological methodologies, the development of new experimental techniques and the use of computers as support tools. All these raw data are useless if they cannot be properly analysed, annotated, stored and displayed. Consequently, a vast number of resources have been created to present the data to the wider community. Annotation tools and databases provide the means to disseminate these data and to comprehend their biological importance. This review examines the various aspects of annotation: type, methodology and availability. Moreover, it puts a special interest on novel annotation fields, such as that of phenotypes, and highlights the recent efforts focused on the integrating annotations. PMID:19019817

  1. Enabling the Discovery of Gravitational Radiation

    NASA Astrophysics Data System (ADS)

    Isaacson, Richard

    2017-01-01

    The discovery of gravitational radiation was announced with the publication of the results of a physics experiment involving over a thousand participants. This was preceded by a century of theoretical work, involving a similarly large group of physicists, mathematicians, and computer scientists. This huge effort was enabled by a substantial commitment of resources, both public and private, to develop the different strands of this complex research enterprise, and to build a community of scientists to carry it out. In the excitement following the discovery, the role of key enablers of this success has not always been adequately recognized in popular accounts. In this talk, I will try to call attention to a few of the key ingredients that proved crucial to enabling the successful discovery of gravitational waves, and the opening of a new field of science.

  2. Collaborative workbench for cyberinfrastructure to accelerate science algorithm development

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Maskey, M.; Kuo, K.; Lynnes, C.

    2013-12-01

    There are significant untapped resources for information and knowledge creation within the Earth Science community in the form of data, algorithms, services, analysis workflows or scripts, and the related knowledge about these resources. Despite the huge growth in social networking and collaboration platforms, these resources often reside on an investigator's workstation or laboratory and are rarely shared. A major reason for this is that there are very few scientific collaboration platforms, and those that exist typically require the use of a new set of analysis tools and paradigms to leverage the shared infrastructure. As a result, adoption of these collaborative platforms for science research is inhibited by the high cost to an individual scientist of switching from his or her own familiar environment and set of tools to a new environment and tool set. This presentation will describe an ongoing project developing an Earth Science Collaborative Workbench (CWB). The CWB approach will eliminate this barrier by augmenting a scientist's current research environment and tool set to allow him or her to easily share diverse data and algorithms. The CWB will leverage evolving technologies such as commodity computing and social networking to design an architecture for scalable collaboration that will support the emerging vision of an Earth Science Collaboratory. The CWB is being implemented on the robust and open source Eclipse framework and will be compatible with widely used scientific analysis tools such as IDL. The myScience Catalog built into CWB will capture and track metadata and provenance about data and algorithms for the researchers in a non-intrusive manner with minimal overhead. Seamless interfaces to multiple Cloud services will support sharing algorithms, data, and analysis results, as well as access to storage and computer resources. A Community Catalog will track the use of shared science artifacts and manage collaborations among researchers.

  3. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    PubMed Central

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
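
    The MinMin heuristic listed among the baselines above can be illustrated with a short, self-contained sketch. This is a generic textbook formulation with invented task lengths and VM speeds, not the GBLCA scheme and not CloudSim code.

```python
# Minimal sketch of the MinMin baseline heuristic (not GBLCA, not CloudSim):
# repeatedly pick the task whose best achievable completion time is smallest
# and assign it to the VM that yields that completion time.

def min_min_schedule(task_lengths, vm_speeds):
    """task_lengths: instruction counts (MI); vm_speeds: VM ratings (MIPS)."""
    ready_time = [0.0] * len(vm_speeds)      # when each VM becomes free
    assignment = {}                          # task index -> VM index
    unscheduled = set(range(len(task_lengths)))
    while unscheduled:
        best = None                          # (completion_time, task, vm)
        for t in unscheduled:
            for v, speed in enumerate(vm_speeds):
                completion = ready_time[v] + task_lengths[t] / speed
                if best is None or completion < best[0]:
                    best = (completion, t, v)
        completion, t, v = best
        assignment[t] = v
        ready_time[v] = completion
        unscheduled.remove(t)
    return assignment, max(ready_time)       # schedule and its makespan

if __name__ == "__main__":
    plan, makespan = min_min_schedule([4000, 12000, 6000, 9000, 3000], [1000, 2500])
    print(plan, round(makespan, 2))
```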

  4. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    PubMed

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.

  5. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay-as-you-go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
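
    The de Bruijn graph construction that GiGA distributes over Hadoop/Giraph can be sketched on a single machine as follows; the reads and k value are toy inputs, and this is only the core graph-building idea, not GiGA itself.

```python
# Single-machine sketch of de Bruijn graph construction from short reads:
# each k-mer contributes an edge from its (k-1)-mer prefix to its (k-1)-mer suffix.
from collections import defaultdict

def de_bruijn_graph(reads, k):
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])   # prefix node -> suffix node
    return graph

if __name__ == "__main__":
    toy_reads = ["ACGTAC", "CGTACG", "GTACGT"]
    for node, successors in sorted(de_bruijn_graph(toy_reads, k=4).items()):
        print(node, "->", sorted(successors))
```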

  6. Grid Computing Environment using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Alanis, Fransisco; Mahmood, Akhtar

    2003-10-01

    Custom-made Beowulf clusters built from PCs are currently replacing expensive supercomputers to carry out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf cluster for HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphical user environment. We will demonstrate a "simple" prototype grid environment, where we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.

  7. Next Generation Sequence Analysis and Computational Genomics Using Graphical Pipeline Workflows

    PubMed Central

    Torri, Federica; Dinov, Ivo D.; Zamanyan, Alen; Hobel, Sam; Genco, Alex; Petrosyan, Petros; Clark, Andrew P.; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Knowles, James A.; Ames, Joseph; Kesselman, Carl; Toga, Arthur W.; Potkin, Steven G.; Vawter, Marquis P.; Macciardi, Fabio

    2012-01-01

    Whole-genome and exome sequencing have already proven to be essential and powerful methods to identify genes responsible for simple Mendelian inherited disorders. These methods can be applied to complex disorders as well, and have been adopted as one of the current mainstream approaches in population genetics. These achievements have been made possible by next generation sequencing (NGS) technologies, which require substantial bioinformatics resources to analyze the dense and complex sequence data. The huge analytical burden of data from genome sequencing might be seen as a bottleneck slowing the publication of NGS papers at this time, especially in psychiatric genetics. We review the existing methods for processing NGS data, to place into context the rationale for the design of a computational resource. We describe our method, the Graphical Pipeline for Computational Genomics (GPCG), to perform the computational steps required to analyze NGS data. The GPCG implements flexible workflows for basic sequence alignment, sequence data quality control, single nucleotide polymorphism analysis, copy number variant identification, annotation, and visualization of results. These workflows cover all the analytical steps required for NGS data, from processing the raw reads to variant calling and annotation. The current version of the pipeline is freely available at http://pipeline.loni.ucla.edu. These applications of NGS analysis may gain clinical utility in the near future (e.g., identifying miRNA signatures in diseases) when the bioinformatics approach is made feasible. Taken together, the annotation tools and strategies that have been developed to retrieve information and test hypotheses about the functional role of variants present in the human genome will help to pinpoint the genetic risk factors for psychiatric disorders. PMID:23139896

  8. Wireless Internet Gateways (WINGS)

    DTIC Science & Technology

    1997-01-01

    WIRELESS INTERNET GATEWAYS (WINGS). J.J. Garcia-Luna-Aceves, Chane L. Fullmer, Ewerton Madruga, Computer Engineering Department, University of...rooftop.com Abstract— Today's internetwork technology has been extremely successful in linking huge numbers of computers and users. However, to date...this technology has been oriented to computer interconnection in relatively stable operational environments, and thus cannot adequately support many of

  9. Three pillars for achieving quantum mechanical molecular dynamics simulations of huge systems: Divide-and-conquer, density-functional tight-binding, and massively parallel computation.

    PubMed

    Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi

    2016-08-05

    The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
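
    The Fermi-level determination mentioned above can be pictured with the standard serial bisection that divide-and-conquer schemes refine; the sketch below is not the paper's interpolation-based parallel algorithm, and the orbital energies, electron count and inverse temperature are invented values.

```python
# Standard bisection for the chemical potential mu such that the total
# Fermi-Dirac occupation (2 electrons per spatial orbital) matches the
# electron count; a serial stand-in for the parallel DC scheme described above.
import numpy as np

def fermi_level(energies, n_electrons, beta=100.0, tol=1e-10):
    def occupation(mu):
        x = np.clip(beta * (energies - mu), -500.0, 500.0)   # avoid overflow in exp
        return 2.0 * np.sum(1.0 / (1.0 + np.exp(x)))
    lo, hi = energies.min() - 10.0, energies.max() + 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if occupation(mid) < n_electrons:
            lo = mid                     # too few electrons: raise mu
        else:
            hi = mid                     # too many electrons: lower mu
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    orbital_energies = np.array([-1.2, -0.8, -0.5, -0.1, 0.3, 0.9])  # toy values
    print(round(fermi_level(orbital_energies, n_electrons=6), 6))
```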

  10. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre doing image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.
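
    The mosaic option described above (dividing a huge image into small tiles, with or without overlap) can be illustrated with a NumPy-only sketch; this is not the NDPITools code, which operates on NDPI/TIFF files, and the array below merely stands in for a slide.

```python
# NumPy-only illustration of the mosaic idea: split a 2-D array into tiles
# whose edges overlap by a chosen number of pixels.
import numpy as np

def make_mosaic(image, tile, overlap=0):
    step = tile - overlap                         # requires tile > overlap
    tiles = {}
    for y in range(0, image.shape[0], step):
        for x in range(0, image.shape[1], step):
            tiles[(y, x)] = image[y:y + tile, x:x + tile]
    return tiles

if __name__ == "__main__":
    fake_slide = np.arange(100 * 120).reshape(100, 120)   # stand-in for a huge slide
    pieces = make_mosaic(fake_slide, tile=64, overlap=8)
    print(len(pieces), "tiles; first tile shape:", pieces[(0, 0)].shape)
```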

  11. FilTer BaSe: A web accessible chemical database for small compound libraries.

    PubMed

    Kolte, Baban S; Londhe, Sanjay R; Solanki, Bhushan R; Gacche, Rajesh N; Meshram, Rohan J

    2018-03-01

    Finding novel chemical agents for targeting disease-associated drug targets often requires screening of large numbers of new chemical libraries. In silico methods are generally implemented at the initial stages for virtual screening. Filtering of such compound libraries on physicochemical and substructure grounds is done to ensure elimination of compounds with undesired chemical properties. The filtering procedure is redundant and time consuming, and requires efficient bioinformatics/computing manpower along with high-end software involving huge capital investment, which forms a major obstacle in drug discovery projects in an academic setup. We present an open source resource, FilTer BaSe, a chemoinformatics platform (http://bioinfo.net.in/filterbase/) that hosts fully filtered, ready-to-use compound libraries of workable size. The resource also hosts a database that enables efficient searching of the chemical space of around 348,000 compounds on the basis of physicochemical and substructure properties. The ready-to-use compound libraries and database presented here are expected to lend a helping hand to new drug developers and medicinal chemists. Copyright © 2017 Elsevier Inc. All rights reserved.
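
    The kind of physicochemical pre-filter described above can be sketched as follows; the cutoff values are common Lipinski-style defaults, not necessarily those used by FilTer BaSe, and the compound records are hypothetical.

```python
# Sketch of a physicochemical filter over precomputed compound properties
# (Lipinski-style cutoffs used here only for illustration).

def passes_filter(c, mw_max=500.0, logp_max=5.0, hbd_max=5, hba_max=10):
    return (c["mol_weight"] <= mw_max and c["logp"] <= logp_max
            and c["h_donors"] <= hbd_max and c["h_acceptors"] <= hba_max)

library = [  # hypothetical records with precomputed descriptors
    {"id": "CMPD-1", "mol_weight": 342.4, "logp": 2.1, "h_donors": 2, "h_acceptors": 5},
    {"id": "CMPD-2", "mol_weight": 712.9, "logp": 6.3, "h_donors": 4, "h_acceptors": 12},
]

print([c["id"] for c in library if passes_filter(c)])   # ['CMPD-1']
```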

  12. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.
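
    The cost-benefit question posed above can be made concrete with a back-of-the-envelope model (not the OVIS methodology): compare the expected hours lost per node-day under purely periodic checkpointing with the loss when a predictor of given recall triggers a just-in-time checkpoint before the failures it catches, at the price of some false alarms. All numbers are illustrative.

```python
# Rough expected overhead (hours lost per node-day): periodic checkpoint cost
# plus rework after failures; predicted failures are preceded by a just-in-time
# checkpoint and so cause almost no rework, while false alarms add checkpoints.

def expected_lost_work(mttf_hours, interval_hours, ckpt_cost_hours,
                       recall=0.0, false_alarms_per_day=0.0):
    failures_per_day = 24.0 / mttf_hours
    checkpoints_per_day = 24.0 / interval_hours + false_alarms_per_day
    rework_per_failure = (1.0 - recall) * interval_hours / 2.0   # missed failures only
    return checkpoints_per_day * ckpt_cost_hours + failures_per_day * rework_per_failure

# No predictor vs. an imperfect predictor (illustrative figures only).
print(round(expected_lost_work(12.0, 1.0, 0.05), 3))                   # 2.2
print(round(expected_lost_work(12.0, 1.0, 0.05, recall=0.7,
                               false_alarms_per_day=3.0), 3))          # 1.65
```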

  13. Can cloud computing benefit health services? - a SWOT analysis.

    PubMed

    Kuo, Mu-Hsing; Kushniruk, Andre; Borycki, Elizabeth

    2011-01-01

    In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare but there are a number of issues that will need to be addressed before its widespread use in healthcare.

  14. ACToR Chemical Structure processing using Open Source ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs, in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast and high quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Included are also data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources. ACToR has been a resource to various international and national research groups. Most of our recent efforts on ACToR are focused on improving the structural identifiers and Physico-Chemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database has posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The Structure processing workflow integrates web services (PubChem and NIH NCI Cactus) to d

  15. Knowledge portal: a tool to capture university requirements

    NASA Astrophysics Data System (ADS)

    Mansourvar, Marjan; Binti Mohd Yasin, Norizan

    2011-10-01

    New technologies, especially the Internet, have made a huge impact on knowledge management and information dissemination in education. The web portal as a knowledge management system is a very popular topic in many organizations, including universities. Generally, a web portal is defined as a gateway to online network-accessible resources through the intranet, extranet or Internet. This study develops a knowledge portal for the students in the Faculty of Computer Science and Information Technology (FCSIT), University of Malaya (UM). The goals of this portal are to provide information for the students and to help them choose the right courses and major relevant to their intended future jobs or careers in IT. A quantitative approach was used as the research method. The quantitative method provides an easy and useful way to collect data from a large sample population.

  16. New Technology and Information Explosion.

    ERIC Educational Resources Information Center

    Johns, David

    A flood of new electronic technologies promises to usher in the Information Age and alter economic and social structures. Telematics, a potent combination of telecommunications and computer technologies, could eventually bring huge volumes of information to great numbers of people by making large data bases accessible to computer terminals in…

  17. A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steensland, Johan; Ray, Jaideep

    2003-07-01

    This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a meta-partitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU. But even with adaption, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaption causes the workload to change dynamically, calling for dynamic (re-)partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful to lower overall execution times for many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.

  18. The Right Tools for the Job: How Can Aquatic Resource Education Succeed in the Classroom?

    ERIC Educational Resources Information Center

    Fortner, Rosanne W.

    Because of its bases in science and stewardship, aquatic resource education may be seen as a type of environmental education. The range of environmental education (EE) programs includes a huge variety designed for different groups and settings. This chapter takes the perspective of environmental education as it is done in the formal K-12 classroom…

  19. Detecting Abnormal Machine Characteristics in Cloud Infrastructures

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Das, Kamalika; Matthews, Bryan L.

    2011-01-01

    In the cloud computing environment, resources are accessed as services rather than as a product. Monitoring this system for performance is crucial because of the typical pay-per-use packages bought by users for their jobs. With the huge number of machines currently in the cloud system, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia, which lack system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine in the cloud in order to rank the machines in order of their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis, and at the end of the analysis our algorithm generates error reports, thereby allowing the system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate the fact that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.
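
    A centralized toy version of the machine-ranking idea (not the paper's distributed algorithm) is sketched below: machines are ranked by how far a single performance metric deviates from the fleet median, in robust z-score units. The metric name and values are invented.

```python
# Rank machines by a robust z-score of one performance metric: distance from
# the fleet median in units of the median absolute deviation (MAD).
import numpy as np

def rank_anomalous_machines(metric_by_machine):
    names = list(metric_by_machine)
    values = np.array([metric_by_machine[n] for n in names], dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median)) or 1.0   # guard against zero MAD
    scores = np.abs(values - median) / mad
    return sorted(zip(names, scores), key=lambda pair: -pair[1])

# Hypothetical per-machine I/O-wait percentages observed during one job.
print(rank_anomalous_machines({"node01": 3.1, "node02": 2.9,
                               "node03": 41.7, "node04": 3.4}))
```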

  20. Advanced Optical Burst Switched Network Concepts

    NASA Astrophysics Data System (ADS)

    Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian

    In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice, offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (Petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing wavelength-granularity bandwidth services is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications may be: scientific collaboration on a smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, and evolving broadband user services (e.g., high resolution home video editing, real-time rendering, high definition interactive TV). As a specific example, in e-health services, and in particular mammography applications, the size and quantity of images produced by remote mammography impose stringent network requirements. Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. From the above it is clear that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, storage, computation, and visualization resources potentially available to a wide user base for specified time durations. As these types of collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
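
    The mammography figure quoted above translates into a sustained throughput requirement as follows (a trivial arithmetic check, assuming decimal gigabytes).

```python
# 1.2 GB every 30 s, taking 1 GB = 1e9 bytes.
data_bytes = 1.2e9
window_s = 30.0
gbit_per_s = data_bytes * 8 / window_s / 1e9
print(f"Sustained throughput needed: {gbit_per_s:.2f} Gbit/s")   # 0.32 Gbit/s
```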

  1. Mobile devices in medicine: a survey of how medical students, residents, and faculty use smartphones and other mobile devices to find information.

    PubMed

    Boruff, Jill T; Storie, Dale

    2014-01-01

    The research investigated the extent to which students, residents, and faculty members in Canadian medical faculties use mobile devices, such as smartphones (e.g., iPhone, Android, Blackberry) and tablet computers (e.g., iPad), to answer clinical questions and find medical information. The results of this study will inform how health libraries can effectively support mobile technology and collections. An electronic survey was distributed by medical librarians at four Canadian universities to medical students, residents, and faculty members via departmental email discussion lists, personal contacts, and relevant websites. It investigated the types of information sought, facilitators to mobile device use in medical information seeking, barriers to access, support needs, familiarity with institutionally licensed resources, and most frequently used resources. The survey of 1,210 respondents indicated widespread use of smartphones and tablets in clinical settings in 4 Canadian universities. Third- and fourth-year undergraduate students (i.e., those in their clinical clerkships) and medical residents, compared to other graduate students and faculty, used their mobile devices more often, used them for a broader range of activities, and purchased more resources for their devices. Technological and intellectual barriers do not seem to prevent medical trainees and faculty from regularly using mobile devices for their medical information searches; however, barriers to access and lack of awareness might keep them from using reliable, library-licensed resources. Libraries should focus on providing access to a smaller number of highly used mobile resources instead of a huge collection until library-licensed mobile resources have streamlined authentication processes.

  2. Problems and Prospects of Science Education in Bangladesh

    NASA Astrophysics Data System (ADS)

    Choudhury, Shamima K.

    2009-04-01

    Scientific and technological know-how, not the amount of natural resources, determines the development of a country. Bangladesh, with insignificant natural resources and a huge population on a small piece of land, can be developed through scientific and technological means. Whereas it was once the most sought-after subject at secondary and postsecondary levels, science is losing its appeal in an alarming shift of choice. Problems in science education and possible solutions for Bangladesh, which has limited resources for encouraging science education, are presented.

  3. The use of computers to teach human anatomy and physiology to allied health and nursing students

    NASA Astrophysics Data System (ADS)

    Bergeron, Valerie J.

    Educational institutions are under tremendous pressure to adopt the newest technologies in order to prepare their students to meet the challenges of the twenty-first century. For the last twenty years huge amounts of money have been spent on computers, printers, software, multimedia projection equipment, and so forth. A reasonable question is, "Has it worked?" Has this infusion of resources, financial as well as human, resulted in improved learning? Are the students meeting the intended learning goals? Any attempt to develop answers to these questions should include examining the intended goals and exploring the effects of the changes on students and faculty. This project investigated the impact of a specific application of a computer program in a community college setting on students' attitudes and understanding of human anatomy and physiology. In this investigation two sites of the same community college, seven miles apart and with seemingly similar student populations, used different laboratory activities to teach human anatomy and physiology. At one site nursing students were taught using traditional dissections and laboratory activities; at the other site two of the dissections, specifically cat and sheep pluck, were replaced with the A.D.A.M.® (Animated Dissection of Anatomy for Medicine) computer program. Analysis of the attitude data indicated that students at both sites were extremely positive about their laboratory experiences. Analysis of the content data indicated a statistically significant difference in performance between the two sites in two of the eight content areas that were studied. For both topics the students using the computer program scored higher. A detailed analysis of the surveys, interviews with faculty and students, examination of laboratory materials, observations of laboratory facilities at both sites, and a cost-benefit analysis led to the development of seven recommendations. The recommendations call for action at the level of the institution, requiring investment in additional resources, and at the level of the faculty, requiring a commitment to exploration and reflective practice.

  4. Award-Winning Animation Helps Scientists See Nature at Work | News | NREL

    Science.gov Websites

    Scientists See Nature at Work, August 8, 2008. [Image caption: a computer-aided image combines a photo of a man with a three-dimensional, computer-generated image.] … "It is very difficult to parallelize the process to run even on a huge computer,"

  5. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    ERIC Educational Resources Information Center

    Sun, Shaohui

    2013-01-01

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though there is a huge volume of work that has been done, many problems still remain…

  6. Computer Literacy Course for Teacher for the 21st Century

    ERIC Educational Resources Information Center

    Tatkovic, Nevenka; Ruzic, Maja

    2004-01-01

    The life and activities of every man in the transitional period from the second to the third millennium have been characterized by huge changes resulting from the scientific and technological revolution, in which highly developed information and communication technology dominates. This paper concludes that to attain IT-literacy and computer literacy would…

  7. Range wise busy checking 2-way imbalanced algorithm for cloudlet allocation in cloud environment

    NASA Astrophysics Data System (ADS)

    Alanzy, Mohammed; Latip, Rohaya; Muhammed, Abdullah

    2018-05-01

    Cloud computing has been considered a new business paradigm and has become a popular platform over the last few years. Many organizations, agencies, and departments handle time-critical tasks that need to be accomplished as soon as possible, and they encounter IT issues due to the massive rise of data, applications, and solution scopes. Currently, the main issue in the cloud is how to make the cloud computing environment more capable, which requires a competent cloudlet allocation strategy. Thus, a huge number of studies have been conducted on this matter, seeking to assign cloudlets to VMs or resources through a variety of strategies. In this paper we propose a range-wise busy-checking 2-way imbalanced algorithm for cloud computing. Compared to other methods, it decreases the completion time of task execution, which is fundamental to enhancing system performance metrics such as the makespan. The algorithm was simulated using CloudSim; it gives higher-speed VMs more opportunity to accommodate more cloudlets in their local queues without considering the threshold balance condition. The simulation results show that the average makespan is lower compared to the previous cloudlet allocation strategy.
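
    The intuition that faster VMs should absorb more cloudlets can be shown with a toy greedy allocator (not the paper's range-wise busy-checking algorithm): each cloudlet goes to the VM with the earliest estimated finish time, so higher-speed VMs naturally end up with longer local queues, and the makespan is reported. Cloudlet lengths and VM speeds are invented.

```python
# Toy greedy allocation: assign each cloudlet to the VM that would finish it
# earliest, then report per-VM queues and the resulting makespan.

def allocate(cloudlet_lengths, vm_speeds):
    finish_time = [0.0] * len(vm_speeds)
    queues = [[] for _ in vm_speeds]
    for c, length in enumerate(cloudlet_lengths):
        v = min(range(len(vm_speeds)),
                key=lambda i: finish_time[i] + length / vm_speeds[i])
        finish_time[v] += length / vm_speeds[v]
        queues[v].append(c)
    return queues, max(finish_time)

queues, makespan = allocate([800, 1200, 400, 1600, 900, 700], vm_speeds=[500, 1500])
print(queues, round(makespan, 2))   # the faster VM ends up with the longer queue
```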

  8. National equity of health resource allocation in China: data from 2009 to 2013.

    PubMed

    Liu, Wen; Liu, Ying; Twum, Peter; Li, Shixue

    2016-04-19

    The inequitable allocation of health resources is a worldwide problem, and it is also one of the obstacles to health services utilization in China. A new round of health care reform, which contains the important aspect of improving equity in health resource allocation, was released by the Chinese government in 2009. The aim of this study is to understand the changes in equity of health resource allocation from 2009 to 2013, and to make a further inquiry into the main factors which influence the equity conditions in China. Data resources are the China Health Statistics Yearbook (2014) and the China Statistical Yearbook (2014). Four indicators were chosen to measure the trends in equity of health resource allocation. Data were disaggregated by three geographical regions: west, central, and east. The Theil index was used to calculate the degree of unfairness. The total amount of health care resources in China had been increasing in recent years. However, the number of health resources per 10,000 km² showed a huge gap between regions, while health resource ownership per 10,000 capita showed relatively small disparities at the same time. The Theil index of health resources showed an overall downward trend, with health financial investment the most unfair dimension from 2009 to 2012 and the number of health institutions the most unfair in 2013. The equity of health resource allocation in the eastern region was the worst, except for the aspect of health technical personnel allocation. The regional contribution rates were lower than the inter-regional contribution rates, which were all beyond 60%. The equity of health resource allocation improved gradually from 2009 to 2013. However, the internal differences within the eastern region still have a huge impact on the overall equity in health resource allocation. The tough issue of inequity in health resource allocation should be resolved by comprehensive measures from a multidisciplinary perspective.
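
    The Theil index used in the study can be computed as sketched below; this is one common population-weighted form of the index, and the regional figures are invented for illustration.

```python
# One common population-weighted Theil form: sum over regions of s_i * ln(s_i / p_i),
# where s_i is a region's share of the resource and p_i its share of the population.
# The index is 0 when resource shares match population shares everywhere.
import math

def theil_index(resources, populations):
    total_r, total_p = sum(resources), sum(populations)
    t = 0.0
    for r, p in zip(resources, populations):
        s_i, p_i = r / total_r, p / total_p
        if s_i > 0:
            t += s_i * math.log(s_i / p_i)
    return t

# Invented example: hospital beds per region vs. population (millions) per region.
print(round(theil_index([120, 80, 40], [5.0, 4.0, 3.0]), 4))
```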

  9. Comparative Modeling of Proteins: A Method for Engaging Students' Interest in Bioinformatics Tools

    ERIC Educational Resources Information Center

    Badotti, Fernanda; Barbosa, Alan Sales; Reis, André Luiz Martins; do Valle, Ítalo Faria; Ambrósio, Lara; Bitar, Mainá

    2014-01-01

    The huge increase in data being produced in the genomic era has produced a need to incorporate computers into the research process. Sequence generation, its subsequent storage, interpretation, and analysis are now entirely computer-dependent tasks. Universities from all over the world have been challenged to seek a way of encouraging students to…

  10. Securing SIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amounts of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as personal identity, locations or even financial profiles. This observation has recently aroused new research interest in privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise, and change in 3D viewpoint and illumination.
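
    The "randomly splitting the original image data" step can be pictured with plain additive secret sharing over a small modulus; this generic sketch is not the SecSIFT protocols for secure multiplication and comparison, and the modulus and image are toy choices.

```python
# Additively share an integer image between two servers: each share alone is
# uniformly random, but the two shares sum (mod MOD) back to the original.
import numpy as np

MOD = 2 ** 16   # toy modulus

def split_image(image, rng):
    share_a = rng.integers(0, MOD, size=image.shape, dtype=np.int64)
    share_b = (image.astype(np.int64) - share_a) % MOD
    return share_a, share_b

def reconstruct(share_a, share_b):
    return (share_a + share_b) % MOD

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.int64)   # toy 8-bit "image"
a, b = split_image(img, rng)
assert np.array_equal(reconstruct(a, b), img)
print("reconstruction OK")
```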

  11. SecSIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amounts of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as personal identity, locations or even financial profiles. This observation has recently aroused new research interest in privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise, and change in 3D viewpoint and illumination.

  12. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre doing image analysis of many slides on a computer cluster. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272 PMID:23829479

  13. Mobile devices in medicine: a survey of how medical students, residents, and faculty use smartphones and other mobile devices to find information*

    PubMed Central

    Boruff, Jill T.; Storie, Dale

    2014-01-01

    Objectives: The research investigated the extent to which students, residents, and faculty members in Canadian medical faculties use mobile devices, such as smartphones (e.g., iPhone, Android, Blackberry) and tablet computers (e.g., iPad), to answer clinical questions and find medical information. The results of this study will inform how health libraries can effectively support mobile technology and collections. Methods: An electronic survey was distributed by medical librarians at four Canadian universities to medical students, residents, and faculty members via departmental email discussion lists, personal contacts, and relevant websites. It investigated the types of information sought, facilitators to mobile device use in medical information seeking, barriers to access, support needs, familiarity with institutionally licensed resources, and most frequently used resources. Results: The survey of 1,210 respondents indicated widespread use of smartphones and tablets in clinical settings in 4 Canadian universities. Third- and fourth-year undergraduate students (i.e., those in their clinical clerkships) and medical residents, compared to other graduate students and faculty, used their mobile devices more often, used them for a broader range of activities, and purchased more resources for their devices. Conclusions: Technological and intellectual barriers do not seem to prevent medical trainees and faculty from regularly using mobile devices for their medical information searches; however, barriers to access and lack of awareness might keep them from using reliable, library-licensed resources. Implications: Libraries should focus on providing access to a smaller number of highly used mobile resources instead of a huge collection until library-licensed mobile resources have streamlined authentication processes. PMID:24415916

  14. Constructing Optimal Coarse-Grained Sites of Huge Biomolecules by Fluctuation Maximization.

    PubMed

    Li, Min; Zhang, John Zenghui; Xia, Fei

    2016-04-12

    Coarse-grained (CG) models are valuable tools for the study of functions of large biomolecules on large length and time scales. The definition of CG representations for huge biomolecules is always a formidable challenge. In this work, we propose a new method called fluctuation maximization coarse-graining (FM-CG) to construct the CG sites of biomolecules. The defined residual in FM-CG converges to a maximal value as the number of CG sites increases, allowing an optimal CG model to be rigorously defined on the basis of the maximum. More importantly, we developed a robust algorithm called stepwise local iterative optimization (SLIO) to accelerate the process of coarse-graining large biomolecules. By means of the efficient SLIO algorithm, the computational cost of coarse-graining large biomolecules is reduced to within the time scale of seconds, which is far lower than that of conventional simulated annealing. The coarse-graining of two huge systems, chaperonin GroEL and lengsin, indicates that our new methods can coarse-grain huge biomolecular systems with up to 10,000 residues within the time scale of minutes. The further parametrization of CG sites derived from FM-CG allows us to construct the corresponding CG models for studies of the functions of huge biomolecular systems.

  15. Reactive transport modeling in the subsurface environment with OGS-IPhreeqc

    NASA Astrophysics Data System (ADS)

    He, Wenkui; Beyer, Christof; Fleckenstein, Jan; Jang, Eunseon; Kalbacher, Thomas; Naumov, Dimitri; Shao, Haibing; Wang, Wenqing; Kolditz, Olaf

    2015-04-01

    Worldwide, sustainable water resource management becomes an increasingly challenging task due to the growth of population and extensive applications of fertilizer in agriculture. Moreover, climate change causes further stresses to both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on the water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. Such computational expense is not only caused by solving the high non-linearity of the initial boundary value problems of water flow in the unsaturated zone numerically with rather fine spatial and temporal discretization for the correct mass balance and numerical stability, but also by the intensive computational task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of advantages from both codes: OGS provides a flexible choice of different numerical approaches for simulation of water flow in the vadose zone such as the pressure-based or mixed forms of Richards equation; whereas the IPhreeqc module leads to a simplification of data storage and its communication with OGS, which greatly facilitates the coupling and code updating. Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application in a large scale scenario, in which the environmental fate of pesticides in a complex soil-aquifer system is studied.
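
    The coupling described above alternates a flow/transport step with a per-cell chemistry step; the sketch below shows only that sequential operator-splitting idea in one dimension, with a first-order decay reaction standing in for the geochemical solver. It is not OGS or IPhreeqc code, and all parameters are toy values.

```python
# Sequential operator splitting in 1-D: explicit upwind advection of a
# concentration field, then an independent per-cell "reaction" step
# (first-order decay here, standing in for a geochemical solver call).
import numpy as np

def step(conc, velocity, dx, dt, decay_rate):
    upstream = np.concatenate(([0.0], conc[:-1]))           # inflow concentration 0
    conc = conc - velocity * dt / dx * (conc - upstream)     # transport sub-step
    return conc * np.exp(-decay_rate * dt)                   # reaction sub-step (per cell)

c = np.zeros(50)
c[0] = 1.0                                                   # pulse entering the column
for _ in range(40):
    c = step(c, velocity=0.5, dx=1.0, dt=1.0, decay_rate=0.02)   # CFL = 0.5, stable
print("peak", round(float(c.max()), 4), "at cell", int(c.argmax()))
```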

  16. Reactive transport modeling in variably saturated porous media with OGS-IPhreeqc

    NASA Astrophysics Data System (ADS)

    He, W.; Beyer, C.; Fleckenstein, J. H.; Jang, E.; Kalbacher, T.; Shao, H.; Wang, W.; Kolditz, O.

    2014-12-01

    Worldwide, sustainable water resource management becomes an increasingly challenging task due to the growth of population and extensive applications of fertilizer in agriculture. Moreover, climate change causes further stresses to both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on the water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. Such computational expense is not only caused by solving the high non-linearity of the initial boundary value problems of water flow in the unsaturated zone numerically with rather fine spatial and temporal discretization for the correct mass balance and numerical stability, but also by the intensive computational task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of advantages from both codes: OGS provides a flexible choice of different numerical approaches for simulation of water flow in the vadose zone such as the pressure-based or mixed forms of Richards equation; whereas the IPhreeqc module leads to a simplification of data storage and its communication with OGS, which greatly facilitates the coupling and code updating. Moreover, a parallelization scheme with MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas the efficient parallelization of geochemical reactions is achieved by smart allocation of computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application in a large scale scenario, in which the environmental fate of pesticides in a complex soil-aquifer system is studied.

  17. Organisational aspects and benchmarking of e-learning initiatives: a case study with South African community health workers.

    PubMed

    Reisach, Ulrike; Weilemann, Mitja

    2016-06-01

    South Africa desperately needs a comprehensive approach to fight HIV/AIDS. Education is crucial to reach this goal and Internet and e-learning could offer huge opportunities to broaden and deepen the knowledge basis. But due to the huge societal and digital divide between rich and poor areas, e-learning is difficult to realize in the townships. Community health workers often act as mediators and coaches for people seeking medical and personal help. They could give good advice regarding hygiene, nutrition, protection of family members in case of HIV/AIDS and finding legal ways to earn one's living if they were trained to do so. Therefore they need to have a broader general knowledge. Since learning opportunities in the townships are scarce, a system for e-learning has to be created in order to overcome the lack of experience with computers or the Internet and to enable them to implement a network of expertise. The article describes how the best international resources on basic medical knowledge, HIV/AIDS as well as on basic economic and entrepreneurial skills were benchmarked to be integrated into an e-learning system. After tests with community health workers, researchers developed recommendations on building a self-sustaining system for learning, including a network of expertise and best practice sharing. The article explains the opportunities and challenges for community health workers, which could provide information for other parts of the world with similar preconditions of rural poverty. © The Author(s) 2015.

  18. RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application

    PubMed Central

    2015-01-01

    Background The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulatory pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). Moreover, the huge volume of data generated by NGS platforms introduces unprecedented computational and technological challenges to efficiently analyze and store sequence data and results. Methods In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with Tophat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Results Through a user-friendly web interface, the RAP workflow can be suitably customized by the user, and it is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export analyzed data, according to the user's needs. PMID:26046471
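
    The modular, resumable flavor of such a workflow can be sketched with a tiny step runner; the step names and command strings below are placeholders, not the actual FastQC/TopHat/Cufflinks invocations used by RAP.

```python
# Minimal sketch of a modular, resumable pipeline runner: each step runs once,
# drops a marker file, and is skipped on re-runs. Commands are placeholders.
import subprocess
from pathlib import Path

STEPS = [
    ("quality_check",  ["echo", "running quality check"]),
    ("alignment",      ["echo", "running alignment"]),
    ("quantification", ["echo", "running quantification"]),
]

def run_pipeline(workdir="pipeline_sketch_run"):
    out = Path(workdir)
    out.mkdir(exist_ok=True)
    for name, cmd in STEPS:
        marker = out / f"{name}.done"
        if marker.exists():                 # already completed on a previous run
            continue
        subprocess.run(cmd, check=True)     # raise if the step fails
        marker.touch()

if __name__ == "__main__":
    run_pipeline()
```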

  19. The Mayak Worker Dosimetry System (MWDS-2013): Implementation of the Dose Calculations.

    PubMed

    Zhdanov, A; Vostrotin, V; Efimov, A; Birchall, A; Puncher, M

    2016-07-15

    The calculation of internal doses for the Mayak Worker Dosimetry System (MWDS-2013) required extensive computational resources due to the complexity and sheer number of calculations involved. The required output consisted of a set of 1000 hyper-realizations, each hyper-realization consisting of a set (one for each worker) of probability distributions of organ doses. This report describes the hardware components and computational approaches required to make the calculation tractable. Together with the software, this system is referred to here as the 'PANDORA system'. It is based on a commercial SQL server database running on a series of six workstations. A complete run of the entire Mayak worker cohort entailed a huge number of calculations in PANDORA, and due to the relatively slow speed of writing the data into the SQL server, each run took about 47 days. Quality control was monitored by comparing doses calculated in PANDORA with those from a specially modified version of the commercial software 'IMBA Professional Plus'. Suggestions are also made for increasing calculation and storage efficiency for future dosimetry calculations using PANDORA. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Efficiency Evaluation of Handling of Geologic-Geophysical Information by Means of Computer Systems

    NASA Astrophysics Data System (ADS)

    Nuriyahmetova, S. M.; Demyanova, O. V.; Zabirova, L. M.; Gataullin, I. I.; Fathutdinova, O. A.; Kaptelinina, E. A.

    2018-05-01

    Development of oil and gas resources under difficult geological, geographical and economic conditions requires considerable financial outlay; planned activities therefore need careful justification and the application of the most promising approaches and modern technologies from the standpoint of cost efficiency. To ensure high precision of regional and local forecasts and of reservoir modeling for hydrocarbon fields, it is necessary to analyze huge arrays of distributed, spatially changing information. Solving this task requires modern remote methods for investigating prospective oil-and-gas territories, the combined use of remote, non-destructive geologic-geophysical data and satellite Earth-sounding methods, and the most advanced technologies for processing them. In the article, the authors consider the experience of Russian and foreign companies in processing geologic-geophysical information by means of computer systems. They conclude that multidimensional analysis of the geologic-geophysical information space and effective planning and monitoring of exploration work require broad use of geoinformation technologies, one of the most promising directions for achieving high profitability in the oil and gas industry.

  1. Calculating the Mean Amplitude of Glycemic Excursions from Continuous Glucose Data Using an Open-Code Programmable Algorithm Based on the Integer Nonlinear Method.

    PubMed

    Yu, Xuefei; Lin, Liangzhuo; Shen, Jie; Chen, Zhi; Jian, Jun; Li, Bin; Xin, Sherman Xuegang

    2018-01-01

    The mean amplitude of glycemic excursions (MAGE) is an essential index for glycemic variability assessment and a key reference for blood glucose control in the clinic. However, the traditional "ruler and pencil" manual method for calculating MAGE is time-consuming and prone to error due to the huge data size, making the development of a robust computer-aided program an urgent requirement. Although several software products are available as alternatives to manual calculation, poor agreement among them has been reported. Therefore, more studies are required in this field. In this paper, we developed a mathematical algorithm based on integer nonlinear programming. Following the proposed mathematical method, an open-code computer program named MAGECAA v1.0 was developed and validated. The results of the statistical analysis indicated that the developed program was robust compared to the manual method. The agreement between the developed program and currently available popular software is satisfactory, indicating that concern about disagreement among different software products is unnecessary. The open-code programmable algorithm is an additional resource for peers interested in future studies on the methodology.
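    For readers unfamiliar with the index, a minimal sketch of the classical MAGE definition (mean of glycemic excursions exceeding one standard deviation of the trace) is shown below; this is the textbook approach used for illustration only, not the integer nonlinear programming formulation implemented in MAGECAA v1.0.

```python
import statistics


def mage(glucose: list) -> float:
    """Classical MAGE: mean of |peak - nadir| excursions larger than 1 SD.

    Simplified sketch of the textbook definition; MAGECAA v1.0 instead solves
    an integer nonlinear program and may differ on noisy traces.
    """
    sd = statistics.stdev(glucose)
    # Keep the local turning points (peaks and nadirs) of the glucose trace.
    turns = [glucose[0]] + [
        g for prev, g, nxt in zip(glucose, glucose[1:], glucose[2:])
        if (g - prev) * (nxt - g) < 0
    ] + [glucose[-1]]
    excursions = [abs(b - a) for a, b in zip(turns, turns[1:]) if abs(b - a) > sd]
    return sum(excursions) / len(excursions) if excursions else 0.0


# Toy continuous glucose trace (mg/dL), illustrative values only.
print(mage([90, 120, 180, 140, 80, 150, 200, 110, 95, 160]))
```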

  2. DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.

    2010-03-01

    Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution is not yet explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care and new image processing techniques. These research areas usually require the use of huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristics of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow the sharing of data on a large scale in a safe and integrated environment and offer high computing capabilities. In this paper we describe DICOMGrid, a middleware to store and retrieve medical images, properly anonymized, that can be used by researchers to test new processing techniques using the computational power offered by grid technology. A prototype of DICOMGrid is under evaluation and permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of grid operation.

  3. Analysis of an algorithm for distributed recognition and accountability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, C.; Frincke, D.A.; Goan, T. Jr.

    1993-08-01

    Computer and network systems are vulnerable to attacks. Abandoning the existing huge infrastructure of possibly insecure computer and network systems is impossible, and replacing them with totally secure systems may not be feasible or cost effective. A common element in many attacks is that a single user will often attempt to intrude upon multiple resources throughout a network. Detecting the attack can become significantly easier by compiling and integrating evidence of such intrusion attempts across the network rather than attempting to assess the situation from the vantage point of only a single host. To solve this problem, we suggest an approach for distributed recognition and accountability (DRA), which consists of algorithms that "process," at a central location, distributed and asynchronous "reports" generated by computers (or a subset thereof) throughout the network. Our highest-priority objectives are to observe the ways by which an individual moves around in a network of computers, including changing user names to possibly hide his/her true identity, and to associate all activities of multiple instances of the same individual with the same network-wide user. We present the DRA algorithm and a sketch of its proof under an initial set of simplifying albeit realistic assumptions. Later, we relax these assumptions to accommodate pragmatic aspects such as missing or delayed "reports," clock slew, tampered "reports," etc. We believe that such algorithms will have widespread applications in the future, particularly in intrusion-detection systems.

  4. On finding bicliques in bipartite graphs: a novel algorithm and its application to the integration of diverse biological data types

    PubMed Central

    2014-01-01

    Background Integrating and analyzing heterogeneous genome-scale data is a huge algorithmic challenge for modern systems biology. Bipartite graphs can be useful for representing relationships across pairs of disparate data types, with the interpretation of these relationships accomplished through an enumeration of maximal bicliques. Most previously-known techniques are generally ill-suited to this foundational task, because they are relatively inefficient and without effective scaling. In this paper, a powerful new algorithm is described that produces all maximal bicliques in a bipartite graph. Unlike most previous approaches, the new method neither places undue restrictions on its input nor inflates the problem size. Efficiency is achieved through an innovative exploitation of bipartite graph structure, and through computational reductions that rapidly eliminate non-maximal candidates from the search space. An iterative selection of vertices for consideration based on non-decreasing common neighborhood sizes boosts efficiency and leads to more balanced recursion trees. Results The new technique is implemented and compared to previously published approaches from graph theory and data mining. Formal time and space bounds are derived. Experiments are performed on both random graphs and graphs constructed from functional genomics data. It is shown that the new method substantially outperforms the best previous alternatives. Conclusions The new method is streamlined, efficient, and particularly well-suited to the study of huge and diverse biological data. A robust implementation has been incorporated into GeneWeaver, an online tool for integrating and analyzing functional genomics experiments, available at http://geneweaver.org. The enormous increase in scalability it provides empowers users to study complex and previously unassailable gene-set associations between genes and their biological functions in a hierarchical fashion and on a genome-wide scale. This practical computational resource is adaptable to almost any applications environment in which bipartite graphs can be used to model relationships between pairs of heterogeneous entities. PMID:24731198
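    As a point of reference for the task the paper addresses, the brute-force enumeration below makes the notion of a maximal biclique concrete; it is exponential in the number of left vertices and is for illustration only, unrelated to the efficient algorithm described in the paper (the toy gene/annotation labels are assumptions).

```python
from itertools import combinations


def maximal_bicliques(edges):
    """Brute-force maximal biclique enumeration in a bipartite graph.

    Exponential in the number of left vertices; illustration only, not the
    paper's efficient algorithm.
    """
    left = sorted({u for u, _ in edges})
    adj = {u: {v for x, v in edges if x == u} for u in left}

    found = set()
    for k in range(1, len(left) + 1):
        for subset in combinations(left, k):
            # Right vertices adjacent to every chosen left vertex.
            common = set.intersection(*(adj[u] for u in subset))
            if not common:
                continue
            # Close the left side: all left vertices adjacent to all of `common`.
            closed_left = tuple(sorted(u for u in left if common <= adj[u]))
            found.add((closed_left, tuple(sorted(common))))
    return found


# Genes on the left, annotations on the right (toy data).
edges = [("g1", "a"), ("g1", "b"), ("g2", "a"), ("g2", "b"), ("g3", "b")]
for lhs, rhs in sorted(maximal_bicliques(edges)):
    print(lhs, "<->", rhs)
```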

  5. Desperately Seeking Special Ed Teachers

    ERIC Educational Resources Information Center

    Butler, Kevin

    2008-01-01

    It's no secret that the dearth of special education teachers has created huge headaches for district human resources departments, especially in suburban and rural areas. In addition to insufficient numbers of candidates applying for special education jobs, retention of special education teachers is an ever-greater problem, as research indicates…

  6. Molecular Nanotechnology and Space Settlement

    NASA Technical Reports Server (NTRS)

    Globus, Al; Saini, Subhash (Technical Monitor)

    1998-01-01

    Atomically precise manipulation of matter is becoming increasingly common in laboratories around the world. As this control moves into aerospace systems, huge improvements in computers, high-strength materials, and other systems are expected. For example, studies suggest that it may be possible to build: 10(exp 18) MIPS computers, 10(exp 15) bytes/sq cm write-once memory, $153-412/kg-of-cargo single-stage-to-orbit launch vehicles and active materials which sense their environment and react intelligently. All of NASA's enterprises should benefit significantly from molecular nanotechnology. Although the time may be measured in decades and the precise path to molecular nanotechnology is unclear, all paths (diamondoid, fullerene, self-assembly, biomolecular, etc.) will require very substantial computation. This talk will discuss fullerene nanotechnology and early work on hypothetical active materials consisting of large numbers of identical machines. The speaker will also discuss aerospace applications, particularly missions leading to widespread space settlement (e.g., small near-Earth-object retrieval). It is interesting to note that control of the tiny - individual atoms and molecules - may lead to colonization of the huge - first the solar system, then the galaxy.

  7. Huge mediastinal liposarcoma resected by clamshell thoracotomy: a case report.

    PubMed

    Toda, Michihito; Izumi, Nobuhiro; Tsukioka, Takuma; Komatsu, Hiroaki; Okada, Satoshi; Hara, Kantaro; Ito, Ryuichi; Shibata, Toshihiko; Nishiyama, Noritoshi

    2017-12-01

    Liposarcoma is the single most common soft tissue sarcoma. Because mediastinal liposarcomas often grow rapidly and frequently recur locally despite adjuvant chemotherapy and radiotherapy, they require complete excision. Therefore, the feasibility of achieving complete surgical excision must be carefully considered. We here report a case of a huge mediastinal liposarcoma resected via clamshell thoracotomy. A 64-year-old man presented with dyspnea on effort. Cardiomegaly had been diagnosed 6 years previously, but had been left untreated. A computed tomography scan showed a huge (36 cm diameter) anterior mediastinal tumor expanding into the pleural cavities bilaterally. The tumor comprised mostly fatty tissue but contained two solid areas. Echo-guided needle biopsies were performed and a diagnosis of an atypical lipomatous tumor was established by pathological examination of the biopsy samples. Surgical resection was performed via a clamshell incision, enabling en bloc resection of this huge tumor. Although there was no invasion of surrounding organs, the left brachiocephalic vein was resected because it was circumferentially surrounded by tumor and could not be preserved. The tumor weighed 3500 g. Pathologic examination of the resected tumor resulted in a diagnosis of a biphasic tumor comprising dedifferentiated liposarcoma and non-adipocytic sarcoma with necrotic areas. The patient remains free of recurrent tumor 20 months postoperatively. Clamshell incision provides an excellent surgical field and can be performed safely in patients with huge mediastinal liposarcomas.

  8. Librarians as Community Partners: An Outreach Handbook

    ERIC Educational Resources Information Center

    Smallwood, Carol, Ed.

    2010-01-01

    Including 66 focused snapshots of outreach in action, this resource reflects the creative solutions of librarians searching for new and innovative ways to build programs that meet customer needs while expanding the library's scope into the community. This contributed volume includes: (1) A huge array of program options for partnering with other…

  9. Hoop Hoop Hooray!

    ERIC Educational Resources Information Center

    Tomsett, Ruth

    2008-01-01

    The author believes that Venn diagrams are a useful, yet hugely underused resource, to encourage purposeful talk, reasoning and logical thinking both within mathematics and across the curriculum. Here, she describes ways in which Venn diagrams can be used to add challenge and develop reasoning, discussion and mathematical thinking at Key Stage 2.…

  10. Leading for Learning

    ERIC Educational Resources Information Center

    Edwards, Virginia B., Ed.

    2006-01-01

    After a decade or so spent largely on setting academic standards against which to hold schools accountable, states are themselves being held accountable for helping schools figure out how to meet them. The result is a huge leadership challenge. With few or no added resources, state education agencies are retooling to provide more technical support…

  11. Usefulness and preference for tablet personal computers by medical students: are the features worth the money?

    PubMed

    Wiese, Dawn; Atreja, Ashish; Mehta, Neil

    2008-11-06

    Tablet Personal Computers (PCs) have a huge potential in medical education due to their interactive human-computer interface and the need for anatomical diagrams, annotations, biochemistry flow charts etc. We conducted an online survey of medical students to determine their pattern of usage of the tablet features. The results revealed that the majority of medical students use the tablet features infrequently and most do not place a high value on the tablet features.

  12. Calculating semantic relatedness for biomedical use in a knowledge-poor environment.

    PubMed

    Rybinski, Maciej; Aldana-Montes, José

    2014-01-01

    Computing semantic relatedness between textual labels representing biological and medical concepts is a crucial task in many automated knowledge extraction and processing applications relevant to the biomedical domain, specifically due to the huge amount of new findings being published each year. Most methods benefit from making use of highly specific resources, thus reducing their usability in many real world scenarios that differ from the original assumptions. In this paper we present a simple resource-efficient method for calculating semantic relatedness in a knowledge-poor environment. The method obtains results comparable to state-of-the-art methods, while being more generic and flexible. The solution being presented here was designed to use only a relatively generic and small document corpus and its statistics, without referring to a previously defined knowledge base, thus it does not assume a 'closed' problem. We propose a method in which computation for two input texts is based on the idea of comparing the vocabulary associated with the best-fit documents related to those texts. As keyterm extraction is a costly process, it is done in a preprocessing step on a 'per-document' basis in order to limit the on-line processing. The actual computations are executed in a compact vector space, limited by the most informative extraction results. The method has been evaluated on five direct benchmarks by calculating correlation coefficients w.r.t. average human answers. It also has been used on Gene-Disease and Disease-Disease data pairs to highlight its potential use as a data analysis tool. Apart from comparisons with reported results, some interesting features of the method have been studied, i.e. the relationship between result quality, efficiency and applicable trimming threshold for size reduction. Experimental evaluation shows that the presented method obtains results that are comparable with current state-of-the-art methods, even surpassing them on a majority of the benchmarks. Additionally, a possible usage scenario for the method is showcased with a real-world data experiment. Our method improves flexibility of the existing methods without a notable loss of quality. It is a legitimate alternative to the costly construction of specialized knowledge-rich resources.
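    The core comparison step can be pictured as a cosine similarity between the keyterm vocabularies of the best-fit documents retrieved for each input text. The sketch below illustrates that general idea only, with hypothetical keyterm weights; it is not the authors' implementation.

```python
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse keyterm-weight vectors."""
    dot = sum(w * b[t] for t, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


# Hypothetical keyterm weights of the best-fit documents for two input labels.
doc_terms_label1 = Counter({"insulin": 3.1, "glucose": 2.4, "receptor": 1.2})
doc_terms_label2 = Counter({"glucose": 2.0, "metabolism": 1.8, "receptor": 0.9})

print(f"relatedness ~ {cosine(doc_terms_label1, doc_terms_label2):.3f}")
```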

  13. Estimation of evapotranspiration rate in irrigated lands using stable isotopes

    NASA Astrophysics Data System (ADS)

    Umirzakov, Gulomjon; Windhorst, David; Forkutsa, Irina; Brauer, Lutz; Frede, Hans-Georg

    2013-04-01

    Agriculture in the Aral Sea basin is the main consumer of water resources and, due to current agricultural management practices, inefficient water usage causes huge losses of freshwater resources. There is huge potential to save water and to reach a more efficient water use in irrigated areas. Therefore, research is required to reveal the mechanisms of hydrological fluxes in irrigated areas. This paper focuses on the estimation of evapotranspiration, which is one of the crucial components in the water balance of irrigated lands. Our main objective is to estimate the rate of evapotranspiration on irrigated lands and its partitioning into evaporation and transpiration using stable isotope measurements. Experiments were carried out in irrigated areas with two different soil types (sandy and sandy loam) in the Ferghana Valley (Uzbekistan). Soil samples were collected during the vegetation period. The soil water from these samples was extracted via a cryogenic extraction method and analyzed for the isotopic ratio of the water isotopes (2H and 18O) based on a laser spectroscopy method (DLT 100, Los Gatos USA). Evapotranspiration rates were estimated with the isotope mass balance method. The evapotranspiration results obtained using the isotope mass balance method are compared with the results of the Catchment Modelling Framework 1D model applied in the same area over the same period.

  14. Procedures for Geometric Data Reduction in Solid Log Modelling

    Treesearch

    Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt

    1995-01-01

    One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.

  15. Blinded by Science.

    ERIC Educational Resources Information Center

    Snyder, Tom

    1994-01-01

    Huge infusion of technology is coming into education; nothing can stop it, because so much money is involved. With computer marketers in the driver's seat instead of teachers, schools risk being blinded by science. Vendors have coopted progressive education buzzwords, including "frontal teaching," "linear thinking," and "computer…

  16. SPHINX--an algorithm for taxonomic binning of metagenomic sequences.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S

    2011-01-01

    Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms are significantly higher. However, being alignment-based, the latter class of algorithms requires an enormous amount of time and computing resources for binning huge metagenomic datasets. The motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but that nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is observed to be comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.

  17. 1990 censuses to increase use of automation.

    PubMed

    Ward, S E

    1988-12-01

    This article summarizes information from selected reports presented at the 12th Population Census Conference. Ward reports that plans for the 1990 census in many countries of Asia and the Pacific call for increased use of automation, with applications ranging from the use of computer-generated maps of enumeration areas and optical mark readers for data processing to desktop publishing and electronic mail for disseminating the results. Recent advances in automation offer opportunities for improved accuracy and speed of census operations while reducing the need for clerical personnel. Most of the technologies discussed at the 12th Population Census Conference are designed to make the planning, editing, processing, analysis, and publication of census data more reliable and efficient. However, technology alone cannot overcome high rates of illiteracy that preclude having respondents complete the census forms themselves. But it enables even China, India, Indonesia and Pakistan - countries with huge populations and limited financial resources - to make significant improvements in their forthcoming censuses.

  18. Large Scale Document Inversion using a Multi-threaded Computing System

    PubMed Central

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, reviews, etc., are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by multi-threaded or multi-core GPUs. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD), document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU, to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts •Information systems➝Information retrieval •Computing methodologies➝Massively parallel and high-performance simulations. PMID:29861701
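    A sequential, hash-based inverted index (the kind of baseline a parallel GPU version would be compared against) can be sketched in a few lines; the sketch below is a generic illustration, not the authors' CUDA implementation.

```python
from collections import defaultdict


def build_inverted_index(documents):
    """Sequential hash-based document inversion: term -> sorted list of doc ids.

    Baseline illustration only; the paper's SPMD version parallelizes this
    step across GPU threads with CUDA.
    """
    index = defaultdict(set)
    for doc_id, text in enumerate(documents):
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}


docs = [
    "parallel document inversion on the GPU",
    "inverted index structures for document retrieval",
]
print(build_inverted_index(docs)["document"])   # -> [0, 1]
```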

  19. Large Scale Document Inversion using a Multi-threaded Computing System.

    PubMed

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, reviews, etc., are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by multi-threaded or multi-core GPUs. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD), document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU, to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. •Information systems➝Information retrieval •Computing methodologies➝Massively parallel and high-performance simulations.

  20. Automated Finite State Workflow for Distributed Data Production

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Didenko, L.; Lauret, J.; Amol, J.; Betts, W.; Jang, H. J.; Noh, S. Y.

    2016-10-01

    In statistically hungry science domains, data deluges can be both a blessing and a curse. They allow the narrowing of statistical errors from known measurements, and open the door to new scientific opportunities as research programs mature. They are also a testament to the efficiency of experimental operations. However, growing data samples may need to be processed with little or no opportunity for huge increases in computing capacity. A standard strategy has thus been to share resources across multiple experiments at a given facility. Another has been to use middleware that “glues” resources across the world so they are able to locally run the experimental software stack (either natively or virtually). We describe a framework STAR has successfully used to reconstruct a ~400 TB dataset consisting of over 100,000 jobs submitted to a remote site in Korea from STAR's Tier 0 facility at the Brookhaven National Laboratory. The framework automates the full workflow, taking raw data files from tape and writing Physics-ready output back to tape without operator or remote site intervention. Through hardening we have demonstrated 97(±2)% efficiency, over a period of 7 months of operation. The high efficiency is attributed to finite state checking with retries to encourage resilience in the system over capricious and fallible infrastructure.
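    The resilience mechanism described above (finite-state checking with bounded retries over fallible infrastructure) can be sketched as a tiny state machine that re-submits failed steps a limited number of times. The states, step names and failure model below are illustrative assumptions, not the STAR production framework.

```python
import random

# Toy finite-state workflow with bounded retries (illustrative only; the
# states and steps are placeholders, not the STAR production framework).
STEPS = ["stage_from_tape", "reconstruct", "write_to_tape"]


def run_step(name: str) -> bool:
    """Pretend to run a step on fallible infrastructure (10% failure rate)."""
    return random.random() > 0.1


def run_job(max_retries: int = 3) -> str:
    state = 0
    while state < len(STEPS):
        step = STEPS[state]
        for attempt in range(1, max_retries + 1):
            if run_step(step):
                break
            print(f"{step}: attempt {attempt} failed, retrying")
        else:
            return f"FAILED at {step}"      # retries exhausted
        state += 1                          # advance the finite-state machine
    return "DONE"


print(run_job())
```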

  1. Classroom Interaction: Potential or Problem? The Case of Karagwe

    ERIC Educational Resources Information Center

    Wedin, Asa

    2010-01-01

    This paper discusses interactional patterns in classrooms in primary school in rural Tanzania, based on an ethnographic study on literacy practices. The paper argues that the official policy of Swahili-only in primary school, together with the huge gap between high expectations on educational outcome and lack of resources, have resulted in the…

  2. Teacher Management and Educational Reforms: Paradigm Shifts

    ERIC Educational Resources Information Center

    Cheng, Yin Cheong

    2009-01-01

    In the past 15 years, numerous reforms and initiatives in many countries in the Asia-Pacific region have aimed to change education and promote new learning to prepare the new generation for the future. Unfortunately, despite good intentions and huge investments of resources, many of these reforms have been found to be ineffective and…

  3. Christchurch, New Zealand

    ERIC Educational Resources Information Center

    Rofes, Eric

    2003-01-01

    Out of the moral panic surrounding the education of boys comes at least one good resource: this valuable book for teachers. While work on gender theory, queer theory, and the social construction of identity (Davies, 1995) have made huge inroads within the academy over the past dozen years, such theoretical thinking often seems exiled from K-12…

  4. Marking and Moderation in the UK: False Assumptions and Wasted Resources

    ERIC Educational Resources Information Center

    Bloxham, Sue

    2009-01-01

    This article challenges a number of assumptions underlying marking of student work in British universities. It argues that, in developing rigorous moderation procedures, we have created a huge burden for markers which adds little to accuracy and reliability but creates additional work for staff, constrains assessment choices and slows down…

  5. Crocodile Chemistry. [CD-ROM].

    ERIC Educational Resources Information Center

    1999

    This high school chemistry resource is an on-screen chemistry lab. In the program, students can experiment with a huge range of chemicals, choosing the form, quantity and concentrations. Dangerous or difficult experiments can be investigated safely and easily. A vast range of equipment can be set up, and complex simulations can be put together and…

  6. Knowledge Maps for E-Literacy in ICT-Rich Learning Environments

    ERIC Educational Resources Information Center

    Taha, Ahmed

    2005-01-01

    The Web-based information and communication technology (w-ICT) has become a powerful means for delivery and dissemination of digitised information among the emerging virtual learning and business communities. The w-ICT has engendered a growing cybersphere paradigm to accommodate a huge mass of e-resources cast over the Web. Such abundance of…

  7. Out of This World

    ERIC Educational Resources Information Center

    Wiskow, Julie

    2016-01-01

    Julie Wiskow's fascination with space started in 2013-2014 when her year 5 (age 9-10 year) class took part in the European Space Education Resource Office (ESERO) Primary Space project and they were one of the first primary schools in the UK to achieve a Gold "Space Education Quality Mark". The experience was a huge success, with…

  8. Effects of spatial allocation and parameter variability on lakewide estimates from surveys of Lake Superior, North America’s largest lake

    EPA Science Inventory

    Lake Superior was sampled in 2011 using a Generalized Random Tessellation Stratified design (n=54 sites) to characterize biological and chemical properties of this huge aquatic resource, with statistical confidence. The lake was divided into two strata (inshore <100m and offsh...

  9. Look who's talking. A guide to interoperability groups and resources.

    PubMed

    2011-06-01

    There are huge challenges in getting medical devices to communicate with other devices and to information systems. Fortunately, a number of groups have emerged to help hospitals cope. Here's a description of the most prominent ones, including useful web links for each. We also discuss the latest and most pertinent interoperability standards.

  10. Risk Management: Earning Recognition with an Automated Safety Program

    ERIC Educational Resources Information Center

    Lansberry, Linden; Strasburger, Tom

    2012-01-01

    Risk management is a huge task that requires diligent oversight to avoid penalties, fines, or lawsuits. Add in the burden of limited resources that schools face today, and the challenge of meeting the required training, reporting, compliance, and other administrative issues associated with a safety program is almost insurmountable. Despite an…

  11. On the 'principle of the quantumness', the quantumness of Relativity, and the computational grand-unification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Ariano, Giacomo Mauro

    2010-05-04

    I will argue that the proposal of establishing operational foundations of Quantum Theory should have top priority, and that Lucien Hardy's program on Quantum Gravity should be paralleled by an analogous program on Quantum Field Theory (QFT), which needs to be reformulated, notwithstanding its experimental success. In this paper, after reviewing recently suggested operational 'principles of the quantumness', I address the problem of whether Quantum Theory and Special Relativity are unrelated theories, or instead, whether the one implies the other. I show how Special Relativity can indeed be derived from causality of Quantum Theory, within the computational paradigm 'the universe is a huge quantum computer', reformulating QFT as a Quantum-Computational Field Theory (QCFT). In QCFT Special Relativity emerges from the fabric of the computational network, which also naturally embeds gauge invariance. In this scheme even the quantization rule and the Planck constant can in principle be derived as emergent from the underlying causal tapestry of space-time. In this way Quantum Theory remains the only theory operating the huge computer of the universe. Is the computational paradigm only a speculative tautology (theory as simulation of reality), or does it have a scientific value? The answer will come from Occam's razor, depending on the mathematical simplicity of QCFT. Here I will just start scratching the surface of QCFT, analyzing simple field theories, including Dirac's. The number of problems and unmotivated recipes that plague QFT strongly motivates us to undertake the QCFT project, since QCFT makes all such problems manifest, and forces a re-foundation of QFT.

  12. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
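    To make the surrogate idea concrete, the sketch below fits a support vector regression surrogate to a small set of (input, simulator output) pairs and then predicts new responses. It assumes scikit-learn and uses a toy stand-in for the expensive groundwater simulation model; it is not the authors' KELM implementation or their parameter-optimization procedure.

```python
import numpy as np
from sklearn.svm import SVR

# Toy stand-in for an expensive simulation model (illustrative only).
def simulation_model(x):
    return np.sin(x[:, 0]) + 0.5 * x[:, 1]


rng = np.random.default_rng(0)
X_train = rng.uniform(0, 3, size=(40, 2))      # sampled source parameters
y_train = simulation_model(X_train)            # expensive runs, done once

# The SVR surrogate then replaces the simulator inside the optimization loop.
surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

X_new = rng.uniform(0, 3, size=(5, 2))
print("surrogate:", np.round(surrogate.predict(X_new), 3))
print("simulator:", np.round(simulation_model(X_new), 3))
```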

  13. Asteroids@Home

    NASA Astrophysics Data System (ADS)

    Durech, Josef; Hanus, J.; Vanco, R.

    2012-10-01

    We present a new project called Asteroids@home (http://asteroidsathome.net/boinc). It is a volunteer-computing project that uses the open-source BOINC (Berkeley Open Infrastructure for Network Computing) software to distribute tasks to volunteers, who provide their computing resources. The project was created at the Astronomical Institute, Charles University in Prague, in cooperation with the Czech National Team. The scientific aim of the project is to solve the time-consuming inverse problem of reconstructing asteroid shapes from sparse-in-time photometry. The time-demanding nature of the problem comes from the fact that with sparse-in-time photometry the rotation period of an asteroid is not a priori known, and a huge parameter space must be densely scanned for the best solution. The nature of the problem makes it an ideal task to be solved by distributed computing - the period parameter space can be divided into small bins that can be scanned separately and then joined together to give the globally best solution. In the framework of the project, we process asteroid photometric data from surveys together with asteroid lightcurves, and we derive asteroid shapes and spin states. The algorithm is based on the lightcurve inversion method developed by Kaasalainen et al. (Icarus 153, 37, 2001). The enormous potential of distributed computing will also enable us to effectively process the data from future surveys (Large Synoptic Survey Telescope, Gaia mission, etc.). We also plan to process data of a synthetic asteroid population to reveal biases of the method. In our presentation, we will describe the project, show the first results (new models of asteroids), and discuss the possibilities of its further development. This work has been supported by the grant GACR P209/10/0537 of the Czech Science Foundation and by the Research Program MSM0021620860 of the Ministry of Education of the Czech Republic.
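    The embarrassingly parallel decomposition described above amounts to splitting the trial-period range into small bins that volunteer hosts scan independently. The sketch below shows that partitioning step only; the period range, bin count and work-unit granularity are illustrative assumptions, not the project's actual parameters.

```python
def period_bins(p_min_h=2.0, p_max_h=100.0, n_bins=1000):
    """Split the trial rotation-period range [p_min_h, p_max_h] hours into
    independent work units (illustrative numbers, not the project's settings)."""
    step = (p_max_h - p_min_h) / n_bins
    return [(p_min_h + i * step, p_min_h + (i + 1) * step) for i in range(n_bins)]


units = period_bins()
print(len(units), units[0], units[-1])
# Each (lo, hi) bin is scanned by a volunteer host; the globally best solution
# is then taken as the best fit over all returned bins.
```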

  14. Planform: an application and database of graph-encoded planarian regenerative experiments.

    PubMed

    Lobo, Daniel; Malone, Taylor J; Levin, Michael

    2013-04-15

    Understanding the mechanisms governing the regeneration capabilities of many organisms is a fundamental interest in biology and medicine. An ever-increasing number of manipulation and molecular experiments are attempting to discover a comprehensive model for regeneration, with the planarian flatworm being one of the most important model species. Despite much effort, no comprehensive, constructive, mechanistic models exist yet, and it is now clear that computational tools are needed to mine this huge dataset. However, until now, there is no database of regenerative experiments, and the current genotype-phenotype ontologies and databases are based on textual descriptions, which are not understandable by computers. To overcome these difficulties, we present here Planform (Planarian formalization), a manually curated database and software tool for planarian regenerative experiments, based on a mathematical graph formalism. The database contains more than a thousand experiments from the main publications in the planarian literature. The software tool provides the user with a graphical interface to easily interact with and mine the database. The presented system is a valuable resource for the regeneration community and, more importantly, will pave the way for the application of novel artificial intelligence tools to extract knowledge from this dataset. The database and software tool are freely available at http://planform.daniel-lobo.com.

  15. Smart sensors and virtual physiology human approach as a basis of personalized therapies in diabetes mellitus.

    PubMed

    Fernández Peruchena, Carlos M; Prado-Velasco, Manuel

    2010-01-01

    Diabetes mellitus (DM) has a growing incidence and prevalence in modern societies, pushed by aging and changing lifestyles. Despite the huge resources dedicated to improving quality of life and mortality and morbidity rates, these are still very poor. In this work, DM pathology is reviewed from clinical and metabolic points of view, as well as mathematical models related to DM, with the aim of justifying an evolution of DM therapies towards the correction of the physiological metabolic loops involved. We analyze the reliability of mathematical models, under the perspective of virtual physiological human (VPH) initiatives, for generating and integrating customized knowledge about patients, which is needed for that evolution. Wearable smart sensors play a key role in this frame, as they provide the patient's information to the models. A telehealthcare computational architecture based on distributed smart sensors (first processing layer) and personalized physiological mathematical models integrated in Human Physiological Images (HPI) computational components (second processing layer) is presented. This technology was designed for renal disease telehealthcare in earlier works and promotes crossroads between smart sensors and the VPH initiative. We suggest that it is able to support a truly personalized, preventive, and predictive healthcare model for the delivery of evolved DM therapies.

  16. Smart Sensors and Virtual Physiology Human Approach as a Basis of Personalized Therapies in Diabetes Mellitus

    PubMed Central

    Fernández Peruchena, Carlos M; Prado-Velasco, Manuel

    2010-01-01

    Diabetes mellitus (DM) has a growing incidence and prevalence in modern societies, pushed by aging and changing lifestyles. Despite the huge resources dedicated to improving quality of life and mortality and morbidity rates, these are still very poor. In this work, DM pathology is reviewed from clinical and metabolic points of view, as well as mathematical models related to DM, with the aim of justifying an evolution of DM therapies towards the correction of the physiological metabolic loops involved. We analyze the reliability of mathematical models, under the perspective of virtual physiological human (VPH) initiatives, for generating and integrating customized knowledge about patients, which is needed for that evolution. Wearable smart sensors play a key role in this frame, as they provide the patient's information to the models. A telehealthcare computational architecture based on distributed smart sensors (first processing layer) and personalized physiological mathematical models integrated in Human Physiological Images (HPI) computational components (second processing layer) is presented. This technology was designed for renal disease telehealthcare in earlier works and promotes crossroads between smart sensors and the VPH initiative. We suggest that it is able to support a truly personalized, preventive, and predictive healthcare model for the delivery of evolved DM therapies. PMID:21625646

  17. Capacitated location of collection sites in an urban waste management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghiani, Gianpaolo, E-mail: gianpaolo.ghiani@unisalento.it; Itaca S.r.l., via P. Bucci 41C, 87036 Rende; Lagana, Demetrio, E-mail: dlagana@deis.unical.it

    2012-07-15

    Urban waste management is becoming an increasingly complex task, absorbing a huge amount of resources and having a major environmental impact. The design of a waste management system consists of various activities, one of which is the location of waste collection sites. In this paper, we propose an integer programming model that helps decision makers in choosing the sites where to locate the unsorted waste collection bins in a residential town, as well as the capacities of the bins to be located at each collection site. This model helps in assessing tactical decisions through constraints that force each collection area to have enough capacity to fit the expected waste directed to that area, while taking into account Quality of Service constraints from the citizens' point of view. Moreover, we propose an effective constructive heuristic approach whose aim is to provide good solution quality in an extremely reduced computational time. Computational results on data related to the city of Nardo, in the south of Italy, show that both the exact and heuristic approaches provide consistently better solutions than the one currently implemented, resulting in a lower number of activated collection sites and a lower number of bins to be used.
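    A toy capacitated site-location model of the flavor described above can be written with an off-the-shelf integer programming modeler. The sketch below assumes the PuLP library; the zones, demands, capacities, objective weights and the convention that one bin holds one unit of expected waste are all illustrative assumptions, not the paper's actual formulation or data.

```python
# Minimal capacitated collection-site location sketch (assuming PuLP; the data
# and weights are illustrative, not the model or instances from the paper).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

demand = {"zone_a": 8.0, "zone_b": 5.0, "zone_c": 9.0}   # expected waste (bin-units)
sites = ["s1", "s2", "s3", "s4"]
max_bins_per_site = 6
reachable = {                      # Quality-of-Service: sites close enough to each zone
    "zone_a": ["s1", "s2"],
    "zone_b": ["s2", "s3"],
    "zone_c": ["s3", "s4"],
}

prob = LpProblem("collection_site_location", LpMinimize)
open_site = LpVariable.dicts("open", sites, cat=LpBinary)
bins = LpVariable.dicts("bins", sites, lowBound=0, upBound=max_bins_per_site, cat=LpInteger)
assign = LpVariable.dicts("assign", [(z, s) for z in demand for s in reachable[z]], lowBound=0)

# Objective: activated sites weighted against total number of bins deployed.
prob += 10 * lpSum(open_site[s] for s in sites) + lpSum(bins[s] for s in sites)

for z in demand:
    prob += lpSum(assign[(z, s)] for s in reachable[z]) == demand[z]   # serve all waste
for s in sites:
    served = [assign[(z, s)] for z in demand if s in reachable[z]]
    prob += lpSum(served) <= bins[s]                      # one bin-unit of capacity per bin
    prob += bins[s] <= max_bins_per_site * open_site[s]   # bins only at activated sites

prob.solve()
print({s: (open_site[s].value(), bins[s].value()) for s in sites})
```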

  18. Illuminator, a desktop program for mutation detection using short-read clonal sequencing.

    PubMed

    Carr, Ian M; Morgan, Joanne E; Diggle, Christine P; Sheridan, Eamonn; Markham, Alexander F; Logan, Clare V; Inglehearn, Chris F; Taylor, Graham R; Bonthron, David T

    2011-10-01

    Current methods for sequencing clonal populations of DNA molecules yield several gigabases of data per day, typically comprising reads of < 100 nt. Such datasets permit widespread genome resequencing and transcriptome analysis or other quantitative tasks. However, this huge capacity can also be harnessed for the resequencing of smaller (gene-sized) target regions, through the simultaneous parallel analysis of multiple subjects, using sample "tagging" or "indexing". These methods promise to have a huge impact on diagnostic mutation analysis and candidate gene testing. Here we describe a software package developed for such studies, offering the ability to resolve pooled samples carrying barcode tags and to align reads to a reference sequence using a mutation-tolerant process. The program, Illuminator, can identify rare sequence variants, including insertions and deletions, and permits interactive data analysis on standard desktop computers. It facilitates the effective analysis of targeted clonal sequencer data without dedicated computational infrastructure or specialized training. Copyright © 2011 Elsevier Inc. All rights reserved.
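    The pooled-sample resolution step amounts to routing each read to its subject by its barcode tag. The sketch below shows that demultiplexing idea in isolation, with hypothetical 4-base barcodes and exact prefix matching only; Illuminator additionally performs mutation-tolerant alignment, which is not reproduced here.

```python
# Minimal barcode demultiplexing sketch (hypothetical barcodes, exact prefix
# match only; not Illuminator's mutation-tolerant pipeline).
BARCODES = {"ACGT": "subject_1", "TTAG": "subject_2"}
TAG_LEN = 4


def demultiplex(reads):
    by_subject = {subject: [] for subject in BARCODES.values()}
    unassigned = []
    for read in reads:
        tag, insert = read[:TAG_LEN], read[TAG_LEN:]
        if tag in BARCODES:
            by_subject[BARCODES[tag]].append(insert)   # keep the untagged insert
        else:
            unassigned.append(read)
    return by_subject, unassigned


reads = ["ACGTGGCTTACA", "TTAGCCGTAAGT", "NNNNGGCT"]
print(demultiplex(reads))
```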

  19. [Pulmonary Carcinosarcoma Presenting Hemothorax Caused by Pleural Invasion;Report of a Case].

    PubMed

    Kazawa, Nobukata; Shibamoto, Yuta; Kitabayashi, Yukiya; Ishihara, Yumi; Gotoh, Taeko; Sawada, Yuusuke; Inukai, Ryo; Tsujimura, Takashi; Hattori, Hideo; Niimi, Akio; Nakanishi, Ryoichi; Kitaichi, Masanori

    2016-11-01

    A 71-year-old man presented with hemothorax with cough, sputa and worsening dyspnea. On chest X-ray and computed tomography (CT), a huge tumor in the right upper lobe with hematoma and a small amount of gas suggesting hemopneumothorax was revealed. No apparent lymphadenopathy or intrapulmonary metastases were observed. The tumor showed little enhancement on contrast-enhanced CT. Resection of the tumor was then performed, and the pathological evaluation revealed a carcinosarcoma (adenocarcinoma + osteosarcoma), pT3N0 (stage IIB) G4 pl2. Sarcomatoid carcinoma such as carcinosarcoma should be considered as a possible cause of hemothorax when making a diagnosis of a huge hemorrhagic hypovascular lung tumor.

  20. Immunoinformatics: an integrated scenario

    PubMed Central

    Tomar, Namrata; De, Rajat K

    2010-01-01

    Genome sequencing of humans and other organisms has led to the accumulation of huge amounts of data, which include immunologically relevant data. A large volume of clinical data has been deposited in several immunological databases and as a result immunoinformatics has emerged as an important field which acts as an intersection between experimental immunology and computational approaches. It not only helps in dealing with the huge amount of data but also plays a role in defining new hypotheses related to immune responses. This article reviews classical immunology, different databases and prediction tools. It also describes applications of immunoinformatics in designing in silico vaccination and immune system modelling. All these efforts save time and reduce cost. PMID:20722763

  1. The Possibility of Learning Curved Mirrors' Structure by a Normal Blind Inborn Students

    ERIC Educational Resources Information Center

    Bulbul, M. Sahin

    2009-01-01

    To take a physics course blind students must be assisted using teaching methods and aids adapted to their own perception capabilities. Touchable objects are very important for them because they have huge difficulties to visualize the third spatial dimension. However, appropriate resources and methods for blind students are not yet available. In…

  2. Gift Planning: You Can't Afford Not to

    ERIC Educational Resources Information Center

    Morley, Richard H.; Gaudette, Mike

    2009-01-01

    The process of reaching out to donors and securing gifts from alumni and other community members presents its share of challenges for community colleges. But, as funding experts Richard H. Morley and Mike Gaudette of the Council for Resource Development write in "Gift Planning: You Can't Afford Not To," there exists a huge financial incentive for…

  3. Research on the Learning Effects of Multimedia Assisted Instruction Using Information Technology Model

    ERIC Educational Resources Information Center

    Chen, Chen-Yuan

    2012-01-01

    As technology advances, whether from the previous multi-media teaching, online teaching, or now interactive whiteboard, the various changes in both hardware and software resources as well as information are very huge. The information is quickly circulating under the changes in the old and new technology, and the new knowledge has been created.…

  4. Six Networking Tips to Advance Your Career Goals

    ERIC Educational Resources Information Center

    Jones, Angela

    2013-01-01

    Teachers may wonder why networking is relevant. The point of networking is to cultivate relationships for the exchange of information, services, or resources for employment or business. This may sound cold to those in the educational world, where children and youth are the No. 1 customers, but a network can be a huge support as it pertains to…

  5. Sounding Off about Noise

    ERIC Educational Resources Information Center

    Crumpton, Michael A.

    2005-01-01

    Noise in a community college library can be part of the nature of the environment. It can also become a huge distraction for those who see the library as their sanctuary for quiet study and review of resources. This article describes the steps that should be taken by library staff in order to be proactive about noise and the library environment,…

  6. How Within-District Spending Inequities Help Some Schools to Fail

    ERIC Educational Resources Information Center

    Roza, Marguerite; Hill, Paul Thomas

    2004-01-01

    School district budgets typically hide as much as they reveal. Superintendents are finding this as they discover huge deficits that nobody saw coming. District budgets are opaque by design, and they often mask important facts about resource allocation within a district, as well as about total spending. This paper reports the results of an original…

  7. Okinawan Subtropical Plants as a Promising Resource for Novel Chemical Treasury.

    PubMed

    Matsunami, Katsuyoshi; Otsuka, Hideaki

    2018-01-01

    The Okinawa Islands are a crescent-shaped archipelago and their natural forests hold a huge variety of unique subtropical plants with relatively high endemism. We have performed phytochemical study on Okinawan subtropical plants for many years. In this review, we describe our recent research progress on the isolation of new compounds and their various bioactivities.

  8. The Rural South: From Shadows to Sunshine.

    ERIC Educational Resources Information Center

    Winter, William F.

    2000-01-01

    The South can move out of the shadows of the harsh economic realities of the last 15 years and into the sunshine of developing new strategies to take advantage of the region's strengths. These strengths include a vast wealth of natural resources; a Sunbelt location; and most important, a huge reservoir of undeveloped human capital. The road to…

  9. Professional Development Needs of School Principals in the Context of Educational Reform

    ERIC Educational Resources Information Center

    Hussin, Sufean; Al Abri, Saleh

    2015-01-01

    Retraining and upskilling of human resources in organizations are deemed vital whenever a reform takes place, or whenever a huge policy is being implemented on a comprehensive scale. In an education system, officers, principals, and teachers need to be retrained so as to enable them implement and manage new changes, which are manifested in the…

  10. Untangling the complexity of blood coagulation network: use of computational modelling in pharmacology and diagnostics.

    PubMed

    Shibeko, Alexey M; Panteleev, Mikhail A

    2016-05-01

    Blood coagulation is a complex biochemical network that plays critical roles in haemostasis (a physiological process that stops bleeding on injury) and thrombosis (pathological vessel occlusion). Both up- and down-regulation of coagulation remain a major challenge for modern medicine, with the ultimate goal being to correct haemostasis without causing thrombosis and vice versa. Mathematical/computational modelling is potentially an important tool for understanding blood coagulation disorders and their treatment. It can save a huge amount of time and resources, and provide a valuable alternative or supplement when clinical studies are limited, not ethical, or technically impossible. This article reviews the contemporary state of the art in the modelling of blood coagulation for practical purposes: to reveal the molecular basis of a disease, to understand mechanisms of drug action, to predict pharmacodynamics and drug-drug interactions, to suggest potential drug targets or to improve the quality of diagnostics. Different model types and designs used for this are discussed. Functional mechanisms of procoagulant bypassing agents and investigations of coagulation inhibitors were the two particularly popular applications of computational modelling that gave non-trivial results. Yet, like any other tool, modelling has its limitations, mainly determined by insufficient knowledge of the system and the uncertainty and unreliability of complex models. We show how this can be overcome to some extent and discuss what can be expected from the mathematical modelling of coagulation in the not-so-far future. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  11. Integrated web visualizations for protein-protein interaction databases.

    PubMed

    Jeanquartier, Fleur; Jean-Quartier, Claire; Holzinger, Andreas

    2015-06-16

    Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has come up with a great number of databases and tools that support analysts in exploring protein-protein interactions on an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research and fill the gaps to complete the picture of biochemical processes. There are numerous and huge databases of protein-protein interactions used to gain insights into answering some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. We selected M=10 out of N=53 resources supporting visualization, and we tested them against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality and data coverage. The study reveals differences in usability, visualization features and quality as well as in the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via the web. The supplementary table can be accessed at http://tinyurl.com/PPI-DB-Comparison-2015. Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interactions. The study results underline the necessity for further enhancements of visualization integration in biochemical analysis tools. Identified challenges are data comprehensiveness, confidence, interactive features and visualization maturity.

  12. An economic analysis for optimal distributed computing resources for mask synthesis and tape-out in production environment

    NASA Astrophysics Data System (ADS)

    Cork, Chris; Lugg, Robert; Chacko, Manoj; Levi, Shimon

    2005-06-01

    With the exponential increase in output database size due to the aggressive optical proximity correction (OPC) and resolution enhancement technique (RET) required for deep sub-wavelength process nodes, the CPU time required for mask tape-out continues to increase significantly. For integrated device manufacturers (IDMs), this can impact the time-to-market for their products, where even a few days' delay could have a huge commercial impact and loss of market window opportunity. For foundries, a shorter turnaround time provides a competitive advantage in their demanding market: being too slow could mean customers look elsewhere for these services, while a fast turnaround may even command a higher price. With the fab turnaround of a mature, plain-vanilla CMOS process at around 20-30 days, a delay of several days in mask tape-out would contribute a significant fraction to the total time to deliver prototypes. Unlike silicon processing, mask tape-out time can be decreased by simply purchasing extra computing resources and software licenses. Mask tape-out groups are taking advantage of the ever-decreasing hardware cost and increasing power of commodity processors. The significant distributability inherent in some commercial mask synthesis software can be leveraged to address this critical business issue. Different implementations have different fractions of the code that cannot be parallelized, and this affects the efficiency with which they scale, as described by Amdahl's law. Very few are efficient enough to allow the effective use of 1000s of processors, enabling run times to drop from days to only minutes. What follows is a cost-aware methodology to quantify the scalability of this class of software, and thus act as a guide to estimating the optimal investment in terms of hardware and software licenses.
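
    The scaling and cost trade-off described here follows directly from Amdahl's law. As a rough illustration (not taken from the paper; the serial fraction, runtimes and per-unit costs below are invented parameters), a short Python sketch that predicts speedup and picks the cheapest CPU/license count meeting a target turnaround:

    ```python
    # Hypothetical cost-aware scaling estimate based on Amdahl's law.
    # serial_frac, costs and the target turnaround are illustrative assumptions, not figures from the paper.

    def amdahl_speedup(n_cpus, serial_frac):
        """Ideal speedup on n_cpus when a fraction serial_frac of the work cannot be parallelized."""
        return 1.0 / (serial_frac + (1.0 - serial_frac) / n_cpus)

    def cheapest_config(base_hours, serial_frac, target_hours,
                        cpu_cost=2000.0, license_cost=5000.0, max_cpus=4096):
        """Return the smallest CPU count (plus runtime and cost) whose predicted runtime meets the target."""
        for n in range(1, max_cpus + 1):
            runtime = base_hours / amdahl_speedup(n, serial_frac)
            if runtime <= target_hours:
                return n, runtime, n * (cpu_cost + license_cost)
        return None  # target unreachable: the serial fraction caps the achievable speedup

    if __name__ == "__main__":
        # 72 h single-CPU tape-out job, 0.5% serial code, 6 h target turnaround
        print(cheapest_config(base_hours=72.0, serial_frac=0.005, target_hours=6.0))
    ```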

  13. Improved Algorithms Speed It Up for Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazi, A

    2005-09-20

    Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. ''Sure, you get great speed-ups by improving hardware,'' says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. ''But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times.'' Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.

  14. EIAGRID: In-field optimization of seismic data acquisition by real-time subsurface imaging using a remote GRID computing environment.

    NASA Astrophysics Data System (ADS)

    Heilmann, B. Z.; Vallenilla Ferrara, A. M.

    2009-04-01

    The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively-distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets at a few meters depth to targets at a depth of several kilometers, it is primarily used by the hydrocarbon industry and hardly ever for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands considerable experience from the user. In-field processing equipment needed for real-time data Quality Control (QC) and immediate optimization of the acquisition parameters is often not available for this kind of study. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated. The EIAGRID portal provides an innovative solution to this problem combining state-of-the-art data processing methods and modern remote grid computing technology. In-field processing equipment is replaced by remote access to high performance grid computing facilities. The latter can be ubiquitously controlled by a user-friendly web-browser interface accessed from the field by any mobile computer using wireless data transmission technology such as UMTS (Universal Mobile Telecommunications System) or HSUPA/HSDPA (High-Speed Uplink/Downlink Packet Access). The complexity of data manipulation and processing, and thus also the time-demanding user interaction, is minimized by a data-driven and highly automated velocity analysis and imaging approach based on the Common-Reflection-Surface (CRS) stack. Furthermore, the huge computing power provided by the grid deployment allows parallel testing of alternative processing sequences and parameter settings, a feature which considerably reduces the turn-around times. A shared data storage using georeferencing tools and data grid technology is under current development. It will allow users to publish already completed projects, making results, processing workflows and parameter settings available in a transparent and reproducible way. Creating a unified database shared by all users will facilitate complex studies and enable the use of data-crossing techniques to incorporate results of other environmental applications hosted on the GRIDA3 portal.

  15. Cactus: Writing an Article

    ERIC Educational Resources Information Center

    Hyde, Hartley; Spencer, Toby

    2010-01-01

    Some people became mathematics or science teachers by default. There was once such a limited range of subjects that students who could not write essays did mathematics and science. Computers changed that. Word processor software helped some people overcome huge spelling and grammar hurdles and made it easy to edit and manipulate text. Would-be…

  16. A Grid Metadata Service for Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni

    2010-05-01

    Critical challenges for climate modeling researchers are strongly connected with the increasingly complex simulation models and the huge quantities of produced datasets. Future trends in climate modeling will only increase computational and storage requirements. For this reason the ability to transparently access both computational and data resources for large-scale complex climate simulations must be considered as a key requirement for Earth Science and Environmental distributed systems. From the data management perspective (i) the quantity of data will continuously increase, (ii) data will become more and more distributed and widespread, (iii) data sharing/federation will represent a key challenging issue among different sites distributed worldwide, and (iv) the potential community of users (large and heterogeneous) will be interested in discovering experimental results, searching metadata, browsing collections of files, comparing different results, displaying output, etc. A key element for carrying out data search and discovery, and for managing and accessing huge and distributed amounts of data, is the metadata handling framework. What we propose for the management of distributed datasets is the GRelC service (a data grid solution focusing on metadata management). Unlike classical approaches, the proposed data-grid solution is able to address scalability, transparency, security, efficiency and interoperability. The GRelC service we propose is able to provide access to metadata stored in different and widespread data sources (relational databases running on top of MySQL, Oracle, DB2, etc. leveraging SQL as query language, as well as XML databases - XIndice, eXist, and libxml2 based documents, adopting either XPath or XQuery), providing a strong data virtualization layer in a grid environment. Such a technological solution for distributed metadata management (i) leverages well-known, widely adopted standards (W3C, OASIS, etc.); (ii) supports role-based management (based on VOMS), which increases flexibility and scalability; (iii) provides full support for the Grid Security Infrastructure (authorization, mutual authentication, data integrity, data confidentiality and delegation); (iv) is compatible with existing grid middleware such as gLite and Globus; and finally (v) is currently adopted at the Euro-Mediterranean Centre for Climate Change (CMCC - Italy) to manage the entire CMCC data production activity as well as in the international Climate-G testbed.
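
    The data-virtualization idea sketched above (a single query entry point hiding whether metadata lives in a relational or an XML back-end) can be illustrated with a toy example. The schema, catalogue contents and function names below are invented for illustration only and do not reflect the actual GRelC interfaces:

    ```python
    # Toy illustration of a metadata "virtualization layer" over heterogeneous back-ends.
    # The schema, catalogue contents and query are invented; GRelC's real interfaces differ.
    import sqlite3
    import xml.etree.ElementTree as ET

    def search_sql(conn, keyword):
        """Search a relational metadata catalogue (SQL back-end)."""
        cur = conn.execute("SELECT name, uri FROM datasets WHERE name LIKE ?", (f"%{keyword}%",))
        return [{"name": n, "uri": u, "backend": "sql"} for n, u in cur.fetchall()]

    def search_xml(xml_doc, keyword):
        """Search an XML metadata catalogue by traversing dataset elements."""
        root = ET.fromstring(xml_doc)
        return [{"name": d.get("name"), "uri": d.get("uri"), "backend": "xml"}
                for d in root.iter("dataset") if keyword in d.get("name", "")]

    def federated_search(keyword, sql_conn, xml_doc):
        """Single entry point hiding which back-end actually stores the metadata."""
        return search_sql(sql_conn, keyword) + search_xml(xml_doc, keyword)

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE datasets (name TEXT, uri TEXT)")
        conn.execute("INSERT INTO datasets VALUES ('cmcc_sst_2009', 'gsiftp://host/a.nc')")
        xml_doc = '<catalogue><dataset name="cmcc_precip_2009" uri="gsiftp://host/b.nc"/></catalogue>'
        print(federated_search("cmcc", conn, xml_doc))
    ```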

  17. A Life Cycle Assessment Framework for Pavement Maintenance and Rehabilitation Technologies : or An Integrated Life Cycle Assessment (LCA) – Life Cycle Cost Analysis (LCCA) Framework for Pavement Maintenance and Rehabilitation

    DOT National Transportation Integrated Search

    2018-02-01

    Qing Lu (ORCID ID 0000-0002-9120-9218) Given a huge amount of annual investment and large inputs of energy and natural resources in pavement maintenance and rehabilitation (M&R) activities, significant environmental improvement and budget saving can ...

  18. Investigating Problems of English Literature Teaching to EFL High School Students in Turkey with Focus on Language Proficiency

    ERIC Educational Resources Information Center

    Isikli, Ceren; Tarakçioglu, Asli Ö.

    2017-01-01

    Introduction of English literature as a separate school subject into Turkish high school curriculum has revealed a huge number of problems during its practical applications: students' low levels of proficiency in English, teacher incompetence, low motivation, lack of confidence, limited resources, lack of materials etc. Given the great extent and…

  19. Visitor and recreation impact monitoring: Is it lost in the gulf between science and management?

    Treesearch

    David N. Cole

    2006-01-01

    Park managers have seldom had the scientific information on recreation and its impacts that they need. Despite allocating substantial portions of park budgets to visitor management, few resources are typically allocated to recreation science. This is hugely problematic. Visitors are a focal species in every park and yet we have little systematic information about how...

  20. From High School to University: Impact of Social Networking Sites on Social Capital in the Transitions of Emerging Adults

    ERIC Educational Resources Information Center

    Mazzoni, Elvis; Iannone, Maria

    2014-01-01

    In recent years, the huge success of social network sites (SNSs) has principally been determined by their ability to link people and their respective relationships. These relationships allow people to access different resources, information, emotional and social support, entertainment, as well as providing them with the opportunity to extend…

  1. The role of synergies within generative models of action execution and recognition: A computational perspective. Comment on "Grasping synergies: A motor-control approach to the mirror neuron mechanism" by A. D'Ausilio et al.

    NASA Astrophysics Data System (ADS)

    Pezzulo, Giovanni; Donnarumma, Francesco; Iodice, Pierpaolo; Prevete, Roberto; Dindo, Haris

    2015-03-01

    Controlling the body - given its huge number of degrees of freedom - poses severe computational challenges. Mounting evidence suggests that the brain alleviates this problem by exploiting "synergies", or patterns of muscle activities (and/or movement dynamics and kinematics) that can be combined to control action, rather than controlling individual muscles or joints [1-10].

  2. Trinary arithmetic and logic unit (TALU) using savart plate and spatial light modulator (SLM) suitable for optical computation in multivalued logic

    NASA Astrophysics Data System (ADS)

    Ghosh, Amal K.; Bhattacharya, Animesh; Raul, Moumita; Basuray, Amitabha

    2012-07-01

    The arithmetic logic unit (ALU) is the most important unit in any computing system. Optical computing is becoming popular day by day because of its ultrahigh processing speed and huge data handling capability. For such fast processing we need an optical TALU compatible with multivalued logic. In this regard we present a trinary arithmetic and logic unit (TALU) in the modified trinary number (MTN) system, which is suitable for optical computation and other applications in multivalued logic systems. Here, savart plate and spatial light modulator (SLM) based optoelectronic circuits have been used to exploit the optical tree architecture (OTA) in an optical interconnection network.
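
    The paper's modified trinary number (MTN) system is not detailed in this abstract; as a generic illustration of multivalued (three-valued) arithmetic, the following Python sketch encodes integers in balanced ternary (digits -1, 0, +1), one common trinary representation, which may differ from MTN:

    ```python
    # Illustrative balanced-ternary (digits -1, 0, +1) encode/decode and addition.
    # This is a generic multivalued-logic example, not the paper's MTN system.

    def to_balanced_ternary(n):
        """Encode an integer as balanced-ternary digits, least significant digit first."""
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:          # a digit of 2 becomes -1 with a carry into the next place
                digits.append(-1)
                n = n // 3 + 1
            else:
                digits.append(r)
                n //= 3
        return digits

    def from_balanced_ternary(digits):
        """Decode balanced-ternary digits (LSB first) back to an integer."""
        return sum(d * 3**i for i, d in enumerate(digits))

    if __name__ == "__main__":
        a, b = 17, -5
        s = from_balanced_ternary(to_balanced_ternary(a)) + from_balanced_ternary(to_balanced_ternary(b))
        print(to_balanced_ternary(a), to_balanced_ternary(b), s)   # s == 12
    ```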

  3. Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application

    NASA Technical Reports Server (NTRS)

    Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom; hide

    2013-01-01

    Exascale computing systems are soon to emerge, and they will pose great challenges owing to the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case to profile and analyze the communication and I/O issues that are preventing applications from fully utilizing the underlying parallel storage systems. Through detailed architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize the data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign its I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA Discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.
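
    The general idea behind such an I/O redesign, namely letting every rank write its own contiguous block through a collective MPI-IO call instead of funnelling data through a single process, can be sketched with mpi4py. This is a generic illustration under an assumed file layout, not the actual GEOS-5/ADIOS framework (it requires an MPI installation and would be launched with, e.g., mpirun):

    ```python
    # Generic collective parallel write with MPI-IO (mpi4py); run e.g. `mpirun -n 4 python write.py`.
    # Illustrates the idea of parallelizing data access; not the actual GEOS-5/ADIOS framework.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, nprocs = comm.Get_rank(), comm.Get_size()

    local = np.full(1024, rank, dtype=np.float64)        # each rank owns one block of the field

    fh = MPI.File.Open(comm, "field.bin", MPI.MODE_WRONLY | MPI.MODE_CREATE)
    offset = rank * local.nbytes                          # contiguous, non-overlapping layout
    fh.Write_at_all(offset, local)                        # collective call: all ranks write together
    fh.Close()

    if rank == 0:
        print(f"wrote {nprocs * local.nbytes} bytes with {nprocs} ranks")
    ```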

  4. The dynamical analysis of modified two-compartment neuron model and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao

    2017-10-01

    The complexity of neural models is increasing with the investigation of larger biological neural networks, more varied ionic channels and more detailed morphologies, and the implementation of biological neural networks is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field programmable gate array (FPGA) to succinctly implement the reduced two-compartment model, which retains essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise linear equations, which can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage compared with the original two-compartment model. These investigations are conducive to the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.
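
    The piecewise-linearization idea, replacing a smooth nonlinearity by a few line segments so that hardware only needs table lookups and multiply-adds, can be illustrated in a few lines of NumPy. The target function and breakpoints below are arbitrary stand-ins, not the paper's neuron equations:

    ```python
    # Illustrative piecewise-linear approximation of a nonlinear gating-style function.
    # Breakpoints and the target function are arbitrary; the paper's PWL neuron model differs.
    import numpy as np

    def target(v):
        """A smooth sigmoidal nonlinearity, standing in for a gating/ionic term."""
        return 1.0 / (1.0 + np.exp(-v))

    breakpoints = np.linspace(-8.0, 8.0, 9)         # 8 linear segments
    values = target(breakpoints)                    # pre-computed table (cheap in hardware)

    def pwl(v):
        """Piecewise-linear approximation: table lookup plus linear interpolation per sample."""
        return np.interp(v, breakpoints, values)

    v = np.linspace(-10, 10, 2001)
    err = np.max(np.abs(pwl(v) - target(v)))
    print(f"max absolute error with 8 segments: {err:.4f}")
    ```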

  5. Seasonal forecasts of groundwater levels in Lanyang Plain in Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, Ya-Chi; Lin, Yi-Chiu

    2017-04-01

    Groundwater plays a critical role in the world's freshwater resources and is also an important part of Taiwan's water supply for domestic, agricultural and industrial use. Prolonged dry climatic conditions can induce groundwater drought and may have a huge impact on water resources. Therefore, this study utilizes seasonal rainfall forecasts from the Model for Prediction Across Scales (MPAS) to simulate groundwater levels in Lanyang Plain in Taiwan up to three months into the future. MPAS is set up with a 120 km uniform grid, and physics schemes including the WSM6 microphysics scheme, Kain-Fritsch cumulus scheme, RRTMG radiation scheme, and YSU planetary boundary layer scheme are used to provide the rainfall forecasts. Results of this study can provide a reference for water resources management to ensure the sustainability of groundwater resources in Lanyang Plain.

  6. Grid-Enabled High Energy Physics Research using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Mahmood, Akhtar

    2005-04-01

    At Edinboro University of Pennsylvania, we have built an 8-node 25 Gflops Beowulf Cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the Cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton-proton collisions and the detector's response to the collision debris through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates the real physical collision event inside a particle detector. The Grid is the new IT infrastructure for 21st-century science -- a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.

  7. Reviews

    NASA Astrophysics Data System (ADS)

    2005-01-01

    WE RECOMMEND Advancing Physics CD Quick Tour This software makes the Advancing Physics CD easier to use. From Silicon to Computer This CD on computer technology operates like an electronic textbook. Powers of Ten This documentary film gives pupils a feel for the scale of our universe. Multimedia Waves The material on this CD demonstrates various wave phenomena. Infrared thermometer This instant response, remote sensor has numerous lab applications. Magic Universe, The Oxford Guide to Modern Science A collection of short essays, this book is aimed at A-level students. Fermi Remembered A joy to read, this piece of non-fiction leaves you eager for more. Big Bang (lecture and book) Both the book and the lecture are engaging and hugely entertaining. WORTH A LOOK The Way Things Go Lasting just 30 minutes, this film will liven up any mechanics lesson. The Video Encyclopaedia of Physics Demonstrations It may blow your budget, but this DVD is a superb physics resource. Go!Link and Go!Temp Go!Link is a useful, cheap datalogger. Go!Temp seems superfluous. Cracker snaps Cheap and cheerful, cracker snaps can be used to demonstrate force. VPython This 3D animation freeware can be adapted to fit your needs. HANDLE WITH CARE Physics A-Level Presentations It might be better to generate slides yourself rather than modify these. London Planetarium and Madame Tussaud's A day out here is definitely not a worthwhile science excursion.

  8. Global Seabed Materials and Habitats Mapped: The Computational Methods

    NASA Astrophysics Data System (ADS)

    Jenkins, C. J.

    2016-02-01

    What the seabed is made of has proven difficult to map on the scale of whole ocean-basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics. Both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on the direct observations such as samplings, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. This project dbSEABED not only has the largest collection of seafloor materials data worldwide, but it uses advanced computing math to obtain the best possible coverages and detail. Included in those techniques are linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). These techniques allow efficient and accurate import of huge datasets, thereby optimizing the data that exists. They merge quantitative and qualitative types of data for rich parameter sets, and extrapolate where the data are sparse for best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and survey.
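
    A toy sketch of how word-based seabed descriptions might be turned into fuzzy memberships is given below; the vocabulary and membership values are invented for illustration and are not dbSEABED's actual coding scheme:

    ```python
    # Toy fuzzy coding of a seabed description into (gravel, sand, mud) memberships.
    # The vocabulary and membership values are invented for illustration only.
    FUZZY_LEXICON = {
        "gravelly": (0.6, 0.2, 0.0),
        "sandy":    (0.0, 0.8, 0.1),
        "muddy":    (0.0, 0.1, 0.8),
        "sand":     (0.0, 1.0, 0.0),
        "mud":      (0.0, 0.0, 1.0),
    }

    def fuzzy_code(description):
        """Average the memberships of all recognized terms in a free-text description."""
        hits = [FUZZY_LEXICON[w] for w in description.lower().split() if w in FUZZY_LEXICON]
        if not hits:
            return None  # no usable terms; leave the record uncoded
        n = len(hits)
        return tuple(round(sum(t[i] for t in hits) / n, 2) for i in range(3))

    print(fuzzy_code("muddy SAND with shell fragments"))   # -> (0.0, 0.55, 0.4)
    ```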

  9. Historical and modern disturbance regimes, stand structures, and landscape dynamics in pinon-juniper vegetation of the western U.S.

    Treesearch

    William H. Romme; Craig D. Allen; John D. Bailey; William L. Baker; Brandon T. Bestelmeyer; Peter M. Brown; Karen S. Eisenhart; Lisa Floyd-Hanna; Dustin W. Huffman; Brian F. Jacobs; Richard F. Miller; Esteban H. Muldavin; Thomas W. Swetnam; Robin J. Tausch; Peter J. Weisberg

    2008-01-01

    Pinon-juniper is one of the major vegetation types in western North America. It covers a huge area, provides many resources and ecosystem services, and is of great management concern. Management of pinon-juniper vegetation has been hindered, especially where ecological restoration is a goal, by inadequate understanding of the variability in historical and modern...

  10. Rice-Map: a new-generation rice genome browser.

    PubMed

    Wang, Jun; Kong, Lei; Zhao, Shuqi; Zhang, He; Tang, Liang; Li, Zhe; Gu, Xiaocheng; Luo, Jingchu; Gao, Ge

    2011-03-30

    The concurrent release of rice genome sequences for two subspecies (Oryza sativa L. ssp. japonica and Oryza sativa L. ssp. indica) facilitates rice studies at the whole genome level. Since the advent of high-throughput analysis, huge amounts of functional genomics data have been delivered rapidly, making an integrated online genome browser indispensable for scientists to visualize and analyze these data. Based on next-generation web technologies and high-throughput experimental data, we have developed Rice-Map, a novel genome browser for researchers to navigate, analyze and annotate rice genome interactively. More than one hundred annotation tracks (81 for japonica and 82 for indica) have been compiled and loaded into Rice-Map. These pre-computed annotations cover gene models, transcript evidences, expression profiling, epigenetic modifications, inter-species and intra-species homologies, genetic markers and other genomic features. In addition to these pre-computed tracks, registered users can interactively add comments and research notes to Rice-Map as User-Defined Annotation entries. By smoothly scrolling, dragging and zooming, users can browse various genomic features simultaneously at multiple scales. On-the-fly analysis for selected entries could be performed through dedicated bioinformatic analysis platforms such as WebLab and Galaxy. Furthermore, a BioMart-powered data warehouse "Rice Mart" is offered for advanced users to fetch bulk datasets based on complex criteria. Rice-Map delivers abundant up-to-date japonica and indica annotations, providing a valuable resource for both computational and bench biologists. Rice-Map is publicly accessible at http://www.ricemap.org/, with all data available for free downloading.

  11. Roles for text mining in protein function prediction.

    PubMed

    Verspoor, Karin M

    2014-01-01

    The Human Genome Project has provided science with a hugely valuable resource: the blueprints for life; the specification of all of the genes that make up a human. While the genes have all been identified and deciphered, it is proteins that are the workhorses of the human body: they are essential to virtually all cell functions and are the primary mechanism through which biological function is carried out. Hence in order to fully understand what happens at a molecular level in biological organisms, and eventually to enable development of treatments for diseases where some aspect of a biological system goes awry, we must understand the functions of proteins. However, experimental characterization of protein function cannot scale to the vast amount of DNA sequence data now available. Computational protein function prediction has therefore emerged as a problem at the forefront of modern biology (Radivojac et al., Nat Methods 10(13):221-227, 2013). Within the varied approaches to computational protein function prediction that have been explored, there are several that make use of biomedical literature mining. These methods take advantage of information in the published literature to associate specific proteins with specific protein functions. In this chapter, we introduce two main strategies for doing this: association of function terms, represented as Gene Ontology terms (Ashburner et al., Nat Genet 25(1):25-29, 2000), to proteins based on information in published articles, and a paradigm called LEAP-FS (Literature-Enhanced Automated Prediction of Functional Sites) in which literature mining is used to validate the predictions of an orthogonal computational protein function prediction method.
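
    The first strategy, associating GO terms with proteins from their co-mentions in the literature, can be caricatured as a simple co-occurrence count. The article texts, protein names and GO identifiers below are invented for illustration:

    ```python
    # Toy co-occurrence association of proteins with GO terms across article texts.
    # Protein names, GO identifiers and sentences are invented for illustration.
    from collections import Counter
    from itertools import product

    articles = [
        "BRCA1 is implicated in DNA repair (GO:0006281) and cell cycle control.",
        "Loss of BRCA1 impairs DNA repair (GO:0006281) in tumour cells.",
        "TP53 regulates apoptotic process (GO:0006915) after damage.",
    ]
    proteins = ["BRCA1", "TP53"]
    go_terms = ["GO:0006281", "GO:0006915"]

    counts = Counter()
    for text in articles:
        for protein, term in product(proteins, go_terms):
            if protein in text and term in text:
                counts[(protein, term)] += 1     # co-mention in the same article

    # Rank candidate (protein, function) associations by literature support
    for (protein, term), n in counts.most_common():
        print(f"{protein} -> {term}: {n} supporting article(s)")
    ```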

  12. Controlling user access to electronic resources without password

    DOEpatents

    Smith, Fred Hewitt

    2015-06-16

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes receiving a user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
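
    The grant/deny decision described above (compare user-proximal with resource-proximal environmental indicia, optionally followed by a biometric check) can be sketched as follows; the similarity measure, threshold and field names are illustrative assumptions rather than the patent's specification:

    ```python
    # Sketch of the access decision: compare environmental indicia, then (optionally) a biometric.
    # The similarity measure, threshold and field names are illustrative assumptions.
    def similarity(a, b):
        """Fraction of environmental fields (Wi-Fi SSIDs, noise level, etc.) that match."""
        keys = set(a) | set(b)
        return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 0.0

    def grant_access(user_env, resource_env, user_biometric=None,
                     enrolled_biometric=None, threshold=0.8):
        """Grant only if environments are sufficiently similar (and the biometric matches, if required)."""
        if similarity(user_env, resource_env) < threshold:
            return False
        if enrolled_biometric is not None and user_biometric != enrolled_biometric:
            return False
        return True

    resource_env = {"wifi_ssid": "LAB-NET", "noise_db": "low", "room_beacon": "B-17"}
    user_env     = {"wifi_ssid": "LAB-NET", "noise_db": "low", "room_beacon": "B-17"}
    print(grant_access(user_env, resource_env))   # True: the user appears to be near the resource
    ```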

  13. Laboratory Computing Resource Center

    Science.gov Websites


  14. [Facing the challenges of ubiquitous computing in the health care sector].

    PubMed

    Georgieff, Peter; Friedewald, Michael

    2010-01-01

    The steady progress of microelectronics, communications and information technology will enable the realisation of the vision for "ubiquitous computing" where the Internet extends into the real world embracing everyday objects. The necessary technical basis is already in place. Due to their diminishing size, constantly falling price and declining energy consumption, processors, communications modules and sensors are being increasingly integrated into everyday objects today. This development is opening up huge opportunities for both the economy and individuals. In the present paper we discuss possible applications, but also technical, social and economic barriers to a wide-spread use of ubiquitous computing in the health care sector.

  15. Spin-transfer torque magnetoresistive random-access memory technologies for normally off computing (invited)

    NASA Astrophysics Data System (ADS)

    Ando, K.; Fujita, S.; Ito, J.; Yuasa, S.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.; Yoda, H.

    2014-05-01

    Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy losses. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer, i.e., normally off computers. Critical tasks to achieve normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed by using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and challenges that remain for normally off computers are discussed.

  16. Research on elastic resource management for multi-queue under cloud computing environment

    NASA Astrophysics Data System (ADS)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system under a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper will present several use cases of the elastic resource management system in IHEPCloud. In practice, virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of computing resources was significantly increased compared with traditional resource management. The system also has good performance when there are multiple HTCondor schedulers and multiple job queues.
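
    The dual-threshold idea can be sketched as a per-queue scaling decision; the thresholds, quota and scaling rule below are placeholders for illustration, not the IHEPCloud implementation:

    ```python
    # Sketch of a dual-threshold elastic scaling decision for one job queue.
    # Thresholds, quota and the scale actions are placeholders, not the IHEPCloud code.
    def scaling_decision(idle_jobs, idle_vms,
                         expand_threshold=20, shrink_threshold=5,
                         quota=100, running_vms=0):
        """Return how many VMs to add (positive) or remove (negative) for this queue."""
        if idle_jobs > expand_threshold and running_vms < quota:
            return min(idle_jobs // 10, quota - running_vms)   # expand the pool, bounded by the quota
        if idle_jobs < shrink_threshold and idle_vms > 0:
            return -idle_vms                                    # shrink: release idle nodes
        return 0                                                # steady state

    print(scaling_decision(idle_jobs=85, idle_vms=0, running_vms=40))   # -> 8 (add VMs)
    print(scaling_decision(idle_jobs=2,  idle_vms=6, running_vms=48))   # -> -6 (remove idle VMs)
    ```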

  17. The largest renewable, easily exploitable, and economically sustainable energy resource

    NASA Astrophysics Data System (ADS)

    Abbate, Giancarlo; Saraceno, Eugenio

    2018-02-01

    The Sun, the ultimate energy resource of our planet, transfers energy to the Earth at an average power of 23,000 TW. The Earth's surface can be regarded as a huge panel transforming solar energy into a more convenient mechanical form, the wind. For millennia wind has been recognized as an exploitable form of energy, and it is common knowledge that the higher you go, the stronger the winds blow. Going high is difficult; however, Bill Gates cites high-altitude wind among possible energy miracles of the near future. Public awareness of this possible miracle is still missing, but today's technology is ready for it.

  18. Beowulf Distributed Processing and the United States Geological Survey

    USGS Publications Warehouse

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing technology. It will describe the benefits of the technology. Real data about a distributed application will be presented as an example of the benefits that this technology can bring to USGS scientific programs. Finally, some of the issues with distributed processing that relate to USGS work will be discussed.

  19. The Diabetic foot: A global threat and a huge challenge for Greece

    PubMed Central

    Papanas, N; Maltezos, E

    2009-01-01

    The diabetic foot continues to be a major cause of morbidity, posing a global threat. Substantial progress has now been accomplished in the treatment of foot lesions, but further improvement is required. Treatment options may be classified into established measures (revascularisation, casting and debridement) and new modalities. All therapeutic measures should be provided by specialised, dedicated multidisciplinary foot clinics. In particular, the diabetic foot is a huge challenge for Greece. There is a dramatic need to increase the number of engaged foot care teams and their resources throughout the country. It is also desirable to continue educating both physicians and the general diabetic population on the magnitude of the problem and on suitable preventative measures. At the same time, more data on the prevalence and clinical manifestations of the diabetic foot in Greece should be carefully collected. Finally, additional research should investigate feasible ways of implementing current knowledge in everyday clinical practice. PMID:20011082

  20. Laboratory challenges in the scaling up of HIV, TB, and malaria programs: The interaction of health and laboratory systems, clinical research, and service delivery.

    PubMed

    Birx, Deborah; de Souza, Mark; Nkengasong, John N

    2009-06-01

    Strengthening national health laboratory systems in resource-poor countries is critical to meeting the United Nations Millennium Development Goals. Despite strong commitment from the international community to fight major infectious diseases, weak laboratory infrastructure remains a huge rate-limiting step. Some major challenges facing laboratory systems in resource-poor settings include dilapidated infrastructure; lack of human capacity, laboratory policies, and strategic plans; and limited synergies between clinical and research laboratories. Together, these factors compromise the quality of test results and impact patient management. With increased funding, the target of laboratory strengthening efforts in resource-poor countries should be the integration of laboratory services across major diseases to leverage resources with respect to physical infrastructure, types of assays, supply chain management of reagents and equipment, and maintenance of equipment.

  1. An Atlas of annotations of Hydra vulgaris transcriptome.

    PubMed

    Evangelista, Daniela; Tripathi, Kumar Parijat; Guarracino, Mario Rosario

    2016-09-22

    RNA sequencing takes advantage of Next Generation Sequencing (NGS) technologies to analyze RNA transcript counts with excellent accuracy. Interpreting this huge amount of data as biological information is still a key issue, which is why the creation of web resources for its analysis is highly desirable. Starting from a previous work, Transcriptator, we present the Atlas of Hydra vulgaris, an extensible web tool in which its complete transcriptome is annotated. In order to provide users with a resource that includes the whole functionally annotated transcriptome of the Hydra vulgaris water polyp, we implemented the Atlas web tool, which contains 31,988 accessible and downloadable transcripts of this non-reference model organism. Atlas, as a freely available resource, can be considered a valuable tool to rapidly retrieve functional annotation for transcripts differentially expressed in Hydra vulgaris exposed to distinct experimental treatments. WEB RESOURCE URL: http://www-labgtp.na.icar.cnr.it/Atlas .

  2. E-Learning as a new tool in bioinformatics teaching

    PubMed Central

    Saravanan, Vijayakumar; Shanmughavel, Piramanayagam

    2007-01-01

    In recent years, virtual learning has been growing rapidly. Universities, colleges, and secondary schools are now delivering training and education over the internet. Besides this, the resources available over the WWW are huge, and understanding the various techniques employed in the field of Bioinformatics is increasingly complex for students during implementation. Here, we discuss its importance in developing and delivering an educational system in Bioinformatics based on an e-learning environment. PMID:18292800

  3. Alternative Feedstocks Program Technical and Economic Assessment: Thermal/Chemical and Bioprocessing Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozell, J. J.; Landucci, R.

    This resource document on biomass to chemicals opportunities describes the development of a technical and market rationale for incorporating renewable feedstocks into the chemical industry in both a qualitative and quantitative sense. The term "renewable feedstocks" can be defined to include a huge number of materials such as agricultural crops rich in starch, lignocellulosic materials (biomass), or biomass material recovered from a variety of processing wastes.

  4. A study of computer graphics technology in application of communication resource management

    NASA Astrophysics Data System (ADS)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has come into wide use. In particular, the success of object-oriented and multimedia technologies has promoted the development of graphics technology in computer software systems, so computer graphics theory and its applications have become an important topic in computing and are being applied in more and more fields. In recent years, with socio-economic development and especially the rapid development of information technology, the traditional way of managing communication resources can no longer effectively meet management needs. Communication resource management still relies on the original tools and methods for equipment management and maintenance, which has caused many problems: it is very difficult for non-professionals to understand the equipment and the overall situation, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. To address these problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  5. [Huge aspergilloma developed within a zone of scleroderma-related pulmonary fibrosis].

    PubMed

    Rakotoson, J L; Vololontiana, H M D; Raherison, R E; Andrianasolo, R L; Rakotomizao, J R; Rakotoharivelo, H; Rajaoarifetra, J; Randria, M J D; Rapelanoro, R F; Andrianarisoa, A C F; Rajaona, H R

    2012-02-01

    In pulmonary aspergilloma, Aspergillus colonizes and proliferates as a saprophyte in deterged cavities deprived of local defense. Although pulmonary tuberculosis constitutes the one well-known predisposing factor, other causes can create favorable conditions. We describe the first published case of a huge aspergilloma which developed within a zone of pulmonary fibrosis secondary to systemic scleroderma. The patient was a 58-year-old woman in poor general health who experienced repeated episodes of hemoptysis and dyspnea. Physical examination disclosed sclerodactyly, generalized cutaneous sclerosis and Raynaud's phenomenon. There was no clinical history of pulmonary tuberculosis or bronchiectasis. Aspergillosis serology was positive. Broncho-alveolar lavage fluid was positive for Aspergillus fumigatus on direct examination and after culture. Immunological assessment confirmed scleroderma. The chest computed tomography scan showed a huge oblong-shaped opacity in the upper left lobe which had developed within a zone of pulmonary fibrosis. Medical management was instituted. The clinical course was marked by repeated hemoptysis and stability of the pulmonary lesions after two years. Management of scleroderma-related pulmonary aspergilloma remains difficult and complicated. Prognosis depends on the course of both conditions, scleroderma and aspergillosis. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  6. Research on Formation Mechanisms of Hot Dry Rock Resources in China

    NASA Astrophysics Data System (ADS)

    Wang, G.; Xi, Y.

    2017-12-01

    As an important geothermal resource, hot dry rock (HDR) reserves have been studied in many countries. HDR resources in China have huge capacity and have become one of the most important resources for the potential replacement of fossil fuels. However, HDR resources are difficult to develop and utilise. Technologies for use with HDR, such as high-temperature drilling, reservoir characterisation, reservoir fracturing, microseismic monitoring and high-temperature power stations, originate from the oil and gas drilling field. Addressing how to take advantage of these developed technologies is a key factor in the development of HDR reserves. Based on the thermal crustal structure in China, HDR resources can be divided into four types: high radioactive heat production, sedimentary basin, modern volcano and inner-plate active tectonic belt. The prospective regions of HDR resources are located in South Tibet, West Yunnan, the southeast coast of China, Bohai Rim, Songliao Basin and Guanzhong Basin. The related essential technologies are relatively mature, and the prospect of HDR power generation is promising. Therefore, by analysing the formation mechanisms of HDR resources and promoting the transfer of technological achievements, large-scale development and utilisation of HDR resources can be achieved in China.

  7. A resource management architecture based on complex network theory in cloud computing federation

    NASA Astrophysics Data System (ADS)

    Zhang, Zehua; Zhang, Xuejie

    2011-10-01

    Cloud Computing Federation is a main trend of Cloud Computing. Resource management has a significant effect on the design, realization, and efficiency of a Cloud Computing Federation. A Cloud Computing Federation has the typical characteristics of a complex system; therefore, in this paper we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC), with a detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers to drive the evolution of the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance and adaptive ability. The result of the model experiment confirmed the advantage of RMABC in resource discovery performance.
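
    A caricature of resource discovery over a scale-free network of Task Managers is sketched below using networkx; the resource labels and hop-limited forwarding rule are invented for illustration and are not RMABC's actual discovery and announcement protocol:

    ```python
    # Caricature of resource discovery over a scale-free network of Task Managers.
    # Uses networkx; resource labels and the forwarding rule are invented, not RMABC itself.
    import random
    import networkx as nx

    random.seed(1)
    g = nx.barabasi_albert_graph(50, 2, seed=1)            # preferential attachment -> hub nodes
    for node in g:
        g.nodes[node]["resource"] = random.choice(["cpu", "gpu", "storage"])

    def discover(graph, start, wanted, max_hops=6):
        """Breadth-first forwarding of a discovery query, limited to max_hops hops."""
        frontier, seen = [start], {start}
        for hop in range(max_hops):
            nxt = []
            for node in frontier:
                if graph.nodes[node]["resource"] == wanted:
                    return node, hop                        # first Task Manager offering the resource
                for nb in graph.neighbors(node):
                    if nb not in seen:
                        seen.add(nb)
                        nxt.append(nb)
            frontier = nxt
        return None, max_hops

    print(discover(g, start=0, wanted="gpu"))
    ```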

  8. Advanced mathematical on-line analysis in nuclear experiments. Usage of parallel computing CUDA routines in standard root analysis

    NASA Astrophysics Data System (ADS)

    Grzeszczuk, A.; Kowalski, S.

    2015-04-01

    Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by Nvidia to increase the speed of graphics processing through massively parallel calculation. The success of this solution has opened General-Purpose Graphics Processing Unit (GPGPU) technology to applications not coupled with graphics. GPGPU systems can be applied as an effective tool for reducing the huge volume of data in pulse-shape analysis measurements, either by on-line recalculation or by very fast compression. The simplified structure of the CUDA system and the programming model, based on the example of an Nvidia GeForce GTX 580 card, are presented in our poster contribution both as a stand-alone version and as a ROOT application.

  9. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    NASA Astrophysics Data System (ADS)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA-based programming.

  10. Bespoke physics for living technology.

    PubMed

    Ackley, David H

    2013-01-01

    In the physics of the natural world, basic tasks of life, such as homeostasis and reproduction, are extremely complex operations, requiring the coordination of billions of atoms even in simple cases. By contrast, artificial living organisms can be implemented in computers using relatively few bits, and copying a data structure is trivial. Of course, the physical overheads of the computers themselves are huge, but since their programmability allows digital "laws of physics" to be tailored like a custom suit, deploying living technology atop an engineered computational substrate might be as or more effective than building directly on the natural laws of physics, for a substantial range of desirable purposes. This article suggests basic criteria and metrics for bespoke physics computing architectures, describes one such architecture, and offers data and illustrations of custom living technology competing to reproduce while collaborating on an externally useful computation.

  11. A resource-sharing model based on a repeated game in fog computing.

    PubMed

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.
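
    The flavour of a repeated-game incentive, in which reliable contributors keep earning while defectors lose access to future rewards, can be conveyed with a toy loop; the payoffs and punishment rule below are invented and do not reproduce the paper's mechanism:

    ```python
    # Toy repeated-game incentive: reliable contributors keep earning, defectors are excluded.
    # Payoffs and the punishment rule are invented; the paper's mechanism differs.
    import random

    def run_rounds(strategies, rounds=10, reward=3.0, cheat_gain=5.0):
        """strategies maps owner -> probability of actually completing an accepted task."""
        random.seed(0)                               # reproducible illustration
        payoff = {owner: 0.0 for owner in strategies}
        excluded = set()
        for _ in range(rounds):
            for owner, p_complete in strategies.items():
                if owner in excluded:
                    continue                         # punished: no longer offered tasks
                if random.random() < p_complete:
                    payoff[owner] += reward          # task completed, reward earned
                else:
                    payoff[owner] += cheat_gain      # one-off gain from shirking...
                    excluded.add(owner)              # ...but exclusion from all future rounds
        return payoff

    print(run_rounds({"reliable_owner": 1.0, "unreliable_owner": 0.5}))
    ```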

  12. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    NASA Astrophysics Data System (ADS)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children, with the intention to engage children and increase interest, rather than to formally teach concepts and skills. What is the educational quality of existing Computer Science resources and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study analysis. The findings reveal a predominance of quality resources, however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.

  13. A large giant cell tumor of the sacrum. Advantage of an abdomino-sacral approach.

    PubMed

    Alla, Abubakr H; Mahadi, Seif I; Elhassan, Ahmed M; Ahmed, Mohamed E

    2005-01-01

    We report a case of giant cell tumor of the sacrum, presenting with sacral pain, swelling, and change of bowel habits. Rectal examination revealed a huge retrorectal mass fixed to the sacrum but not to the wall of the rectum. Abdominal ultrasonography, computed tomography (CT) scan, and magnetic resonance imaging (MRI) showed a huge pelvic mass invading the sacrum. Exploration via a posterior sacral approach was not successful due to both extensive bleeding and difficult accessibility. Re-exploration was carried out 2 days later with the patient in the lithotomy position. Using an abdomino-sacral approach, the mass together with part of the sacrum and the whole coccyx were excised. Histopathology reported giant cell tumor of the sacrum with no evidence of mitosis. The patient was symptomless 12 months after surgery and on follow-up.

  14. Current challenges in genome annotation through structural biology and bioinformatics.

    PubMed

    Furnham, Nicholas; de Beer, Tjaart A P; Thornton, Janet M

    2012-10-01

    With the huge volume of genomic sequences being generated by high-throughput sequencing projects, the requirement to provide accurate and detailed annotations of gene products has never been greater. It is proving to be a huge challenge for computational biologists to use as much information as possible from experimental data to provide annotations for genome data of unknown function. A central component of this process is the use of experimentally determined structures, which provide a means to detect homology that is not discernible from the sequence alone and permit the consequences of genomic variation to be realized at the molecular level. In particular, structures also form the basis of many bioinformatics methods for improving the detailed functional annotations of enzymes in combination with similarities in sequence and chemistry. Copyright © 2012. Published by Elsevier Ltd.

  15. Representing Farmer Irrigation Decisions in Northern India: Model Development from the Bottom Up.

    NASA Astrophysics Data System (ADS)

    O'Keeffe, J.; Buytaert, W.; Brozovic, N.; Mijic, A.

    2017-12-01

    The plains of northern India are among the most intensely populated and irrigated regions of the world. Sustaining water demand has been made possible by exploiting the vast and hugely productive aquifers underlying the Indo-Gangetic basin. However, increasing demand from a growing population and highly variable socio-economic and environmental conditions mean present resources may not be sustainable, making water security one of India's biggest challenges. Unless solutions are found that take into consideration the region's evolving anthropogenic and environmental conditions, the sustainability of India's water resources looks bleak. Understanding water users' decisions and their potential outcomes is important for the development of suitable water resource management options. Computational models are commonly used to assist water use decision making, and they typically represent natural processes well. The inclusion of human decision making, however, one of the dominant drivers of change, has lagged behind. Improved representation of irrigation water user behaviour within models provides more accurate, relevant information for irrigation management. This research conceptualizes and proceduralizes observed farmer irrigation practices, highlighting feedbacks between the environment and livelihood. It is developed using a bottom-up approach, informed through field experience and stakeholder interaction in Uttar Pradesh, northern India. Real-world insights are incorporated through collected information, creating a realistic representation of field conditions and providing a useful tool for policy analysis and water management. The modelling framework is applied to four districts. Results suggest the predicted future climate will have little direct impact on water resources, crop yields or farmer income. In addition, increased abstraction may be sustainable in some areas under carefully managed conditions. By simulating dynamic decision making, feedbacks and interactions between water users, irrigation officials, agricultural practices, and external influences such as energy pricing and farming subsidies, this work highlights the importance of directly including water user behaviour in policy making and operational tools, which will help achieve water and livelihood security.

  16. BingEO: Enable Distributed Earth Observation Data for Environmental Research

    NASA Astrophysics Data System (ADS)

    Wu, H.; Yang, C.; Xu, Y.

    2010-12-01

    Our planet is facing great environmental challenges including global climate change, environmental vulnerability, extreme poverty, and a shortage of clean, cheap energy. To address these problems, scientists are developing various models to analyse, forecast and simulate geospatial phenomena to support critical decision making. These models not only challenge our computing technology, but also challenge us to meet their huge demand for earth observation data. Through various policies and programs, open and free sharing of earth observation data is advocated in earth science. Currently, thousands of data sources are freely available online through open standards such as Web Map Service (WMS), Web Feature Service (WFS) and Web Coverage Service (WCS). Seamless sharing of and access to these resources call for a spatial Cyberinfrastructure (CI) to enable the use of spatial data for the advancement of related applied sciences, including environmental research. Based on the Microsoft Bing Search Engine and Bing Maps, a seamlessly integrated and visual tool is under development to bridge the gap between researchers/educators and earth observation data providers. With this tool, earth science researchers/educators can easily and visually find the best data sets for their research and education. The tool includes a registry and its related supporting module at the server side and an integrated portal as its client. The proposed portal, Bing Earth Observation (BingEO), is based on Bing Search and Bing Maps to: 1) use Bing Search to discover Web Map Service (WMS) resources available over the internet; 2) develop and maintain a registry to manage all the available WMS resources and constantly monitor their service quality; 3) allow users to manually register data services; 4) provide a Bing Maps-based Web application to visualize the data on a high-quality and easy-to-manipulate map platform and enable users to select the best data layers online. Given the amount of observation data already accumulated and still growing, BingEO will allow these resources to be utilized more widely, intensively, efficiently and economically in earth science applications.
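
    As a rough illustration of the kind of WMS discovery and registration that BingEO automates, the Python sketch below fetches a GetCapabilities document from a WMS endpoint and lists the layers it advertises; the endpoint URL is a placeholder, and BingEO's own registry and quality-monitoring logic are not modelled here.

      # Minimal sketch: query a WMS endpoint's GetCapabilities document and list its layers.
      # The endpoint URL is hypothetical; the registry and monitoring described above are not shown.
      import requests
      import xml.etree.ElementTree as ET

      WMS_ENDPOINT = "https://example.org/geoserver/wms"   # placeholder WMS service

      def list_wms_layers(endpoint):
          params = {"SERVICE": "WMS", "REQUEST": "GetCapabilities", "VERSION": "1.3.0"}
          response = requests.get(endpoint, params=params, timeout=30)
          response.raise_for_status()
          root = ET.fromstring(response.content)
          ns = {"wms": "http://www.opengis.net/wms"}
          # Every named <Layer> element advertises a dataset a client could register.
          return [name.text for name in root.findall(".//wms:Layer/wms:Name", ns)]

      if __name__ == "__main__":
          for layer in list_wms_layers(WMS_ENDPOINT):
              print(layer)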

  17. iTools: a framework for classification, categorization and integration of computational biology resources.

    PubMed

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.

  18. Large Spatial Scale Ground Displacement Mapping through the P-SBAS Processing of Sentinel-1 Data on a Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.

    2017-12-01

    Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the world. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. As SAR data have become ubiquitous, the technological and scientific challenge is focused on maximizing the exploitation of such a huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large spatial scale deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform, and a thorough analysis of the attained parallel performance has been carried out to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. Such an experiment confirms the big advantage of exploiting the large computational and storage resources of Cloud Computing platforms for large-scale DInSAR analysis. The presented Cloud Computing P-SBAS processing chain can be a precious tool in the perspective of developing operational services, available to the EO scientific community, related to hazard monitoring and risk prevention and mitigation.

  19. Provider-Independent Use of the Cloud

    NASA Astrophysics Data System (ADS)

    Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron

    Utility computing offers researchers and businesses the potential of significant cost-savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructures on-demand; when a user requires computing resources a provider will provision a resource for them and charge them only for their period of use of that resource. There has been significant growth in the number of cloud computing resource providers, and each has a different resource usage model, application process and application programming interface (API); developing generic multi-provider applications is thus difficult and time consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers that enables cloud-provider-neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider-neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
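
    The abstraction layer itself is not specified in this record, so the Python sketch below only illustrates the general pattern it describes: application code is written once against a provider-neutral interface, while per-provider adapters hide the differing usage models and APIs. All class and method names are hypothetical, not the authors' actual API.

      # Hypothetical sketch of a provider-neutral provisioning interface (not the authors' API).
      from abc import ABC, abstractmethod
      from dataclasses import dataclass

      @dataclass
      class Resource:
          provider: str
          resource_id: str
          state: str

      class CloudProvider(ABC):
          """Single usage model that every provider adapter must implement."""

          @abstractmethod
          def provision(self, image: str, size: str) -> Resource: ...

          @abstractmethod
          def terminate(self, resource: Resource) -> None: ...

      class InMemoryProvider(CloudProvider):
          """Stand-in adapter used here instead of a real EC2/OpenStack backend."""

          def __init__(self, name: str):
              self.name = name
              self._counter = 0

          def provision(self, image: str, size: str) -> Resource:
              self._counter += 1
              return Resource(self.name, f"{self.name}-{self._counter}", "running")

          def terminate(self, resource: Resource) -> None:
              resource.state = "terminated"

      def run_job(provider: CloudProvider):
          # Application code touches only the neutral interface, never a provider-specific API.
          node = provider.provision(image="base-image", size="small")
          print("running on", node.resource_id)
          provider.terminate(node)

      if __name__ == "__main__":
          run_job(InMemoryProvider("demo-cloud"))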

  20. Interactive Scripting for Analysis and Visualization of Arbitrarily Large, Disparately Located Climate Data Ensembles Using a Progressive Runtime Server

    NASA Astrophysics Data System (ADS)

    Christensen, C.; Summa, B.; Scorzelli, G.; Lee, J. W.; Venkat, A.; Bremer, P. T.; Pascucci, V.

    2017-12-01

    Massive datasets are becoming more common due to increasingly detailed simulations and higher-resolution acquisition devices. Yet accessing and processing these huge data collections for scientific analysis is still a significant challenge. Solutions that rely on extensive data transfers are increasingly untenable and often impossible due to a lack of sufficient storage at the client side as well as insufficient bandwidth to conduct such large transfers, which in some cases could entail petabytes of data. Large-scale remote computing resources can be useful, but utilizing such systems typically entails some form of offline batch processing with long delays, data replication, and substantial cost for any mistakes. Both types of workflows can severely limit the flexible exploration and rapid evaluation of new hypotheses that are crucial to the scientific process and thereby impede scientific discovery. In order to facilitate interactivity in both analysis and visualization of these massive data ensembles, we introduce a dynamic runtime system suitable for progressive computation and interactive visualization of arbitrarily large, disparately located spatiotemporal datasets. Our system includes an embedded domain-specific language (EDSL) that allows users to express a wide range of data analysis operations in a simple and abstract manner. The underlying runtime system transparently resolves issues such as remote data access and resampling while at the same time maintaining interactivity through progressive and interruptible processing. Computations involving large amounts of data can be performed remotely in an incremental fashion that dramatically reduces data movement, while the client receives updates progressively, thereby remaining robust to fluctuating network latency or limited bandwidth. This system facilitates interactive, incremental analysis and visualization of massive remote datasets up to petabytes in size. Our system is now available for general use in the community through both Docker and Anaconda.
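
    The runtime and its EDSL are not reproduced here; as a stand-in, the sketch below captures the core idea of progressive, interruptible computation with a Python generator that streams partial results (a running mean over data chunks) so a client can render intermediate answers and stop at any time. The chunked data source and all names are illustrative only.

      # Illustrative sketch of progressive computation: partial results are yielded as
      # each chunk arrives, so a client can visualize early and interrupt at any time.
      from typing import Iterable, Iterator, Tuple
      import random

      def fetch_chunks(n_chunks: int, chunk_size: int) -> Iterator[list]:
          """Stand-in for remote, resolution-ordered data access."""
          for _ in range(n_chunks):
              yield [random.random() for _ in range(chunk_size)]

      def progressive_mean(chunks: Iterable[list]) -> Iterator[Tuple[int, float]]:
          total, count = 0.0, 0
          for chunk in chunks:
              total += sum(chunk)
              count += len(chunk)
              yield count, total / count   # progressively refined estimate

      if __name__ == "__main__":
          for seen, estimate in progressive_mean(fetch_chunks(n_chunks=5, chunk_size=1000)):
              print(f"after {seen} samples: mean estimate {estimate:.4f}")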

  1. dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    This report introduces publications that present the results of a project that aimed to design a computational framework enabling computational experimentation at scale while supporting the model of “submit locally, compute globally”. The project focuses on estimating application resource needs, finding the appropriate computing resources, acquiring those resources, deploying the applications and data on the resources, and managing applications and resources during the run.

  2. iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    PubMed Central

    Dinov, Ivo D.; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H. V.; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D. Stott; Toga, Arthur W.

    2008-01-01

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources–data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu. PMID:18509477

  3. Rough set soft computing cancer classification and network: one stone, two birds.

    PubMed

    Zhang, Yue

    2010-07-15

    Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article.

  4. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage HTCondor, an open-source computing-resource and job management software, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
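
    CondorPy's exact API is not given in this record, so the sketch below shows only the underlying HTCondor mechanism that such tools wrap: writing a submit description file and handing it to the standard condor_submit command. The executable, file paths and job count are placeholders, and an HTCondor pool is assumed to be available.

      # Sketch of the HTCondor layer that tools like CondorPy wrap: write a submit
      # description and pass it to condor_submit (assumes an HTCondor pool is reachable).
      import subprocess
      from pathlib import Path

      SUBMIT_TEMPLATE = """\
      executable = run_model.sh
      arguments  = --scenario $(Process)
      output     = logs/job_$(Process).out
      error      = logs/job_$(Process).err
      log        = logs/cluster.log
      queue {count}
      """

      def submit_jobs(count: int, workdir: str = ".") -> None:
          Path(workdir, "logs").mkdir(exist_ok=True)
          submit_file = Path(workdir, "model.submit")
          submit_file.write_text(SUBMIT_TEMPLATE.format(count=count))
          # condor_submit queues `count` jobs; the scheduler dispatches them to the pool.
          subprocess.run(["condor_submit", str(submit_file)], check=True)

      if __name__ == "__main__":
          submit_jobs(count=10)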

  5. Challenge Online Time Series Clustering For Demand Response A Theory to Break the ‘Curse of Dimensionality'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, Ranjan; Chelmis, Charalampos; Aman, Saima

    The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of solution possibilities is restricted primarily by the huge amount of generated data, which requires considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches, however, do not scale in the face of the “increasing dimensionality” problem, where a cluster point is represented by the entire customer consumption time series. To overcome this we first rethink the way cluster points are created and designed, and then design an efficient online clustering technique for demand response (DR) in order to analyze high-volume, high-dimensional energy consumption time series data at scale, and on the fly. Our online algorithm is randomized in nature, and provides optimal performance guarantees in a computationally efficient manner. Unlike prior work we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming it to be a ‘killer’ approach that breaks the “curse of dimensionality” in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles.
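
    The paper's randomized algorithm is not reproduced in this record; as a generic baseline for comparison, the sketch below runs a plain online (streaming) k-means over consumption profiles, updating centroids one time series at a time instead of storing the whole population. The data and parameters are synthetic placeholders.

      # Generic online k-means over daily consumption profiles (one pass, constant memory
      # per cluster). A baseline illustration only, not the paper's randomized algorithm.
      import numpy as np

      def online_kmeans(stream, k, dim, seed=0):
          rng = np.random.default_rng(seed)
          centroids = rng.random((k, dim))        # random initial profiles
          counts = np.zeros(k)
          for profile in stream:                  # each profile: one customer's time series
              j = int(np.argmin(np.linalg.norm(centroids - profile, axis=1)))
              counts[j] += 1
              centroids[j] += (profile - centroids[j]) / counts[j]   # incremental mean update
          return centroids, counts

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          fake_stream = (rng.random(24) for _ in range(10_000))   # 24-hour load profiles
          centers, sizes = online_kmeans(fake_stream, k=5, dim=24)
          print("cluster sizes:", sizes)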

  6. Advancements in medium and high resolution Earth observation for land-surface imaging: Evolutions, future trends and contributions to sustainable development

    NASA Astrophysics Data System (ADS)

    Ouma, Yashon O.

    2016-01-01

    Technologies for imaging the surface of the Earth through satellite-based Earth observations (EO) have enormously evolved over the past 50 years. The trends are likely to evolve further as the user community increases and their awareness of and demands for EO data also increase. In this review paper, a development trend in EO imaging systems is presented with the objective of deriving the evolving patterns for the EO user community. From the review and analysis of medium-to-high resolution EO-based land-surface sensor missions, it is observed that there is a predictive pattern in the EO evolution trends, such that every 10-15 years more sophisticated EO imaging systems with application-specific capabilities emerge. Such new systems, as determined in this review, are likely to comprise agile and small payload-mass EO land-surface imaging satellites with the ability for high-velocity data transmission and huge volumes of spatial, spectral, temporal and radiometric resolution data. This availability of data will magnify the phenomenon of "Big Data" in Earth observation. Because of the "Big Data" issue, new computing and processing platforms such as telegeoprocessing and grid computing are expected to be incorporated in EO data processing and distribution networks. In general, it is observed that the demand for EO is growing exponentially as the applications and cost-benefits are being recognized in support of resource management.

  7. Degradation of metallic materials studied by correlative tomography

    NASA Astrophysics Data System (ADS)

    Burnett, T. L.; Holroyd, N. J. H.; Lewandowski, J. J.; Ogurreck, M.; Rau, C.; Kelley, R.; Pickering, E. J.; Daly, M.; Sherry, A. H.; Pawar, S.; Slater, T. J. A.; Withers, P. J.

    2017-07-01

    There is a huge array of characterization techniques available today, and increasingly powerful computing resources allow for the effective analysis and modelling of large datasets. However, each experimental and modelling tool only spans limited time and length scales. Correlative tomography can be thought of as the extension of correlative microscopy into three dimensions, connecting different techniques, each providing different types of information or covering different time or length scales. Here the focus is on the linking of time-lapse X-ray computed tomography (CT) and serial-section electron tomography using the focussed ion beam (FIB)-scanning electron microscope to study the degradation of metals. Correlative tomography can provide new levels of detail by delivering a multiscale 3D picture of key regions of interest. Specifically, the Xe+ Plasma FIB is used as an enabling tool for large-volume high-resolution serial sectioning of materials, and also as a tool for the preparation of microscale test samples and samples for nanoscale X-ray CT imaging. The exemplars presented illustrate general aspects relating to correlative workflows, as well as the time-lapse characterisation of metal microstructures during various failure mechanisms, including ductile fracture of steel and the corrosion of aluminium and magnesium alloys. Correlative tomography is already providing significant insights into materials behaviour, linking together information from different instruments across different scales. Multiscale and multifaceted workflows will become increasingly routine, providing a feed into multiscale materials models as well as illuminating other areas, particularly where hierarchical structures are of interest.

  8. Complex three dimensional modelling of porous media using high performance computing and multi-scale incompressible approach

    NASA Astrophysics Data System (ADS)

    Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.

    2013-05-01

    In the context of biofilm growth in porous media, we developed high-performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms that develop in polymeric extracellular substances, generally located at fluid-solid interfaces such as pore interfaces in a water-saturated porous medium. Applications of biofilms in porous media are encountered, for instance, in bio-remediation methods, where they allow the dissolution of organic pollutants. Many theoretical studies have been done on the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described through simplified theoretical media (stratified media, cubic networks of spheres, ...). Recently, however, experimental advances have provided tomography images of bio-colonized porous media which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we compute the velocity field of fluids through pores on complex geometries that are described with a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. Cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on the properties of fluid transport phenomena in porous media [5]. Average permeabilities of the sample are obtained from velocities by using MPI-based high-performance computing on up to 1000 processors. Steady-state Stokes equations are solved using a finite volume approach. Relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling is reached, with results obtained in hours instead of weeks. Acceleration factors of 20 up to 40 can be reached. Tens of geometries can now be computed by sending batteries of runs in a mass-production procedure. Some constraints can now be provided for poro-elastic imaging at the scale of reservoirs, for CO2 storage monitoring or geophysical exploration. 1. Golfier F. et al., Biofilms in porous media: Development of macroscopic transport equations via volume averaging with closure for local mass equilibrium conditions, Advances in Water Resources, 32, 463-485 (2009). 2. Orgogozo L. et al., Upscaling of transport processes in porous media with biofilms in non-equilibrium conditions, Advances in Water Resources, 33(5), 585-600 (2010). 3. Davit Y. et al., Modeling non-equilibrium mass transport in biologically reactive porous media, Advances in Water Resources, 33, 1075-1093 (2010). 4. Davit Y. et al., Imaging biofilm in porous media using X-ray computed micro-tomography, Journal of Microscopy, 242(1), 15-25 (2010). 5. Noiriel C. et al., Upscaling calcium carbonate precipitation rates from pore to continuum scale, Chemical Geology, 318-319, 60-74 (2012).
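
    For reference, the permeability upscaling mentioned above conventionally rests on the Darcy relation between the volume-averaged Stokes velocity and the imposed macroscopic pressure gradient; a generic form (the paper's specific closure formulation is not reproduced here) is:

      \langle \mathbf{v} \rangle \;=\; -\,\frac{\mathbf{K}}{\mu}\,\nabla \langle p \rangle ,
      \qquad\text{so that, in a 1-D experiment,}\qquad
      K \;\approx\; \frac{\mu\,\langle v \rangle\,L}{\Delta P},

    where mu is the fluid viscosity, L the sample length and Delta P the applied pressure drop.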

  9. INDIGO-DataCloud solutions for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Fiore, Sandro; Monna, Stephen; Chen, Yin

    2017-04-01

    INDIGO-DataCloud (https://www.indigo-datacloud.eu/) is a European Commission funded project aiming to develop a data and computing platform targeting scientific communities, deployable on multiple hardware platforms and provisioned over hybrid (private or public) e-infrastructures. The development of INDIGO solutions covers the different layers in cloud computing (IaaS, PaaS, SaaS), and provides tools to exploit resources such as HPC or GPGPUs. INDIGO is oriented towards supporting European scientific research communities, which are well represented in the project. Twelve different case studies from different fields have been analyzed in detail: Biological & Medical sciences, Social sciences & Humanities, Environmental and Earth sciences, and Physics & Astrophysics. INDIGO-DataCloud provides solutions to emerging challenges in Earth Science such as: -Enabling an easy deployment of community services at different cloud sites. Many Earth Science research infrastructures involve distributed observation stations across countries, and also have distributed data centers to support the corresponding data acquisition and curation. There is a need to easily deploy new data center services while the research infrastructure continues to expand. As an example, LifeWatch (ESFRI, Ecosystems and Biodiversity) uses INDIGO solutions to manage the deployment of services to perform complex hydrodynamics and water quality modelling over a Cloud Computing environment, predicting algal blooms, using Docker technology: TOSCA requirement description, Docker repository, Orchestrator for deployment, AAI (AuthN, AuthZ) and OneData (Distributed Storage System). -Supporting Big Data analysis. Nowadays, many Earth Science research communities produce large amounts of data and are challenged by the difficulties of processing and analysing it. A climate model intercomparison data analysis case study for the European Network for Earth System Modelling (ENES) community has been set up, based on the Ophidia big data analysis framework and the Kepler workflow management system. Such services normally involve a large and distributed set of data and computing resources. In this regard, this case study exploits the INDIGO PaaS for a flexible and dynamic allocation of resources at the infrastructural level. -Providing distributed data storage solutions. In order to allow scientific communities to perform heavy computation on huge datasets, INDIGO provides global data access solutions allowing researchers to access data in a distributed environment regardless of its location, and also to publish and share their research results with public or closed communities. INDIGO solutions that support access to distributed data storage (OneData) are being tested on EMSO infrastructure (Ocean Sciences and Geohazards) data. Another aspect of interest for the EMSO community is efficient data processing by exploiting INDIGO services such as the PaaS Orchestrator. Further, for HPC exploitation, a new solution named Udocker has been implemented, enabling users to execute Docker containers on supercomputers without requiring administration privileges. This presentation will give an overview of the INDIGO solutions that are interesting and useful for Earth Science communities and will show how they can be applied to other case studies.

  10. Scrap computer recycling in Taiwan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.H.; Chang, S.L.; Wang, K.M.

    1999-07-01

    It is estimated that approximately 700,000 scrap personal computers will be generated each year in Taiwan. The disposal of such a huge amount of scrap computers presents a difficult task for the island due to the scarcity of landfills and incineration facilities available locally. Also, the hazardous materials contained in the scrap computers may cause serious pollution to the environment if they are not properly disposed of. Thus, the EPA of Taiwan declared scrap personal computers a producer-responsibility recycling product in July 1997, mandating that the manufacturers, importers and sellers of personal computers recover and recycle their scrap computers properly. Beginning on June 1, 1998, a scrap computer recycling plan was officially implemented on the island. Under this plan, consumers can deliver their unwanted personal computers to designated collection points to receive reward money. Currently, only six items are mandated to be recycled in this recycling plan: notebooks, monitors, and the hard disk, power supply, printed circuit board and shell of the main frame of the personal computer. This paper presents the current scrap computer recycling system in Taiwan.

  11. The use of high technology in STEM education

    NASA Astrophysics Data System (ADS)

    Lakshminarayanan, Vasudevan; McBride, Annette C.

    2015-10-01

    There has been a huge increase in the use of high technology in education. In this paper we discuss some aspects of technology that have major applications in STEM education, namely, (a) virtual reality systems, (b) personal electronic response systems aka "clickers", (c) flipped classrooms, (d) mobile learning "m-Learning", (e) massive open online courses "MOOCS", (f) internet-of-things and (g) cloud computing.

  12. Wafer level reliability testing: An idea whose time has come

    NASA Technical Reports Server (NTRS)

    Trapp, O. D.

    1987-01-01

    Wafer level reliability testing has been nurtured in the DARPA-supported workshops, held each autumn since 1982. The seeds planted in 1982 have produced an active crop of very large scale integration manufacturers applying wafer level reliability test methods. Computer Aided Reliability (CAR) is a new seed being nurtured. Users are now being awakened to the huge economic value of the wafer reliability testing technology.

  13. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    PubMed

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the In-Reactor Pressure Vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a core of a BWR in a three-dimensional geometry model, but has difficulties in fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables the calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate the two problems mentioned above in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method is used in the same way as the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.
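
    Neither code's interface format is given here; the sketch below only illustrates, in generic terms, how a tabulated surface angular flux from a deterministic calculation could be converted into Monte Carlo source particles by categorical sampling over discrete (surface bin, direction bin) pairs. The array shapes and bin structure are hypothetical.

      # Generic illustration of deterministic-to-Monte-Carlo coupling: sample source
      # particles from a tabulated boundary angular flux (hypothetical bin structure).
      import numpy as np

      def sample_source(flux_table, n_particles, seed=0):
          """flux_table[i, j] = angular flux in surface bin i, direction bin j."""
          rng = np.random.default_rng(seed)
          weights = flux_table.ravel()
          pdf = weights / weights.sum()
          # Categorical sampling of (surface bin, direction bin) pairs proportional to flux.
          flat_indices = rng.choice(weights.size, size=n_particles, p=pdf)
          surface_bins, direction_bins = np.unravel_index(flat_indices, flux_table.shape)
          return surface_bins, direction_bins

      if __name__ == "__main__":
          demo_flux = np.random.default_rng(1).random((50, 8))   # 50 surface bins x 8 directions
          s_bins, d_bins = sample_source(demo_flux, n_particles=5)
          print(list(zip(s_bins, d_bins)))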

  14. Distributed denial of service (DDoS) attack in cloud- assisted wireless body area networks: a systematic literature review.

    PubMed

    Latif, Rabia; Abbas, Haider; Assar, Saïd

    2014-11-01

    Wireless Body Area Networks (WBANs) have emerged as a promising technology that has shown enormous potential in improving the quality of healthcare, and has thus found a broad range of medical applications from ubiquitous health monitoring to emergency medical response systems. The huge amount of highly sensitive data collected and generated by WBAN nodes requires a scalable and secure storage and processing infrastructure. Given the limited resources of WBAN nodes for storage and processing, the integration of WBANs and cloud computing may provide a powerful solution. However, despite the benefits of cloud-assisted WBAN, several security issues and challenges remain. Among these, data availability is the most nagging security issue. The most serious threat to data availability is a distributed denial of service (DDoS) attack that directly affects the all-time availability of a patient's data. The existing solutions for standalone WBANs and sensor networks are not applicable in the cloud. The purpose of this review paper is to identify the most threatening types of DDoS attacks affecting the availability of a cloud-assisted WBAN and review the state-of-the-art detection mechanisms for the identified DDoS attacks.

  15. The GMOS cyber(e)-infrastructure: advanced services for supporting science and policy.

    PubMed

    Cinnirella, S; D'Amore, F; Bencardino, M; Sprovieri, F; Pirrone, N

    2014-03-01

    The need for coordinated, systematized and catalogued databases on mercury in the environment is of paramount importance, as improved information can help the assessment of the effectiveness of measures established to phase out and ban mercury. Long-term monitoring sites have been established in a number of regions and countries for the measurement of mercury in ambient air and wet deposition. Long-term measurements of mercury concentration in biota have also produced a huge amount of information, but such initiatives are far from constituting a global, systematic and interoperable approach. To address these weaknesses the on-going Global Mercury Observation System (GMOS) project ( www.gmos.eu ) established a coordinated global observation system for mercury and also retrieved historical data ( www.gmos.eu/sdi ). To manage such a large amount of information a technological infrastructure was planned. This high-performance back-end resource, associated with sophisticated client applications, enables data storage, computing services, telecommunications networks and all services necessary to support the activity. This paper reports the architecture definition of the GMOS Cyber(e)-Infrastructure and the services developed to support science and policy, including the United Nations Environment Programme. It finally describes new possibilities in data analysis and data management through client applications.

  16. Symbolic Computation of Strongly Connected Components Using Saturation

    NASA Technical Reports Server (NTRS)

    Zhao, Yang; Ciardo, Gianfranco

    2010-01-01

    Finding strongly connected components (SCCs) in the state space of discrete-state models is a critical task in the formal verification of LTL and fair CTL properties, but the potentially huge number of reachable states and SCCs constitutes a formidable challenge. This paper is concerned with computing the sets of states in SCCs or terminal SCCs of asynchronous systems. Because of its advantages in many applications, we apply saturation to two previously proposed approaches: the Xie-Beerel algorithm and transitive closure. First, saturation speeds up state-space exploration when computing each SCC in the Xie-Beerel algorithm. Then, our main contribution is a novel algorithm to compute the transitive closure using saturation. Experimental results indicate that our improved algorithms achieve a clear speedup over previous algorithms in some cases. With the help of the new transitive closure computation algorithm, up to 10^150 SCCs can be explored within a few seconds.
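
    The saturation-based algorithms operate symbolically on decision diagrams, which are not reproduced here; the Python sketch below shows only the explicit-set skeleton of the Xie-Beerel style divide-and-conquer that they accelerate: pick a pivot, intersect its forward and backward reachable sets to obtain one SCC, then recurse on the remaining partitions.

      # Explicit-set skeleton of Xie-Beerel style SCC enumeration (the symbolic version
      # represents these sets as decision diagrams and uses saturation for reachability).
      def reachable(start, succ, domain):
          """States in `domain` reachable from `start` via the successor function."""
          seen, frontier = {start}, [start]
          while frontier:
              state = frontier.pop()
              for nxt in succ(state):
                  if nxt in domain and nxt not in seen:
                      seen.add(nxt)
                      frontier.append(nxt)
          return seen

      def sccs(domain, succ, pred):
          """Yield the SCCs of the graph restricted to `domain`."""
          if not domain:
              return
          pivot = next(iter(domain))
          forward = reachable(pivot, succ, domain)
          backward = reachable(pivot, pred, domain)
          yield forward & backward                      # the SCC containing the pivot
          yield from sccs(forward - backward, succ, pred)
          yield from sccs(backward - forward, succ, pred)
          yield from sccs(domain - forward - backward, succ, pred)

      if __name__ == "__main__":
          edges = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}
          redges = {1: [3], 2: [1], 3: [2], 4: [3, 5], 5: [4]}
          succ = lambda s: edges.get(s, [])
          pred = lambda s: redges.get(s, [])
          print(list(sccs({1, 2, 3, 4, 5}, succ, pred)))   # -> [{1, 2, 3}, {4, 5}]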

  17. Snore related signals processing in a private cloud computing system.

    PubMed

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, big SRS data processing is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to exploit applications both in academia and industry, and it has the potential to support large-scale undertakings in biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then set up comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  18. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  19. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    NASA Astrophysics Data System (ADS)

    Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.

    2011-12-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and we conclude that it is most cost-effective to purchase dedicated resources for the "baseline" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage occur.
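
    As a rough sketch of the "bursting" pattern described above, the snippet below uses boto3 to request extra EC2 worker nodes when a job backlog exceeds a threshold; the AMI ID, instance type and backlog query are placeholders, and the actual CMS workflow integration is not modelled.

      # Sketch of cloud "bursting": add EC2 worker nodes when the local queue backlog
      # grows beyond a threshold. AMI, instance type and the backlog source are placeholders.
      import boto3

      AMI_ID = "ami-0123456789abcdef0"      # hypothetical worker-node image
      INSTANCE_TYPE = "m5.large"            # hypothetical choice
      MAX_BURST = 20

      def pending_jobs() -> int:
          """Stand-in for a query against the experiment's batch system."""
          return 137

      def burst_if_needed(threshold: int = 100, jobs_per_node: int = 8) -> None:
          backlog = pending_jobs()
          if backlog <= threshold:
              return
          nodes = min(MAX_BURST, (backlog - threshold) // jobs_per_node + 1)
          ec2 = boto3.client("ec2")
          ec2.run_instances(
              ImageId=AMI_ID,
              InstanceType=INSTANCE_TYPE,
              MinCount=1,
              MaxCount=nodes,
          )
          print(f"requested {nodes} extra worker nodes for a backlog of {backlog} jobs")

      if __name__ == "__main__":
          burst_if_needed()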

  20. Statistics Online Computational Resource for Education

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  1. The taxonomy, biology and chemistry of the fungal Pestalotiopsis genus.

    PubMed

    Yang, Xiao-Long; Zhang, Jing-Ze; Luo, Du-Qiang

    2012-06-01

    A growing body of evidence indicates that the Pestalotiopsis genus represents a huge and largely untapped resource of natural products with chemical structures that have been optimized by evolution for biological and ecological relevance. So far, 196 secondary metabolites have been encountered in this genus. This review systematically surveys the taxonomy, biology and chemistry of the Pestalotiopsis genus. It also summarises the biosynthetic relationships and chemical synthesis of metabolites from this genus. There are 184 references.

  2. Countering the Resource Curse: A Comparative Analysis of Political Economy for Chile and Australia

    DTIC Science & Technology

    2015-06-01

    built around laissez-faire, liberal-market economics, which is still prevalent today. Pinochet recruited several Chilean students from the University...fluctuations and indebted the state. The laissez-faire experiment ended with a huge slide, as Chile’s economy fell by almost 15 percent. Commonly known as...once optimistic outlook has been overshadowed by poor leadership choices from the Australian Labor Party (ALP). Policy errors by the ALP created many

  3. Aspects of tar sands development in Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adewusi, V.A.

    1992-07-01

    Development of Nigeria's massive reserves of crude bitumen and associated heavy oil is imminent in view of the impact that the huge importation of these materials and their products has on the nation's economy, coupled with the country's depleting crude oil reserves. This paper highlights the appropriate production technology options and their environmental implications. The utilization potentials of these resources are also enumerated, as well as the government's role in achieving accelerated, long-term tar sands development in the country.

  4. Think globally and solve locally: secondary memory-based network learning for automated multi-species function prediction

    PubMed Central

    2014-01-01

    Background Network-based learning algorithms for automated function prediction (AFP) are negatively affected by the limited coverage of experimental data and the limited amount of a priori known functional annotations. As a consequence, their application to model organisms is often restricted to well-characterized biological processes and pathways, and their effectiveness with poorly annotated species is relatively limited. A possible solution to this problem might consist in the construction of big networks including multiple species, but this in turn poses challenging computational problems, due to the scalability limitations of existing algorithms and the main memory requirements induced by the construction of big networks. Distributed computation or the use of big computers could in principle address these issues, but raises further algorithmic problems and requires resources not available on simple off-the-shelf computers. Results We propose a novel framework for scalable network-based learning of multi-species protein functions based on both a local implementation of existing algorithms and the adoption of innovative technologies: we solve the AFP problem “locally”, by designing “vertex-centric” implementations of network-based algorithms, but we do not give up thinking “globally” by exploiting the overall topology of the network. This is made possible by the adoption of secondary memory-based technologies that allow the efficient use of the large memory available on disks, thus overcoming the main memory limitations of modern off-the-shelf computers. This approach has been applied to the analysis of a large multi-species network including more than 300 species of bacteria and to a network with more than 200,000 proteins belonging to 13 Eukaryotic species. To our knowledge this is the first work where secondary-memory based network analysis has been applied to multi-species function prediction using biological networks with hundreds of thousands of proteins. Conclusions The combination of these algorithmic and technological approaches makes feasible the analysis of large multi-species networks using ordinary computers with limited speed and primary memory, and in perspective could enable the analysis of huge networks (e.g. the whole proteomes available in SwissProt), using well-equipped stand-alone machines. PMID:24843788

  5. An Architecture for Cross-Cloud System Management

    NASA Astrophysics Data System (ADS)

    Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad

    The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.

  6. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    ERIC Educational Resources Information Center

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  7. Dynamic Simulations for the Seismic Behavior on the Shallow Part of the Fault Plane in the Subduction Zone during Mega-Thrust Earthquakes

    NASA Astrophysics Data System (ADS)

    Tsuda, K.; Dorjapalam, S.; Dan, K.; Ogawa, S.; Watanabe, T.; Uratani, H.; Iwase, S.

    2012-12-01

    The 2011 Tohoku-Oki earthquake (M9.0) produced some distinct features, such as huge slips on the order of several tens of meters around the shallow part of the fault and different areas radiating seismic waves at different periods (e.g., Lay et al., 2012). These features, also reported for past mega-thrust earthquakes in subduction zones such as the 2004 Sumatra earthquake (M9.2) and the 2010 Chile earthquake (M8.8), have received attention as distinct features that appear when the rupture of a mega-thrust earthquake reaches the shallow part of the fault plane. Although various kinds of observations of the seismic behavior (rupture process, ground motion characteristics, etc.) on the shallow part of the fault plane during mega-thrust earthquakes have been reported, the number of analytical or numerical studies based on dynamic simulation is still limited. Wendt et al. (2009), for example, revealed that different distributions of initial stress produce huge differences in the seismic behavior and the vertical displacements on the surface. In this study, we carried out dynamic simulations in order to get a better understanding of the seismic behavior on the shallow part of the fault plane during mega-thrust earthquakes. We used the spectral element method (Ampuero, 2009), which is able to incorporate complex fault geometry into the simulation as well as to save computational resources. The simulation utilizes the slip-weakening law (Ida, 1972). In order to get a better understanding of the seismic behavior on the shallow part of the fault plane, some parameters controlling the seismic behavior of dynamic faulting, such as the critical slip distance (Dc), initial stress conditions and friction coefficients, were varied, and we also placed an asperity on the fault plane. These insights are useful for ground motion prediction for future mega-thrust earthquakes such as the earthquakes along the Nankai Trough.
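
    For context, the linear slip-weakening law cited above (Ida, 1972), in the parameterization commonly used in dynamic rupture codes, lets the fault strength drop from a static to a dynamic level over the critical slip distance Dc:

      \tau(D) =
      \begin{cases}
        \tau_d + (\tau_s - \tau_d)\left(1 - \dfrac{D}{D_c}\right), & D < D_c,\\
        \tau_d, & D \ge D_c,
      \end{cases}

    where D is the slip, and tau_s and tau_d are the static and dynamic strengths (static and dynamic friction coefficients times the fault-normal stress).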

  8. Development of a Computer-Assisted Instrumentation Curriculum for Physics Students: Using LabVIEW and Arduino Platform

    NASA Astrophysics Data System (ADS)

    Kuan, Wen-Hsuan; Tseng, Chi-Hung; Chen, Sufen; Wong, Ching-Chang

    2016-06-01

    We propose an integrated curriculum to establish essential computer programming abilities for freshmen of a physics department. The implementation of graphical-based interfaces, from Scratch to LabVIEW and then to LabVIEW for Arduino, in the curriculum `Computer-Assisted Instrumentation in the Design of Physics Laboratories' brings rigorous algorithm and syntax protocols together with imagination, communication, scientific applications and experimental innovation. The effectiveness of the curriculum was evaluated via statistical analysis of questionnaires, interview responses, the increase in the number of students majoring in physics, and performance in a competition. The results provide quantitative support that the curriculum removed huge barriers to programming which occur in text-based environments, helped students gain knowledge of programming and instrumentation, and increased the students' confidence and motivation to learn physics and computer languages.

  9. Large-scale ground motion simulation using GPGPU

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced the use of GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computations traditionally conducted on the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in the two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of the fastest Japanese supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs. Next, we examined a weak scaling test where the model sizes (number of grid points) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number of cores. Finally, we applied the GPU calculation to a simulation of the 2011 Tohoku-oki earthquake. The model was constructed using a slip model from the inversion of strong motion data (Suzuki et al., 2012) and a geological- and geophysical-based velocity structure model covering all of the Tohoku and Kanto regions as well as the large source area, which consists of about 1.9 billion grid points. The overall characteristics of the observed velocity seismograms for periods longer than 8 s were successfully reproduced (Maeda et al., 2012 AGU meeting). The turnaround time for the 50 thousand-step calculation (which corresponds to 416 s of seismograms) using 100 GPUs was 52 minutes, which is fairly short, especially considering that this is the performance for a realistic and complex model.
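
    The GMS solver itself is a full 3-D viscoelastic FDM code; as a much-reduced illustration of the kind of stencil update that is offloaded to GPUs, the sketch below advances a 2-D acoustic wave equation one time step with a vectorized second-order finite-difference Laplacian (NumPy here; a CUDA version would apply the same stencil per thread). Grid size, spacing and wave speed are placeholders.

      # Toy 2-D acoustic FDM step illustrating the stencil computation that GPU kernels
      # parallelize; the real GMS code is 3-D, viscoelastic and uses discontinuous grids.
      import numpy as np

      def step(u_prev, u_curr, c, dt, dx):
          """Advance u_tt = c^2 * laplacian(u) by one time step (2nd order in space and time)."""
          lap = (
              u_curr[:-2, 1:-1] + u_curr[2:, 1:-1]
              + u_curr[1:-1, :-2] + u_curr[1:-1, 2:]
              - 4.0 * u_curr[1:-1, 1:-1]
          ) / dx**2
          u_next = u_curr.copy()
          u_next[1:-1, 1:-1] = (
              2.0 * u_curr[1:-1, 1:-1] - u_prev[1:-1, 1:-1] + (c * dt) ** 2 * lap
          )
          return u_next

      if __name__ == "__main__":
          n, dx, dt, c = 200, 10.0, 1e-3, 3000.0      # grid, spacing (m), step (s), speed (m/s)
          u0 = np.zeros((n, n)); u1 = np.zeros((n, n))
          u1[n // 2, n // 2] = 1.0                     # point source
          for _ in range(100):
              u0, u1 = u1, step(u0, u1, c, dt, dx)
          print("peak amplitude:", float(np.abs(u1).max()))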

  10. An OAIS-Based Hospital Information System on the Cloud: Analysis of a NoSQL Column-Oriented Approach.

    PubMed

    Celesti, Antonio; Fazio, Maria; Romano, Agata; Bramanti, Alessia; Bramanti, Placido; Villari, Massimo

    2018-05-01

    The Open Archival Information System (OAIS) is a reference model for organizing people and resources in a system, and it is already adopted in care centers and medical systems to efficiently manage clinical data, medical personnel, and patients. Archival storage systems are typically implemented using traditional relational database systems, but relation-oriented technology strongly limits efficiency in the management of huge amounts of patients' clinical data, especially in emerging cloud-based systems, which are distributed. In this paper, we present an OAIS healthcare architecture able to manage a huge amount of HL7 clinical documents in a scalable way. Specifically, it is based on a NoSQL column-oriented Database Management System deployed in the cloud, thus benefiting from big tables and wide rows available over a virtual distributed infrastructure. We developed a prototype of the proposed architecture at the IRCCS, and we evaluated its efficiency in a real case study.
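
    The paper's concrete schema is not given in this record; the sketch below shows one common wide-column pattern for clinical documents, using the happybase client for HBase as an example backend: a row key of patient ID plus reversed timestamp, so a patient's most recent HL7 documents cluster together and can be fetched with a prefix scan. The table name, column family, host and row-key scheme are assumptions, not the authors' design.

      # Sketch of a wide-column layout for HL7 documents (happybase/HBase used as an
      # example backend; table name, column family and row-key scheme are assumptions).
      import time
      import happybase

      MAX_TS = 10**13   # used to reverse timestamps so newest documents sort first

      def row_key(patient_id: str, ts_millis: int) -> bytes:
          return f"{patient_id}#{MAX_TS - ts_millis:013d}".encode()

      def store_document(table, patient_id: str, hl7_payload: bytes, doc_type: str) -> None:
          key = row_key(patient_id, int(time.time() * 1000))
          table.put(key, {
              b"doc:hl7": hl7_payload,          # the raw HL7 message
              b"doc:type": doc_type.encode(),   # e.g. discharge summary, lab report
          })

      def latest_documents(table, patient_id: str, limit: int = 10):
          prefix = f"{patient_id}#".encode()
          for i, (key, data) in enumerate(table.scan(row_prefix=prefix)):
              if i >= limit:
                  break
              yield key, data[b"doc:type"]

      if __name__ == "__main__":
          connection = happybase.Connection("hbase-host")       # hypothetical host
          table = connection.table("clinical_documents")        # assumed pre-created table
          store_document(table, "patient-42", b"MSH|^~\\&|...", "lab-report")
          for key, doc_type in latest_documents(table, "patient-42"):
              print(key, doc_type)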

  11. Polio programme: let us declare victory and move on.

    PubMed

    Vashisht, Neetu; Puliyel, Jacob

    2012-01-01

    It was hoped that, following polio eradication, immunisation could be stopped. However, the synthesis of polio virus in 2002 made eradication impossible. It is argued that getting poor countries to expend their scarce resources on an impossible dream over the last 10 years was unethical. Furthermore, while India has been polio-free for a year, there has been a huge increase in non-polio acute flaccid paralysis (NPAFP). In 2011, there were an extra 47,500 new cases of NPAFP. NPAFP is clinically indistinguishable from polio paralysis but twice as deadly, and its incidence was directly proportional to the number of oral polio vaccine doses received. Though these data were collected within the polio surveillance system, they were not investigated. The principle of primum non nocere was violated. The authors suggest that the huge bill of US$ 8 billion spent on the programme is a small sum to pay if the world learns to be wary of such vertical programmes in the future.

  12. Scientific Discovery through Advanced Computing in Plasma Science

    NASA Astrophysics Data System (ADS)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.

  13. Environmental geology and hydrology

    NASA Astrophysics Data System (ADS)

    Nakić, Zoran; Mileusnić, Marta; Pavlić, Krešimir; Kovač, Zoran

    2017-10-01

    Environmental geology is a scientific discipline dealing with the interactions between humans and the geologic environment. Many natural hazards, which have a great impact on humans and their environment, are caused by geological settings. On the other hand, human activities have a great impact on the physical environment, especially in recent decades owing to dramatic human population growth. Natural disasters often hit densely populated areas, causing a tremendous death toll and material damage. Demand for resources has increased remarkably, as has waste production. Exploitation of mineral resources deteriorates huge areas of land, produces enormous amounts of mine waste, and pollutes soil, water and air. Environmental geology is a broad discipline and only selected themes will be presented in the following subchapters: (1) floods as a natural hazard, (2) water as a geological resource, and (3) mining and mineral processing as types of human activities dealing with geological materials that affect the environment and human health.

  14. A Contract Management Guide for Air Force Environmental Restoration

    DTIC Science & Technology

    1991-09-01

    literature in their area of interest in an attempt to locate a market niche. These topical studies often take the form of guides to specific areas of the...computer remote bulletin board system entitled the Hazardous Materials Information Exchange (HMIX). HMIX has information on training for response to... market for two reasons: the size of the appropriations under the Superfund Amendment and Reauthorization Act, and the huge number of contaminated

  15. Flexible services for the support of research.

    PubMed

    Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John

    2013-01-28

    Cloud computing has been increasingly adopted by users and providers to promote flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.

  16. EMRlog method for computer security for electronic medical records with logic and data mining.

    PubMed

    Martínez Monterrubio, Sergio Mauricio; Frausto Solis, Juan; Monroy Borja, Raúl

    2015-01-01

    The proper functioning of a hospital computer system is arduous work for managers and staff. However, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of all or part of the hospital data. This paper presents a new method named EMRlog for computer security systems in hospitals. EMRlog is focused on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information such as databases, applications, and medical records. Firstly, a syntactic verification step is applied by using predicate logic. Then data mining techniques are used to detect which security policies have really been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition, these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed to achieve a safer computer system.
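
    The following toy Python check conveys the flavour of the consistency step (it is not the EMRlog prover): directive and implemented policies are reduced to (subject, action, resource, decision) tuples and contradictions between the two sets are reported; all names are invented.

      def contradictions(directive, implemented):
          """Return (subject, action, resource) triples with opposite decisions."""
          def index(policies):
              return {(s, a, r): allowed for s, a, r, allowed in policies}
          d, i = index(directive), index(implemented)
          return [key for key in d.keys() & i.keys() if d[key] != i[key]]

      directive_policies = [
          ("nurse", "read", "medical_record", True),
          ("nurse", "delete", "medical_record", False),
      ]
      implemented_policies = [
          ("nurse", "read", "medical_record", True),
          ("nurse", "delete", "medical_record", True),   # misconfiguration to be flagged
      ]
      print(contradictions(directive_policies, implemented_policies))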

  17. EMRlog Method for Computer Security for Electronic Medical Records with Logic and Data Mining

    PubMed Central

    Frausto Solis, Juan; Monroy Borja, Raúl

    2015-01-01

    The proper functioning of a hospital computer system is arduous work for managers and staff. However, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of all or part of the hospital data. This paper presents a new method named EMRlog for computer security systems in hospitals. EMRlog is focused on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information such as databases, applications, and medical records. Firstly, a syntactic verification step is applied by using predicate logic. Then data mining techniques are used to detect which security policies have really been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition, these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed to achieve a safer computer system. PMID:26495300

  18. Use of Massive Parallel Computing Libraries in the Context of Global Gravity Field Determination from Satellite Data

    NASA Astrophysics Data System (ADS)

    Brockmann, J. M.; Schuh, W.-D.

    2011-07-01

    The estimation of the global Earth's gravity field parametrized as a finite spherical harmonic series is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e. the number of parameters to be estimated) and on the other hand on the number of observations (which run to several millions, e.g. for observations from the GOCE satellite mission). To cope with these demands, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid composed of a large number of (distributed-memory) computers. Using this set of standard HPC libraries has the benefit that, once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
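
    The 2-D block-cyclic layout central to these libraries can be summarised in a few lines of Python: the sketch below computes which process in a Pr x Pc grid owns global matrix entry (i, j) for block size nb, assuming zero-based indices and the first block on process (0, 0).

      def owner(i, j, nb, Pr, Pc):
          # Block-cyclic mapping used by ScaLAPACK/PBLAS distributed matrices.
          return ((i // nb) % Pr, (j // nb) % Pc)

      # Example: an 8 x 8 matrix with 2 x 2 blocks on a 2 x 2 process grid.
      nb, Pr, Pc = 2, 2, 2
      for i in range(8):
          print([owner(i, j, nb, Pr, Pc) for j in range(8)])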

  19. The use of interactive graphical maps for browsing medical/health Internet information resources

    PubMed Central

    Boulos, Maged N Kamel

    2003-01-01

    As online information portals accumulate metadata descriptions of Web resources, it becomes necessary to develop effective ways for visualising and navigating the resultant huge metadata repositories as well as the different semantic relationships and attributes of described Web resources. Graphical maps provide a good method to visualise, understand and navigate a world that is too large and complex to be seen directly, like the Web. Several examples of maps designed as a navigational aid for Web resources are presented in this review with an emphasis on maps of medical and health-related resources. The latter include HealthCyberMap maps, which can be classified as conceptual information space maps, and the very abstract and geometric Visual Net maps of PubMed. Information resources can also be organised and navigated based on their geographic attributes. Some of the maps presented in this review use a Kohonen Self-Organising Map algorithm, and only HealthCyberMap uses a Geographic Information System to classify Web resource data and render the maps. Maps based on familiar metaphors taken from users' everyday life are much easier to understand. Associative and pictorial map icons that enable instant recognition and comprehension are preferred to geometric ones and are key to successful maps for browsing medical/health Internet information resources. PMID:12556244

  20. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, Tom; Yang, Xi

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyberinfrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which span both the network and the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves, from a topology and service availability perspective, within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) the Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) the Resource Computation Engine (RCE), and iii) a Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyberinfrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the RAINS Computation Engine (RCE). The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which facilitates a variety of resource controllers to automatically generate, maintain, and distribute MRML-based resource descriptions. Once all of the resource topologies are absorbed by the RCE, a connected graph of the full distributed system topology is constructed, which forms the basis for computation and workflow processing. The RCE includes a Modular Computation Element (MCE) framework which allows tailoring of the computation process to the specific set of resources under control and the services desired. The input and output of an MCE are both model data based on the MRS/MRML ontology and schema. Some of the RAINS project accomplishments include: development of a general and extensible multi-resource modeling framework; design of a Resource Computation Engine (RCE) system with the following key capabilities: absorbing a variety of multi-resource model types and building integrated models, a novel architecture which uses model-based communications across the full stack, flexible provision of abstract or intent-based user-facing interfaces, and workflow processing based on model descriptions; release of the RCE as open source software; deployment of the RCE in the University of Maryland/Mid-Atlantic Crossroad ScienceDMZ in prototype mode with a plan under way to transition to production; deployment at the Argonne National Laboratory DTN Facility in prototype mode; and selection of the RCE by the DOE SENSE (SDN for End-to-end Networked Science at the Exascale) project as the basis for their orchestration service.
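
    A much-simplified Python sketch of the underlying idea (not the RCE implementation, and using plain edge lists instead of MRML models): per-domain resource descriptions are merged into one topology graph on which computations such as path finding can run. All resource names are hypothetical.

      import networkx as nx

      # Hypothetical resource fragments contributed by two independent domains.
      domain_a = [("compute:clusterA", "net:switchA"), ("net:switchA", "net:wan")]
      domain_b = [("net:wan", "net:switchB"), ("net:switchB", "storage:dtnB")]

      topology = nx.Graph()
      for fragment in (domain_a, domain_b):
          topology.add_edges_from(fragment)     # absorb each model into one connected graph

      # Toy "computation": a service path from a compute resource to a storage resource.
      print(nx.shortest_path(topology, "compute:clusterA", "storage:dtnB"))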

  1. Rough Set Soft Computing Cancer Classification and Network: One Stone, Two Birds

    PubMed Central

    Zhang, Yue

    2010-01-01

    Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article. PMID:20706619

  2. Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras)

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Jendryscik, Radek

    2017-11-01

    Teaching essential Artificial Intelligence (AI) methods is an important task for an educator in the branch of soft computing. The key focus is often given to a proper understanding of the principles of AI methods in two essential points: why we use soft-computing methods at all, and how we apply these methods to generate reasonable results in a sensible time. We present an interesting problem solved in non-educational research, concerning the automated generation of specific algebras in a huge search space. We emphasize the above-mentioned points by treating this problem in automated generation of specific algebras as an educational case study.
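
    As a classroom-style illustration of the genetic-algorithm loop (the fitness function below is a placeholder, not the authors' algebra-generation objective), a minimal Python version might look as follows.

      import random

      random.seed(1)
      L, POP, GENS, MUT = 40, 60, 100, 0.02

      def fitness(bits):                      # toy objective: maximise the number of ones
          return sum(bits)

      def crossover(a, b):
          cut = random.randrange(1, L)
          return a[:cut] + b[cut:]

      def mutate(bits):
          return [1 - b if random.random() < MUT else b for b in bits]

      population = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
      for _ in range(GENS):
          population.sort(key=fitness, reverse=True)
          parents = population[:POP // 2]      # truncation selection
          children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP - len(parents))]
          population = parents + children

      print("best fitness:", fitness(max(population, key=fitness)))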

  3. Construction of the energy matrix for complex atoms. Part VIII: Hyperfine structure HPC calculations for terbium atom

    NASA Astrophysics Data System (ADS)

    Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy

    2017-11-01

    A parametric analysis of the hyperfine structure (hfs) for the even-parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4f^N core states in our high-performance computing (HPC) calculations. For the calculation of the huge hyperfine structure matrix, which requires approximately 5000 hours when run on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VMs). These methods give a factor-of-12 performance boost, enabling the calculations to complete in an acceptable time.

  4. Requirements for a multifunctional code architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiihonen, O.; Juslin, K.

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experiences gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's table by far exceeds the possibilities offered to him/her ten years ago. At the same time the features of everyday office software tend to set standards for the way the input data and calculational results are managed.

  5. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
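
    A back-of-the-envelope Python version of the three-stage model (data transfer, queue wait, computation) is shown below; the bandwidths, wait times and reconstruction rates are invented placeholders, not values from the study.

      def workflow_time(dataset_gb, bandwidth_gbps, queue_wait_s, voxels, iterations, updates_per_s):
          transfer = dataset_gb * 8 / bandwidth_gbps            # stage (i): move data to the resource
          compute = voxels * iterations / updates_per_s         # stage (iii): iterative reconstruction
          return transfer + queue_wait_s + compute              # stage (ii) enters as queue_wait_s

      # Hypothetical comparison: small local cluster vs. faster remote HPC system behind a WAN.
      local = workflow_time(200, 10, 60, 1024**3, 100, 2e7)
      remote = workflow_time(200, 1, 600, 1024**3, 100, 5e8)
      print(f"local: {local / 3600:.2f} h, remote: {remote / 3600:.2f} h")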

  6. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doğa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  7. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  8. Multimedia and physiology: a new way to ensure the quality of medical education and medical knowledge.

    PubMed

    Lessard, Yvon; Siregar, Pridi; Julen, Nathalie; Sinteff, Jean-Paul; Le Beux, Pierre

    2006-01-01

    Since the eighties and the existence of virtual campuses, the value of computers in distance education has been acknowledged. The development of information and communication technologies is leading to a distinction between distance education and on-line education. The aim of the "Campus Numérique de Physiologie" is not to reproduce an on-line copy of classical textbooks but to put at students' and physicians' disposal the huge possibilities of multimedia resources for an active and easier understanding of complex physiopathological phenomena. The on-line course materials were created using both original IBC-made and registered trade-mark software tools. Multiscale modelling and the corresponding knowledge bases were implemented by mathematicians, biologists and software engineers from Rennes. The website, which is accessible through a server of the French Virtual Medical University, was developed in HTML/PHP connected to a MySQL database. The content management system is consistent with classical home page facilities and a multicriteria browser. Interactive resources are freely available to the site's users. Two- and three-dimensional simulations born out of mathematical qualitative and quantitative models at the molecular, cellular or organ level keep students active with regard to fundamental mechanisms by letting them interactively manipulate the simulation environment. The authors comment on the already available course materials, which should stimulate the creation of new documents following validation by a qualified commission of the "Société de Physiologie". By providing evaluation tests, teachers anticipate that the increasing content of this virtual campus will allow users to gain a complete understanding and an integrative view of many physiopathological mechanisms.

  9. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources, i.e. resources not owned by, or a priori configured for, CMS, to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  10. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dick

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  11. Enabling opportunistic resources for CMS Computing Operations

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources, i.e. resources not owned by, or a priori configured for, CMS, to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  12. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.

  13. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  14. Contextuality as a Resource for Models of Quantum Computation with Qubits

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert

    2017-09-01

    A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.

  15. Computing arrival times of firefighting resources for initial attack

    Treesearch

    Romain M. Mees

    1978-01-01

    Dispatching of firefighting resources requires instantaneous or precalculated decisions. A FORTRAN computer program has been developed that can provide a list of resources in order of computed arrival time for initial attack on a fire. The program requires an accurate description of the existing road system and a list of all resources available on a planning unit....
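
    A minimal Python equivalent of the dispatching idea (not the original FORTRAN program; the distances, speeds and delays are invented) computes each resource's arrival time and lists resources in order of arrival.

      resources = [
          # (name, road distance to the fire in km, average speed in km/h, mobilisation delay in min)
          ("Engine 3", 12.0, 50.0, 5.0),
          ("Dozer 1", 30.0, 35.0, 15.0),
          ("Hand crew 7", 8.0, 40.0, 20.0),
      ]

      def arrival_minutes(distance_km, speed_kmh, delay_min):
          return delay_min + 60.0 * distance_km / speed_kmh

      for eta, name in sorted((arrival_minutes(d, v, t), n) for n, d, v, t in resources):
          print(f"{name:12s} arrives in {eta:5.1f} min")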

  16. Decision support methods for the detection of adverse events in post-marketing data.

    PubMed

    Hauben, M; Bate, A

    2009-04-01

    Spontaneous reporting is a crucial component of post-marketing drug safety surveillance despite its significant limitations. The size and complexity of some spontaneous reporting system databases represent a challenge for drug safety professionals who traditionally have relied heavily on the scientific and clinical acumen of the prepared mind. Computer algorithms that calculate statistical measures of reporting frequency for huge numbers of drug-event combinations are increasingly used to support pharmacovigilance analysts screening large spontaneous reporting system databases. After an overview of pharmacovigilance and spontaneous reporting systems, we discuss the theory and application of contemporary computer algorithms in regular use, those under development, and the practical considerations involved in the implementation of computer algorithms within a comprehensive and holistic drug safety signal detection program.
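
    One of the simplest frequency-based measures used by such algorithms is the proportional reporting ratio (PRR), computed from a 2x2 contingency table of reports; the Python sketch below uses invented counts, and production systems typically combine several measures, including Bayesian ones.

      def prr(a, b, c, d):
          """a: drug & event, b: drug & other events,
          c: other drugs & event, d: other drugs & other events."""
          return (a / (a + b)) / (c / (c + d))

      a, b, c, d = 25, 975, 500, 98500
      print(f"PRR = {prr(a, b, c, d):.2f}")   # values well above 1 flag a potential signal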

  17. CFD Research, Parallel Computation and Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Ryan, James S.

    1995-01-01

    During the last five years, CFD has matured substantially. Pure CFD research remains to be done, but much of the focus has shifted to integration of CFD into the design process. The work under these cooperative agreements reflects this trend. The recent work, and work which is planned, is designed to enhance the competitiveness of the US aerospace industry. CFD and optimization approaches are being developed and tested, so that the industry can better choose which methods to adopt in their design processes. The range of computer architectures has been dramatically broadened, as the assumption that only huge vector supercomputers could be useful has faded. Today, researchers and industry can trade off time, cost, and availability, choosing vector supercomputers, scalable parallel architectures, networked workstations, or heterogeneous combinations of these to complete required computations efficiently.

  18. A parallel solver for huge dense linear systems

    NASA Astrophysics Data System (ADS)

    Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.

    2011-11-01

    HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that facilitates the parallel solution of very large dense systems for scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies to leverage secondary memory in order to solve huge linear systems of order 100,000 equations and beyond. The API is based on the parallel linear algebra library PLAPACK and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
    New version program summary
    Program title: Huge Dense System Solver (HDSS)
    Catalogue identifier: AEHU_v1_1
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 87 062
    No. of bytes in distributed program, including test data, etc.: 1 069 110
    Distribution format: tar.gz
    Programming language: Fortran90, C
    Computer: Parallel architectures: multiprocessors, computer clusters
    Operating system: Linux/Unix
    Has the code been vectorized or parallelized?: Yes, includes MPI primitives.
    RAM: Tested for up to 190 GB
    Classification: 6.5
    External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution).
    Catalogue identifier of previous version: AEHU_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
    Does the new version supersede the previous version?: Yes
    Nature of problem: Huge-scale dense systems of linear equations, Ax = B, beyond standard LAPACK capabilities.
    Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient.
    Reasons for new version: In many applications we need to guarantee high accuracy in the solution of very large linear systems, which can be achieved by using double-precision arithmetic.
    Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine: the user can choose the kind of arithmetic and the values of several parameters of the environment.
    Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.
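
    The numerical pattern that HDSS scales up can be illustrated in-core with SciPy: factorise the matrix once with LU and reuse the factors for many right-hand sides. This small sketch is only an analogy at laptop scale; HDSS performs the same steps in parallel and out-of-core for systems far beyond a single node's memory.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      rng = np.random.default_rng(0)
      n, nrhs = 2000, 100                    # HDSS targets n ~ 200 000 and nrhs ~ 10 000
      A = rng.standard_normal((n, n))
      B = rng.standard_normal((n, nrhs))

      lu, piv = lu_factor(A)                 # O(n^3) factorisation, done once
      X = lu_solve((lu, piv), B)             # reused for every block of right-hand sides
      print(np.max(np.abs(A @ X - B)))       # residual check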

  19. Security Risks of Cloud Computing and Its Emergence as 5th Utility Service

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    Cloud Computing is being projected by the major cloud service provider IT companies such as IBM, Google, Yahoo, Amazon and others as the fifth utility, where clients will have access to processing for those applications and/or software projects which need very high processing speed and huge data capacity, for compute-intensive scientific and engineering research problems as well as e-business and data content network applications. These services for different types of clients are provided under DASM (Direct Access Service Management), based on virtualization of hardware, software and very high bandwidth Internet (Web 2.0) communication. The paper reviews these developments in Cloud Computing and the hardware/software configuration of the cloud paradigm. The paper also examines the vital aspects of security risks projected by IT industry experts and cloud clients. The paper also highlights the cloud providers' response to cloud security risks.

  20. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    ERIC Educational Resources Information Center

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  1. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
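
    A stripped-down Python discrete event simulation in the same spirit is sketched below: requests arrive at random, compete for a fixed pool of virtual servers, and queue when all servers are busy. The arrival and service rates are illustrative only, not the distributions used in the NASA model.

      import heapq, random

      random.seed(42)
      SERVERS, ARRIVAL_RATE, SERVICE_RATE, N_REQUESTS = 4, 3.0, 1.0, 10000

      events, t = [], 0.0
      for _ in range(N_REQUESTS):                       # pre-generate Poisson arrivals
          t += random.expovariate(ARRIVAL_RATE)
          heapq.heappush(events, (t, "arrival"))

      busy, queue, waits = 0, [], []
      while events:
          now, kind = heapq.heappop(events)
          if kind == "arrival":
              if busy < SERVERS:
                  busy += 1
                  waits.append(0.0)
                  heapq.heappush(events, (now + random.expovariate(SERVICE_RATE), "departure"))
              else:
                  queue.append(now)                     # all servers busy: wait in line
          else:                                         # a departure frees a server
              if queue:
                  waits.append(now - queue.pop(0))      # admit the longest-waiting request
                  heapq.heappush(events, (now + random.expovariate(SERVICE_RATE), "departure"))
              else:
                  busy -= 1

      print(f"mean wait: {sum(waits) / len(waits):.3f} time units")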

  2. Operating Dedicated Data Centers - Is It Cost-Effective?

    NASA Astrophysics Data System (ADS)

    Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.

    2014-06-01

    The advent of cloud computing centres such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of likely future cost effectiveness of dedicated computing resources is also presented.

  3. New Trends in E-Science: Machine Learning and Knowledge Discovery in Databases

    NASA Astrophysics Data System (ADS)

    Brescia, Massimo

    2012-11-01

    Data mining, or Knowledge Discovery in Databases (KDD), while being the main methodology to extract the scientific information contained in Massive Data Sets (MDS), needs to tackle crucial problems, since it has to orchestrate complex challenges posed by transparent access to different computing environments, scalability of algorithms, and reusability of resources. To achieve a leap forward for the progress of e-science in the data avalanche era, the community needs to implement an infrastructure capable of performing data access, processing and mining in a distributed but integrated context. The increasing complexity of modern technologies has led to a huge production of data, and the related warehouse management, together with the need to optimize analysis and mining procedures, leads to a change of concept in modern science. Classical data exploration, based on the user's own local data storage and limited computing infrastructures, is no longer efficient in the case of MDS spread worldwide over inhomogeneous data centres and requiring teraflop processing power. In this context modern experimental and observational science requires a good understanding of computer science, network infrastructures, data mining, etc., i.e. of all those techniques which fall into the domain of so-called e-science (recently assessed also by the Fourth Paradigm of Science). Such understanding is almost completely absent in the older generations of scientists, and this is reflected in the inadequacy of most academic and research programs. A paradigm shift is needed: statistical pattern recognition, object-oriented programming, distributed computing and parallel programming need to become an essential part of the scientific background. A possible practical solution is to provide the research community with easy-to-understand, easy-to-use tools, based on Web 2.0 technologies and Machine Learning methodology: tools where almost all the complexity is hidden from the final user, but which are still flexible and able to produce efficient and reliable scientific results. All these considerations will be described in detail in the chapter. Moreover, examples of modern applications that offer a wide variety of e-science communities a large spectrum of computational facilities to exploit the wealth of available massive data sets and powerful machine learning and statistical algorithms will also be introduced.

  4. Computing the Envelope for Stepwise-Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Computing tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. A staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. Each stage has the same computational complexity of solving a maximum flow problem on the entire flow network. This makes this method computationally feasible and promising for use in the inner loop of flexible-time scheduling algorithms.
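
    The core building block, a maximum-flow computation over a network of producer and consumer events linked by precedence edges, can be sketched with networkx as below. This is only the single-flow step on a small invented event network, not the staged envelope algorithm of the paper.

      import networkx as nx

      INF = 10**9                                   # stand-in for unbounded capacity
      G = nx.DiGraph()
      G.add_edge("source", "e1_produce", capacity=5)
      G.add_edge("source", "e3_produce", capacity=2)
      G.add_edge("e2_consume", "sink", capacity=4)
      G.add_edge("e4_consume", "sink", capacity=3)
      G.add_edge("e1_produce", "e2_consume", capacity=INF)   # e1 must precede e2
      G.add_edge("e3_produce", "e4_consume", capacity=INF)   # e3 must precede e4

      flow_value, _ = nx.maximum_flow(G, "source", "sink")
      print("maximum flow through the event network:", flow_value)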

  5. The Human EST Ontology Explorer: a tissue-oriented visualization system for ontologies distribution in human EST collections.

    PubMed

    Merelli, Ivan; Caprera, Andrea; Stella, Alessandra; Del Corvo, Marcello; Milanesi, Luciano; Lazzari, Barbara

    2009-10-15

    The NCBI dbEST currently contains more than eight million human Expressed Sequenced Tags (ESTs). This wide collection represents an important source of information for gene expression studies, provided it can be inspected according to biologically relevant criteria. EST data can be browsed using different dedicated web resources, which allow investigation of library-specific gene expression levels and comparisons among libraries, highlighting significant differences in gene expression. Nonetheless, no tool is available to examine distributions of quantitative EST collections in Gene Ontology (GO) categories, nor to retrieve information concerning library-dependent EST involvement in metabolic pathways. In this work we present the Human EST Ontology Explorer (HEOE) http://www.itb.cnr.it/ptp/human_est_explorer, a web facility for comparison of expression levels among libraries from several healthy and diseased tissues. The HEOE provides library-dependent statistics on the distribution of sequences in the GO Directed Acyclic Graph (DAG) that can be browsed at each GO hierarchical level. The tool is based on large-scale BLAST annotation of EST sequences. Due to the huge number of input sequences, this BLAST analysis was performed with the aid of grid computing technology, which is particularly suitable for addressing data-parallel tasks. Relying on the achieved annotation, library-specific distributions of ESTs in the GO graph were inferred. A pathway-based search interface was also implemented, for a quick evaluation of the representation of libraries in metabolic pathways. EST processing steps were integrated in a semi-automatic procedure that relies on Perl scripts and stores results in a MySQL database. A PHP-based web interface offers the possibility to simultaneously visualize, retrieve and compare data from the different libraries. Statistically significant differences in GO categories among user-selected libraries can also be computed. The HEOE provides an alternative and complementary way to inspect EST expression levels with respect to approaches currently offered by other resources. Furthermore, BLAST computation on the whole human EST dataset was a suitable test of grid scalability in the context of large-scale bioinformatics analysis. The HEOE currently comprises sequence analysis from 70 non-normalized libraries, representing a comprehensive overview of healthy and unhealthy tissues. As the analysis procedure can be easily applied to other libraries, the number of represented tissues is intended to increase.

  6. Bio-inspired Autonomic Structures: a middleware for Telecommunications Ecosystems

    NASA Astrophysics Data System (ADS)

    Manzalini, Antonio; Minerva, Roberto; Moiso, Corrado

    Today, people are making use of several devices for communications, for accessing multi-media content services, for data/information retrieval, for processing, computing, etc.: examples are laptops, PDAs, mobile phones, digital cameras, mp3 players, smart cards and smart appliances. One of the most attractive service scenarios for the future of Telecommunications and the Internet is the one where people will be able to browse any object in the environment they live in: communications, sensing and processing of data and services will be highly pervasive. In this vision, people, machines, artifacts and the surrounding space will create a kind of computational environment and, at the same time, the interfaces to the network resources. A challenging technological issue will be the interconnection and management of heterogeneous systems and of a huge number of small devices tied together in networks of networks. Moreover, future network and service infrastructures should be able to provide Users and Application Developers (at different levels, e.g., residential Users but also SMEs, LEs, ASPs/Web2.0 Service Providers, ISPs, Content Providers, etc.) with the most appropriate "environment" according to their context and specific needs. Operators must be ready to manage such a level of complexity by enabling their platforms with technological advances that allow network and service self-supervision and self-adaptation capabilities. Autonomic software solutions, enhanced with innovative bio-inspired mechanisms and algorithms, are promising areas of long-term research to face such challenges. This chapter proposes a bio-inspired autonomic middleware capable of leveraging the assets of the underlying network infrastructure whilst, at the same time, supporting the development of future Telecommunications and Internet Ecosystems.

  7. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

    By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational Grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.

  8. Environmental filtering and phylogenetic clustering correlate with the distribution patterns of cryptic protist species.

    PubMed

    Singer, David; Kosakyan, Anush; Seppey, Christophe V W; Pillonel, Amandine; Fernández, Leonardo D; Fontaneto, Diego; Mitchell, Edward A D; Lara, Enrique

    2018-04-01

    The community composition of any group of organisms should theoretically be determined by a combination of assembly processes including resource partitioning, competition, environmental filtering, and phylogenetic legacy. Environmental DNA studies have revealed a huge diversity of protists in all environments, raising questions about the ecological significance of such diversity and the degree to which these organisms obey the same rules as macroscopic organisms. The fast-growing cultivable protist species on which hypotheses are usually experimentally tested represent only a minority of the protist diversity. Addressing these questions for the lesser-known majority is only possible through observational studies. We conducted an environmental DNA survey of the genus Nebela, a group of closely related testate (shelled) amoeba species, in different habitats within Sphagnum-dominated peatlands. Identification based on the mitochondrial cytochrome c oxidase 1 gene allowed species-level resolution as well as phylogenetic reconstruction. Community composition varied strongly across habitats and associated environmental gradients. Species showed little overlap in their realized niche, suggesting resource partitioning and a strong influence of environmental filtering driving community composition. Furthermore, phylogenetic clustering was observed in the most nitrogen-poor samples, supporting phylogenetic inheritance of adaptations in the group of N. guttata. This study showed that the studied free-living unicellular eukaryotes follow community assembly rules similar to those known to determine plant and animal communities; the same may be true for much of the huge functional and taxonomic diversity of protists. © 2018 by the Ecological Society of America.

  9. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    DTIC Science & Technology

    1991-06-01

    Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles. Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh.

  10. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers in demonstrating the capability to perform large-scale scientific computing. In this presentation we discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We discuss the planning and technical challenges involved in running the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We describe the data handling and data management challenges. We also discuss the economic issues and compare cost and operational efficiency with our dedicated resources. Finally, we consider how the working model of HEP computing changes when large-scale resources can be scheduled at peak times.

  11. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers in demonstrating the capability to perform large-scale scientific computing. In this presentation we discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We discuss the planning and technical challenges involved in running the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We describe the data handling and data management challenges. We also discuss the economic issues and compare cost and operational efficiency with our dedicated resources. Finally, we consider how the working model of HEP computing changes when large-scale resources can be scheduled at peak times.

  12. Granular computing with multiple granular layers for brain big data processing.

    PubMed

    Wang, Guoyin; Xu, Ji

    2014-12-01

    Big data is the term for collections of datasets so huge and complex that they become difficult to process using on-hand theoretical models and technical tools. Brain big data is one of the most typical and important kinds of big data, collected using powerful equipment such as functional magnetic resonance imaging, multichannel electroencephalography, magnetoencephalography, positron emission tomography, near-infrared spectroscopic imaging, as well as various other devices. Granular computing with multiple granular layers, referred to as multi-granular computing (MGrC) for short hereafter, is an emerging computing paradigm of information processing, which simulates the multi-granular intelligent thinking model of the human brain. It concerns the processing of complex information entities called information granules, which arise in the process of data abstraction and derivation of information and even knowledge from data. This paper analyzes three basic mechanisms of MGrC, namely granularity optimization, granularity conversion, and multi-granularity joint computation, and discusses the potential of introducing MGrC into intelligent processing of brain big data.

  13. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of the cloud computing platform, which in essence substitutes and exchanges resource service models and meets users' needs for different resources after adjustments in multiple aspects. Cloud computing offers advantages in many respects: it not only reduces the difficulty of operating the system but also makes it easy for users to search, acquire and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in operation. The popularization and promotion of computer technology drive people to create digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance inquiry and search platform, allowing users to access the necessary information resources at any time. Cloud computing, in turn, distributes computations across a large number of distributed computers and thereby implements the connection and service of multiple machines. Digital libraries, as a typical representative of cloud computing applications, can thus be used to analyze the key technologies of cloud computing.

  14. Computer-Based Resource Accounting Model for Automobile Technology Impact Assessment

    DOT National Transportation Integrated Search

    1976-10-01

    A computer-implemented resource accounting model has been developed for assessing resource impacts of future automobile technology options. The resources tracked are materials, energy, capital, and labor. The model has been used in support of the Int...

  15. System Resource Allocations | High-Performance Computing | NREL

    Science.gov Websites

    System Resource Allocations. To use NREL's high-performance computing (HPC) resources, users request allocations of compute hours on NREL HPC systems (including Peregrine and Eagle) and of storage space (in terabytes) on Peregrine, Eagle and Gyrfalcon. Allocations are principally made in response to an annual call for allocations.

  16. Computers as learning resources in the health sciences: impact and issues.

    PubMed Central

    Ellis, L B; Hannigan, G G

    1986-01-01

    Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843

  17. An emulator for minimizing finite element analysis implementation resources

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.

    1982-01-01

    A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.

  18. The research of collapsibility test and FEA of collapse deformation in loess collapsible under overburden pressure

    NASA Astrophysics Data System (ADS)

    yu, Zhang; hui, Li; guibo, Bao; wuyu, Zhang; ningshan, Jiang; xiaoyun, Yang

    2018-05-01

    Collapsibility tests in the field may show large errors relative to computed results [1-4]. The writers compared the single-line and double-line methods and then compared both with the field results. The purpose is to reduce the error between measured and computed values and to propose a way to decrease this error by considering the influence of matric suction on unsaturated soil in the finite element analysis. A field test was completed to verify the reasonability of this method, to obtain some rules governing the development of collapse deformation, and to supply a calculation basis for engineering design and forecasting in emergency situations.

  19. Integer Linear Programming in Computational Biology

    NASA Astrophysics Data System (ADS)

    Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut

    Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
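
    As a concrete illustration of the kind of formulation such ILP approaches use (a minimal sketch under invented data, not taken from the review itself), the toy model below selects a minimum set of hybridization probes covering all target genes, i.e., a set-cover ILP solved with the open-source PuLP modeller.

        # Toy set-cover ILP (illustrative only): choose the fewest probes so that
        # every target gene is covered by at least one selected probe.
        import pulp

        covers = {                      # hypothetical probe -> genes it hybridizes to
            "p1": {"g1", "g2"},
            "p2": {"g2", "g3"},
            "p3": {"g3", "g4"},
            "p4": {"g1", "g4"},
        }
        genes = {g for gs in covers.values() for g in gs}

        prob = pulp.LpProblem("probe_selection", pulp.LpMinimize)
        x = {p: pulp.LpVariable(f"x_{p}", cat="Binary") for p in covers}

        prob += pulp.lpSum(x.values())                     # minimize the number of probes
        for g in genes:                                    # every gene covered at least once
            prob += pulp.lpSum(x[p] for p, gs in covers.items() if g in gs) >= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print(sorted(p for p in covers if x[p].value() == 1))

    Real bioinformatics instances differ only in scale: the same pattern of binary variables, a linear objective and coverage-style constraints reappears in probe design, haplotyping and alignment problems.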

  20. Modeling soil organic matter reallocation in soil enhanced by fungal growth

    NASA Astrophysics Data System (ADS)

    Battaïa, G.; Falconer, R. E.; Otten, W.

    2012-04-01

    Soil, as a huge carbon reservoir with a large interface with the atmosphere, plays a major role in understanding the global carbon cycle. Yet its structure gives rise to an extremely complex ecosystem in which chemical fluxes are difficult to describe. Amongst the microbial organisms that inhabit soil, fungi represent an entire kingdom of life that has developed its own strategy to adapt to its environment. They are known to be of particular importance for the reallocation of carbon (and other elements), as they are able to build a mycelium that can spread over several metres and through which nutrients can be translocated. This simulation-based study is dedicated to elucidating the role of fungal colonization in generating an ecosystem in which dispersed biological hotspots coexist. The simulation environment is reconstructed from thresholded computed tomography images of soil samples. Soil organic matter acting as a resource for fungi is assumed to occur first in a particulate solid state (POM). It is degraded into dissolved organic carbon (DOC) through the enzymatic activity of fungi. Fungal uptake converts DOC into an internal resource that diffuses through the mycelium and supports further colonization. The fungal model is an adaptation of a previously developed model. In addition to the internal resource, it accounts for two states of biomass: non-insulated and insulated. One is converted into the other by insulation, which is the analogue of an ageing process. Once insulated, the interaction rates of the biomass with the environment (degradation and uptake) become slower and the ability to diffuse in the pore space is lost. This aims at producing a more stable state of the mycelium when all resource has been consumed. Spatially explicit simulations reveal a transient state in the POM-fungi interaction characterized by a large spread of DOC in the pore space. It is then followed by enhanced fungal growth toward these areas. Finally, a steady state occurs in which DOC is produced and consumed in the close vicinity of the POM, reducing its availability for other micro-organisms.
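
    To make the POM-to-DOC pathway concrete, the fragment below is a deliberately simplified one-dimensional sketch (not the authors' CT-image-based model): DOC is released where POM and fungal biomass co-occur, diffuses through the pore space, and is taken up again by the biomass. All rates and the geometry are invented.

        # Simplified 1-D sketch of the POM -> DOC -> fungal uptake chain (illustrative only).
        import numpy as np

        n, dt, dx = 100, 0.1, 1.0
        D, k_deg, k_upt = 0.5, 0.02, 0.05       # DOC diffusion, degradation, uptake rates

        pom = np.zeros(n); pom[45:55] = 10.0    # a patch of particulate organic matter
        bio = np.full(n, 0.1)                   # fungal biomass (held static here)
        doc = np.zeros(n)

        for _ in range(500):
            lap = np.roll(doc, 1) - 2 * doc + np.roll(doc, -1)   # periodic Laplacian
            release = k_deg * pom * bio                          # enzymatic degradation of POM
            uptake = k_upt * doc * bio                           # fungal uptake of DOC
            pom -= dt * release
            doc += dt * (D / dx**2 * lap + release - uptake)

        print("POM left:", round(pom.sum(), 2), "DOC in pore space:", round(doc.sum(), 2))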

  1. Developing AN Emergency Response Model for Offshore Oil Spill Disaster Management Using Spatial Decision Support System (sdss)

    NASA Astrophysics Data System (ADS)

    Balogun, Abdul-Lateef; Matori, Abdul-Nasir; Wong Toh Kiak, Kelvin

    2018-04-01

    Environmental resources face severe risks during offshore oil spill disasters, and Geographic Information System (GIS) Environmental Sensitivity Index (ESI) maps are increasingly being used as response tools to minimize the huge impacts of these spills. However, ESI maps are generally unable to independently harmonize the diverse preferences of the multiple stakeholders involved in the response process, causing rancour and delays in response time. The Spatial Decision Support System (SDSS) presented in this paper utilizes the Analytic Hierarchy Process (AHP) model to perform tradeoffs in determining the most significant resources to be secured, considering the limited resources and time available to perform the response operation. The AHP approach is used to aggregate the diverse preferences of the stakeholders and reach a consensus. These preferences, represented as priority weights, are incorporated in a GIS platform to generate Environmental Sensitivity Risk (ESR) maps. The ESR maps provide a common operational platform and consistent situational awareness for the multiple parties involved in the emergency response operation, thereby minimizing discord among the response teams and saving the most valuable resources.
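
    For readers unfamiliar with the AHP step, the sketch below (an illustration with invented numbers, not the authors' implementation) derives priority weights for three hypothetical sensitivity criteria from a pairwise comparison matrix via its principal eigenvector and reports Saaty's consistency ratio.

        # AHP priority-weight sketch: pairwise comparisons -> normalised weights.
        import numpy as np

        # Hypothetical pairwise comparison matrix for three criteria
        # (e.g., shoreline type, biological resources, human-use resources).
        A = np.array([
            [1.0,  3.0,  5.0],
            [1/3., 1.0,  2.0],
            [1/5., 1/2., 1.0],
        ])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)              # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w = w / w.sum()                          # normalised priority weights

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
        cr = ci / 0.58                           # Saaty's random index for n = 3
        print("weights:", w.round(3), "consistency ratio:", round(float(cr), 3))

    The resulting weights would then multiply the corresponding GIS layers in a weighted overlay to produce the sensitivity-risk score for each map cell.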

  2. Management of adult and paediatric acute lymphoblastic leukaemia in Asia: resource-stratified guidelines from the Asian Oncology Summit 2013

    PubMed Central

    Yeoh, Allen EJ; Tan, Daryl; Li, Chi-Kong; Hori, Hiroki; Tse, Eric; Pui, Ching-Hon

    2014-01-01

    The survival rates for both adults and children with acute lymphoblastic leukaemia have improved substantially in recent years with wider use of improved risk-directed therapy and supportive care. In nearly all developed countries, clinical practice guidelines have been formulated by multidisciplinary panels of leukaemia experts, with the goal of providing recommendations on standard treatment approaches based on current evidence. However, those guidelines do not take into account resource limitations in low-income countries, including financial and technical challenges. In Asia, there are huge disparities in economy and infrastructure among countries, and even among different regions of some large countries. This review summarizes the recommendations developed for Asian countries by a panel of adult and paediatric leukaemia therapists, based on the availability of financial, skill and logistical resources, at a consensus session held as part of the 2013 Asian Oncology Summit in Bangkok, Thailand. The management strategies described here are stratified by a four-tier system (basic, limited, enhanced and maximum) based on the resources available to a particular country or region. PMID:24176570

  3. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. We then present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources over the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management (creating, submitting and monitoring jobs), and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for incorporating opportunistic HPC resources.
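
    The listing below sketches how a client might drive such a RESTful HPC gateway. The base URL, endpoint paths and JSON fields are hypothetical placeholders invented for illustration; they do not reproduce the actual SCEAPI interface, which is documented by the service itself.

        # Hypothetical REST workflow against an HPC gateway: authenticate, submit, poll.
        import requests

        BASE = "https://hpc-gateway.example.org/api/v1"      # placeholder base URL

        # 1) authenticate and obtain a token (hypothetical endpoint and fields)
        tok = requests.post(f"{BASE}/auth/tokens",
                            json={"username": "alice", "password": "secret"}).json()["token"]
        hdr = {"Authorization": f"Bearer {tok}"}

        # 2) submit a job description (hypothetical fields)
        job = {"app": "namd", "args": ["run.conf"], "cores": 64, "walltime": "02:00:00"}
        job_id = requests.post(f"{BASE}/jobs", json=job, headers=hdr).json()["id"]

        # 3) poll the job status
        status = requests.get(f"{BASE}/jobs/{job_id}", headers=hdr).json()["status"]
        print(job_id, status)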

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  5. Developments in the ATLAS Tracking Software ahead of LHC Run 2

    NASA Astrophysics Data System (ADS)

    Styles, Nicholas; Bellomo, Massimiliano; Salzburger, Andreas; ATLAS Collaboration

    2015-05-01

    After a hugely successful first run, the Large Hadron Collider (LHC) is currently in a shut-down period, during which essential maintenance and upgrades are being performed on the accelerator. The ATLAS experiment, one of the four large LHC experiments, has also used this period for consolidation and further development of the detector and of its software framework, ahead of the new challenges that will be brought by the increased centre-of-mass energy and instantaneous luminosity in the next run period. This is of particular relevance for the ATLAS Tracking software, responsible for reconstructing the trajectories of charged particles through the detector, which faces a steep increase in CPU consumption due to the additional combinatorics of the high-multiplicity environment. The steps taken to mitigate this increase and stay within the available computing resources while maintaining the excellent performance of the tracking software in terms of the information provided to the physics analyses will be presented. Particular focus will be given to changes to the Event Data Model, replacement of the maths library, and adoption of a new persistent output format. The resulting CPU profiling results will be discussed, as well as the performance of the algorithms for physics processes under the expected conditions for the next LHC run.

  6. A medical cost estimation with fuzzy neural network of acute hepatitis patients in emergency room.

    PubMed

    Kuo, R J; Cheng, W C; Lien, W C; Yang, T J

    2015-10-01

    Taiwan is an area where chronic hepatitis is endemic. Liver cancer is so common that it has ranked first among cancer mortality rates since the early 1980s in Taiwan. In addition, liver cirrhosis and chronic liver diseases rank sixth or seventh among the causes of death. Therefore, as shown by the active research on hepatitis, it is not only a health threat but also a huge medical cost for the government. The estimated total number of hepatitis B carriers in the general population aged over 20 years is 3,067,307. Thus, a case record review was conducted of all patients with a diagnosis of acute hepatitis admitted to the Emergency Department (ED) of a well-known teaching-oriented hospital in Taipei. The cost of medical resource utilization is defined as the total medical fee. In this study, a fuzzy neural network (FNN) is employed to develop the cost forecasting model. A total of 110 patients met the inclusion criteria. The computational results indicate that the FNN model provides more accurate forecasts than support vector regression (SVR) or an artificial neural network (ANN). In addition, unlike SVR and ANN, FNN can also provide fuzzy IF-THEN rules for interpretation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Understanding the spin-driven polarizations in BiMO3 (M = 3d transition metals) multiferroics

    NASA Astrophysics Data System (ADS)

    Kc, Santosh; Lee, Jun Hee; Cooper, Valentino R.

    Bismuth ferrite (BiFeO3), a promising multiferroic, stabilizes in a perovskite-type rhombohedral crystal structure (space group R3c) at room temperature. Recently, it has been reported that in its ground state it possesses a huge spin-driven polarization. To probe the underlying mechanism of this large spin-phonon response, we examine these couplings within other Bi-based 3d transition metal oxides BiMO3 (M = Ti, V, Cr, Mn, Fe, Co, Ni) using density functional theory. Our results demonstrate that this large spin-driven polarization is a consequence of symmetry breaking due to competition between ferroelectric distortions and anti-ferrodistortive octahedral rotations. Furthermore, we find a strong dependence of these enhanced spin-driven polarizations on the crystal structure, with the rhombohedral phase having the largest spin-induced atomic distortions along [111]. These results give us significant insights into the magneto-electric coupling in these materials, which is essential to the magnetic and electric field control of electric polarization and magnetization in multiferroic-based devices. Research is supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and the Office of Science Early Career Research Program (V.R.C) and used computational resources at NERSC.

  8. Design of Low-Cost Impact Reporting System

    DTIC Science & Technology

    2015-12-01

    Single Board Computers (SBC) available. Arduino and Raspberry Pi are very low cost and have huge communities for hardware design. Most of the SBC... Raspberry Pi Model B has a considerably faster processor than the Arduino. Although it provides only approximately 25 General Purpose Input and Output...reporting system must be able to operate on its own power for more than 2 or 3 hours. The Raspberry Pi Model B operates on 5 volts direct current at

  9. Support for Implications of Compressive Sensing Concepts to Imaging Systems

    DTIC Science & Technology

    2015-08-02

    A "pretty picture" is not only not needed, but is not ALLOWED due to privacy concerns. Remember the huge controversy caused by mmW imagers seeing people...in 2003, for experimental studies of quantum degenerate atomic gases. From 2004-2006 he was a postdoctoral researcher in the Electrical and...Computer Engineering at the University of Arizona. He was recently also a program manager at DARPA/DSO where he started programs on quantum information

  10. Recent Advances in Immersive Visualization of Ocean Data: Virtual Reality Through the Web on Your Laptop Computer

    NASA Astrophysics Data System (ADS)

    Hermann, A. J.; Moore, C.; Soreide, N. N.

    2002-12-01

    Ocean circulation is irrefutably three dimensional, and powerful new measurement technologies and numerical models promise to expand our three-dimensional knowledge of the dynamics further each year. Yet, most ocean data and model output is still viewed using two-dimensional maps. Immersive visualization techniques allow the investigator to view their data as a three dimensional world of surfaces and vectors which evolves through time. The experience is not unlike holding a part of the ocean basin in one's hand, turning and examining it from different angles. While immersive, three dimensional visualization has been possible for at least a decade, the technology was until recently inaccessible (both physically and financially) for most researchers. It is not yet fully appreciated by practicing oceanographers how new, inexpensive computing hardware and software (e.g. graphics cards and controllers designed for the huge PC gaming market) can be employed for immersive, three dimensional, color visualization of their increasingly huge datasets and model output. In fact, the latest developments allow immersive visualization through web servers, giving scientists the ability to "fly through" three-dimensional data stored half a world away. Here we explore what additional insight is gained through immersive visualization, describe how scientists of very modest means can easily avail themselves of the latest technology, and demonstrate its implementation on a web server for Pacific Ocean model output.

  11. Function Clustering Self-Organization Maps (FCSOMs) for mining differentially expressed genes in Drosophila and its correlation with the growth medium.

    PubMed

    Liu, L L; Liu, M J; Ma, M

    2015-09-28

    The central task of this study was to mine the gene-to-medium relationship. Adequate knowledge of this relationship could potentially improve the accuracy of differentially expressed gene mining. One of the approaches to differentially expressed gene mining uses conventional clustering algorithms to identify the gene-to-medium relationship. Compared to conventional clustering algorithms, self-organization maps (SOMs) identify the nonlinear aspects of the gene-to-medium relationships by mapping the input space into another higher dimensional feature space. However, SOMs are not suitable for huge datasets consisting of millions of samples. Therefore, a new computational model, the Function Clustering Self-Organization Maps (FCSOMs), was developed. FCSOMs take advantage of the theory of granular computing as well as advanced statistical learning methodologies, and are built specifically for each information granule (a function cluster of genes), which are intelligently partitioned by the clustering algorithm provided by the DAVID_6.7 software platform. However, only the gene functions, and not their expression values, are considered in the fuzzy clustering algorithm of DAVID. Compared to the clustering algorithm of DAVID, these experimental results show a marked improvement in the accuracy of classification with the application of FCSOMs. FCSOMs can handle huge datasets and their complex classification problems, as each FCSOM (modeled for each function cluster) can be easily parallelized.

  12. Software and resources for computational medicinal chemistry

    PubMed Central

    Liao, Chenzhong; Sitzmann, Markus; Pugliese, Angelo; Nicklaus, Marc C

    2011-01-01

    Computer-aided drug design plays a vital role in drug discovery and development and has become an indispensable tool in the pharmaceutical industry. Computational medicinal chemists can take advantage of all kinds of software and resources in the computer-aided drug design field for the purposes of discovering and optimizing biologically active compounds. This article reviews software and other resources related to computer-aided drug design approaches, putting particular emphasis on structure-based drug design, ligand-based drug design, chemical databases and chemoinformatics tools. PMID:21707404

  13. Fire safety distances for open pool fires

    NASA Astrophysics Data System (ADS)

    Sudheer, S.; Kumar, Lokendra; Manjunath, B. S.; Pasi, Amit; Meenakshi, G.; Prabhu, S. V.

    2013-11-01

    Fire accidents, which carry huge losses with them, have increased more in the previous two decades than at any time in history. Hence, there is a need to understand the safety distances from different fires with different fuels. Fire safety distances are computed for different open pool fires. Diesel, gasoline and hexane are used as fuels for circular pool diameters of 0.5 m, 0.7 m and 1.0 m. A large square pool fire of 4 m x 4 m is also conducted with diesel as fuel. All the prescribed distances in this study are based purely on thermal analysis. An IR camera is used to obtain thermal images of the pool fires, and thereby the irradiance at different locations is computed. The computed irradiance is presented together with the threshold heat flux limits for human beings.

  14. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    NASA Astrophysics Data System (ADS)

    Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.

  15. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  16. Quantum digital-to-analog conversion algorithm using decoherence

    NASA Astrophysics Data System (ADS)

    SaiToh, Akira

    2015-08-01

    We consider the problem of mapping digital data encoded on a quantum register to analog amplitudes in parallel. It is shown to be unlikely that a fully unitary polynomial-time quantum algorithm exists for this problem; NP would become a subset of BQP if it existed. From the practical point of view, we propose a nonunitary linear-time algorithm using quantum decoherence. It tacitly uses an exponentially large physical resource, which is typically a huge number of identical molecules. Quantumness of the correlation appearing in the process of the algorithm is also discussed.

  17. New ways of working in theatres. Three session working days.

    PubMed

    Collins, Gill

    2006-01-01

    Innovative ideas will be required to meet government targets for the health service in the future. Increasing the number of hours available to surgical teams from two to three sessions could be one solution. Efficient and effective utilisation of a huge capital resource would appear to be further justification for increased hours, although the Audit Commission suggests that improving existing utilisation rather than extending hours should be the priority. However if operating sessions could be increased from two to three there is a potential to reduce waiting lists.

  18. Computational Physics' Greatest Hits

    NASA Astrophysics Data System (ADS)

    Bug, Amy

    2011-03-01

    The digital computer has worked its way so effectively into our profession that now, roughly 65 years after its invention, it is virtually impossible to find a field of experimental or theoretical physics unaided by computational innovation. It is tough to think of another device about which one can make that claim. In the session "What is computational physics?" speakers will distinguish computation within the field of computational physics from this ubiquitous importance across all subfields of physics. This talk will recap the invited session "Great Advances...Past, Present and Future" in which five dramatic areas of discovery (five of our "greatest hits") are chronicled: the physics of many-boson systems via Path Integral Monte Carlo, the thermodynamic behavior of a huge number of diverse systems via Monte Carlo methods, the discovery of new pharmaceutical agents via molecular dynamics, predictive simulations of global climate change via detailed, cross-disciplinary earth system models, and an understanding of the formation of the first structures in our universe via galaxy formation simulations. The talk will also identify "greatest hits" in our field from the teaching and research perspectives of other members of DCOMP, including its Executive Committee.

  19. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in the grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper systematically investigates the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification of the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work in this environment is completed.
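
    As an illustration of how a client would talk to such a node, the sketch below issues the standard OGC WPS 1.0.0 key-value-pair requests (GetCapabilities and DescribeProcess); the host and the process identifier are placeholders, not part of the prototype described above.

        # Querying a WPS-based spatial computing node with standard KVP requests.
        import requests

        WPS = "http://wps.example.org/wps"                   # hypothetical node endpoint

        # list the processes the node offers
        caps = requests.get(WPS, params={"service": "WPS", "version": "1.0.0",
                                         "request": "GetCapabilities"})

        # describe one process before building an Execute request (identifier is illustrative)
        desc = requests.get(WPS, params={"service": "WPS", "version": "1.0.0",
                                         "request": "DescribeProcess",
                                         "identifier": "gs:BufferFeatureCollection"})
        print(caps.status_code, desc.status_code)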

  20. Economic models for management of resources in peer-to-peer and grid computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development of Peer-to-Peer (P2P) and Grid computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include the commodity market, posted prices, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed, which contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
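
    As a toy illustration of cost-based brokering (a sketch with invented prices and speeds, not the Nimrod/G scheduler itself), the fragment below greedily assigns a bag of independent jobs to the cheapest providers that can still meet a deadline, which is the essence of a cost-optimisation strategy under a deadline constraint.

        # Greedy cost-minimising assignment of N independent jobs under a deadline.
        providers = [                      # (name, price per job, jobs per hour) - invented
            ("siteA", 0.02, 40),
            ("siteB", 0.05, 120),
            ("siteC", 0.10, 300),
        ]
        N, deadline_h = 1000, 2.0

        providers.sort(key=lambda p: p[1])          # cheapest first
        plan, remaining = [], N
        for name, price, rate in providers:
            take = min(remaining, int(rate * deadline_h))   # jobs this site can finish in time
            if take > 0:
                plan.append((name, take, round(take * price, 2)))
                remaining -= take
        print(plan, "unscheduled:", remaining)

    A time-optimisation strategy would instead rank providers by speed and spend more budget to finish earlier; auction or tender models replace the fixed price list with bids collected at run time.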

  1. Tools and data services registry: a community effort to document bioinformatics resources

    PubMed Central

    Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé; Kalaš, Matúš; Rydza, Emil; Chmura, Piotr; Anthon, Christian; Beard, Niall; Berka, Karel; Bolser, Dan; Booth, Tim; Bretaudeau, Anthony; Brezovsky, Jan; Casadio, Rita; Cesareni, Gianni; Coppens, Frederik; Cornell, Michael; Cuccuru, Gianmauro; Davidsen, Kristian; Vedova, Gianluca Della; Dogan, Tunca; Doppelt-Azeroual, Olivia; Emery, Laura; Gasteiger, Elisabeth; Gatter, Thomas; Goldberg, Tatyana; Grosjean, Marie; Grüning, Björn; Helmer-Citterich, Manuela; Ienasescu, Hans; Ioannidis, Vassilios; Jespersen, Martin Closter; Jimenez, Rafael; Juty, Nick; Juvan, Peter; Koch, Maximilian; Laibe, Camille; Li, Jing-Woei; Licata, Luana; Mareuil, Fabien; Mičetić, Ivan; Friborg, Rune Møllegaard; Moretti, Sebastien; Morris, Chris; Möller, Steffen; Nenadic, Aleksandra; Peterson, Hedi; Profiti, Giuseppe; Rice, Peter; Romano, Paolo; Roncaglia, Paola; Saidi, Rabie; Schafferhans, Andrea; Schwämmle, Veit; Smith, Callum; Sperotto, Maria Maddalena; Stockinger, Heinz; Vařeková, Radka Svobodová; Tosatto, Silvio C.E.; de la Torre, Victor; Uva, Paolo; Via, Allegra; Yachdav, Guy; Zambelli, Federico; Vriend, Gert; Rost, Burkhard; Parkinson, Helen; Løngreen, Peter; Brunak, Søren

    2016-01-01

    Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR—the European infrastructure for biological information—that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools. PMID:26538599

  2. A cross-sectional evaluation of computer literacy among medical students at a tertiary care teaching hospital in Mumbai (Bombay).

    PubMed

    Panchabhai, T S; Dangayach, N S; Mehta, V S; Patankar, C V; Rege, N N

    2011-01-01

    Computer usage capabilities of medical students for introduction of computer-aided learning have not been adequately assessed. Cross-sectional study to evaluate computer literacy among medical students. Tertiary care teaching hospital in Mumbai, India. Participants were administered a 52-question questionnaire, designed to study their background, computer resources, computer usage, activities enhancing computer skills, and attitudes toward computer-aided learning (CAL). The data was classified on the basis of sex, native place, and year of medical school, and the computer resources were compared. The computer usage and attitudes toward computer-based learning were assessed on a five-point Likert scale, to calculate Computer usage score (CUS - maximum 55, minimum 11) and Attitude score (AS - maximum 60, minimum 12). The quartile distribution among the groups with respect to the CUS and AS was compared by chi-squared tests. The correlation between CUS and AS was then tested. Eight hundred and seventy-five students agreed to participate in the study and 832 completed the questionnaire. One hundred and twenty eight questionnaires were excluded and 704 were analyzed. Outstation students had significantly lesser computer resources as compared to local students (P<0.0001). The mean CUS for local students (27.0±9.2, Mean±SD) was significantly higher than outstation students (23.2±9.05). No such difference was observed for the AS. The means of CUS and AS did not differ between males and females. The CUS and AS had positive, but weak correlations for all subgroups. The weak correlation between AS and CUS for all students could be explained by the lack of computer resources or inadequate training to use computers for learning. Providing additional resources would benefit the subset of outstation students with lesser computer resources. This weak correlation between the attitudes and practices of all students needs to be investigated. We believe that this gap can be bridged with a structured computer learning program.
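
    The two statistical steps described above can be reproduced on synthetic data as follows (a sketch with randomly generated scores, not the study's data): a chi-squared test on the quartile distribution of the usage score across two groups, and a Pearson correlation between usage and attitude scores.

        # Quartile chi-squared test and score correlation on synthetic survey data.
        import numpy as np
        from scipy.stats import chi2_contingency, pearsonr

        rng = np.random.default_rng(0)
        cus = rng.integers(11, 56, size=200)          # computer usage scores (11-55)
        att = rng.integers(12, 61, size=200)          # attitude scores (12-60)
        group = rng.integers(0, 2, size=200)          # e.g., local vs outstation

        quart = np.digitize(cus, np.quantile(cus, [0.25, 0.5, 0.75]))   # quartile bins 0-3
        table = np.array([[np.sum((group == g) & (quart == q)) for q in range(4)]
                          for g in range(2)])

        chi2, p_chi, dof, _ = chi2_contingency(table)
        r, p_r = pearsonr(cus, att)
        print(f"chi2={chi2:.2f} (p={p_chi:.3f}), Pearson r={r:.2f} (p={p_r:.3f})")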

  3. Studies on marine oil spills and their ecological damage

    NASA Astrophysics Data System (ADS)

    Mei, Hong; Yin, Yanjie

    2009-09-01

    The sources of marine oil spills are mainly accidents of marine oil tankers or freighters, marine oil-drilling platforms, marine oil pipelines, marine oilfields, terrestrial pollution, the oil-bearing atmosphere, and offshore oil production equipment. Analysis suggests two main reasons for marine oil spills: (I) the motive of oil industry owners and oil shipping agents for huge economic benefits far surpasses their sense of ecological risk; (II) marine ecological safety has not become a main concern of national security. Oil spills are disasters because humans spare no effort to obtain economic benefits from oil. The present paper draws another conclusion: marine ecological damage caused by oil spills can be roughly divided into two categories, damage to marine resource value (direct value) and damage to marine ecosystem service value (indirect value). Marine oil spills damage marine biological, fishery, seawater, tourism and mineral resources to various extents, which lowers the quality and value of marine resources.

  4. Impact of remote sensing upon the planning, management, and development of water resources

    NASA Technical Reports Server (NTRS)

    Loats, H. L.; Fowler, T. R.; Frech, S. L.

    1974-01-01

    A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.

  5. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation. It has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of services (QoS) and a fee agreed between a customer and an application service provider. It plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of cluster computing resources used by an application service provider for an e-business application that often requires parallel computation for high service performance, availability, and reliability while satisfying a QoS and a fee negotiated between a customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
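
    A back-of-the-envelope sketch of the provisioning idea (not the paper's method; the arrival rate, service rate and price below are invented) is to find the smallest number of identical servers that satisfies both a 95th-percentile response-time target and a utilization cap, approximating each load-balanced server as an independent M/M/1 queue, whose sojourn time is exponentially distributed with rate (mu - lambda).

        # Smallest server count meeting a 95th-percentile SLA and a utilization cap.
        import math

        lam, mu = 500.0, 30.0                  # total request rate, per-server service rate (req/s)
        sla_t95, util_cap, price = 0.5, 0.7, 0.12   # seconds, max utilization, $/server-hour

        c = 1
        while True:
            lam_i = lam / c                           # load-balanced share per server
            if lam_i < mu:                            # queue must be stable
                util = lam_i / mu
                t95 = -math.log(1 - 0.95) / (mu - lam_i)   # 95th pct of M/M/1 sojourn time
                if util <= util_cap and t95 <= sla_t95:
                    break
            c += 1

        print(f"servers: {c}, utilization: {util:.2f}, t95: {t95:.3f}s, cost/h: {c * price:.2f}")

    The paper's actual formulation balances such QoS constraints against the total resource cost charged under the SLA; the loop above only illustrates the feasibility check at its core.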

  6. Acausal measurement-based quantum computing

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2014-07-01

    In measurement-based quantum computing, there is a natural "causal cone" among qubits of the resource state, since the measurement angle on a qubit has to depend on previous measurement results in order to correct the effect of by-product operators. If we respect the no-signaling principle, by-product operators cannot be avoided. Here we study the possibility of acausal measurement-based quantum computing by using the process matrix framework [Oreshkov, Costa, and Brukner, Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076]. We construct a resource process matrix for acausal measurement-based quantum computing restricting local operations to projective measurements. The resource process matrix is an analog of the resource state of the standard causal measurement-based quantum computing. We find that if we restrict local operations to projective measurements the resource process matrix is (up to a normalization factor and trivial ancilla qubits) equivalent to the decorated graph state created from the graph state of the corresponding causal measurement-based quantum computing. We also show that it is possible to consider a causal game whose causal inequality is violated by acausal measurement-based quantum computing.

  7. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    PubMed Central

    Goto, Hayato

    2014-01-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387

  8. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  9. A Review of Resources for Evaluating K-12 Computer Science Education Programs

    ERIC Educational Resources Information Center

    Randolph, Justus J.; Hartikainen, Elina

    2004-01-01

    Since computer science education is a key to preparing students for a technologically-oriented future, it makes sense to have high quality resources for conducting summative and formative evaluation of those programs. This paper describes the results of a critical analysis of the resources for evaluating K-12 computer science education projects.…

  10. Computing the Envelope for Stepwise Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Estimating tight resource levels is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. The incremental solution of a staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
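
    The flow-network building block can be illustrated with a small example (a sketch with invented capacities, not the paper's staged algorithm): production and consumption events become nodes attached to a source and a sink, predecessor links become infinite-capacity edges, and a maximum flow is computed with networkx.

        # Max-flow step on a toy event network (illustrative capacities only).
        import networkx as nx

        G = nx.DiGraph()
        # hypothetical events: e1, e2 produce the resource; e3, e4 consume it
        G.add_edge("s", "e1", capacity=3)               # production amounts
        G.add_edge("s", "e2", capacity=2)
        G.add_edge("e3", "t", capacity=4)               # consumption amounts
        G.add_edge("e4", "t", capacity=1)
        G.add_edge("e1", "e3", capacity=float("inf"))   # predecessor constraints
        G.add_edge("e2", "e4", capacity=float("inf"))

        flow_value, flow = nx.maximum_flow(G, "s", "t")
        print("max flow:", flow_value)

    The envelope algorithm repeats such computations in stages as events are moved across the time horizon, reusing the previous flow rather than solving each instance from scratch.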

  11. mORCA: ubiquitous access to life science web services.

    PubMed

    Diaz-Del-Pino, Sergio; Trelles, Oswaldo; Falgueras, Juan

    2018-01-16

    Technical advances in mobile devices such as smartphones and tablets have produced an extraordinary increase in their use around the world, and they have become part of our daily lives. The possibility of carrying these devices in a pocket, particularly mobile phones, has enabled ubiquitous access to Internet resources. Furthermore, in the life sciences there has been a vast proliferation of data types and services exposed as Web Services. This suggests the need for research into mobile clients to deal with life sciences applications for effective usage and exploitation. Analysing the current features of existing bioinformatics applications managing Web Services, we have devised, implemented, and deployed an easy-to-use web-based lightweight mobile client. This client is able to browse, select, compose parameters for, invoke, and monitor the execution of Web Services stored in catalogues or central repositories. The client is also able to handle huge amounts of data across external storage mounts. In addition, we present a validation use case, which illustrates the usage of the application while executing, monitoring, and exploring the results of a registered workflow. The software is available in the Apple Store and Android Market and the source code is publicly available on GitHub. Mobile devices are becoming increasingly important in the scientific world due to their strong potential impact on scientific applications. Bioinformatics should not fall behind this trend. We present an original software client that deals with the intrinsic limitations of such devices and propose different guidelines to provide location-independent access to computational resources in bioinformatics and biomedicine. Its modular design makes it easily expandable with the inclusion of new repositories, tools, types of visualization, etc.

  12. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  13. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  14. COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...

    EPA Pesticide Factsheets

    This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).

  15. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources has increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  16. Use of the Internet by burns patients, their families and friends.

    PubMed

    Rea, S; Lim, J; Falder, S; Wood, F

    2008-05-01

    The Internet has become an increasingly important source of health-related information. However, with this exponential increase comes the problem that, although the volume of information is huge, its quality, accuracy and completeness are questionable, and not only in the field of medicine. Previous studies of single medical conditions have suggested that web-based health information has limitations. The aim of this study was to evaluate Internet usage among burned patients and the people accompanying them to the outpatient clinic. A customised questionnaire was created and distributed to all patients and accompanying persons in the adult and paediatric burns clinics. This investigated computer usage, Internet access, usefulness of the Internet search and topics searched. Two hundred and ten people completed the questionnaire, a response rate of 83%. Sixty-three percent of responders were patients, 21.9% were parents, 3.3% were spouses, and siblings, children and friends made up the remaining 10.8%. Seventy-seven percent of attendees had been injured within the last year, 11% between 1 and 5 years previously, and 12% more than 5 years previously. Seventy-four percent had computer and Internet access. Twelve percent had performed a search. Topics searched included skin grafts, scarring and scar management treatments such as pressure garments, silicone gel and massage. This study has shown that computer and Internet access is high; however, a very small number actually used the Internet to access further medical information. Patients with longer-standing injuries were more likely to access the Internet. Parents of burned children were more frequent Internet users. As more burn units develop their own web sites with information for patients and healthcare providers, it is important to inform patients, family members and friends that such a resource exists. By offering such a service, patients are provided with accurate, reliable and easily accessible information which is appropriate to their needs.

  17. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named as Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing as well as a method for equivalent transformation from a global geospatial query to distributed local queries at SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources are presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, thus to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
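
    The equivalent transformation from a global query to local SQL queries is not spelled out in the abstract; the following is only a hedged sketch of the general idea, splitting a global bounding-box query across peers whose data extents are known and returning one local SQL text per relevant peer. The peer catalogue, table and column names are hypothetical.

    ```python
    # Illustrative decomposition of a global bounding-box query into per-peer SQL
    # queries; peer names, extents, table and column names are all hypothetical.
    from typing import Dict, List, Tuple

    Box = Tuple[float, float, float, float]                  # (minx, miny, maxx, maxy)

    PEERS: List[Dict] = [
        {"name": "peer_a", "extent": (0.0, 0.0, 50.0, 50.0)},
        {"name": "peer_b", "extent": (50.0, 0.0, 100.0, 50.0)},
    ]

    def overlaps(a: Box, b: Box) -> bool:
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    def local_queries(global_bbox: Box) -> Dict[str, str]:
        """One local SQL text per peer whose data extent intersects the global query."""
        sql = ("SELECT id, geom FROM roads "
               "WHERE minx < {2} AND maxx > {0} AND miny < {3} AND maxy > {1}")
        return {p["name"]: sql.format(*global_bbox)
                for p in PEERS if overlaps(p["extent"], global_bbox)}

    print(local_queries((40.0, 10.0, 60.0, 30.0)))           # both peers intersect this box
    ```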

  18. Inertial Manifolds for Navier-Stokes Equations and Related Dynamical Systems

    DTIC Science & Technology

    1991-05-31

    Graphics IRIS (SGI). The RLE files for the animation are loaded to an Abekas and recorded to tape by Betacam. This computational work was done by using the ... scripts and comments, are loaded to the Abekas-A60 digital image storage device, and then recorded to the Betacam BVW-75 analog tape recorder. Static ... interfacing, huge data files are output to the Data Vault parallelly with little cost. In addition to the SGIs, Abekas, Betacam and Solitaire, the ...

  19. Visual-area coding technique (VACT): optical parallel implementation of fuzzy logic and its visualization with the digital-halftoning process

    NASA Astrophysics Data System (ADS)

    Konishi, Tsuyoshi; Tanida, Jun; Ichioka, Yoshiki

    1995-06-01

    A novel technique, the visual-area coding technique (VACT), for the optical implementation of fuzzy logic with the capability of visualization of the results is presented. This technique is based on the microfont method and is considered to be an instance of digitized analog optical computing. Huge amounts of data can be processed in fuzzy logic with the VACT. In addition, real-time visualization of the processed result can be accomplished.

  20. Intracerebral venous thrombosis and hematoma secondary to high-voltage brain injury.

    PubMed

    Sure, U; Kleihues, P

    1997-06-01

    We report the case of a 19-year-old male who sustained an electrodynamic (16.67 Hz) high-voltage (15,000 V) railway overhead cable injury. He lost consciousness 30 minutes after contact and died of brainstem herniation secondary to intracerebral swelling within 8 days. Repeated cranial computed tomography revealed a huge hemispheric mass hemorrhage accompanied by subarachnoid hemorrhage. Additionally, necropsy showed an extensive thrombosis of the adjacent cerebral veins. The pathophysiological mechanism of this unusual injury is discussed.

  1. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  2. Roadblocks Hampering the Professional Development of Geoscientists in AFRICA.- a Case Study from the Ghanaian Perspective

    NASA Astrophysics Data System (ADS)

    Kabore, A.

    2010-12-01

    Ghana, like many African countries, is in a strategic position to promote the development of early-career geoscientists because of its huge potential in terms of geological resources, a huge number of interested students, and a number of institutions for training geoscientists. Ghana is often described as the gateway to Africa. As a result, situations that hamper the development of early-career geoscientists in Ghana are likely to be replicated in many African countries. Over the last few decades, several institutions have been created to develop technical-geoscientific expertise, and to deal with the disparity that exists between the amount of work that needs to be done in the geosciences and the small number of geoscientists working in the profession. There are more than four universities in Ghana that offer the study of geosciences. Available statistics indicate that the number of students enrolled in these institutions has seen a distinct increase over the last few decades. However, a significant percentage of the graduates from these institutions do not work in their core profession or even in closely-aligned disciplines. Unfortunately, the problem of a small national geosciences workforce is more pronounced today than it was over the last few decades. This problem is not a result of a lack of trained geoscientists, but rather a combination of several factors, which may be socio-economic or cultural, or related to passion, lack of mentorship, etc. This presentation will focus on the broad challenges and institutional difficulties that geosciences graduates and early-career professionals face in Ghana and in Africa. Several recommendations will be proposed to address these problems and foster the establishment of professional development resources to boost the flow of geosciences graduates into the profession. These proposed resources will enable graduates to develop not only the skills and experience needed in the profession, but also the passion to become future leaders within the geosciences community.

  3. 30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...

  4. 30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...

  5. 30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...

  6. [Water problems in the Eastern Mediterranean Region].

    PubMed

    Zeribl, T

    2005-01-01

    The Eastern Mediterranean Region of the World Health Organization is confronted with formidable water problems due to: increased water demand both for consumption and for irrigation in agriculture, which is becoming more productive and more polluting; scarce water resources; drought, erosion and pollution; inappropriate management; inadequate policies; and institutional and legal considerations. Added to these problems are the risks of regional conflicts because of the lack of "shared" management of cross-border waters, which are an object of contention between neighbouring countries. This report analyses the issues relating to water availability, health and development on the basis of the distribution of water resources and their use by industry, and the huge proportion devoted to agricultural use. It raises the question whether countries in the Region are ready to review their strategies on water priorities, particularly in the areas of health, agriculture and food self-sufficiency.

  7. UnCover on the Web: search hints and applications in library environments.

    PubMed

    Galpern, N F; Albert, K M

    1997-01-01

    Among the huge maze of resources available on the Internet, UnCoverWeb stands out as a valuable tool for medical libraries. This up-to-date, free-access, multidisciplinary database of periodical references is searched through an easy-to-learn graphical user interface that is a welcome improvement over the telnet version. This article reviews the basic and advanced search techniques for UnCoverWeb, as well as providing information on the document delivery functions and table of contents alerting service called Reveal. UnCover's currency is evaluated and compared with other current awareness resources. System deficiencies are discussed, with the conclusion that although UnCoverWeb lacks the sophisticated features of many commercial database search services, it is nonetheless a useful addition to the repertoire of information sources available in a library.

  8. Challenges in Hospital-Associated Infection Management: A Unit Perspective.

    PubMed

    Stacy, Kathleen M

    2015-01-01

    Maintaining a successful unit-based continuous quality improvement program for managing hospital-associated infections is a huge challenge and an overwhelming task. It requires strong organizational support and unit leadership, human and fiscal resources, time, and a dedicated and motivated nursing staff. A great deal of effort goes into implementing, monitoring, reporting, and evaluating quality improvement initiatives and can lead to significant frustration on the part of the leadership team and nursing staff when quality improvement efforts fail to produce the desired results. Each initiative presents its own unique set of challenges; however, common issues influence all initiatives. These common issues include organization and unit culture, current clinical practice guidelines being used to drive the initiatives, performance discrepancies on the part of nursing staff, availability of resources including equipment and supplies, monitoring of the data, and conflicting quality improvement priorities.

  9. Spatial big data for disaster management

    NASA Astrophysics Data System (ADS)

    Shalini, R.; Jayapratha, K.; Ayeshabanu, S.; Chemmalar Selvi, G.

    2017-11-01

    Big data is a term for data sets so large and complex that traditional data processing application programs are inadequate to deal with them. Big data is now a widely known domain used in research, academia, and industry. It is used to store large amounts of information in a single centralized place. Challenges include capture, allocation, analysis, data accuracy, visualization, distribution, interchange, delegation, querying, updating and information protection. In this digital world, storing data and retrieving information is an enormous task for large organizations, and data can sometimes be lost because of distributed data storage. To address this issue, organizations have decided to implement big data so that all of the data related to the organization is stored in one enormous database, which is known as big data. Remote sensing is the science of acquiring information to detect objects or analyse an area from a distance; with sensors it becomes easy to locate objects. It produces geographic information from satellite and sensor data, so this paper analyses which architectures are used for remote sensing in big data, how these architectures differ from each other, and how they relate to our studies. The paper describes how disasters occur and computes results from a data set: a seismic data set is used to calculate earthquake disasters on the basis of classification and clustering strategies. The classical data mining algorithms used for classification are k-nearest neighbour, naive Bayes and decision table, and those used for clustering are hierarchical, make-density-based and simple k-means, using the XLMINER and WEKA tools. The paper also helps to predict from the spatial dataset by applying the XLMINER and WEKA tools, showing that big spatial data is well suited to this approach.
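
    As a rough, hedged illustration of the kind of analysis described (the paper itself used WEKA and XLMINER on a real seismic data set), the sketch below applies scikit-learn counterparts of k-nearest neighbours, naive Bayes and k-means to a synthetic stand-in for an earthquake feature table.

    ```python
    # Classify and cluster a synthetic seismic feature table with scikit-learn
    # stand-ins for the methods named in the abstract (illustration only).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                  # stand-ins for e.g. depth, magnitude, distance
    y = (X[:, 1] > 0).astype(int)                  # hypothetical "damaging event" label

    for clf in (KNeighborsClassifier(n_neighbors=5), GaussianNB()):
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(type(clf).__name__, "cross-validated accuracy:", round(acc, 3))

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("cluster sizes:", np.bincount(labels))
    ```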

  10. Tools and Techniques for Measuring and Improving Grid Performance

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Frumkin, M.; Smith, W.; VanderWijngaart, R.; Wong, P.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on NASA's geographically dispersed computing resources, and the various methods by which the disparate technologies are integrated within a nationwide computational grid. Many large-scale science and engineering projects are accomplished through the interaction of people, heterogeneous computing resources, information systems and instruments at different locations. The overall goal is to facilitate the routine interactions of these resources to reduce the time spent in design cycles, particularly for NASA's mission critical projects. The IPG (Information Power Grid) seeks to implement NASA's diverse computing resources in a fashion similar to the way in which electric power is made available.

  11. SaaS enabled admission control for MCMC simulation in cloud computing infrastructures

    NASA Astrophysics Data System (ADS)

    Vázquez-Poletti, J. L.; Moreno-Vozmediano, R.; Han, R.; Wang, W.; Llorente, I. M.

    2017-02-01

    Markov Chain Monte Carlo (MCMC) methods are widely used in the field of simulation and modelling of materials, producing applications that require a great amount of computational resources. Cloud computing represents a seamless source for these resources in the form of HPC. However, resource over-consumption can be an important drawback, especially if the cloud provisioning process is not appropriately optimized. In the present contribution we propose a two-level solution that, on the one hand, takes advantage of approximate computing to reduce the resource demand and, on the other, uses admission control policies to guarantee optimal provisioning for running applications.
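
    The authors' admission control policy is not described in the abstract; the sketch below is only a minimal illustration of the general idea, admitting a new MCMC job when its estimated core-second demand, scaled by an assumed approximate-computing factor, still fits the remaining quota. All numbers and field names are hypothetical.

    ```python
    # Toy admission controller for MCMC jobs in the spirit described (not the authors'
    # policy): demand = chains * steps * seconds-per-step, reduced by an assumed
    # approximate-computing factor, checked against a remaining core-second quota.
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        chains: int
        steps: int
        secs_per_step: float

    class AdmissionController:
        def __init__(self, core_seconds_quota: float, approx_factor: float = 0.6):
            self.remaining = core_seconds_quota
            self.approx_factor = approx_factor     # fraction of work left after approximation

        def demand(self, job: Job) -> float:
            return job.chains * job.steps * job.secs_per_step * self.approx_factor

        def admit(self, job: Job) -> bool:
            need = self.demand(job)
            if need <= self.remaining:
                self.remaining -= need             # reserve the resources for this job
                return True
            return False

    ctrl = AdmissionController(core_seconds_quota=3600 * 100)
    print(ctrl.admit(Job("grain-growth", chains=8, steps=500_000, secs_per_step=0.02)))
    ```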

  12. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  13. Contributions of computational chemistry and biophysical techniques to fragment-based drug discovery.

    PubMed

    Gozalbes, Rafael; Carbajo, Rodrigo J; Pineda-Lucena, Antonio

    2010-01-01

    In the last decade, fragment-based drug discovery (FBDD) has evolved from a novel approach in the search of new hits to a valuable alternative to the high-throughput screening (HTS) campaigns of many pharmaceutical companies. The increasing relevance of FBDD in the drug discovery universe has been concomitant with an implementation of the biophysical techniques used for the detection of weak inhibitors, e.g. NMR, X-ray crystallography or surface plasmon resonance (SPR). At the same time, computational approaches have also been progressively incorporated into the FBDD process and nowadays several computational tools are available. These stretch from the filtering of huge chemical databases in order to build fragment-focused libraries comprising compounds with adequate physicochemical properties, to more evolved models based on different in silico methods such as docking, pharmacophore modelling, QSAR and virtual screening. In this paper we will review the parallel evolution and complementarities of biophysical techniques and computational methods, providing some representative examples of drug discovery success stories by using FBDD.

  14. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  15. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    USGS Publications Warehouse

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as those performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously. The new approaches and expanded use of computers will require substantial increases in the quantity and sophistication of the Division's computer resources. The requirements presented in this report will be used to develop technical specifications that describe the computer resources needed during the 1990s. (USGS)

  16. Setting Up a Grid-CERT: Experiences of an Academic CSIRT

    ERIC Educational Resources Information Center

    Moller, Klaus

    2007-01-01

    Purpose: Grid computing has often been heralded as the next logical step after the worldwide web. Users of grids can access dynamic resources such as computer storage and use the computing resources of computers under the umbrella of a virtual organisation. Although grid computing is often compared to the worldwide web, it is vastly more complex…

  17. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    PubMed Central

    Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends PC transparent computing to mobile terminals. Since the resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a management layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the management layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experimental results show that the strategy is effective and stable. PMID:24883353

  18. A novel resource management method of providing operating system as a service for mobile transparent computing.

    PubMed

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends PC transparent computing to mobile terminals. Since the resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a management layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the management layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experimental results show that the strategy is effective and stable.

  19. Networking Micro-Processors for Effective Computer Utilization in Nursing

    PubMed Central

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes one process of networking complementary resources at three institutions, Prairie View A&M University, Texas A&M University and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in the development of computer resources.

  20. Combination of visual and symbolic knowledge: A survey in anatomy.

    PubMed

    Banerjee, Imon; Patané, Giuseppe; Spagnuolo, Michela

    2017-01-01

    In medicine, anatomy is considered as the most discussed field and results in a huge amount of knowledge, which is heterogeneous and covers aspects that are mostly independent in nature. Visual and symbolic modalities are mainly adopted for exemplifying knowledge about human anatomy and are crucial for the evolution of computational anatomy. In particular, a tight integration of visual and symbolic modalities is beneficial to support knowledge-driven methods for biomedical investigation. In this paper, we review previous work on the presentation and sharing of anatomical knowledge, and the development of advanced methods for computational anatomy, also focusing on the key research challenges for harmonizing symbolic knowledge and spatial 3D data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Local Alignment Tool Based on Hadoop Framework and GPU Architecture

    PubMed Central

    Hung, Che-Lun; Hua, Guan-Jie

    2014-01-01

    With the rapid growth of next generation sequencing technologies, such as Slex, more and more data have been discovered and published. To analyze such huge data sets, computational performance is an important issue. Recently, many tools, such as SOAP, have been implemented on Hadoop and GPU parallel computing architectures. BLASTP is an important tool, implemented on GPU architectures, for biologists to compare protein sequences. To deal with big biology data, it is hard to rely on a single GPU. Therefore, we implement a distributed BLASTP by combining Hadoop and multiple GPUs. The experimental results show that the proposed method can improve the performance of BLASTP over a single GPU, and that it also achieves high availability and fault tolerance. PMID:24955362

  2. Local alignment tool based on Hadoop framework and GPU architecture.

    PubMed

    Hung, Che-Lun; Hua, Guan-Jie

    2014-01-01

    With the rapid growth of next generation sequencing technologies, such as Slex, more and more data have been discovered and published. To analyze such huge data sets, computational performance is an important issue. Recently, many tools, such as SOAP, have been implemented on Hadoop and GPU parallel computing architectures. BLASTP is an important tool, implemented on GPU architectures, for biologists to compare protein sequences. To deal with big biology data, it is hard to rely on a single GPU. Therefore, we implement a distributed BLASTP by combining Hadoop and multiple GPUs. The experimental results show that the proposed method can improve the performance of BLASTP over a single GPU, and that it also achieves high availability and fault tolerance.

  3. A proposed framework on hybrid feature selection techniques for handling high dimensional educational data

    NASA Astrophysics Data System (ADS)

    Shahiri, Amirah Mohamed; Husain, Wahidah; Rashid, Nur'Aini Abd

    2017-10-01

    Huge amounts of data in educational datasets may cause problems in producing quality data. Recently, data mining approaches have increasingly been used by educational data mining researchers for analyzing data patterns. However, many research studies have concentrated on selecting suitable learning algorithms instead of performing a feature selection process. As a result, these data suffer from computational complexity and require longer computation time for classification. The main objective of this research is to provide an overview of the feature selection techniques that have been used to analyze the most significant features. Then, this research proposes a framework to improve the quality of students' datasets. The proposed framework uses filter- and wrapper-based techniques to support the prediction process in future studies.
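
    As a hedged sketch of the filter-then-wrapper idea the framework proposes (the concrete techniques and educational datasets are not given in the abstract), the example below chains a cheap univariate filter with a recursive-feature-elimination wrapper in scikit-learn on synthetic data.

    ```python
    # Hybrid feature selection sketch: a univariate filter prunes the feature space,
    # then a wrapper (RFE around a classifier) searches the survivors (illustration only).
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE, SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    X, y = make_classification(n_samples=300, n_features=60, n_informative=8, random_state=1)

    pipe = Pipeline([
        ("filter", SelectKBest(score_func=f_classif, k=20)),                          # stage 1: filter
        ("wrapper", RFE(LogisticRegression(max_iter=1000), n_features_to_select=8)),  # stage 2: wrapper
        ("model", LogisticRegression(max_iter=1000)),
    ])
    pipe.fit(X, y)
    print("training accuracy with 8 selected features:", round(pipe.score(X, y), 3))
    ```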

  4. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    NASA Astrophysics Data System (ADS)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is twofold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16-week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise, students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent t-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in the experimental group, who responded to the use of Internet resources survey, were positive (mean of 3.4 on the 4-point scale) toward their use of Internet resources, which included the online courseware developed by the researcher. Findings from this study suggest that (1) the digital divide with respect to gender and ethnicity may be narrowing, and (2) students who are exposed to a course that augments computer-driven courseware with traditional teaching methods appear to have less anxiety, have a clearer perception of computer usefulness, and feel that online resources enhance their learning.

  5. Robot Design

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Martin Marietta Aero and Naval Systems has advanced the CAD art to a very high level at its Robotics Laboratory. One of the company's major projects is construction of a huge Field Material Handling Robot (FMR) for the Army's Human Engineering Lab. The design of the FMR, intended to move heavy and dangerous material such as ammunition, was a triumph of CAD engineering. Separate computer programs modeled the robot's kinematics and dynamics, yielding such parameters as the strength of materials required for each component, the length of the arms, their degrees of freedom, and the power of the hydraulic system needed. The Robotics Lab went a step further and added data enabling computer simulation and animation of the robot's total operational capability under various loading and unloading conditions. The NASA computer program IAC (Integrated Analysis Capability Engineering Database) was used. The program contains a series of modules that can stand alone or be integrated with data from sensors or software tools.

  6. Fast computation of radiation pressure force exerted by multiple laser beams on red blood cell-like particles

    NASA Astrophysics Data System (ADS)

    Gou, Ming-Jiang; Yang, Ming-Lin; Sheng, Xin-Qing

    2016-10-01

    Mature red blood cells (RBCs) do not contain huge, complex nuclei or organelles, which means they can be approximately regarded as homogeneous medium particles. To compute the radiation pressure force (RPF) exerted by multiple laser beams on this kind of arbitrarily shaped homogeneous nano-particle, a fast electromagnetic optics method is demonstrated. In general, based on Maxwell's equations, the matrix equation formed by the method of moments (MoM) has many right-hand sides (RHSs) corresponding to the different laser beams. In order to accelerate the solution of the matrix equation, the algorithm performs a low-rank decomposition of the excitation matrix consisting of all RHSs to identify so-called skeleton laser beams by interpolative decomposition (ID). After the solutions corresponding to the skeletons are obtained, the desired responses can be reconstructed efficiently. Numerical results are presented to validate the developed method.
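
    The authors' MoM implementation is not reproduced here; the sketch below only illustrates the skeleton-beam idea under stated assumptions: the columns of a synthetic excitation matrix stand in for the right-hand sides of many beams, scipy's interpolative decomposition selects skeleton columns, the system is solved only for those, and every beam's response is then reconstructed from the interpolation matrix.

    ```python
    # Skeleton right-hand sides via interpolative decomposition (illustration only;
    # the dense matrices here are synthetic stand-ins, not an actual MoM system).
    import numpy as np
    import scipy.linalg.interpolative as sli

    rng = np.random.default_rng(0)
    n, n_beams, true_rank = 400, 64, 6
    Z = rng.normal(size=(n, n)) + np.eye(n) * n          # stand-in for the MoM matrix
    E = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, n_beams))  # all RHSs

    k, idx, proj = sli.interp_decomp(E, 1e-8)            # pick skeleton columns of E
    P = sli.reconstruct_interp_matrix(idx, proj)         # E ≈ E[:, idx[:k]] @ P
    X_skel = np.linalg.solve(Z, E[:, idx[:k]])           # solve only k systems
    X_all = X_skel @ P                                   # responses for every beam

    print("skeleton beams:", k,
          "max residual:", np.abs(Z @ X_all - E).max())
    ```

    The final residual check confirms that reconstructing from the skeleton solutions reproduces the responses of all beams to within the chosen ID tolerance.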

  7. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation.

    PubMed

    Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  8. CANFAR + Skytree: Mining Massive Datasets as an Essential Part of the Future of Astronomy

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.

    2013-01-01

    The future study of large astronomical datasets, consisting of hundreds of millions to billions of objects, will be dominated by large computing resources, and by analysis tools of the necessary scalability and sophistication to extract useful information. Significant effort will be required to fulfil their potential as a provider of the next generation of science results. To date, computing systems have allowed either sophisticated analysis of small datasets, e.g., most astronomy software, or simple analysis of large datasets, e.g., database queries. At the Canadian Astronomy Data Centre, we have combined our cloud computing system, the Canadian Advanced Network for Astronomical Research (CANFAR), with the world's most advanced machine learning software, Skytree, to create the world's first cloud computing system for data mining in astronomy. This allows the full sophistication of the huge fields of data mining and machine learning to be applied to the hundreds of millions of objects that make up current large datasets. CANFAR works by utilizing virtual machines, which appear to the user as equivalent to a desktop. Each machine is replicated as desired to perform large-scale parallel processing. Such an arrangement carries far more flexibility than other cloud systems, because it enables the user to immediately install and run the same code that they already utilize for science on their desktop. We demonstrate the utility of the CANFAR + Skytree system by showing science results obtained, including assigning photometric redshifts with full probability density functions (PDFs) to a catalog of approximately 133 million galaxies from the MegaPipe reductions of the Canada-France-Hawaii Telescope Legacy Wide and Deep surveys. Each PDF is produced nonparametrically from 100 instances of the photometric parameters for each galaxy, generated by perturbing within the errors on the measurements. Hence, we produce, store, and assign redshifts to a catalog of over 13 billion object instances. This catalog is comparable in size to those expected from next-generation surveys, such as the Large Synoptic Survey Telescope. The CANFAR+Skytree system is open for use by any interested member of the astronomical community.
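
    A minimal sketch of the PDF-generation step described, under the assumptions that a trained photometric-redshift regressor is available and that magnitude errors are Gaussian: each galaxy's photometry is perturbed within its errors a fixed number of times and the model is applied to every instance. The toy model and numbers below are placeholders, not the MegaPipe catalogue.

    ```python
    # Nonparametric redshift PDF from perturbed photometry (illustration only).
    import numpy as np

    def redshift_pdf(mags, mag_errs, model, n_instances=100, bins=np.linspace(0.0, 2.0, 41)):
        """Normalised redshift histogram for one galaxy, built from perturbed photometry."""
        rng = np.random.default_rng()
        perturbed = rng.normal(loc=mags, scale=mag_errs, size=(n_instances, len(mags)))
        zs = model.predict(perturbed)                  # one redshift per perturbed instance
        hist, _ = np.histogram(zs, bins=bins, density=True)
        return hist

    class ToyModel:                                    # placeholder for a trained regressor
        def predict(self, X):
            return 0.4 + 0.05 * (X[:, 0] - 22.0)       # hypothetical magnitude-redshift relation

    pdf = redshift_pdf(np.array([22.1, 21.7, 21.3]), np.array([0.05, 0.04, 0.03]), ToyModel())
    print("PDF integrates to ~", round(pdf.sum() * 0.05, 2))
    ```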

  9. Seismic probabilistic tsunami hazard: from regional to local analysis and use of geological and historical observations

    NASA Astrophysics Data System (ADS)

    Tonini, R.; Lorito, S.; Orefice, S.; Graziani, L.; Brizuela, B.; Smedile, A.; Volpe, M.; Romano, F.; De Martini, P. M.; Maramai, A.; Selva, J.; Piatanesi, A.; Pantosti, D.

    2016-12-01

    Site-specific probabilistic tsunami hazard analyses demand very high computational effort, which is often reduced by introducing approximations on the tsunami sources and/or the tsunami modeling. On one hand, the large variability of source parameters implies the definition of a huge number of potential tsunami scenarios, whose omission could easily lead to important bias in the analysis. On the other hand, detailed inundation maps computed by tsunami numerical simulations require very long running times. When tsunami effects are calculated at regional scale, a common practice is to propagate tsunami waves in deep water (up to 50-100 m depth), neglecting non-linear effects and using coarse bathymetric meshes. Then, maximum wave heights on the coast are empirically extrapolated, saving a significant amount of computational time. However, moving to local scale, such assumptions drop out and tsunami modeling would require much greater computational resources. In this work, we perform a local Seismic Probabilistic Tsunami Hazard Analysis (SPTHA) for the 50 km long coastal segment between Augusta and Siracusa, a touristic and commercial area along the south-eastern coast of Sicily, Italy. The procedure consists in using the outcomes of a regional SPTHA as input for a two-step filtering method to select and substantially reduce the number of scenarios contributing to the specific target area. These selected scenarios are modeled using high-resolution topo-bathymetry to produce detailed inundation maps. Results are presented as probabilistic hazard curves and maps, with the goal of analyzing, comparing and highlighting the different results provided by regional and local hazard assessments. Moreover, the analysis is enriched by the use of local observed tsunami data, both geological and historical. Indeed, the tsunami data-sets available for the selected target areas are particularly rich compared with the scarce and heterogeneous data-sets usually available elsewhere. Therefore, they can represent valuable benchmarks for testing and strengthening the results of this kind of study. The work is funded by the Italian Flagship Project RITMARE, the two EC FP7 projects ASTARTE (Grant agreement 603839) and STREST (Grant agreement 603389), and the INGV-DPC Agreement.

  10. Nine steps to risk-informed wellhead protection and management: Methods and application to the Burgberg Catchment

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Enzenhoefer, R.; Bunk, T.

    2013-12-01

    Wellhead protection zones are commonly delineated via advective travel time analysis without considering any aspects of model uncertainty. In the past decade, research efforts produced quantifiable risk-based safety margins for protection zones. They are based on well vulnerability criteria (e.g., travel times, exposure times, peak concentrations) cast into a probabilistic setting, i.e., they consider model and parameter uncertainty. Practitioners still refrain from applying these new techniques for mainly three reasons. (1) They fear the possibly cost-intensive additional areal demand of probabilistic safety margins, (2) probabilistic approaches are allegedly complex, not readily available, and consume huge computing resources, and (3) uncertainty bounds are fuzzy, whereas final decisions are binary. The primary goal of this study is to show that these reservations are unjustified. We present a straightforward and computationally affordable framework based on a novel combination of well-known tools (e.g., MODFLOW, PEST, Monte Carlo). This framework provides risk-informed decision support for robust and transparent wellhead delineation under uncertainty. Thus, probabilistic risk-informed wellhead protection is possible with methods readily available for practitioners. As vivid proof of concept, we illustrate our key points on a pumped karstic well catchment, located in Germany. In the case study, we show that reliability levels can be increased by re-allocating the existing delineated area at no increase in delineated area. This is achieved by simply swapping delineated low-risk areas against previously non-delineated high-risk areas. Also, we show that further improvements may often be available at only low additional delineation area. Depending on the context, increases or reductions of delineated area directly translate to costs and benefits, if the land is priced, or if land owners need to be compensated for land use restrictions.
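
    The MODFLOW/PEST/Monte Carlo workflow itself is not shown in the abstract; the sketch below only illustrates the risk-informed delineation step on synthetic numbers: an ensemble of simulated travel-time grids yields a per-cell capture probability, and the highest-probability cells are delineated first under a fixed area budget, in the spirit of the reallocation argument.

    ```python
    # Risk-informed delineation from a Monte Carlo ensemble (synthetic illustration).
    import numpy as np

    rng = np.random.default_rng(3)
    n_real, ny, nx = 200, 50, 50
    travel_time = rng.lognormal(mean=4.0, sigma=0.6, size=(n_real, ny, nx))  # days, synthetic

    threshold_days = 50.0
    p_capture = (travel_time <= threshold_days).mean(axis=0)   # per-cell capture probability

    area_budget = 300                                           # cells we are allowed to protect
    order = np.argsort(p_capture, axis=None)[::-1]              # highest-risk cells first
    protected = np.zeros(ny * nx, dtype=bool)
    protected[order[:area_budget]] = True
    protected = protected.reshape(ny, nx)

    coverage = p_capture[protected].sum() / p_capture.sum()     # crude share of risk covered
    print("share of capture risk inside the protected area:", round(coverage, 3))
    ```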

  11. Probabilistic inversion of AVO seismic data for reservoir properties and related uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zunino, Andrea; Mosegaard, Klaus

    2017-04-01

    Sought-after reservoir properties are linked only indirectly to the observable geophysical data recorded at the earth's surface. In this framework, seismic data represent one of the most reliable tools for studying the structure and properties of the subsurface for natural resources. Nonetheless, seismic analysis is not an end in itself, as physical properties such as porosity are often of more interest for reservoir characterization. As such, inference of those properties implies also taking into account rock physics models linking porosity and other physical properties to elastic parameters. In the framework of seismic reflection data, we address this challenge for a reservoir target zone employing a probabilistic method characterized by a complex, multi-step nonlinear forward model that combines: 1) a rock physics model with 2) the solution of the full Zoeppritz equations and 3) a convolutional seismic forward model. The target property of this work is porosity, which is inferred using a Monte Carlo approach where porosity models, i.e., solutions to the inverse problem, are directly sampled from the posterior distribution. From a theoretical point of view, the Monte Carlo strategy can be particularly useful in the presence of nonlinear forward models, which is often the case when employing sophisticated rock physics models and the full Zoeppritz equations, and for estimating the related uncertainty. However, the resulting computational challenge is huge. We propose to alleviate this computational burden by assuming some smoothness of the subsurface parameters and consequently parameterizing the model in terms of spline bases. This allows us a certain flexibility in that the number of spline bases, and hence the resolution in each spatial direction, can be controlled. The method is tested on a 3-D synthetic case and on a 2-D real data set.
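
    As a hedged sketch of the spline parameterization (the rock-physics, Zoeppritz and convolution chain is replaced here by a trivial stand-in forward model), the example below represents porosity along depth with a few B-spline coefficients and explores them with a plain Metropolis sampler; all numbers are synthetic.

    ```python
    # Spline-parameterized porosity profile sampled with Metropolis (illustration only;
    # the forward model is a placeholder, not the paper's physics).
    import numpy as np
    from scipy.interpolate import BSpline

    depth = np.linspace(0.0, 1.0, 200)
    degree, n_coeff = 3, 8
    knots = np.concatenate(([0.0] * degree, np.linspace(0.0, 1.0, n_coeff - degree + 1), [1.0] * degree))

    def porosity(coeffs):
        return BSpline(knots, coeffs, degree)(depth)     # few coefficients -> smooth profile

    def forward(coeffs):                                 # trivial stand-in for the real forward chain
        return np.convolve(porosity(coeffs), np.ones(9) / 9, mode="same")

    rng = np.random.default_rng(0)
    true_coeffs = np.array([0.10, 0.15, 0.25, 0.30, 0.20, 0.15, 0.10, 0.05])
    data = forward(true_coeffs) + rng.normal(0.0, 0.005, depth.size)

    def log_post(coeffs, sigma=0.005):                   # Gaussian likelihood, flat prior
        return -0.5 * np.sum((forward(coeffs) - data) ** 2) / sigma ** 2

    current = np.full(n_coeff, 0.2)
    lp = log_post(current)
    for _ in range(5000):                                # plain Metropolis over 8 unknowns
        proposal = current + rng.normal(0.0, 0.01, n_coeff)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:
            current, lp = proposal, lp_prop
    print("one posterior sample of the spline coefficients:", np.round(current, 3))
    ```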

  12. An integrated SNP mining and utilization (ISMU) pipeline for next generation sequencing data.

    PubMed

    Azam, Sarwar; Rathore, Abhishek; Shah, Trushar M; Telluri, Mohan; Amindala, BhanuPrakash; Ruperao, Pradeep; Katta, Mohan A V S K; Varshney, Rajeev K

    2014-01-01

    Open source single nucleotide polymorphism (SNP) discovery pipelines for next generation sequencing data commonly require working knowledge of the command line interface, massive computational resources and considerable expertise, which is daunting for biologists. Further, the SNP information generated may not be readily usable for downstream processes such as genotyping. Hence, a comprehensive pipeline called Integrated SNP Mining and Utilization (ISMU) has been developed by integrating several open source next generation sequencing (NGS) tools with a graphical user interface, for SNP discovery and utilization through the development of genotyping assays. The pipeline features functionalities such as pre-processing of raw data, integration of open source alignment tools (Bowtie2, BWA, Maq, NovoAlign and SOAP2), SNP prediction (SAMtools/SOAPsnp/CNS2snp and CbCC) methods and interfaces for developing genotyping assays. The pipeline outputs a list of high quality SNPs between all pairwise combinations of the genotypes analyzed, in addition to the reference genome/sequence. Visualization tools (Tablet and Flapjack) integrated into the pipeline enable inspection of the alignment and of errors, if any. The pipeline also provides a confidence score or polymorphism information content value with flanking sequences for identified SNPs in the standard format required for developing marker genotyping (KASP and Golden Gate) assays. The pipeline enables users to process a range of NGS datasets, such as whole genome re-sequencing, restriction site associated DNA sequencing and transcriptome sequencing data, at a fast speed. The pipeline is very useful for the plant genetics and breeding community with no computational expertise, enabling them to discover SNPs and utilize them in genomics, genetics and breeding studies. The pipeline has been parallelized to process huge next generation sequencing datasets. It has been developed in Java and is available at http://hpc.icrisat.cgiar.org/ISMU as standalone free software.
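
    One of the pipeline's outputs is the list of SNPs between all pairwise combinations of genotypes; the sketch below illustrates only that reporting step on hypothetical per-genotype calls, not the ISMU alignment or SNP-calling stages.

    ```python
    # Report positions that differ between every pairwise combination of genotypes
    # (hypothetical calls; illustration of the reporting step only).
    from itertools import combinations

    calls = {                                   # hypothetical genotype -> {position: allele}
        "genotype_A": {1042: "A", 2087: "G", 3310: "T"},
        "genotype_B": {1042: "G", 2087: "G", 3310: "C"},
        "genotype_C": {1042: "A", 2087: "T", 3310: "C"},
    }

    for g1, g2 in combinations(calls, 2):
        shared = set(calls[g1]) & set(calls[g2])
        snps = sorted(pos for pos in shared if calls[g1][pos] != calls[g2][pos])
        print(f"{g1} vs {g2}: SNPs at positions {snps}")
    ```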

  13. Desktop Computing Integration Project

    NASA Technical Reports Server (NTRS)

    Tureman, Robert L., Jr.

    1992-01-01

    The Desktop Computing Integration Project for the Human Resources Management Division (HRMD) of LaRC was designed to help division personnel use personal computing resources to perform job tasks. The three goals of the project were to involve HRMD personnel in desktop computing, link mainframe data to desktop capabilities, and to estimate training needs for the division. The project resulted in increased usage of personal computers by Awards specialists, an increased awareness of LaRC resources to help perform tasks, and personal computer output that was used in presentation of information to center personnel. In addition, the necessary skills for HRMD personal computer users were identified. The Awards Office was chosen for the project because of the consistency of their data requests and the desire of employees in that area to use the personal computer.

  14. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    PubMed

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of computer vision syndrome. Information was collected from Medline, Embase and the National Library of Medicine over the last 30 years, up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with computer vision syndrome present to a variety of different specialists, including general practitioners, neurologists, stroke physicians and ophthalmologists. While the condition is common, awareness of it is poor among the public and health professionals. Recognising this condition in the clinic or in emergency situations such as the TIA clinic is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of computer vision syndrome and education of health professionals are vital. Preventive strategies should routinely form part of workplace ergonomics. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  15. CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds

    NASA Astrophysics Data System (ADS)

    Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol

    The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations, particle effects and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: a fuzzy steering model focused on performance, and a geometric steering model to obtain the best realism. Mixing these approaches makes it possible to simulate thousands of autonomous characters in real time, resulting in a scalable but still controllable crowd.
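
    The following Python sketch illustrates the general idea of switching collision-avoidance models by level of detail; the two steering functions and the distance threshold are invented for illustration and are not the models from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float

def fuzzy_steering(agent, neighbours):
    # Cheap avoidance: nudge away from the average position of nearby agents.
    if not neighbours:
        return (0.0, 0.0)
    ax = sum(n.x for n in neighbours) / len(neighbours)
    ay = sum(n.y for n in neighbours) / len(neighbours)
    return (agent.x - ax, agent.y - ay)

def geometric_steering(agent, neighbours):
    # More expensive avoidance: steer away from the single closest neighbour.
    if not neighbours:
        return (0.0, 0.0)
    closest = min(neighbours, key=lambda n: (n.x - agent.x) ** 2 + (n.y - agent.y) ** 2)
    return (agent.x - closest.x, agent.y - closest.y)

def steer(agent, neighbours, camera, lod_distance=50.0):
    """Pick a collision-avoidance LOD from the agent's distance to the camera."""
    d = math.hypot(agent.x - camera[0], agent.y - camera[1])
    model = geometric_steering if d < lod_distance else fuzzy_steering
    return model(agent, neighbours)
```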

  16. Quantum-assisted biomolecular modelling.

    PubMed

    Harris, Sarah A; Kendon, Vivien M

    2010-08-13

    Our understanding of the physics of biological molecules, such as proteins and DNA, is limited because the approximations we usually apply to model inert materials are not, in general, applicable to soft, chemically inhomogeneous systems. The configurational complexity of biomolecules means the entropic contribution to the free energy is a significant factor in their behaviour, requiring detailed dynamical calculations to fully evaluate. Computer simulations capable of taking all interatomic interactions into account are therefore vital. However, even with the best current supercomputing facilities, we are unable to capture enough of the most interesting aspects of their behaviour to properly understand how they work. This limits our ability to design new molecules, to treat diseases, for example. Progress in biomolecular simulation depends crucially on increasing the computing power available. Faster classical computers are in the pipeline, but these provide only incremental improvements. Quantum computing offers the possibility of performing huge numbers of calculations in parallel, when it becomes available. We discuss the current open questions in biomolecular simulation, how these might be addressed using quantum computation and speculate on the future importance of quantum-assisted biomolecular modelling.

  17. Towards a cyber-physical era: soft computing framework based multi-sensor array for water quality monitoring

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Jyotirmoy; Gupta, Karunesh K.; Gupta, Rajiv

    2018-02-01

    New concepts and techniques are replacing traditional methods of water quality parameter measurement. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework integrated with five different water quality parameter sensor nodes and a soft computing framework for computational modelling. The soft computing framework uses Python for the user interface and fuzzy logic for decision making. Introducing multiple sensors into a water distribution network generates a huge number of data matrices, which are sometimes highly complex, difficult to understand and too convoluted for effective decision making. Therefore, the proposed framework also aims to simplify the obtained sensor data matrices and to support decision making by water engineers through the soft computing framework. The goal of this research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using a CPS.
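
    As a hedged illustration of the fuzzy decision-making idea described above, the sketch below aggregates a few toy membership functions into a contamination-risk score; the parameters, thresholds and rules are invented and not taken from the proposed CPS.

```python
def membership_high(value, low, high):
    """Trapezoidal-style membership: 0 below `low`, 1 above `high`, linear between."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def contamination_risk(turbidity_ntu, chlorine_mg_l, ph):
    """Aggregate simple fuzzy memberships into a single risk score in [0, 1]."""
    turbidity_bad = membership_high(turbidity_ntu, 1.0, 5.0)
    chlorine_low = 1.0 - membership_high(chlorine_mg_l, 0.2, 0.5)
    ph_off = max(membership_high(ph, 8.5, 9.5), 1.0 - membership_high(ph, 5.5, 6.5))
    # "OR" the rules with max, a common fuzzy aggregation choice.
    return max(turbidity_bad, chlorine_low, ph_off)

if __name__ == "__main__":
    print(round(contamination_risk(turbidity_ntu=3.0, chlorine_mg_l=0.1, ph=7.2), 2))
```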

  18. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  19. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  20. The Relative Effectiveness of Computer-Based and Traditional Resources for Education in Anatomy

    ERIC Educational Resources Information Center

    Khot, Zaid; Quinlan, Kaitlyn; Norman, Geoffrey R.; Wainman, Bruce

    2013-01-01

    There is increasing use of computer-based resources to teach anatomy, although no study has compared computer-based learning to traditional. In this study, we examine the effectiveness of three formats of anatomy learning: (1) a virtual reality (VR) computer-based module, (2) a static computer-based module providing Key Views (KV), (3) a plastic…

  1. Infrastructures for Distributed Computing: the case of BESIII

    NASA Astrophysics Data System (ADS)

    Pellegrino, J.

    2018-05-01

    BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud or volunteer computing. About 15 sites of the BESIII Collaboration from all over the world have joined this distributed computing infrastructure, making a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources from commercial clouds, the computing capacity can scale accordingly in order to deal with any burst demand. General computing models have been discussed in the talk and are addressed herewith, with particular focus on the BESIII infrastructure. Moreover, new computing tools and upcoming infrastructures will be addressed.

  2. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE PAGES

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; ...

    2017-10-01

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  3. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  4. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    NASA Astrophysics Data System (ADS)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea

    2017-10-01

    The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.
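
    The sketch below illustrates, in a highly simplified form, the kind of consolidation and validation such a catalogue performs; the provider names, fields and validation rule are placeholders and not the actual CRIC schema.

```python
from typing import Dict, List

# Toy records as they might arrive from different information providers.
provider_feeds: Dict[str, List[dict]] = {
    "gocdb_like": [{"site": "SITE-A", "service": "CE", "endpoint": "ce.site-a.org"}],
    "experiment_like": [{"site": "SITE-A", "service": "CE", "endpoint": "ce.site-a.org"},
                        {"site": "SITE-B", "service": "SE", "endpoint": ""}],
}

REQUIRED_FIELDS = ("site", "service", "endpoint")

def validate(record: dict) -> bool:
    """A record is accepted only if every required field is present and non-empty."""
    return all(record.get(f) for f in REQUIRED_FIELDS)

def consolidate(feeds: Dict[str, List[dict]]) -> Dict[tuple, dict]:
    """Merge all feeds into one catalogue keyed by (site, service), dropping invalid rows."""
    catalogue: Dict[tuple, dict] = {}
    for source, records in feeds.items():
        for rec in records:
            if not validate(rec):
                print(f"rejected from {source}: {rec}")
                continue
            catalogue[(rec["site"], rec["service"])] = {**rec, "source": source}
    return catalogue

print(consolidate(provider_feeds))
```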

  5. Classical multiparty computation using quantum resources

    NASA Astrophysics Data System (ADS)

    Clementi, Marco; Pappa, Anna; Eckstein, Andreas; Walmsley, Ian A.; Kashefi, Elham; Barz, Stefanie

    2017-12-01

    In this work, we demonstrate a way to perform classical multiparty computing among parties with limited computational resources. Our method harnesses quantum resources to increase the computational power of the individual parties. We show how a set of clients restricted to linear classical processing is able to jointly compute a nonlinear multivariable function that lies beyond their individual capabilities. The clients are only allowed to perform classical XOR gates and single-qubit gates on quantum states. We also examine the type of security that can be achieved in this limited setting. Finally, we provide a proof-of-concept implementation using photonic qubits that allows four clients to compute a specific example of a multiparty function, the pairwise AND.

  6. Computer Network Resources for Physical Geography Instruction.

    ERIC Educational Resources Information Center

    Bishop, Michael P.; And Others

    1993-01-01

    Asserts that the use of computer networks provides an important and effective resource for geography instruction. Describes the use of the Internet network in physical geography instruction. Provides an example of the use of Internet resources in a climatology/meteorology course. (CFR)

  7. Self managing experiment resources

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Ubeda, M.; Tsaregorodtsev, A.; Romanovskiy, V.; Roiser, S.; Charpentier, P.; Graciani, R.

    2014-06-01

    Within this paper we present an autonomic computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organization Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has led to the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resource metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resource topology, independently of the resource types. The Resource Status System applies data mining techniques to all available information sources and assesses status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.
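
    The following toy sketch illustrates the idea of policy-driven status assessment; the policies, status values and pessimistic combination rule are invented for illustration and are not the actual Resource Status System logic.

```python
STATUS_ORDER = {"Banned": 0, "Probing": 1, "Active": 2}

def downtime_policy(site_info):
    return "Banned" if site_info.get("in_downtime") else "Active"

def efficiency_policy(site_info):
    eff = site_info.get("job_efficiency", 1.0)
    return "Active" if eff > 0.9 else "Probing" if eff > 0.5 else "Banned"

def assess(site_info, policies=(downtime_policy, efficiency_policy)):
    """Combine policy verdicts pessimistically: the worst verdict wins."""
    verdicts = [p(site_info) for p in policies]
    return min(verdicts, key=STATUS_ORDER.get)

print(assess({"in_downtime": False, "job_efficiency": 0.7}))  # -> Probing
```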

  8. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project result in significantly enhancing the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.

  9. Construction and application of Red5 cluster based on OpenStack

    NASA Astrophysics Data System (ADS)

    Wang, Jiaqing; Song, Jianxin

    2017-08-01

    With the application and development of cloud computing technology in various fields, data center resource utilization has improved markedly, and systems built on cloud computing platforms have gained in scalability and stability. In a traditional deployment, Red5 cluster resource utilization is low and system stability is poor. This paper uses the efficient resource allocation capabilities of cloud computing to build a Red5 server cluster based on OpenStack. Multimedia applications can be published to the Red5 cloud server cluster. The system not only achieves flexible provisioning of computing resources, but also greatly improves the stability of the cluster and the efficiency of its services.

  10. Using Personal Computers To Acquire Special Education Information. Revised. ERIC Digest #429.

    ERIC Educational Resources Information Center

    ERIC Clearinghouse on Handicapped and Gifted Children, Reston, VA.

    This digest offers basic information about resources, available to users of personal computers, in the area of professional development in special education. Two types of resources are described: those that can be purchased on computer diskettes and those made available by linking personal computers through electronic telephone networks. Resources…

  11. Epilepsy Care in the World: results of an ILAE/IBE/WHO Global Campaign Against Epilepsy survey.

    PubMed

    Dua, Tarun; de Boer, Hanneke M; Prilipko, Leonid L; Saxena, Shekhar

    2006-07-01

    Information about existing resources available within the countries to tackle the huge medical, social, and economic burden caused by epilepsy is lacking. To fill this information gap, a survey of country resources available for epilepsy care was conducted within the framework of the ILAE/IBE/WHO Global Campaign Against Epilepsy. The study represents a major collaborative effort involving the World Health Organization (WHO), the International League Against Epilepsy (ILAE) and the International Bureau for Epilepsy (IBE). Data were collected from 160 countries representing 97.5% of the world population. The information included availability, role, and involvement of professional and patient associations for epilepsy, epilepsy treatment and services including antiepileptic drugs, human resources involved in epilepsy care, teaching in epileptology, disability benefits, and problems encountered by people with epilepsy and health professionals involved in epilepsy care. The data confirm that epilepsy care is grossly inadequate compared with the needs in most countries. In addition, large inequities exist across regions and income groups of countries, with low-income countries having extremely meager resources. Complete results of this survey can be found in the Atlas: Epilepsy Care in the World. The data reinforce the need for urgent, substantial, and systematic action to enhance resources for epilepsy care, especially in low-income countries.

  12. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    NASA Astrophysics Data System (ADS)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States as an Internet-centred approach that provides a standard and open way of sharing network services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of the actual needs of teaching. Cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital education resources in current higher education. Based on a cloud computing environment, the paper analyzes the existing problems in the sharing of digital educational resources among the independent colleges of Jiangxi Province. Drawing on the mass storage, efficient operation and low cost that characterize cloud computing, the author explores and studies the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the designed sharing model is put into practical application.

  13. Paper 8775 - Integrating Natural Resources and Ecological Science into the Disaster Risk CYCLE: Lessons Learned and Future Directions

    NASA Astrophysics Data System (ADS)

    Brosnan, D. M.

    2014-12-01

    Familiar to disaster risk reduction (DRR) scientists and professionals, the disaster cycle is an adaptive approach that involves planning, response and learning for the next event. It has proven effective in saving lives and helping communities around the world deal with natural and other hazards. But it has rarely been applied to natural resource and ecological science, despite the fact that many communities are dependent on these resources. This presentation will include lessons learned from applying science to tackle ecological consequences in several disasters in the US and globally, including the Colorado Floods, the SE Asia tsunami, the Montserrat volcanic eruption, and the US SAFRR tsunami scenario. The presentation discusses the role that science and scientists can play at each phase of the disaster cycle. Not including disaster cycles in the management of natural systems leaves these resources, and the huge investments made to protect them, highly vulnerable. The presentation also discusses how science can help government and communities in planning and responding to these events. It concludes with a set of lessons learned and guidelines for moving forward.

  14. EMMA—mouse mutant resources for the international scientific community

    PubMed Central

    Wilkinson, Phil; Sengerova, Jitka; Matteoni, Raffaele; Chen, Chao-Kung; Soulat, Gaetan; Ureta-Vidal, Abel; Fessele, Sabine; Hagn, Michael; Massimi, Marzia; Pickford, Karen; Butler, Richard H.; Marschall, Susan; Mallon, Ann-Marie; Pickard, Amanda; Raspa, Marcello; Scavizzi, Ferdinando; Fray, Martin; Larrigaldie, Vanessa; Leyritz, Johan; Birney, Ewan; Tocchini-Valentini, Glauco P.; Brown, Steve; Herault, Yann; Montoliu, Lluis; de Angelis, Martin Hrabé; Smedley, Damian

    2010-01-01

    The laboratory mouse is the premier animal model for studying human disease and thousands of mutants have been identified or produced, most recently through gene-specific mutagenesis approaches. High throughput strategies by the International Knockout Mouse Consortium (IKMC) are producing mutants for all protein coding genes. Generating a knock-out line involves huge monetary and time costs so capture of both the data describing each mutant alongside archiving of the line for distribution to future researchers is critical. The European Mouse Mutant Archive (EMMA) is a leading international network infrastructure for archiving and worldwide provision of mouse mutant strains. It operates in collaboration with the other members of the Federation of International Mouse Resources (FIMRe), EMMA being the European component. Additionally EMMA is one of four repositories involved in the IKMC, and therefore the current figure of 1700 archived lines will rise markedly. The EMMA database gathers and curates extensive data on each line and presents it through a user-friendly website. A BioMart interface allows advanced searching including integrated querying with other resources e.g. Ensembl. Other resources are able to display EMMA data by accessing our Distributed Annotation System server. EMMA database access is publicly available at http://www.emmanet.org. PMID:19783817

  15. Rise of the dragon

    NASA Astrophysics Data System (ADS)

    2008-08-01

    China may be a vast and daunting place to outsiders but one thing is clear: the country is booming, with the economy growing by more than 10% each year. Its manufacturing industry is thriving and there is a huge demand for natural resources - last year, on average two new coal-fired power plants were being built in China each week. Indeed, the Chinese authorities are eager to conceal the environmental impact of the country's rampant economic growth. During this month's Olympic Games, for example, drivers in Beijing will only be allowed to use their cars every other day while the event takes place.

  16. Meeting basic human needs for water remains huge challenge, expert says

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2011-11-01

    Since the 1998 publication of the first volume of The World's Water, a biennial report on freshwater resources from the Pacific Institute, some significant strides have been made in improving water management and quality. However, there has also been a continuing stream of bad news about the state of water in many parts of the world. With the 18 October publication of volume 7 in the series, two stark statistics stand out to lead author Peter Gleick: More than 1 billion people still lack safe drinking water, and more than 2.5 billion lack adequate sanitation.

  17. Data-driven battery product development: Turn battery performance into a competitive advantage.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sholklapper, Tal

    Poor battery performance is a primary source of user dissatisfaction across a broad range of applications, and is a key bottleneck hindering the growth of mobile technology, wearables, electric vehicles, and grid energy storage. Engineering battery systems is difficult, requiring extensive testing for vendor selection, BMS programming, and application-specific lifetime testing. This work also generates huge quantities of data. This presentation will explain how to leverage this data to help ship quality products faster using fewer resources while ensuring safety and reliability in the field, ultimately turning battery performance into a competitive advantage.

  18. Overcoming complexities for consistent, continental-scale flood mapping

    NASA Astrophysics Data System (ADS)

    Smith, Helen; Zaidman, Maxine; Davison, Charlotte

    2013-04-01

    The EU Floods Directive requires all member states to produce flood hazard maps by 2013. Although flood mapping practices are well developed in Europe, there are huge variations in the scale and resolution of the maps between individual countries. Since extreme flood events are rarely confined to a single country, this is problematic, particularly for the re/insurance industry whose exposures often extend beyond country boundaries. Here, we discuss the challenges of large-scale hydrological and hydraulic modelling, using our experience of developing a 12-country model and set of maps, to illustrate how consistent, high-resolution river flood maps across Europe can be produced. The main challenges addressed include: data acquisition; manipulating the vast quantities of high-resolution data; and computational resources. Our starting point was to develop robust flood-frequency models that are suitable for estimating peak flows for a range of design flood return periods. We used the index flood approach, based on a statistical analysis of historic river flow data pooled on the basis of catchment characteristics. Historical flow data were therefore sourced for each country and collated into a large pan-European database. After a lengthy validation these data were collated into 21 separate analysis zones or regions, grouping smaller river basins according to their physical and climatic characteristics. The very large continental scale basins were each modelled separately on account of their size (e.g. Danube, Elbe, Drava and Rhine). Our methodology allows the design flood hydrograph to be predicted at any point on the river network for a range of return periods. Using JFlow+, JBA's proprietary 2D hydraulic hydrodynamic model, the calculated out-of-bank flows for all watercourses with an upstream drainage area exceeding 50km2 were routed across two different Digital Terrain Models in order to map the extent and depth of floodplain inundation. This generated modelling for a total river length of approximately 250,000km. Such a large-scale, high-resolution modelling exercise is extremely demanding on computational resources and would have been unfeasible without the use of Graphics Processing Units on a network of standard specification gaming computers. Our GPU grid is the world's largest flood-dedicated computer grid. The European river basins were split out into approximately 100 separate hydraulic models and managed individually, although care was taken to ensure flow continuity was maintained between models. The flood hazard maps from the modelling were pieced together using GIS techniques, to provide flood depth and extent information across Europe to a consistent scale and standard. After discussing the methodological challenges, we shall present our flood hazard maps and, from extensive validation work, compare these against historical flow records and observed flood extents.
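
    As an illustration of the index-flood idea mentioned above (and not of the study's actual models), the sketch below takes the median of annual maximum flows as the index flood and scales it by a pooled growth factor per return period; all numbers are invented.

```python
from statistics import median

def design_flow(annual_maxima, growth_factors, return_period):
    """Index-flood estimate: scale the site's index flood by a pooled growth factor."""
    index_flood = median(annual_maxima)          # median annual maximum flow (m3/s)
    return index_flood * growth_factors[return_period]

# Illustrative annual maximum flows (m3/s) and a made-up pooled growth curve.
amax = [120.0, 95.0, 150.0, 110.0, 180.0, 130.0, 105.0]
growth = {10: 1.4, 100: 2.1, 1000: 3.0}          # factor per return period (years)

for T in (10, 100, 1000):
    print(f"{T}-year design flow ≈ {design_flow(amax, growth, T):.0f} m3/s")
```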

  19. Dynamic VM Provisioning for TORQUE in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.

    2014-06-01

    Cloud computing, also known as an Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
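
    A rough sketch of the provisioning idea, under strong assumptions: queued jobs are counted from `qstat` output (placeholder parsing) and extra workers are launched with the `openstack server create` CLI; the image, flavor and network names are placeholders, and the real TORQUE/Maui integration described in the paper is not shown.

```python
import subprocess

def queued_job_count():
    """Count queued jobs; assumes `qstat` is available (placeholder parsing)."""
    out = subprocess.run(["qstat"], capture_output=True, text=True, check=True).stdout
    return sum(1 for line in out.splitlines() if " Q " in line)

def boot_worker(name, image="worker-image", flavor="m1.medium", network="cluster-net"):
    """Launch one extra worker VM via the OpenStack CLI (names are placeholders)."""
    subprocess.run(["openstack", "server", "create",
                    "--image", image, "--flavor", flavor,
                    "--network", network, name], check=True)

def scale_up(max_new=5, jobs_per_worker=4):
    """Would be run periodically: add workers proportionally to the queue backlog."""
    queued = queued_job_count()
    needed = min(max_new, (queued + jobs_per_worker - 1) // jobs_per_worker)
    for i in range(needed):
        boot_worker(f"dynamic-worker-{i}")
```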

  20. Pilots 2.0: DIRAC pilots for all the skies

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Tsaregorodtsev, A.; McNab, A.; Luzzi, C.

    2015-12-01

    In the last few years, new types of computing infrastructures, such as IAAS (Infrastructure as a Service) and IAAC (Infrastructure as a Client), gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most of these new infrastructures are based on virtualization techniques. Meanwhile, some concepts, such as distributed queues, lost appeal, while still supporting a vast amount of resources. Virtual Organizations are therefore facing heterogeneity of the available resources and the use of an Interware software like DIRAC to hide the diversity of underlying resources has become essential. The DIRAC WMS is based on the concept of pilot jobs that was introduced back in 2004. A pilot is what creates the possibility to run jobs on a worker node. Within DIRAC, we developed a new generation of pilot jobs, that we dubbed Pilots 2.0. Pilots 2.0 are not tied to a specific infrastructure; rather they are generic, fully configurable and extendible pilots. A Pilot 2.0 can be sent, as a script to be run, or it can be fetched from a remote location. A pilot 2.0 can run on every computing resource, e.g.: on CREAM Computing elements, on DIRAC Computing elements, on Virtual Machines as part of the contextualization script, or IAAC resources, provided that these machines are properly configured, hiding all the details of the Worker Nodes (WNs) infrastructure. Pilots 2.0 can be generated server and client side. Pilots 2.0 are the “pilots to fly in all the skies”, aiming at easy use of computing power, in whatever form it is presented. Another aim is the unification and simplification of the monitoring infrastructure for all kinds of computing resources, by using pilots as a network of distributed sensors coordinated by a central resource monitoring system. Pilots 2.0 have been developed using the command pattern. VOs using DIRAC can tune pilots 2.0 as they need, and extend or replace each and every pilot command in an easy way. In this paper we describe how Pilots 2.0 work with distributed and heterogeneous resources providing the necessary abstraction to deal with different kind of computing resources.
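
    The sketch below shows a generic command-pattern structure in the spirit described, where each pilot step is an independent, replaceable command; the class names and steps are illustrative and are not the actual DIRAC pilot commands.

```python
class PilotCommand:
    """Base class: each pilot step is an independent, replaceable command."""
    def execute(self, context: dict) -> None:
        raise NotImplementedError

class CheckEnvironment(PilotCommand):
    def execute(self, context):
        context["cpu_count"] = 4          # placeholder probe of the worker node
        print("environment checked")

class ConfigureSite(PilotCommand):
    def execute(self, context):
        context["site"] = context.get("site", "ANY")
        print(f"configured for site {context['site']}")

class MatchJob(PilotCommand):
    def execute(self, context):
        print("asking the central service for a job matching", context)

def run_pilot(commands):
    """Run the configured command list in order, sharing one context dict."""
    context: dict = {}
    for command in commands:
        command.execute(context)

# A VO could reorder, extend, or replace any command in this list.
run_pilot([CheckEnvironment(), ConfigureSite(), MatchJob()])
```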

  1. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment specific used resources and physical distributed computing capabilities. Being in production during LHC Runl AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and data structures used by Distributed Computing applications and services are continuously evolving and trend to fit newer requirements from ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to integration of new technologies recently become widely used in ATLAS Computing, like flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocols declaration required for PandDA Pilot site movers and others. The improvements of information model and general updates are also shown, in particular we explain how other collaborations outside ATLAS could benefit the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  2. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    PubMed Central

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations for forming their own private cloud. Since the resources are limited in these private clouds maximizing the utilization of resources and giving the guaranteed service for the user are the ultimate goal. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms. PMID:26473166
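
    For illustration, a minimal greedy minimum-completion-time (MCT) style assignment of the kind used as a comparison baseline; the data structures and uniform-speed assumption are simplifications and not the proposed algorithm itself.

```python
def mct_schedule(jobs, machines):
    """Assign each job to the machine that finishes it earliest (greedy MCT baseline)."""
    ready_time = {m: 0.0 for m in machines}          # when each machine becomes free
    plan = []
    for job, runtime in jobs:                        # jobs as (name, runtime) pairs
        best = min(machines, key=lambda m: ready_time[m] + runtime)
        start = ready_time[best]
        ready_time[best] = start + runtime
        plan.append((job, best, start))
    return plan

jobs = [("j1", 4.0), ("j2", 2.0), ("j3", 6.0), ("j4", 1.0)]
machines = ["vm-a", "vm-b"]
for job, machine, start in mct_schedule(jobs, machines):
    print(f"{job} -> {machine} at t={start}")
```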

  3. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter.

    PubMed

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations for forming their own private cloud. Since the resources are limited in these private clouds maximizing the utilization of resources and giving the guaranteed service for the user are the ultimate goal. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms.

  4. Huge retroperitoneal dedifferentiated liposarcoma presented as acute pancreatitis: report of a case.

    PubMed

    Arakawa, Yusuke; Yoshioka, Kazuo; Kamo, Hitomi; Kawano, Koichiro; Yamaguchi, Takeshi; Sumise, Yuko; Okitsu, Natsu; Ikeyama, Shizuo; Morimoto, Kojiro; Nakai, Yoshihiro; Tashiro, Seiki

    2013-01-01

    A 74-year-old male with abdominal pain was admitted to the emergency room of our hospital. His blood test showed a high serum amylase level. Postcontrast computed tomography (CT) showed a huge retroperitoneal tumor with a thin-walled mass occupying most of the right retroperitoneal space. The tumor spread into the soft tissues around the pancreas; as a result, the duodenum was compressed and the pancreas was displaced to the right side. An irregular pancreatic outline, obliterated peripancreatic fatty tissue and fluid in the left anterior pararenal space were revealed, so acute pancreatitis was diagnosed. A diagnostic biopsy of the retroperitoneal tumor was performed, and the pathological findings of the retroperitoneal mass revealed dedifferentiated liposarcoma. Medical treatment for acute pancreatitis was performed first. After the patient recovered, surgical resection of the tumor together with the right kidney and right adrenal gland was completed successfully. The patient remained well, without any evidence of recurrence, three months after surgery. However, since the histology showed dedifferentiated liposarcoma, regular postoperative examination is necessary.

  5. A Proposal of TLS Implementation for Cross Certification Model

    NASA Astrophysics Data System (ADS)

    Kaji, Tadashi; Fujishiro, Takahiro; Tezuka, Satoru

    Today, TLS is widely used to achieve secure communication, and TLS relies on PKI for server and/or client authentication. However, this PKI environment, called the “multiple trust anchors environment,” creates the problem that the verifier has to maintain a huge number of CA certificates in the ubiquitous network, because the growing number of terminals connected to the network brings with it a growing number of CAs. Most terminals in the ubiquitous network, though, will not have enough memory to hold such a huge number of CA certificates. Therefore, another PKI environment, the “cross certification environment”, is useful for the ubiquitous network. But because current TLS is designed for the multiple trust anchors model, it cannot work efficiently with the cross-certification model. This paper proposes a TLS implementation method to support the cross certification model efficiently. Our proposal reduces the size of the messages exchanged between the TLS client and the TLS server during the handshake process. It is therefore suitable for implementing TLS on terminals in the ubiquitous network that do not have enough computing power and memory.

  6. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of handling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
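
    A minimal sketch of the retrieval core discussed above (feature vectors plus nearest-neighbour search), using random vectors as stand-ins for learned descriptors; it is illustrative only and not a method from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in feature matrix: one L2-normalised descriptor per archived image.
database = rng.normal(size=(10_000, 128)).astype(np.float32)
database /= np.linalg.norm(database, axis=1, keepdims=True)

def retrieve(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k most similar images by cosine similarity."""
    q = query / np.linalg.norm(query)
    scores = database @ q                       # cosine similarity for unit vectors
    return np.argsort(-scores)[:k]

query_vector = rng.normal(size=128).astype(np.float32)
print(retrieve(query_vector))
```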

  7. [A Case of Huge Colon Cancer Accompanied with Severe Hypoproteinemia].

    PubMed

    Hiraki, Sakurao; Kanesada, Kou; Harada, Toshio; Tada, Kousuke; Fukuda, Shintaro

    2017-11-01

    We report a case of huge colon cancer accompanied by severe hypoproteinemia. A 74-year-old woman was referred to our hospital because of abdominal fullness. Blood examinations revealed anemia (hemoglobin 8.8 g/dL) and severe hypoproteinemia (total protein 4.5 g/dL, albumin 1.1 g/dL). Computed tomography of the abdomen revealed ascites and a large tumor (12.5×10.5 cm) in the right-sided colon. Further examinations led to a diagnosis of ascending colon cancer without distant metastasis, and we performed a right hemicolectomy with primary intestinal anastomosis by open surgery. A huge type 1 tumor (18×12 cm) that invaded the terminal ileum directly was observed in the excised specimen. The tumor was diagnosed as a moderately differentiated adenocarcinoma without lymph node metastasis (pT3N0M0, fStage II). The postoperative course was uneventful and the serum protein concentration gradually recovered to the normal range. Protein leakage from the tumor could not be proved in this case, so protein-losing enteropathy could not be formally diagnosed, but we strongly suspect this etiology from the postoperative course.

  8. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
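
    As a worked illustration of the resampling-with-replacement idea, a minimal percentile bootstrap for the mean; the data are invented.

```python
import random
from statistics import mean

def bootstrap_ci(sample, statistic=mean, n_resamples=5000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval by resampling with replacement."""
    rng = random.Random(seed)
    n = len(sample)
    stats = sorted(statistic([rng.choice(sample) for _ in range(n)])
                   for _ in range(n_resamples))
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

data = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9, 5.1, 4.4, 5.8, 4.7]
print(bootstrap_ci(data))
```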

  9. An automatic method to generate domain-specific investigator networks using PubMed abstracts.

    PubMed

    Yu, Wei; Yesupriya, Ajay; Wulf, Anja; Qu, Junfeng; Gwinn, Marta; Khoury, Muin J

    2007-06-20

    Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70-90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. We successfully created a web-based prototype capable of creating domain-specific investigator networks based on an application that accurately generates detailed investigator profiles from PubMed abstracts combined with robust standard vocabularies. This approach could be used for other biomedical fields to efficiently establish domain-specific investigator networks.
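
    A much-simplified sketch of the affiliation-parsing idea (institution from the first comma-separated field, country from the last); the country list and cleanup rules are placeholders, and the strategy in the paper is considerably more elaborate.

```python
import re

COUNTRIES = {"USA", "United States", "United Kingdom", "China", "Germany", "Japan"}

def parse_affiliation(affiliation: str):
    """Very simplified parse: institution = first comma-separated part, country = last."""
    parts = [p.strip() for p in affiliation.split(",") if p.strip()]
    if not parts:
        return None
    institution = parts[0]
    # Strip trailing e-mail addresses and periods before matching the country.
    tail = re.sub(r"\S+@\S+", "", parts[-1]).strip(" .")
    country = tail if tail in COUNTRIES else None
    return {"institution": institution, "country": country}

example = ("Office of Public Health Genomics, Centers for Disease Control and Prevention, "
           "Atlanta, GA, USA. someone@example.gov")
print(parse_affiliation(example))
```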

  10. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    PubMed Central

    Yu, Wei; Yesupriya, Ajay; Wulf, Anja; Qu, Junfeng; Gwinn, Marta; Khoury, Muin J

    2007-01-01

    Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a web-based prototype capable of creating domain-specific investigator networks based on an application that accurately generates detailed investigator profiles from PubMed abstracts combined with robust standard vocabularies. This approach could be used for other biomedical fields to efficiently establish domain-specific investigator networks. PMID:17584920

  11. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations is getting ever higher because of the tremendous advancement of supercomputers. A further advance is Grid Computing, which integrates distributed computational resources to provide scalable computing capacity. In simulation research, it is effective for researchers to design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results with their own familiar tools. A supercomputer, however, is usually far from the analysis and visualization environment. In general, a researcher analyzes and visualizes on a locally managed workstation (WS), because installing and operating software on a WS is easy, so data must be copied manually from the supercomputer to the WS. In practice, the time needed for data transfer over a long-delay network hinders high-accuracy simulations. It is therefore important to integrate the supercomputer and the analysis and visualization environment seamlessly, using methods familiar to the researcher. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and near WSs for data analysis and visualization. They are connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). Huge data sets output from the supercomputer are transferred to the virtual storage through JGN2plus. A researcher can thus concentrate on the research using familiar methods, without regard to the distance between the supercomputer and the analysis and visualization environment. At present, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected over JGN2plus and constitute 1 PB (physical size) of virtual storage under Gfarm v2. These disk servers are connected with the supercomputers of NICT and Osaka University. A system in which data output from the supercomputers is automatically transferred to the virtual storage has been built. The measured transfer rate is about 50 GB per hour, which is estimated to be reasonable for a typical simulation and analysis task such as reconstruction of the coronal magnetic field. This research also serves as an experiment with the system itself, and verification of its practicality is being advanced at the same time. Herein we give an overview of the space weather cloud system developed so far and demonstrate several scientific results obtained with it. We also introduce several web applications offered as a service of the space weather cloud, named "e-SpaceWeather" (e-SW), which provides a variety of online space weather services.

  12. 43 CFR 11.40 - What are type A procedures?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 11.40 Public Lands: Interior Office of the Secretary of the Interior NATURAL RESOURCE DAMAGE... marine environments incorporates a computer model called the Natural Resource Damage Assessment Model for... environments incorporates a computer model called the Natural Resource Damage Assessment Model for Great Lakes...

  13. 43 CFR 11.40 - What are type A procedures?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 11.40 Public Lands: Interior Office of the Secretary of the Interior NATURAL RESOURCE DAMAGE... marine environments incorporates a computer model called the Natural Resource Damage Assessment Model for... environments incorporates a computer model called the Natural Resource Damage Assessment Model for Great Lakes...

  14. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
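
    The sketch below illustrates the general calibrate-then-predict idea behind such an emulator with a toy linear fit; the calibration numbers are invented and this is not the SCOPE model.

```python
import numpy as np

# Calibration runs: (degrees of freedom, measured CPU seconds) for one code/machine pair.
dof = np.array([1_000, 5_000, 10_000, 20_000], dtype=float)
cpu = np.array([2.1, 11.0, 23.5, 49.0])

# Fit a simple linear model cpu ≈ a * dof + b by least squares.
a, b = np.polyfit(dof, cpu, deg=1)

def predict_cpu_seconds(problem_dof: float) -> float:
    """Predict CPU time for a new problem size from the calibration fit."""
    return a * problem_dof + b

print(f"predicted CPU time for 15,000 DOF: {predict_cpu_seconds(15_000):.1f} s")
```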

  15. Experience in Implementing Resource-Based Learning in Agrarian College of Management and Law Poltava State Agrarian Academy

    ERIC Educational Resources Information Center

    Kononets, Natalia

    2015-01-01

    The introduction of resource-based learning disciplines of computer cycles in Agrarian College. The article focused on the issue of implementation of resource-based learning courses in the agricultural cycle computer college. Tested approach to creating elearning resources through free hosting and their further use in the classroom. Noted that the…

  16. Virtual versus real water transfers within China.

    PubMed

    Ma, Jing; Hoekstra, Arjen Y; Wang, Hao; Chapagain, Ashok K; Wang, Dangxian

    2006-05-29

    North China faces severe water scarcity--more than 40% of the annual renewable water resources are abstracted for human use. Nevertheless, nearly 10% of the water used in agriculture is employed in producing food exported to south China. To compensate for this 'virtual water flow' and to reduce water scarcity in the north, the huge south-north Water Transfer Project is currently being implemented. This paradox--the transfer of huge volumes of water from the water-rich south to the water-poor north versus transfer of substantial volumes of food from the food-sufficient north to the food-deficit south--is receiving increased attention, but the research in this field has not yet reached further than rough estimation and qualitative description. The aim of this paper is to review and quantify the volumes of virtual water flows between the regions in China and to put them in the context of water availability per region. The analysis shows that north China annually exports about 52 billion m3 of water in virtual form to south China, which is more than the maximum proposed water transfer volume along the three routes of the Water Transfer Project from south to north.

  17. Virtual versus real water transfers within China

    PubMed Central

    Ma, Jing; Hoekstra, Arjen Y; Wang, Hao; Chapagain, Ashok K; Wang, Dangxian

    2005-01-01

    North China faces severe water scarcity—more than 40% of the annual renewable water resources are abstracted for human use. Nevertheless, nearly 10% of the water used in agriculture is employed in producing food exported to south China. To compensate for this ‘virtual water flow’ and to reduce water scarcity in the north, the huge south–north Water Transfer Project is currently being implemented. This paradox—the transfer of huge volumes of water from the water-rich south to the water-poor north versus transfer of substantial volumes of food from the food-sufficient north to the food-deficit south—is receiving increased attention, but the research in this field has not yet reached further than rough estimation and qualitative description. The aim of this paper is to review and quantify the volumes of virtual water flows between the regions in China and to put them in the context of water availability per region. The analysis shows that north China annually exports about 52 billion m3 of water in virtual form to south China, which is more than the maximum proposed water transfer volume along the three routes of the Water Transfer Project from south to north. PMID:16767828

  18. Configurable e-commerce-oriented distributed seckill system with high availability

    NASA Astrophysics Data System (ADS)

    Zhu, Liye

    2018-04-01

    The rapid development of e-commerce prompted the birth of the seckill activity. A seckill activity greatly stimulates public shopping desire because of its significant attraction to customers. In a seckill activity, a limited number of products is sold at varying degrees of discount, which brings a huge temptation for customers. The discounted products are usually sold out in seconds, which can be a huge challenge for e-commerce systems. In this case, a seckill system with high concurrency and high availability has very practical significance. This research cooperates with Huijin Department Store to design and implement a seckill system for an e-commerce platform. The seckill system supports high-concurrency network conditions and remains highly available in unexpected situations. In addition, due to the short life cycle of a seckill activity, the system can be flexibly configured and scaled, which means that it is able to add or remove system resources on demand. Finally, this paper carried out functional and performance tests of the whole system. The test results show that the system meets the functional and performance requirements of suppliers, administrators, and users.
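
    A minimal, library-free sketch of the core seckill invariant follows: never oversell a limited stock under concurrent requests. The lock here stands in for the atomic check-and-decrement that a production system such as the one above would typically delegate to Redis or the database; class and variable names are illustrative.

      # Atomic stock decrement under concurrent purchase attempts.
      import threading

      class SeckillStock:
          def __init__(self, quantity):
              self._quantity = quantity
              self._lock = threading.Lock()

          def try_purchase(self):
              """Atomically decrement stock; return True only if an item was secured."""
              with self._lock:
                  if self._quantity > 0:
                      self._quantity -= 1
                      return True
                  return False

      stock = SeckillStock(100)
      winners = []

      def customer(uid):
          if stock.try_purchase():
              winners.append(uid)

      threads = [threading.Thread(target=customer, args=(i,)) for i in range(1_000)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(len(winners))  # always 100, regardless of request ordering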

  19. Sustainability assessment of regional water resources under the DPSIR framework

    NASA Astrophysics Data System (ADS)

    Sun, Shikun; Wang, Yubao; Liu, Jing; Cai, Huanjie; Wu, Pute; Geng, Qingling; Xu, Lijun

    2016-01-01

    Fresh water is a scarce and critical resource in both natural and socioeconomic systems. Increasing populations combined with an increasing demand for water resources have led to water shortages worldwide. Current water management strategies may not be sustainable, and comprehensive action should be taken to minimize the water budget deficit. Sustainable water resources management is essential because it ensures the integration of social, economic, and environmental issues into all stages of water resources management. This paper establishes the indicators to evaluate the sustainability of water utilization based on the Drive-Pressure-Status-Impact-Response (DPSIR) model. Based on the analytic hierarchy process (AHP) method, a comprehensive assessment of changes to the sustainability of the water resource system in the city of Bayannur was conducted using these indicators. The results indicate that there is an increase in the driving force of local water consumption due to changes in society, economic development, and the consumption structure of residents. The pressure on the water system increased, whereas the status of the water resources continued to decrease over the study period due to the increasing drive indicators. The local government adopted a series of response measures to relieve the decreasing water resources and alleviate the negative effects of the increasing demand. The response measures improved the efficiency of water usage to a large extent, but the large-scale expansion in demand brought a rebounding effect, known as the 'Jevons paradox'. At the same time, the increasing emissions of industrial and agricultural pollutants put huge pressure on the regional water resources environment, which caused a decrease in the sustainability of regional water resources. Changing medium- and short-term factors, such as the regional economic pattern, technological levels, and water utilization practices, can contribute to the sustainable utilization of regional water resources.
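
    The sketch below illustrates the AHP step used above: deriving indicator weights from a pairwise comparison matrix via its principal eigenvector, together with the standard consistency-ratio check. The judgment matrix is invented for illustration and is not the paper's actual data.

      # AHP priority weights from a hypothetical pairwise comparison matrix.
      import numpy as np

      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)                      # principal eigenvalue
      weights = np.abs(eigvecs[:, k].real)
      weights /= weights.sum()                         # normalized priority weights

      n = A.shape[0]
      ci = (eigvals.real[k] - n) / (n - 1)             # consistency index
      ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index
      print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable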

  20. ACToR - Aggregated Computational Toxicology Resource ...

    EPA Pesticide Factsheets

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  1. ACToR - Aggregated Computational Toxicology Resource (S) ...

    EPA Pesticide Factsheets

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  2. A study of compositional verification based IMA integration method

    NASA Astrophysics Data System (ADS)

    Huang, Hui; Zhang, Guoquan; Xu, Wanmeng

    2018-03-01

    The rapid development of avionics systems is driving the application of integrated modular avionics (IMA) systems. While IMA improves avionics system integration, it also increases the complexity of system testing, so the IMA test method needs to be simplified. An IMA system provides a module platform that runs multiple applications and shares processing resources. Compared with a federated avionics system, it is difficult to isolate failures in an IMA system, so the critical problem for IMA system verification is how to test resources shared by multiple applications. For a simple avionics system, traditional test methods can easily cover the whole system, but for a complex system it is hard to completely test a huge, integrated avionics architecture. This paper therefore proposes applying compositional-verification theory to IMA system testing, reducing the test effort and improving efficiency, and consequently lowering the cost of IMA system integration.

  3. Agricultural lands preservation: a sociology of survival

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, T.S.

    1983-01-01

    This is a rural sociological study investigating the viability of agricultural lands use-values and rural communities in the context of the structure of US agriculture. It outlines the theoretical foundation, ideology, and praxis of a sociology of survival. It is undertaken within the framework of environmental sociology, which focuses on the dynamic interpenetration of social and biotic systems. The concepts of carrying capacity, sustained multiple-use yield, and land-use compatibility and their significance are discussed. The phenomenon of phantom carrying capacity is explored, and its ominous portent noted; but the astonishing potential of agricultural lands to produce huge net gains in use values and in real carrying capacity is affirmed. The theory of unlimited resources, substitution, and market-allocation is falsified. Absolute shortages of renewable and nonrenewable resources are documented, and the necessity for population control, conservation, expanded sustained-yield production, and social allocation is established.

  4. Update on Genomic Databases and Resources at the National Center for Biotechnology Information.

    PubMed

    Tatusova, Tatiana

    2016-01-01

    The National Center for Biotechnology Information (NCBI), as a primary public repository of genomic sequence data, collects and maintains enormous amounts of heterogeneous data. Data for genomes, genes, gene expression, gene variation, gene families, proteins, and protein domains are integrated with the analytical, search, and retrieval resources of the NCBI website, whose text-based search and retrieval system provides a fast and easy way to navigate across diverse biological databases. Comparative genome analysis tools lead to further understanding of evolutionary processes and quicken the pace of discovery. Recent technological innovations have ignited an explosion in genome sequencing that has fundamentally changed our understanding of the biology of living organisms. This huge increase in DNA sequence data presents new challenges for information management systems and visualization tools. New strategies have been designed to bring order to this genome sequence shockwave and improve the usability of associated data.

  5. Controlling cardiovascular diseases in low and middle income countries by placing proof in pragmatism

    PubMed Central

    Owolabi, Mayowa; Miranda, Jaime J; Yaria, Joseph; Ovbiagele, Bruce

    2016-01-01

    Low and middle income countries (LMICs) bear a huge, disproportionate and growing burden of cardiovascular disease (CVD) which constitutes a threat to development. Efforts to tackle the global burden of CVD must therefore emphasise effective control in LMICs by addressing the challenge of scarce resources and lack of pragmatic guidelines for CVD prevention, treatment and rehabilitation. To address these gaps, in this analysis article, we present an implementation cycle for developing, contextualising, communicating and evaluating CVD recommendations for LMICs. This includes a translatability scale to rank the potential ease of implementing recommendations, prescriptions for engaging stakeholders in implementing the recommendations (stakeholders such as providers and physicians, patients and the populace, policymakers and payers) and strategies for enhancing feedback. This approach can help LMICs combat CVD despite limited resources, and can stimulate new implementation science hypotheses, research, evidence and impact. PMID:27840737

  6. This new field of inclusive education: beginning a dialogue on conceptual foundations.

    PubMed

    Danforth, Scot; Naraian, Srikala

    2015-02-01

    Numerous scholars have suggested that the standard knowledge base of the field of special education is not a suitable intellectual foundation for the development of research, policy, and practice in the field of inclusive education. Still, we have yet to have a dialogue on what conceptual foundations may be most generative for the growth and development of the field of inclusive education. This article imagines and initiates such a new dialogue among educational researchers and teacher educators about the intellectual resources that can best support inclusive educators everywhere. As inclusive education gets increasingly taken up within international policy discourses, it may be imperative to explore and identify theories and ideas that can be responsive to diverse and hugely unequal contexts of schooling. This article forwards an initial collection of intellectual resources for an inclusive education that can accommodate such complex schooling conditions and invites rich scholarly exchange on this issue.

  7. Aviation & Space Education: A Teacher's Resource Guide.

    ERIC Educational Resources Information Center

    Texas State Dept. of Aviation, Austin.

    This resource guide contains information on curriculum guides, resources for teachers, computer software and computer related programs, audio/visual presentations, model aircraft and demonstration aids, training seminars and career education, and an aerospace bibliography for primary grades. Each entry includes all or some of the following items:…

  8. Campus Computing Environment: University of Kentucky.

    ERIC Educational Resources Information Center

    CAUSE/EFFECT, 1989

    1989-01-01

    A dramatic growth in computing and communications was precipitated largely by the leadership of President David Roselle at the University of Kentucky. A new operational structure of information resource management includes not only computing (academic and administrative) and communications, instructional resources, and printing/mailing services,…

  9. Teaching Computer Literacy with Freeware and Shareware.

    ERIC Educational Resources Information Center

    Hobart, R. Dale; And Others

    1988-01-01

    Describes workshops given at Ferris State University for faculty and staff who want to acquire computer skills. Considered are a computer literacy and a software toolkit distributed to participants made from public domain/shareware resources. Stresses the benefits of shareware as an educational resource. (CW)

  10. Challenges in Securing the Interface Between the Cloud and Pervasive Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagesse, Brent J

    2011-01-01

    Cloud computing presents an opportunity for pervasive systems to leverage computational and storage resources to accomplish tasks that would not normally be possible on such resource-constrained devices. Cloud computing can enable hardware designers to build lighter systems that last longer and are more mobile. Despite the advantages cloud computing offers to the designers of pervasive systems, there are some limitations of leveraging cloud computing that must be addressed. We take the position that cloud-based pervasive systems must be secured holistically and discuss ways this might be accomplished. In this paper, we discuss a pervasive system utilizing cloud computing resources and issues that must be addressed in such a system. In this system, the user's mobile device cannot always have network access to leverage resources from the cloud, so it must make intelligent decisions about what data should be stored locally and what processes should be run locally. As a result of these decisions, the user becomes vulnerable to attacks while interfacing with the pervasive system.

  11. Managing Emergency Situations in VANET Through Heterogeneous Technologies Cooperation.

    PubMed

    Santamaria, Amilcare Francesco; Tropea, Mauro; Fazio, Peppino; De Rango, Floriano

    2018-05-08

    Nowadays, research on vehicular computing has produced a huge number of services and protocols aimed at vehicle security and comfort. The investigation of the IEEE 802.11p, Wireless Access in Vehicular Environments (WAVE) and Dedicated Short Range Communication (DSRC) standards gave the scientific world the chance to integrate new services, protocols, algorithms and devices inside vehicles. This opportunity attracted the attention of private and public organizations, which spent considerable resources and money to promote vehicular technologies. In this paper, the attention is focused on the design of a new approach for vehicular environments able to gather information during mobile node trips and to advise of dangerous or emergency situations by exploiting on-board sensors. It is assumed that each vehicle has an integrated on-board unit composed of several sensors and a Global Positioning System (GPS) device, able to spread alerting messages around the network regarding warning and dangerous situations or conditions. On-board units, based on the standard communication protocols, share the collected information with the surrounding road-side units, while the sensing platform is able to recognize the environment that vehicles are passing through (obstacles, accidents, emergencies, dangerous situations, etc.). Finally, through the use of the GPS receiver, the exact location of the detected event is determined and spread along the network. In this way, if an accident occurs, the arriving cars will probably avoid delays and dangerous situations.

  12. Real cases study through computer applications for futures Agricultural Engineers

    NASA Astrophysics Data System (ADS)

    Moratiel, R.; Durán, J. M.; Tarquis, A. M.

    2010-05-01

    One of the major concerns in higher engineering education is the lack of the real case studies that future professionals will need in the job and corporate market. This concern was reflected in the Bologna higher education system, which includes recommendations in this respect. Knowing how and why to apply a given methodology is one of the keys to resolving this problem. In the last courses given in the Department of Crop Production at the Agronomy Engineering School of Madrid (Escuela Técnica Superior de Ingenieros Agrónomos, UPM) we have developed more than one hundred applications in Microsoft Excel®. Our aim was to show different real scenarios that future Agronomic Engineers may encounter in their professional life, with items related to the crop production field. To achieve this, each Excel application is accompanied by a text file that explains the theoretical concepts and objectives, as well as some of the Excel syntax resources used. In this way, students can understand and use each application, and even modify and customize it for a real case presented in their own context and/or master project. This electronic monograph answers the need to manage data in several real scenarios presented in lectures, calculation exercises, information analysis, and worksheet management at both professional and student level.

  13. Managing Emergency Situations in VANET Through Heterogeneous Technologies Cooperation

    PubMed Central

    Tropea, Mauro; De Rango, Floriano

    2018-01-01

    Nowadays, research on vehicular computing has produced a huge number of services and protocols aimed at vehicle security and comfort. The investigation of the IEEE 802.11p, Wireless Access in Vehicular Environments (WAVE) and Dedicated Short Range Communication (DSRC) standards gave the scientific world the chance to integrate new services, protocols, algorithms and devices inside vehicles. This opportunity attracted the attention of private and public organizations, which spent considerable resources and money to promote vehicular technologies. In this paper, the attention is focused on the design of a new approach for vehicular environments able to gather information during mobile node trips and to advise of dangerous or emergency situations by exploiting on-board sensors. It is assumed that each vehicle has an integrated on-board unit composed of several sensors and a Global Positioning System (GPS) device, able to spread alerting messages around the network regarding warning and dangerous situations or conditions. On-board units, based on the standard communication protocols, share the collected information with the surrounding road-side units, while the sensing platform is able to recognize the environment that vehicles are passing through (obstacles, accidents, emergencies, dangerous situations, etc.). Finally, through the use of the GPS receiver, the exact location of the detected event is determined and spread along the network. In this way, if an accident occurs, the arriving cars will probably avoid delays and dangerous situations. PMID:29738453

  14. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.

  15. Polyphony: A Workflow Orchestration Framework for Cloud Computing

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom

    2010-01-01

    Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources on the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We will conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
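
    The sketch below illustrates the general pattern Polyphony embodies rather than its actual API: a shared work queue feeding heterogeneous workers (cloud nodes, local spare cycles, supercomputer slots), with failed tasks re-queued so a node failure does not lose work. Worker names and the placeholder "process_image" step are invented.

      # Generic work-queue dispatch across a heterogeneous resource pool.
      import queue
      import threading

      tasks = queue.Queue()
      for image_id in range(1, 101):          # 100 hypothetical image-processing tasks
          tasks.put(image_id)

      results, results_lock = {}, threading.Lock()

      def process_image(image_id):
          return f"processed-{image_id}"      # stand-in for the real image operation

      def worker(name):
          while True:
              try:
                  image_id = tasks.get_nowait()
              except queue.Empty:
                  return
              try:
                  out = process_image(image_id)
                  with results_lock:
                      results[image_id] = (name, out)
              except Exception:
                  tasks.put(image_id)          # resilience: re-queue on failure
              finally:
                  tasks.task_done()

      pools = ["cloud-0", "cloud-1", "local-0", "hpc-0"]   # heterogeneous workers
      threads = [threading.Thread(target=worker, args=(n,)) for n in pools]
      for t in threads: t.start()
      for t in threads: t.join()
      print(len(results), "tasks completed")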

  16. Infrastructure Systems for Advanced Computing in E-science applications

    NASA Astrophysics Data System (ADS)

    Terzo, Olivier

    2013-04-01

    In the e-science field there are growing needs for computing infrastructure that is more dynamic and customizable, with an "on demand" model of use that follows the exact request in terms of resources and storage capacity. The integration of grid and cloud infrastructure solutions allows us to offer services that can adapt their availability by scaling resources up and down. The main challenge for e-science domains is to implement infrastructure solutions for scientific computing that dynamically adapt to the demand for computing resources, with a strong emphasis on optimizing resource usage to reduce investment costs. Instrumentation, data volumes, algorithms and analysis all increase the complexity of applications that require high processing power and storage for a limited time, often exceeding the computational resources available to the majority of laboratories or research units in an organization. Very often it is necessary to adapt, or even rethink, tools and algorithms and to consolidate existing applications through a phase of reverse engineering in order to adapt them to deployment on a cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics (next-generation sequencing), computational electromagnetics and radio occultation, the complexity of the analysis raises several issues, such as processing time, the scheduling of processing tasks, storage of results, and multi-user environments. For these reasons, it is necessary to rethink the way e-science applications are written so that they are already suited to exploiting the potential of cloud computing services through the IaaS, PaaS and SaaS layers. Another important focus is on creating and using hybrid infrastructures, typically a federation between private and public clouds: when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to follow the needs in terms of computational and storage capacity, and to release them when processing is finished. Following the hybrid model, the scheduling approach is important for managing both cloud models. Thanks to this infrastructure model, resources are always available for additional IT capacity requests and can be used "on demand" for a limited time without having to purchase additional servers.
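
    A minimal sketch of the hybrid "burst to the public cloud" scheduling idea discussed above follows: place jobs on the private cloud while capacity remains, overflow to the public cloud, and release public capacity once jobs finish. All capacities, job identifiers and pool names are hypothetical.

      # Burst-to-public-cloud placement policy.
      from dataclasses import dataclass, field

      @dataclass
      class Pool:
          name: str
          capacity: int            # simultaneous job slots
          running: list = field(default_factory=list)

          def has_room(self):
              return len(self.running) < self.capacity

      private = Pool("private-cloud", capacity=8)
      public = Pool("public-cloud", capacity=1000)   # effectively elastic

      def schedule(job_id):
          pool = private if private.has_room() else public   # burst only when private is full
          pool.running.append(job_id)
          return pool.name

      def release(job_id):
          for pool in (private, public):
              if job_id in pool.running:
                  pool.running.remove(job_id)
                  # a real scheduler would also terminate idle public VMs here to cut cost
                  return

      placements = {j: schedule(j) for j in range(12)}
      print(placements)   # jobs 0-7 land on private-cloud, 8-11 overflow to public-cloud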

  17. Data Characterization Using Artificial-Star Tests: Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Hu, Yi; Deng, Licai; de Grijs, Richard; Liu, Qiang

    2011-01-01

    Traditional artificial-star tests are widely applied to photometry in crowded stellar fields. However, to obtain reliable binary fractions (and their uncertainties) of remote, dense, and rich star clusters, one needs to recover huge numbers of artificial stars. Hence, this will consume much computation time for data reduction of the images to which the artificial stars must be added. In this article, we present a new method applicable to data sets characterized by stable, well-defined, point-spread functions, in which we add artificial stars to the retrieved-data catalog instead of to the raw images. Taking the young Large Magellanic Cloud cluster NGC 1818 as an example, we compare results from both methods and show that they are equivalent, while our new method saves significant computational time.
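
    The sketch below illustrates the catalog-level artificial-star idea in schematic form (the completeness curve and error model are assumptions for illustration, not the authors' pipeline): synthetic stars are injected directly into the photometric catalog using an assumed detection probability and magnitude error, and the recovery fraction is measured without re-reducing any images.

      # Catalog-level artificial-star injection and recovery.
      import numpy as np

      rng = np.random.default_rng(0)

      def detection_probability(mag, mag_50=24.0, width=0.5):
          """Assumed completeness curve: ~1 for bright stars, 0.5 at mag_50."""
          return 1.0 / (1.0 + np.exp((mag - mag_50) / width))

      def inject_and_recover(input_mags, mag_err=0.05):
          detected = rng.random(input_mags.size) < detection_probability(input_mags)
          recovered_mags = input_mags[detected] + rng.normal(0.0, mag_err, detected.sum())
          return detected, recovered_mags

      artificial = rng.uniform(20.0, 26.0, 100_000)          # 10^5 artificial stars
      detected, recovered = inject_and_recover(artificial)
      print(f"recovered {detected.mean():.1%} of injected stars")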

  18. A Haptic-Enhanced System for Molecular Sensing

    NASA Astrophysics Data System (ADS)

    Comai, Sara; Mazza, Davide

    The science of haptics has received enormous attention in the last decade. One of the major application trends of haptics technology is data visualization and training. In this paper, we present a haptically-enhanced system for manipulation and tactile exploration of molecules. The geometrical models of the molecules are extracted either from theoretical or empirical data, using file formats widely adopted in the chemical and biological fields. The addition of information computed with computational chemistry tools allows users to feel the interaction forces between an explored molecule and a charge associated with the haptic device, and to visualize a huge amount of numerical data in a more comprehensible way. The developed tool can be used either for teaching or research purposes due to its strong reliance on both theoretical and experimental data.
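
    The sketch below shows the kind of force rendering involved: the electrostatic (Coulomb) force felt by the haptic probe charge from the partial charges of a molecule. The charges and positions are made-up numbers, not data from the paper.

      # Coulomb force on a probe charge from a set of atomic partial charges.
      import numpy as np

      K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

      def coulomb_force(probe_pos, probe_charge, atom_pos, atom_charge):
          """Total force (N) on the probe charge from all atomic partial charges."""
          r = probe_pos - atom_pos                       # vectors from atoms to probe
          dist = np.linalg.norm(r, axis=1, keepdims=True)
          return (K * probe_charge * atom_charge[:, None] * r / dist**3).sum(axis=0)

      # Hypothetical three-atom molecule with partial charges (in coulombs).
      atom_pos = np.array([[0.0, 0.0, 0.0], [1e-10, 0.0, 0.0], [0.0, 1e-10, 0.0]])
      atom_charge = np.array([-0.8e-19, 0.4e-19, 0.4e-19])

      force = coulomb_force(np.array([2e-10, 2e-10, 0.0]), 1.6e-19, atom_pos, atom_charge)
      print(force)   # the 3-D force vector that would be rendered on the haptic device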

  19. Parameterization of cloud lidar backscattering profiles by means of asymmetrical Gaussians

    NASA Astrophysics Data System (ADS)

    del Guasta, Massimo; Morandi, Marco; Stefanutti, Leopoldo

    1995-06-01

    A fitting procedure for cloud lidar data processing is shown that is based on the computation of the first three moments of the vertical-backscattering (or -extinction) profile. Single-peak clouds or single cloud layers are approximated to asymmetrical Gaussians. The algorithm is particularly stable with respect to noise and processing errors, and it is much faster than the equivalent least-squares approach. Multilayer clouds can easily be treated as a sum of single asymmetrical Gaussian peaks. The method is suitable for cloud-shape parametrization in noisy lidar signatures (like those expected from satellite lidars). It also permits an improvement of cloud radiative-property computations that are based on huge lidar data sets for which storage and careful examination of single lidar profiles can't be carried out.
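
    The following is a sketch of the moment-based fit described above: compute the first three moments of a single-peak backscatter profile and build an asymmetric Gaussian with different left and right widths. The rule used here to split the width according to skewness is an illustrative assumption, not the authors' exact parameterization.

      # Moment-based asymmetric-Gaussian parameterization of a cloud layer.
      import numpy as np

      def asymmetric_gaussian_fit(z, beta):
          """Return (amplitude, centre, sigma_left, sigma_right) from profile moments."""
          w = beta / beta.sum()                     # treat the profile as a distribution
          m1 = np.sum(w * z)                        # first moment: centre of mass
          m2 = np.sum(w * (z - m1) ** 2)            # second central moment: variance
          m3 = np.sum(w * (z - m1) ** 3)            # third central moment: asymmetry
          sigma = np.sqrt(m2)
          skew = m3 / sigma**3
          # Assumed split: share the width between the two sides according to skewness.
          sigma_left = sigma * (1.0 - 0.5 * skew)
          sigma_right = sigma * (1.0 + 0.5 * skew)
          return beta.max(), m1, sigma_left, sigma_right

      def asymmetric_gaussian(z, amp, centre, sig_l, sig_r):
          sig = np.where(z < centre, sig_l, sig_r)
          return amp * np.exp(-0.5 * ((z - centre) / sig) ** 2)

      z = np.linspace(5.0, 9.0, 400)                         # altitude grid (km)
      cloud = asymmetric_gaussian(z, 1.0, 7.0, 0.3, 0.6)     # synthetic cloud layer
      print(asymmetric_gaussian_fit(z, cloud))               # recovered peak parameters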

  20. Application of importance sampling to the computation of large deviations in nonequilibrium processes.

    PubMed

    Kundu, Anupam; Sabhapandit, Sanjib; Dhar, Abhishek

    2011-03-01

    We present an algorithm for finding the probabilities of rare events in nonequilibrium processes. The algorithm consists of evolving the system with a modified dynamics for which the required event occurs more frequently. By keeping track of the relative weight of phase-space trajectories generated by the modified and the original dynamics one can obtain the required probabilities. The algorithm is tested on two model systems of steady-state particle and heat transport where we find a huge improvement from direct simulation methods.
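
    A toy illustration of the reweighting idea follows (not the paper's transport models): estimate the small probability that a symmetric random walk of N steps ends above a threshold by sampling from a positively biased walk and correcting each trajectory with its likelihood ratio. The parameters are arbitrary.

      # Importance sampling of a rare endpoint of a random walk.
      import numpy as np

      rng = np.random.default_rng(1)
      N, a = 100, 40                     # walk length and rare threshold
      p_orig, p_bias = 0.5, 0.7          # original vs. modified step-up probability

      def estimate(samples=200_000):
          steps_up = rng.binomial(N, p_bias, size=samples)          # sample modified dynamics
          endpoints = 2 * steps_up - N
          # Relative weight of each trajectory: product over steps of the p_orig/p_bias factors.
          log_w = steps_up * np.log(p_orig / p_bias) \
                  + (N - steps_up) * np.log((1 - p_orig) / (1 - p_bias))
          return np.mean((endpoints > a) * np.exp(log_w))

      print(estimate())   # ~ P(S_N > 40), far too rare to hit reliably by direct simulation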

  1. An extraction algorithm of pulmonary fissures from multislice CT image

    NASA Astrophysics Data System (ADS)

    Tachibana, Hiroyuki; Saita, Shinsuke; Yasutomo, Motokatsu; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Sasagawa, Michizo; Eguchi, Kenji; Moriyama, Noriyuki

    2005-04-01

    Aging and a history of smoking increase the incidence of pulmonary emphysema. Restoring alveoli destroyed by pulmonary emphysema is difficult, so early detection is important. Multi-slice CT technology has been improving 3-D image analysis with higher body-axis resolution and shorter scan time, and low-dose, high-accuracy scanning is becoming available. Multi-slice CT images help physicians make accurate measurements, but the huge volume of image data takes time and cost to process. This paper addresses computer-aided analysis of emphysema regions and demonstrates the effectiveness of the proposed algorithm.

  2. Diversity in computing technologies and strategies for dynamic resource allocation

    DOE PAGES

    Garzoglio, G.; Gutsche, O.

    2015-12-23

    Here, High Energy Physics (HEP) is a very data intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer details requiring ever increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers like commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.

  3. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    PubMed

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. With the improvement in remote sensing image capture, featuring hyperspectral data, high spatial resolution and high temporal resolution, how to use FFT technology to efficiently process huge remote sensing images becomes a critical step and a research hot spot of current image processing technology. The FFT algorithm, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in remote sensing image processing. The CUFFT function library is a GPU-based FFT library, while FFTW is an FFT library developed for the CPU on the PC platform and is currently the fastest CPU-based FFT function library. However, both methods share a common problem: once the available GPU memory or main memory is smaller than the image, an out-of-memory or memory overflow error occurs when using them to compute the image FFT. To address this problem, a GPU and partitioning technology based Huge Remote Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT function library, the out-of-memory and memory overflow problems are solved. Moreover, the method is validated by experiments with CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the quality of the results and speeds up the processing, which saves computation time and achieves sound results.
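
    The sketch below illustrates the partitioning idea for an image too large for memory: a 2-D FFT is separable, so 1-D FFTs can be run over the rows in blocks and then over the columns in blocks, streaming each block through memory. File names, dtype and block sizes are illustrative; a GPU version would push each block through CUFFT instead of NumPy.

      # Blocked, out-of-core 2-D FFT via row pass then column pass.
      import numpy as np

      def blocked_fft2(src_path, dst_path, shape, block=1024):
          rows, cols = shape
          src = np.memmap(src_path, dtype=np.complex64, mode="r", shape=shape)
          dst = np.memmap(dst_path, dtype=np.complex64, mode="w+", shape=shape)

          # Pass 1: FFT along each row, a block of rows at a time.
          for r in range(0, rows, block):
              dst[r:r + block] = np.fft.fft(src[r:r + block], axis=1)

          # Pass 2: FFT along each column, a block of columns at a time.
          for c in range(0, cols, block):
              dst[:, c:c + block] = np.fft.fft(dst[:, c:c + block], axis=0)

          dst.flush()
          return dst_path

      # Usage (hypothetical 16384 x 16384 single-band image stored as complex64):
      # blocked_fft2("scene.raw", "scene_fft.raw", (16384, 16384))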

  4. Can multilinguality improve Biomedical Word Sense Disambiguation?

    PubMed

    Duque, Andres; Martinez-Romo, Juan; Araujo, Lourdes

    2016-12-01

    Ambiguity in the biomedical domain represents a major issue when performing Natural Language Processing tasks over the huge amount of available information in the field. For this reason, Word Sense Disambiguation is critical for achieving accurate systems able to tackle complex tasks such as information extraction, summarization or document classification. In this work we explore whether multilinguality can help to solve the problem of ambiguity, and the conditions required for a system to improve the results obtained by monolingual approaches. Also, we analyze the best ways to generate those useful multilingual resources, and study different languages and sources of knowledge. The proposed system, based on co-occurrence graphs containing biomedical concepts and textual information, is evaluated on a test dataset frequently used in biomedicine. We can conclude that multilingual resources are able to provide a clear improvement of more than 7% compared to monolingual approaches, for graphs built from a small number of documents. Also, empirical results show that automatically translated resources are a useful source of information for this particular task.

  5. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  6. Computer Technology Resources for Literacy Projects.

    ERIC Educational Resources Information Center

    Florida State Council on Aging, Tallahassee.

    This resource booklet was prepared to assist literacy projects and community adult education programs in determining the technology they need to serve more older persons. Section 1 contains the following reprinted articles: "The Human Touch in the Computer Age: Seniors Learn Computer Skills from Schoolkids" (Suzanne Kashuba);…

  7. The Computer Explosion: Implications for Educational Equity. Resource Notebook.

    ERIC Educational Resources Information Center

    Denbo, Sheryl, Comp.

    This notebook was prepared to provide resources for educators interested in using computers to increase opportunities for all students. The notebook contains specially prepared materials and selected newspaper and journal articles. The first section reviews the issues related to computer equity (equal access, tracking through different…

  8. Development of Computer-Based Resources for Textile Education.

    ERIC Educational Resources Information Center

    Hopkins, Teresa; Thomas, Andrew; Bailey, Mike

    1998-01-01

    Describes the production of computer-based resources for students of textiles and engineering in the United Kingdom. Highlights include funding by the Teaching and Learning Technology Programme (TLTP), courseware author/subject expert interaction, usage test and evaluation, authoring software, graphics, computer-aided design simulation, self-test…

  9. Chinese health care system and clinical epidemiology

    PubMed Central

    Sun, Yuelian; Gregersen, Hans; Yuan, Wei

    2017-01-01

    China has gone through a comprehensive health care insurance reform since 2003 and achieved universal health insurance coverage in 2011. The new health care insurance system provides China with a huge opportunity for the development of health care and medical research when its rich medical resources are fully unfolded. In this study, we review the Chinese health care system and its implication for medical research, especially within clinical epidemiology. First, we briefly review the population register system, the distribution of the urban and rural population in China, and the development of the Chinese health care system after 1949. In the following sections, we describe the current Chinese health care delivery system and the current health insurance system. We then focus on the construction of the Chinese health information system as well as several existing registers and research projects on health data. Finally, we discuss the opportunities and challenges of the health care system in regard to clinical epidemiology research. China now has three main insurance schemes. The Urban Employee Basic Medical Insurance (UEBMI) covers urban employees and retired employees. The Urban Residence Basic Medical Insurance (URBMI) covers urban residents, including children, students, elderly people without previous employment, and unemployed people. The New Rural Cooperative Medical Scheme (NRCMS) covers rural residents. The Chinese Government has made efforts to build up health information data, including electronic medical records. The establishment of universal health care insurance with linkage to medical records will provide potentially huge research opportunities in the future. However, constructing a complete register system at a nationwide level is challenging. In the future, China will demand increased capacity of researchers and data managers, in particular within clinical epidemiology, to explore the rich resources. PMID:28356772

  10. "A players" or "A positions"? The strategic logic of workforce management.

    PubMed

    Huselid, Mark A; Beatty, Richard W; Becker, Brian E

    2005-12-01

    Companies simply can't afford to have "A players" in all positions. Rather, businesses need to adopt a portfolio approach to workforce management, systematically identifying their strategically important A positions, supporting B positions, and surplus C positions, then focusing disproportionate resources on making sure A players hold A positions. This is not as obvious as it may seem, because the three types of positions do not reflect corporate hierarchy, pay scales, or the level of difficulty in filling them. A positions are those that directly further company strategy and, less obviously, exhibit wide variation in the quality of the work done by the people who occupy them. Why variability? Because raising the average performance of individuals in these critical roles will pay huge dividends in corporate value. If a company like Nordstrom, for example, whose strategy depends on personalized service, were to improve the performance of its frontline sales associates, it could reap huge revenue benefits. B positions are those that support A positions or maintain company value. Inattention to them could represent a significant downside risk. (Think how damaging it would be to an airline, for example, if the quality of its pilots were to drop.) Yet investing in them to the same degree as A positions is ill-advised because B positions don't offer an upside potential. (Pilots are already highly trained, so channeling resources into improving their performance would probably not create much competitive advantage.) And C positions? Companies should consider outsourcing them--or eliminating them. We all know that effective business strategy requires differentiating a firm's products and services in ways that create value for customers. Accomplishing this requires a differentiated workforce strategy, as well.

  11. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions combining symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces an equivalent system of differential equations to a system in a given model. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
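
    The following is a small algebraic illustration of the kind of symbolic elimination involved, not the authors' differential-elimination pipeline: SymPy's Gröbner basis routine with a lexicographic order is used to eliminate an unobserved variable from a toy polynomial system, leaving a relation between the observed variable and the parameter.

      # Eliminate y from a toy system using a lex-order Groebner basis.
      from sympy import symbols, groebner

      x, y, k = symbols("x y k")

      # Toy steady-state system: x + y - k = 0 and x*y - 1 = 0.
      system = [x + y - k, x * y - 1]

      # With lex order y > x > k, basis elements free of y relate x to k directly.
      G = groebner(system, y, x, k, order="lex")
      eliminated = [g for g in G.exprs if not g.has(y)]
      print(eliminated)    # e.g. [x**2 - k*x + 1]: the y-free constraint on x and k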

  12. Computer algorithms and applications used to assist the evaluation and treatment of adolescent idiopathic scoliosis: a review of published articles 2000-2009.

    PubMed

    Phan, Philippe; Mezghani, Neila; Aubin, Carl-Éric; de Guise, Jacques A; Labelle, Hubert

    2011-07-01

    Adolescent idiopathic scoliosis (AIS) is a complex spinal deformity whose assessment and treatment present many challenges. Computer applications have been developed to assist clinicians. A literature review on computer applications used in AIS evaluation and treatment has been undertaken. The algorithms used, their accuracy and clinical usability were analyzed. Computer applications have been used to create new classifications for AIS based on 2D and 3D features, assess scoliosis severity or risk of progression, and assist bracing and surgical treatment. It was found that classification accuracy could be improved using computer algorithms, that AIS patient follow-up and screening could be done using surface topography (thereby limiting radiation exposure), and that bracing and surgical treatment could be optimized using simulations. Yet few computer applications are routinely used in clinics. With the development of 3D imaging and databases, huge amounts of clinical and geometrical data need to be taken into consideration when researching and managing AIS. Computer applications based on advanced algorithms will be able to handle tasks that could otherwise not be done, which can possibly improve AIS patients' management. Clinically oriented applications and evidence that they can improve current care will be required for their integration in the clinical setting.

  13. Workflow Management Systems for Molecular Dynamics on Leadership Computers

    NASA Astrophysics Data System (ADS)

    Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu

    Molecular Dynamics (MD) simulations play an important role in a range of disciplines from Material Science to Biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.

  14. A new taxonomy for distributed computer systems based upon operating system structure

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1985-01-01

    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources, themselves, are too diversified to provide a consistent classification, the structure upon which resources are built and shared are examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers) and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.

  15. Discussing sexual and relationship health with young people in a children's hospital: evaluation of a computer-based resource.

    PubMed

    Bray, Lucy; Sanders, Caroline; McKenna, Jacqueline

    2013-12-01

    To investigate health professionals' evaluation of a computer-based resource designed to improve discussions about sexual and relationship health with young people. Evidence suggests that some health professionals can experience discomfort discussing sexual health and relationship issues with young people. Professionals within hospital settings should have the knowledge, competencies and skills to be able to ask young people sexual health questions and provide accurate sexual health education. Despite some educational material being available for community and adult services, there are no resources available, which are directly relevant to holding opportunistic discussions with young people within an acute children's hospital. A descriptive survey design. One hundred and fourteen health professionals from a children's hospital in the UK were involved in evaluating a computer-based resource. All completed an online questionnaire survey comprising of closed and open questions. The health professionals reported that the computer-based resource had a positive influence on their knowledge and clinical practice. The videos as well as the concise nature of the resource were evaluated highly. Learning was facilitated by professionals being able to control their learning through rerunning and accessing the resource on numerous occasions. An engaging, accessible computer-based resource has the capability to positively impact on health professionals' knowledge of, and skills in, starting and holding sexual health conversations with young people accessing a children's hospital. Health professionals working with children and young people value accessible, relevant and short computer-based training. This can facilitate knowledge and skill acquisition despite variation in working patterns. Improving the knowledge and skills of professionals working with young people to facilitate appropriate yet opportunistic sexual health discussions is important within the public health agenda.

  16. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing.

  17. Learning with Computers. AECA Resource Book Series, Volume 3, Number 2.

    ERIC Educational Resources Information Center

    Elliott, Alison

    1996-01-01

    Research has supported the idea that the use of computers in the education of young children promotes social interaction and academic achievement. This resource booklet provides an introduction to computers in early childhood settings to enrich learning opportunities and provides guidance to teachers to find developmentally appropriate software…

  18. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    NASA Astrophysics Data System (ADS)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side to virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  19. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    PubMed

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
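
    The sketch below illustrates the revenue-sharing step mentioned above: exact Shapley values for a tiny coalition of resource providers. The characteristic function (revenue earned by each subset of providers) is invented purely for illustration.

      # Shapley-value revenue split for cooperating resource providers.
      from itertools import permutations

      providers = ["phone_A", "laptop_B", "edge_C"]

      def revenue(coalition):
          """Hypothetical revenue (value) generated by a set of cooperating providers."""
          table = {
              frozenset(): 0, frozenset({"phone_A"}): 2, frozenset({"laptop_B"}): 3,
              frozenset({"edge_C"}): 4, frozenset({"phone_A", "laptop_B"}): 7,
              frozenset({"phone_A", "edge_C"}): 8, frozenset({"laptop_B", "edge_C"}): 9,
              frozenset(providers): 14,
          }
          return table[frozenset(coalition)]

      def shapley_values(players, value):
          shares = dict.fromkeys(players, 0.0)
          orders = list(permutations(players))
          for order in orders:                       # average marginal contribution
              coalition = set()
              for p in order:
                  before = value(coalition)
                  coalition.add(p)
                  shares[p] += (value(coalition) - before) / len(orders)
          return shares

      print(shapley_values(providers, revenue))      # fair split of the 14 revenue units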

  20. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing

    PubMed Central

    Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network. PMID:28030553

  1. Mining semantic networks of bioinformatics e-resources from the literature

    PubMed Central

    2011-01-01

    Background: There have been a number of recent efforts (e.g. BioCatalogue, BioMoby) to systematically catalogue bioinformatics tools, services and datasets. These efforts rely on manual curation, making it difficult to cope with the huge influx of various electronic resources that have been provided by the bioinformatics community. We present a text mining approach that utilises the literature to automatically extract descriptions and semantically profile bioinformatics resources to make them available for resource discovery and exploration through semantic networks that contain related resources. Results: The method identifies the mentions of resources in the literature and assigns a set of co-occurring terminological entities (descriptors) to represent them. We have processed 2,691 full-text bioinformatics articles and extracted profiles of 12,452 resources containing associated descriptors with binary and tf*idf weights. Since such representations are typically sparse (on average 13.77 features per resource), we used lexical kernel metrics to identify semantically related resources via descriptor smoothing. Resources are then clustered or linked into semantic networks, providing the users (bioinformaticians, curators and service/tool crawlers) with a possibility to explore algorithms, tools, services and datasets based on their relatedness. Manual exploration of links between a set of 18 well-known bioinformatics resources suggests that the method was able to identify and group semantically related entities. Conclusions: The results have shown that the method can reconstruct interesting functional links between resources (e.g. linking data types and algorithms), in particular when tf*idf-like weights are used for profiling. This demonstrates the potential of combining literature mining and simple lexical kernel methods to model relatedness between resource descriptors, in particular when there are few features, thus potentially improving the resource description, discovery and exploration process. The resource profiles are available at http://gnode1.mib.man.ac.uk/bioinf/semnets.html PMID:21388573
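
    The profiling step described above assigns tf*idf-weighted descriptors to each resource and then measures relatedness between the resulting sparse profiles. As an illustration only, the sketch below applies scikit-learn's TfidfVectorizer and cosine similarity to a few invented descriptor strings; it is not the paper's pipeline, and the resource names and descriptors are hypothetical.

        # Sketch: tf*idf profiles of resource descriptions plus cosine-based relatedness.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        profiles = {
            "BLAST":        "sequence alignment similarity search protein nucleotide",
            "ClustalW":     "multiple sequence alignment phylogeny protein",
            "BioCatalogue": "web service registry curation annotation",
        }

        names = list(profiles)
        X = TfidfVectorizer().fit_transform([profiles[n] for n in names])
        S = cosine_similarity(X)

        for i, a in enumerate(names):
            for j, b in enumerate(names):
                if i < j:
                    print(f"{a} ~ {b}: {S[i, j]:.2f}")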

  2. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of a massive number of computers, so that application systems can obtain computing power, storage space and software services according to their demand. It concentrates all the computing resources and manages them automatically through software, without human intervention. This frees application providers from tedious operational details and lets them focus on their business, which favours innovation and reduces cost. The ultimate goal of cloud computing is to provide computation, services and applications as a public utility, so that people can use computing resources just as they use water, electricity, gas and the telephone. Currently, the understanding of cloud computing is still developing and changing, and there is as yet no unanimous definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS and IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM and other companies, summarizes the basic characteristics of cloud computing, and focuses on key technologies such as data storage, data management, virtualization and the programming model.

  3. Hard-real-time resource management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Gat, E.

    2000-01-01

    This paper describes tickets, a computational mechanism for hard-real-time autonomous resource management. Autonomous spacecraft control can be considered abstractly as a computational process whose outputs are spacecraft commands.

  4. Synchronization of Finite State Shared Resources

    DTIC Science & Technology

    1976-03-01

    AFOSR-TR- [report number illegible]. Synchronization of Finite State Shared Resources. Edward A. Schneider, Department of Computer Science. Abstract (recoverable fragment): the problem of synchronizing a set of operations defined on a shared resource ...

  5. Radar Control Optimal Resource Allocation

    DTIC Science & Technology

    2015-07-13

    other tunable parameters of radars [17, 18]. Such radar resource scheduling usually demands massive computation. Even myopic [...] reduced validity of the optimal choice of radar resources. In the non-myopic context, the computational problem becomes exponentially more difficult [...]. The closed-form expression for the optimal switching time t* (Eq. (19) in the source) is not legibly recoverable; we are only interested in t* > 1, and solving the inequality we obtain the ...

  6. Performance Evaluation of Resource Management in Cloud Computing Environments.

    PubMed

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
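
    The module evaluated above changes the allocated resource on the fly, with a corresponding change in price. The abstract does not disclose the actual policy, so the sketch below only illustrates the general idea of threshold-based scaling tied to a per-unit price; all thresholds, step sizes and prices are hypothetical.

        # Illustrative sketch of on-the-fly resource scaling with a matching price change.
        def rescale(cpus, utilization, price_per_cpu_hour=0.05,
                    upper=0.80, lower=0.30, step=2, max_cpus=64, min_cpus=2):
            """Return (new_allocation, new_hourly_price) for one monitoring interval."""
            if utilization > upper and cpus < max_cpus:
                cpus = min(cpus + step, max_cpus)      # scale up to protect the SLA
            elif utilization < lower and cpus > min_cpus:
                cpus = max(cpus - step, min_cpus)      # scale down to cut the client's cost
            return cpus, cpus * price_per_cpu_hour

        print(rescale(8, 0.92))   # high load: allocation and price go up
        print(rescale(8, 0.10))   # low load: allocation and price go down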

  7. Performance Evaluation of Resource Management in Cloud Computing Environments

    PubMed Central

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  8. Breaking the hype cycle: using the computer effectively with learners with intellectual disabilities.

    PubMed

    Lloyd, Jan; Moni, Karen B; Jobling, Anne

    2006-06-01

    There has been huge growth in the use of information technology (IT) in classrooms for learners of all ages. It has been suggested that computers in the classroom encourage independent and self-paced learning, provide immediate feedback and improve self-motivation and self-confidence. Concurrently, there is increasing interest in the role of technology in educational programs for individuals with intellectual disabilities. However, although many claims are made about the benefits of computers and software packages, there is limited evidence-based information to support these claims. Researchers are now starting to look at the specific instructional design features that are hypothesised to facilitate educational outcomes, rather than over-emphasising graphics and sounds. Research undertaken as part of a post-school program (Latch-On: Literacy and Technology - Hands On) at the University of Queensland investigated the use of computers by young adults with intellectual disabilities. The aims of the research reported in this paper were to address the challenges identified in the 'hype' surrounding different pieces of educational software and to develop a means of systematically analysing software for use in teaching programs.

  9. Integrated Geo Hazard Management System in Cloud Computing Technology

    NASA Astrophysics Data System (ADS)

    Hanifah, M. I. M.; Omar, R. C.; Khalid, N. H. N.; Ismail, A.; Mustapha, I. S.; Baharuddin, I. N. Z.; Roslan, R.; Zalam, W. M. Z.

    2016-11-01

    Geo-hazards can result in reduced environmental health and huge economic losses, especially in mountainous areas. In order to mitigate geo-hazards effectively, cloud computing technology is introduced for managing the geo-hazard database. Cloud computing technology and its services are capable of providing stakeholders with geo-hazard information in near real time for effective environmental management and decision-making. The UNITEN Integrated Geo Hazard Management System comprises the network management and operation needed to monitor geo-hazard disasters, especially landslides, in our study area at the Kelantan River Basin and the boundary between Hulu Kelantan and Hulu Terengganu. The system provides an easily managed, flexible measuring system whose data management operates autonomously and which can collect data and be controlled remotely by commands using "cloud" computing. This paper aims to document the above relationship by identifying the special features and needs associated with effective geo-hazard database management using a "cloud system". The system will later be used as part of the development activities and should help minimize the frequency of geo-hazards and the risk in the research area.

  10. The application of dynamic programming in production planning

    NASA Astrophysics Data System (ADS)

    Wu, Run

    2017-05-01

    Nowadays, with the popularity of computers, various industries and fields widely apply computer information technology, which creates huge demand for a variety of application software. In order to develop software that meets various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand, but also maximizes the benefits and keeps the overhead as small as possible. As one of the common algorithmic techniques, dynamic programming is used to solve problems that exhibit optimal substructure. When solving problems with a large number of sub-problems that require repetitive calculations, the ordinary recursive method consumes exponential time, whereas a dynamic programming algorithm can reduce the time complexity to the polynomial level; dynamic programming is therefore very efficient compared with other approaches, reducing the computational complexity while enriching the computational results. In this paper, we expound the concept, basic elements, properties, core idea, solution steps and difficulties of dynamic programming and, in addition, establish a dynamic programming model of the production planning problem.
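
    As a concrete illustration of the kind of model the paper sets up, the sketch below solves a small single-product production planning (lot-sizing) problem by memoized dynamic programming; the demands and cost parameters are invented, and the paper's own model is not reproduced in the abstract.

        # Sketch: dynamic programming for a small single-product production plan.
        from functools import lru_cache

        demand = [3, 2, 4, 2]     # units required in each period (made-up numbers)
        max_stock = 6             # warehouse capacity
        prod_cost = 2             # cost per unit produced
        setup_cost = 5            # fixed cost whenever anything is produced
        hold_cost = 1             # cost per unit carried to the next period

        @lru_cache(maxsize=None)
        def best(t, stock):
            """Minimum cost of meeting demand from period t onward, given current stock."""
            if t == len(demand):
                return 0
            options = []
            for produce in range(max_stock + demand[t] - stock + 1):
                end_stock = stock + produce - demand[t]
                if 0 <= end_stock <= max_stock:
                    cost = ((setup_cost if produce else 0)
                            + prod_cost * produce + hold_cost * end_stock)
                    options.append(cost + best(t + 1, end_stock))
            return min(options)

        print(best(0, 0))   # minimum total cost of satisfying all demand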

  11. A combined registration and finite element analysis method for fast estimation of intraoperative brain shift; phantom and animal model study.

    PubMed

    Mohammadi, Amrollah; Ahmadian, Alireza; Rabbani, Shahram; Fattahi, Ehsan; Shirani, Shapour

    2017-12-01

    Finite element models for estimation of intraoperative brain shift suffer from huge computational cost. In these models, image registration and finite element analysis are two time-consuming processes. The proposed method is an improved version of our previously developed Finite Element Drift (FED) registration algorithm. In this work the registration process is combined with the finite element analysis. In the Combined FED (CFED), the deformation of the whole-brain mesh is iteratively calculated by geometrical extension of a local load vector which is computed by FED. While the processing time of the FED-based method including registration and finite element analysis was about 70 s, the computation time of the CFED was about 3.2 s. The computational cost of CFED is almost 50% less than that of similar state-of-the-art brain shift estimators based on finite element models. The proposed combination of registration and structural analysis can make the calculation of brain deformation much faster. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Symmetrical compression distance for arrhythmia discrimination in cloud-based big-data services.

    PubMed

    Lillo-Castellano, J M; Mora-Jiménez, I; Santiago-Mozos, R; Chavarría-Asso, F; Cano-González, A; García-Alberola, A; Rojo-Álvarez, J L

    2015-07-01

    The current development of cloud computing is completely changing the paradigm of data knowledge extraction in huge databases. An example of this technology in the cardiac arrhythmia field is the SCOOP platform, a national-level scientific cloud-based big data service for implantable cardioverter defibrillators. In this scenario, we here propose a new methodology for automatic classification of intracardiac electrograms (EGMs) in a cloud computing system, designed for minimal signal preprocessing. A new compression-based similarity measure (CSM), the so-called weighted fast compression distance, is created for low computational burden; it provides better performance when compared with other CSMs in the literature. Using simple machine learning techniques, a set of 6848 EGMs extracted from the SCOOP platform were classified into seven cardiac arrhythmia classes and one noise class, reaching nearly 90% accuracy when previous patient arrhythmia information was available and 63% otherwise, hence exceeding in all cases the classification accuracy provided by the majority class. Results show that this methodology can be used as a high-quality service of cloud computing, providing support to physicians for improving the knowledge on patient diagnosis.
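
    The classifier above is driven by a compression-based similarity measure. The weighted fast compression distance itself is not reproduced in the abstract, so the sketch below shows the classic normalized compression distance (NCD) that such measures build on, using zlib and toy byte strings in place of real electrograms.

        # Sketch: normalized compression distance between toy "signal" byte strings.
        import zlib

        def c(b: bytes) -> int:
            return len(zlib.compress(b, 9))

        def ncd(x: bytes, y: bytes) -> float:
            cx, cy, cxy = c(x), c(y), c(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        beat_a = b"010203040506070605040302" * 20   # two similar repetitive "beats"
        beat_b = b"010203040506070605040303" * 20
        noise = bytes(range(256)) * 2               # an incompressible segment

        print(round(ncd(beat_a, beat_b), 3))   # small distance: similar content
        print(round(ncd(beat_a, noise), 3))    # larger distance: dissimilar content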

  13. Eurogrid: a new glideinWMS based portal for CDF data analysis

    NASA Astrophysics Data System (ADS)

    Amerio, S.; Benjamin, D.; Dost, J.; Compostella, G.; Lucchesi, D.; Sfiligoi, I.

    2012-12-01

    The CDF experiment at Fermilab ended its Run-II phase in September 2011 after 11 years of operations and 10 fb-1 of collected data. The CDF computing model is based on a Central Analysis Farm (CAF) consisting of local computing and storage resources, supported by OSG and LCG resources accessed through dedicated portals. At the beginning of 2011 a new portal, Eurogrid, was developed to effectively exploit computing and disk resources in Europe: a dedicated farm and storage area at the TIER-1 CNAF computing center in Italy, and additional LCG computing resources at different TIER-2 sites in Italy, Spain, Germany and France, are accessed through a common interface. The goal of this project is to develop a portal that is easy to integrate into the existing CDF computing model, completely transparent to the user and requiring a minimum amount of maintenance support by the CDF collaboration. In this paper we review the implementation of this new portal and its performance in the first months of usage. Eurogrid is based on the glideinWMS software, a glidein-based Workload Management System (WMS) that works on top of Condor. As the CDF CAF is based on Condor, the choice of the glideinWMS software was natural and the implementation seamless. Thanks to the pilot jobs, user-specific requirements and site resources are matched in a very efficient way, completely transparent to the users. In official use since June 2011, Eurogrid effectively complements and supports CDF computing resources, offering an optimal solution for the future in terms of the manpower required for administration, support and development.

  14. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.

  15. A high-order spatial filter for a cubed-sphere spectral element model

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Gyu; Cheong, Hyeong-Bin

    2017-04-01

    A high-order spatial filter is developed for the spectral-element-method dynamical core on the cubed-sphere grid which employs the Gauss-Lobatto Lagrange interpolating polynomials (GLLIP) as orthogonal basis functions. The filter equation is a high-order Helmholtz equation which corresponds to the implicit time-differencing of a diffusion equation employing a high-order Laplacian. The Laplacian operator is discretized within a cell, which is the building block of the cubed-sphere grid and consists of the Gauss-Lobatto grid. When discretizing a high-order Laplacian, the requirement of C0 continuity along the cell boundaries means that grid points in neighboring cells must be used for the target cell; the number of neighboring cells involved is nearly quadratically proportional to the filter order. The discrete Helmholtz equation yields a huge, highly sparse matrix equation of size N*N, with N the total number of grid points on the globe. The number of nonzero entries is also almost in quadratic proportion to the filter order. Filtering is accomplished by solving this huge matrix equation. While requiring significant computing time, the global matrix solution provides a filtered field free of discontinuities along the cell boundaries. To achieve computational efficiency and accuracy at the same time, the solution of the matrix equation was obtained by accounting only for a finite number of adjacent cells. This is called a local-domain filter. It was shown that including 5*5 cells in the local-domain filter is sufficient to remove the numerical noise near the grid scale, giving the same accuracy as the global-domain solution while reducing the computing time to a considerably lower level. The high-order filter was evaluated using standard test cases including the baroclinic instability of the zonal flow. Results indicated that the filter removes grid-scale numerical noise better than explicit high-order viscosity. It was also shown that the filter can be easily implemented on distributed-memory parallel computers with a desirable scalability.
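
    The filter above amounts to solving an implicit high-order Helmholtz (hyperdiffusion) equation of the form (1 + nu * Laplacian^p) u_f = u. As a hedged one-dimensional analogue only, the sketch below applies such an implicit filter spectrally on a periodic grid; it does not reproduce the cubed-sphere spectral-element discretization or the local-domain solver, and the order and damping coefficient are illustrative choices.

        # Sketch: implicit high-order (hyperdiffusion) filtering on a 1-D periodic grid.
        import numpy as np

        n = 256
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        u = np.sin(3.0 * x) + 0.2 * np.random.randn(n)   # smooth field plus grid-scale noise

        k = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi   # integer wavenumbers
        p, nu = 4, 1.0e-8                                        # filter order 2p, damping

        u_f = np.fft.ifft(np.fft.fft(u) / (1.0 + nu * np.abs(k) ** (2 * p))).real

        high = np.abs(k) > 100                      # grid-scale part of the spectrum
        print(np.abs(np.fft.fft(u))[high].max(), "->", np.abs(np.fft.fft(u_f))[high].max())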

  16. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    NASA Astrophysics Data System (ADS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which proved successful and still ensures its goals. However, Grid technology has not spread much beyond these communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static splitting of the computing resources in the Tier-1 farm, while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition, according to suitable policies for the request and release of computing resources. Nodes requested into the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as worker nodes in the batch farm to cloud compute nodes made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation and its integration with our current batch system, LSF.
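
    The core idea described above is that hosts switch roles between the batch partition and the cloud partition according to demand. The sketch below is only a schematic of such a role-switching policy; the thresholds and host names are hypothetical, and the actual CNAF implementation sits on top of LSF and OpenStack rather than on code like this.

        # Schematic role-switching policy for a dynamically partitioned farm.
        def rebalance(roles, batch_queue_depth, cloud_requests, batch_min=2):
            """Return the new role ("batch" or "cloud") of each host given current demand."""
            roles = dict(roles)
            batch = [h for h, r in roles.items() if r == "batch"]
            cloud = [h for h, r in roles.items() if r == "cloud"]

            if cloud_requests > len(cloud) and len(batch) > batch_min:
                roles[batch[-1]] = "cloud"    # drain a batch node and hand it to the cloud
            elif batch_queue_depth > 0 and cloud_requests < len(cloud):
                roles[cloud[-1]] = "batch"    # return an idle cloud node to the batch farm
            return roles

        hosts = {"wn01": "batch", "wn02": "batch", "wn03": "batch", "wn04": "cloud"}
        print(rebalance(hosts, batch_queue_depth=0, cloud_requests=2))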

  17. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    USGS Publications Warehouse

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) showed a decrease in computing time of approximately 96.6%. With a single multicore compute node (the bottom result), computing time decreased by 81.8% relative to serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
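
    Monte Carlo replicates of a state-and-transition model are independent of one another, which is why they parallelize so well across cores or, analogously, across the nodes of a high-throughput cluster. The sketch below illustrates this with Python's multiprocessing pool and a toy two-state transition model; it is not a SyncroSim run, and the transition probability is invented.

        # Sketch: embarrassingly parallel Monte Carlo replicates on a multicore machine.
        import random
        from multiprocessing import Pool

        def one_replicate(seed):
            """Toy replicate: years until a sagebrush cell transitions to juniper."""
            rng = random.Random(seed)
            state, years = "sagebrush", 0
            while state != "juniper" and years < 500:
                years += 1
                if rng.random() < 0.01:       # toy annual transition probability
                    state = "juniper"
            return years

        if __name__ == "__main__":
            with Pool() as pool:
                results = pool.map(one_replicate, range(200))   # 200 independent replicates
            print(sum(results) / len(results), "mean years to transition")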

  18. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan. Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute the looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm whose measure of complexity is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · O(maxflow(N))), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x, and O(maxflow(N)) is the measure of complexity (and thus of cost) of a maximum-flow algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable and value-ordering heuristics that exploit the properties of resource envelopes more directly.
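
    The envelope computation above relies on a maximum-flow subroutine applied to an auxiliary flow network derived from the plan's temporal and resource constraints. The sketch below only shows such a max-flow call on a tiny hand-made network using networkx; it does not construct the auxiliary network of the algorithm itself.

        # Sketch: the kind of maximum-flow computation used as a subroutine.
        import networkx as nx

        G = nx.DiGraph()
        G.add_edge("s", "a", capacity=3)   # capacities stand in for resource contributions
        G.add_edge("s", "b", capacity=2)
        G.add_edge("a", "b", capacity=1)
        G.add_edge("a", "t", capacity=2)
        G.add_edge("b", "t", capacity=3)

        flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
        print(flow_value)        # 5 for this network
        print(flow_dict["s"])    # how the flow leaves the source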

  19. Cloudbus Toolkit for Market-Oriented Cloud Computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.

  20. Super-sensitive two-wavelength fringe projection profilometry with 2-sensitivities temporal unwrapping

    NASA Astrophysics Data System (ADS)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2018-07-01

    Since the early 1970s, optical two-wavelength phase-metrology (TWPM) has been used in a wide variety of experimental setups. In TWPM one may compute the phase-sum and the phase-difference of two close phase measurements. Early TWPM optically computed the phase-difference and phase-sum by double-exposure holography. Soon after, however, TWPM became almost synonymous with calculating the phase-difference only, and the more sensitive phase-sum was largely forgotten. The standard application of phase-difference TWPM is to extend the phase measurement depth without phase-unwrapping for discontinuous phase-objects. This phase-difference, while non-wrapped, decreases however the signal-to-noise ratio (SNR) of the estimated phase. On the other hand, the phase-sum increases the phase sensitivity and the SNR of the estimated phase. In spite of these two great advantages, the use of the phase-sum in TWPM has been almost ignored. In this paper we review and set the stage for digital TWPM for super-sensitive phase-sum estimation. This is coupled with two-sensitivity phase-unwrapping to obtain extended-range, super-sensitive fringe-projection profilometry estimations. We mathematically prove, and experimentally show, that using the phase-sum one obtains a huge increase in SNR with respect to using the phase-difference alone. The pioneering works on double-exposure TWPM holography that used the phase-difference and phase-sum are also properly acknowledged. Finally, two experimental results from fringe-projection profilometry clearly show the huge SNR gain of the phase-sum with respect to the phase-difference, which is now also mathematically well established.
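
    As a small numerical illustration of the sensitivity argument above, the sketch below builds two synthetic wrapped phases at nearby fringe sensitivities and compares their wrapped difference and wrapped sum; the profile, sensitivities and amplitude are invented, and the full fringe-projection pipeline (demodulation and 2-sensitivity unwrapping) is not reproduced.

        # Sketch: phase-difference vs. phase-sum of two close phase measurements.
        import numpy as np

        x = np.linspace(0.0, 1.0, 500)
        height = 0.4 * np.sin(2 * np.pi * x)            # toy object profile

        phi1 = 2 * np.pi * 10 * height                  # phase at ~10-fringe sensitivity
        phi2 = 2 * np.pi * 11 * height                  # phase at ~11-fringe sensitivity
        wrap = lambda p: np.angle(np.exp(1j * p))       # wrap to (-pi, pi]

        phase_diff = wrap(wrap(phi2) - wrap(phi1))      # ~1 equivalent fringe: stays unwrapped here
        phase_sum = wrap(wrap(phi2) + wrap(phi1))       # ~21 equivalent fringes: wrapped, far more sensitive

        print(round(float(np.ptp(phase_diff)), 2), "rad span of the phase-difference")
        print(round(float(np.ptp(phase_sum)), 2), "rad span of the (wrapped) phase-sum")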

  1. Lessons learnt on the analysis of large sequence data in animal genomics.

    PubMed

    Biscarini, F; Cozzi, P; Orozco-Ter Wengel, P

    2018-04-06

    The 'omics revolution has made a large amount of sequence data available to researchers and the industry. This has had a profound impact in the field of bioinformatics, stimulating unprecedented advancements in this discipline. This is usually looked at from the perspective of human 'omics, in particular human genomics. Plant and animal genomics, however, have also been deeply influenced by next-generation sequencing technologies, with several genomics applications now popular among researchers and the breeding industry. Genomics tends to generate huge amounts of data, and genomic sequence data account for an increasing proportion of big data in biological sciences, due largely to decreasing sequencing and genotyping costs and to large-scale sequencing and resequencing projects. The analysis of big data poses a challenge to scientists, as data gathering currently takes place at a faster pace than data processing and analysis, and the associated computational burden is increasingly taxing, making even simple manipulation, visualization and transfer of data a cumbersome operation. The time consumed by the processing and analysing of huge data sets may come at the expense of data quality assessment and critical interpretation. Additionally, when analysing lots of data, something is likely to go awry (the software may crash or stop), and it can be very frustrating to track down the error. We herein review the most relevant issues related to tackling these challenges and problems, from the perspective of animal genomics, and provide researchers who lack extensive computing experience with guidelines that will help when processing large genomic data sets. © 2018 Stichting International Foundation for Animal Genetics.

  2. Wastewater: A Potential Bioenergy Resource.

    PubMed

    Prakash, Jyotsana; Sharma, Rakesh; Ray, Subhasree; Koul, Shikha; Kalia, Vipin Chandra

    2018-06-01

    Wastewaters are a rich source of nutrients for microorganisms. However, if left unattended, their biodegradation may lead to severe environmental hazards. Wastewaters can thus be utilized for the production of various value-added products including bioenergy (H2 and CH4). A number of studies have reported utilization of various wastewaters for energy production. Depending on the nature of the wastewater, different reactor configurations, wastewater and inoculum pretreatments, and co-substrate utilizations, along with other process parameters, have been studied for efficient product formation. Only a few studies have reported sequential utilization of wastewaters for H2 and CH4 production, despite its huge potential for complete waste degradation.

  3. [Methods of quantitative proteomics].

    PubMed

    Kopylov, A T; Zgoda, V G

    2007-01-01

    In modern science proteomic analysis is inseparable from other fields of systems biology. Possessing huge resources, quantitative proteomics handles colossal amounts of information on the molecular mechanisms of life. Advances in proteomics help researchers to solve complex problems of cell signaling, posttranslational modification, structure and functional homology of proteins, molecular diagnostics, etc. More than 40 different methods have been developed in proteomics for the quantitative analysis of proteins. Although each method is unique and has certain advantages and disadvantages, all of them use various isotope labels (tags). In this review we consider the most popular and effective methods, employing both chemical modifications of proteins and metabolic and enzymatic methods of isotope labeling.

  4. On Building a Search Interface Discovery System

    NASA Astrophysics Data System (ADS)

    Shestakov, Denis

    A huge portion of the Web, known as the deep Web, is accessible via search interfaces to myriads of databases on the Web. While relatively good approaches for querying the contents of web databases have recently been proposed, one cannot fully utilize them while most search interfaces remain undiscovered. Thus, the automatic recognition of search interfaces to online databases is crucial for any application accessing the deep Web. This paper describes the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep-web characterization surveys and for constructing directories of deep-web resources.

  5. Focus issue: series on computational and systems biology.

    PubMed

    Gough, Nancy R

    2011-09-06

    The application of computational biology and systems biology is yielding quantitative insight into cellular regulatory phenomena. For the month of September, Science Signaling highlights research featuring computational approaches to understanding cell signaling and investigation of signaling networks, a series of Teaching Resources from a course in systems biology, and various other articles and resources relevant to the application of computational biology and systems biology to the study of signal transduction.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The worldwide semisubmersible drilling rig fleet is approaching retirement. But replacement is not an attractive option even though dayrates are reaching record highs. In 1991, Schlumberger Sedco Forex managers decided that an alternative might exist if regulators and insurers could be convinced to extend rig life expectancy through restoration. Sedco Forex chose their No. 704 semisubmersible, an 18-year North Sea veteran, to test their process. The first step was to determine what required restoration, meaning fatigue life analysis of each weld on the huge vessel. If inspected, the task would be unacceptably time-consuming and of questionable accuracy. Instead a suite of computer programs modeled the stress seen by each weld, statistically estimated the sea states seen by the rig throughout its North Sea service and calibrated a beam-element model on which to run their computer simulations. The elastic stiffness of the structure and detailed stress analysis of each weld was performed with ANSYS, a commercially available finite-element analysis program. The use of computer codes to evaluate service life extension is described.

  7. Processing LHC data in the UK

    PubMed Central

    Colling, D.; Britton, D.; Gordon, J.; Lloyd, S.; Doyle, A.; Gronbech, P.; Coles, J.; Sansum, A.; Patrick, G.; Jones, R.; Middleton, R.; Kelsey, D.; Cass, A.; Geddes, N.; Clark, P.; Barnby, L.

    2013-01-01

    The Large Hadron Collider (LHC) is one of the greatest scientific endeavours to date. The construction of the collider itself and the experiments that collect data from it represent a huge investment, both financially and in terms of human effort, in our hope to understand the way the Universe works at a deeper level. Yet the volumes of data produced are so large that they cannot be analysed at any single computing centre. Instead, the experiments have all adopted distributed computing models based on the LHC Computing Grid. Without the correct functioning of this grid infrastructure the experiments would not be able to understand the data that they have collected. Within the UK, the Grid infrastructure needed by the experiments is provided by the GridPP project. We report on the operations, performance and contributions made to the experiments by the GridPP project during the years of 2010 and 2011—the first two significant years of the running of the LHC. PMID:23230163

  8. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved a huge success during the last years. In this paper, we investigate the capability of quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further we discuss the application of quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.
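
    The hash construction discussed above is obtained by modifying discrete-time quantum walks. As background only, the sketch below simulates a plain one-dimensional coined (Hadamard) quantum walk with NumPy; it is not the proposed quantum Hash function, and the step count, ring size and initial coin state are arbitrary choices.

        # Sketch: a coined discrete-time quantum walk on a ring of n sites.
        import numpy as np

        steps, n = 50, 101
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard coin

        state = np.zeros((n, 2), dtype=complex)         # state[position, coin]
        state[n // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin

        for _ in range(steps):
            state = state @ H.T                         # toss the coin on every site
            shifted = np.zeros_like(state)
            shifted[:, 0] = np.roll(state[:, 0], -1)    # coin 0 moves one site left
            shifted[:, 1] = np.roll(state[:, 1], +1)    # coin 1 moves one site right
            state = shifted

        prob = (np.abs(state) ** 2).sum(axis=1)
        print(round(float(prob.sum()), 6))              # stays 1: the walk is unitary
        print(int(prob.argmax()) - n // 2)              # most likely displacement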

  9. Challenges of Future High-End Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David; Kutler, Paul (Technical Monitor)

    1998-01-01

    The next major milestone in high performance computing is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, as well as comparably high I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have studied these issues is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. One important question is whether applications can be structured to perform efficiently on such systems, which are expected to incorporate many thousands of processors and deeply hierarchical memory systems. To answer these questions, advanced performance modeling techniques, including simulation of future architectures and applications, may be required. It may also be necessary to formulate "latency tolerant algorithms" and other completely new algorithmic approaches for certain applications. This talk will give an overview of these challenges.

  10. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    PubMed Central

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved a huge success during the last years. In this paper, we investigate the capability of quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further we discuss the application of quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information. PMID:26823196

  11. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption.

    PubMed

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-29

    Quantum information and quantum computation have achieved a huge success during the last years. In this paper, we investigate the capability of quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further we discuss the application of quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  12. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    PubMed

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy contributes an improvement of about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.
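
    The algorithm above maps partial raw-data contributions and reduces them into the final raw-data matrix. The sketch below mirrors only that map/reduce split in plain Python; it is a toy stand-in, not the Hadoop/HDFS implementation, and the scene, echo model and matrix sizes are invented.

        # Sketch of the map/reduce split: map partial echo contributions per scatterer
        # block, then reduce (accumulate) them into the raw-data matrix.
        from functools import reduce
        import numpy as np

        n_range, n_azimuth = 64, 64
        blocks = [np.random.rand(16, 3) for _ in range(8)]   # toy (x, y, amplitude) blocks

        def map_block(block):
            """Partial raw-data contribution of one block of scatterers (toy model)."""
            partial = np.zeros((n_azimuth, n_range), dtype=complex)
            for sx, sy, amp in block:
                partial[int(sy * (n_azimuth - 1)), int(sx * (n_range - 1))] += amp * np.exp(1j * sx)
            return partial

        raw_data = reduce(np.add, map(map_block, blocks))    # reduce: sum the partials
        print(raw_data.shape, float(np.abs(raw_data).sum()) > 0)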

  13. The Impacts of Attitudes and Engagement on Electronic Word of Mouth (eWOM) of Mobile Sensor Computing Applications.

    PubMed

    Zhao, Yu; Liu, Yide; Lai, Ivan K W; Zhang, Hongfeng; Zhang, Yi

    2016-03-18

    As one of the latest revolutions in networking technology, social networks allow users to keep connected and exchange information. Driven by rapid wireless technology development and the diffusion of mobile devices, social networks have experienced a tremendous change based on mobile sensor computing. More and more mobile sensor network applications have appeared, together with the emergence of a huge number of users. Therefore, an in-depth discussion of the human-computer interaction (HCI) issues of mobile sensor computing is required. The target of this study is to extend the discussions on HCI by examining the relationships of users' compound attitudes (i.e., affective attitude, cognitive attitude), engagement and electronic word of mouth (eWOM) behaviors in the context of mobile sensor computing. A conceptual model is developed, based on which 313 valid questionnaires are collected. The research discusses the level of impact on the eWOM of mobile sensor computing by considering user-technology issues, including the compound attitudes and engagement, which can bring valuable discussions on the HCI of mobile sensor computing in further studies. Besides, we find that user engagement plays a mediating role between the user's compound attitudes and eWOM. The research results can also help the mobile sensor computing industry to develop effective strategies and build strong consumer user-product (brand) relationships.

  14. The Impacts of Attitudes and Engagement on Electronic Word of Mouth (eWOM) of Mobile Sensor Computing Applications

    PubMed Central

    Zhao, Yu; Liu, Yide; Lai, Ivan K. W.; Zhang, Hongfeng; Zhang, Yi

    2016-01-01

    As one of the latest revolutions in networking technology, social networks allow users to keep connected and exchange information. Driven by rapid wireless technology development and the diffusion of mobile devices, social networks have experienced a tremendous change based on mobile sensor computing. More and more mobile sensor network applications have appeared, together with the emergence of a huge number of users. Therefore, an in-depth discussion of the human-computer interaction (HCI) issues of mobile sensor computing is required. The target of this study is to extend the discussions on HCI by examining the relationships of users' compound attitudes (i.e., affective attitude, cognitive attitude), engagement and electronic word of mouth (eWOM) behaviors in the context of mobile sensor computing. A conceptual model is developed, based on which 313 valid questionnaires are collected. The research discusses the level of impact on the eWOM of mobile sensor computing by considering user-technology issues, including the compound attitudes and engagement, which can bring valuable discussions on the HCI of mobile sensor computing in further studies. Besides, we find that user engagement plays a mediating role between the user's compound attitudes and eWOM. The research results can also help the mobile sensor computing industry to develop effective strategies and build strong consumer user-product (brand) relationships. PMID:26999155

  15. Applications of computer-aided text analysis in natural resources.

    Treesearch

    David N. Bengston

    2000-01-01

    Ten contributed papers describe the use of a variety of approaches to computer-aided text analysis and their application to a wide range of research questions related to natural resources and the environment. Taken together, these papers paint a picture of a growing and vital area of research on the human dimensions of natural resource management.

  16. SCANIT: centralized digitizing of forest resource maps or photographs

    Treesearch

    Elliot L. Amidon; E. Joyce Dye

    1981-01-01

    Spatial data on wildland resource maps and aerial photographs can be analyzed by computer after digitizing. SCANIT is a computerized system for encoding such data in digital form. The system, consisting of a collection of computer programs and subroutines, provides a powerful and versatile tool for a variety of resource analyses. SCANIT also may be converted easily to...

  17. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth System Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology, Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  18. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

    Background: Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute software that is pre-packaged and pre-configured. Results: We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion: The CloVR VM and associated architecture lower the barrier to entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105

  19. Global status of recycling waste solar panels: A review.

    PubMed

    Xu, Yan; Li, Jinhui; Tan, Quanyin; Peters, Anesia Lauren; Yang, Congren

    2018-05-01

    With the enormous growth in the development and utilization of solar-energy resources, the proliferation of waste solar panels has become problematic. While current research into solar panels has focused on how to improve the efficiency of production capacity, the dismantling and recycling of end-of-life (EOL) panels are seldom considered, as can be seen, for instance, in the lack of dedicated solar-panel recycling plants. EOL solar-panel recycling can effectively save natural resources and reduce the cost of production. To address the environmental conservation and resource recycling issues posed by the huge amount of waste solar panels, the status of management and recycling technologies for waste solar panels is systematically reviewed and discussed in this article. This review can provide a quantitative basis to support the recycling of PV panels, and suggests future directions for public policy makers. At present, from the technical aspect, research on solar panel recovery faces many problems, and an economically feasible and non-toxic technology still needs to be developed. Research on the end-of-life management of solar photovoltaic panels is just beginning in many countries, and there is a need for further improvement and expansion of producer responsibility. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Bioprospecting Marine Plankton

    PubMed Central

    Abida, Heni; Ruchaud, Sandrine; Rios, Laurent; Humeau, Anne; Probert, Ian; De Vargas, Colomban; Bach, Stéphane; Bowler, Chris

    2013-01-01

    The ocean dominates the surface of our planet and plays a major role in regulating the biosphere. For example, the microscopic photosynthetic organisms living within provide 50% of the oxygen we breathe, and much of our food and mineral resources are extracted from the ocean. In a time of ecological crisis and major changes in our society, it is essential to turn our attention towards the sea to find additional solutions for a sustainable future. Remarkably, while we are overexploiting many marine resources, particularly the fisheries, the planktonic compartment composed of zooplankton, phytoplankton, bacteria and viruses, represents 95% of marine biomass and yet the extent of its diversity remains largely unknown and underexploited. Consequently, the potential of plankton as a bioresource for humanity is largely untapped. Due to their diverse evolutionary backgrounds, planktonic organisms offer immense opportunities: new resources for medicine, cosmetics and food, renewable energy, and long-term solutions to mitigate climate change. Research programs aiming to exploit culture collections of marine micro-organisms as well as to prospect the huge resources of marine planktonic biodiversity in the oceans are now underway, and several bioactive extracts and purified compounds have already been identified. This review will survey and assess the current state-of-the-art and will propose methodologies to better exploit the potential of marine plankton for drug discovery and for dermocosmetics. PMID:24240981
