Science.gov

Sample records for community cloud computing

  1. Community Cloud Computing

    NASA Astrophysics Data System (ADS)

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  2. Cloud Computing

    SciTech Connect

    Pete Beckman and Ian Foster

    2009-12-04

    Chicago Matters: Beyond Burnham (WTTW). Chicago has become a world center of "cloud computing." Argonne experts Pete Beckman and Ian Foster explain what "cloud computing" is and how you probably already use it on a daily basis.

  3. Cloud Computing Adoption and Usage in Community Colleges

    ERIC Educational Resources Information Center

    Behrend, Tara S.; Wiebe, Eric N.; London, Jennifer E.; Johnson, Emily C.

    2011-01-01

    Cloud computing is gaining popularity in higher education settings, but the costs and benefits of this tool have gone largely unexplored. The purpose of this study was to examine the factors that lead to technology adoption in a higher education setting. Specifically, we examined a range of predictors and outcomes relating to the acceptance of a…

  4. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560

  5. Cloud Computing Explained

    ERIC Educational Resources Information Center

    Metz, Rosalyn

    2010-01-01

    While many talk about the cloud, few actually understand it. Three organizations' definitions come to the forefront when defining the cloud: Gartner, Forrester, and the National Institute of Standards and Technology (NIST). Although both Gartner and Forrester provide definitions of cloud computing, the NIST definition is concise and uses…

  6. Computer animation of clouds

    SciTech Connect

    Max, N.

    1994-01-28

    Computer animation of outdoor scenes is enhanced by realistic clouds. I will discuss several different modeling and rendering schemes for clouds, and show how they evolved in my animation work. These include transparency-textured clouds on a 2-D plane, smooth shaded or textured 3-D cloud surfaces, and 3-D volume rendering. For the volume rendering, I will present various illumination schemes, including the density emitter, single scattering, and multiple scattering models.

  7. Running climate model in the commercial cloud computing environment: A case study using Community Earth System Model (CESM)

    NASA Astrophysics Data System (ADS)

    Chen, X.; Huang, X.; Jiao, C.; Flanner, M.; Raeker, T.; Palen, B.

    2015-12-01

    Numerical models are the major tools used in studies of climate change and climate projection. Because of the enormous complexity involved in such climate models, they are usually run at supercomputing centers or at least on high-performance computing clusters. The cloud computing environment, however, offers an alternative option for running climate models. Compared to the traditional supercomputing environment, cloud computing offers more flexibility but also extra technical challenges. Using the CESM (Community Earth System Model) as a case study, we test the feasibility of running the climate model in a cloud-based virtual computing environment. Using the cloud computing resources offered by the Amazon Web Services (AWS) Elastic Compute Cloud (EC2) and StarCluster, an open-source software package that can set up virtual clusters, we investigate how to run the CESM on AWS EC2 and the efficiency of parallelization of the CESM on the AWS virtual cluster. We created a virtual computing cluster using StarCluster on AWS EC2 instances and carried out CESM simulations on this virtual cluster. We then compared the wall-clock time for one year of CESM simulation on the virtual cluster with that on a local high-performance computing (HPC) cluster with InfiniBand connections operated by the University of Michigan. The results show that the CESM model can be efficiently scaled with the number of CPUs on the AWS EC2 virtual cluster, and the parallelization efficiency is comparable to that on the local HPC cluster. For the standard configuration of the CESM at a spatial resolution of 1.9-degree latitude and 2.5-degree longitude, increasing the number of CPUs from 16 to 64 leads to a more than twofold reduction in wall-clock running time, and the scaling is nearly linear. Beyond 64 CPUs, communication latency starts to outweigh the savings of distributed computing and the parallelization efficiency nearly levels off.
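
    To make the quoted scaling behaviour concrete, the short sketch below computes speedup and parallel efficiency relative to the smallest CPU count in the way the abstract describes; the timing numbers are illustrative assumptions, not values reported by the study.

    ```python
    # Minimal sketch (hypothetical numbers): parallel speedup and efficiency
    # for CESM-style scaling runs, relative to the smallest CPU count tested.
    # Wall-clock hours per simulated year are assumed, not measured values.
    wall_clock_hours = {16: 40.0, 32: 21.0, 64: 11.5, 128: 8.0}

    base_cpus = min(wall_clock_hours)
    base_time = wall_clock_hours[base_cpus]

    for cpus, hours in sorted(wall_clock_hours.items()):
        speedup = base_time / hours          # how much faster than the base run
        ideal = cpus / base_cpus             # perfect linear scaling
        efficiency = speedup / ideal
        print(f"{cpus:4d} CPUs: speedup {speedup:5.2f} "
              f"(ideal {ideal:5.2f}), efficiency {efficiency:5.1%}")
    ```

    With numbers like these, 16 to 64 CPUs gives a more-than-twofold reduction in wall-clock time, while efficiency drops off beyond 64 CPUs, matching the pattern the abstract reports.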

  8. Cloud computing security.

    SciTech Connect

    Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

    2010-10-01

    Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to address the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

  9. Cloud computing basics for librarians.

    PubMed

    Hoy, Matthew B

    2012-01-01

    "Cloud computing" is the name for the recent trend of moving software and computing resources to an online, shared-service model. This article briefly defines cloud computing, discusses different models, explores the advantages and disadvantages, and describes some of the ways cloud computing can be used in libraries. Examples of cloud services are included at the end of the article. PMID:22289098

  10. Computing in the Clouds

    ERIC Educational Resources Information Center

    Johnson, Doug

    2010-01-01

    Web-based applications offer teachers, students, and school districts a convenient way to accomplish a wide range of tasks, from accounting to word processing, for free. Cloud computing has the potential to offer staff and students better services at a lower cost than the technology deployment models they're using now. Saving money and improving…

  11. The Basics of Cloud Computing

    ERIC Educational Resources Information Center

    Kaestner, Rich

    2012-01-01

    Most school business officials have heard the term "cloud computing" bandied about and may have some idea of what the term means. In fact, they likely already leverage a cloud-computing solution somewhere within their district. But what does cloud computing really mean? This brief article puts a bit of definition behind the term and helps one…

  12. Cloud Computing Security Issue: Survey

    NASA Astrophysics Data System (ADS)

    Kamal, Shailza; Kaur, Rajpreet

    2011-12-01

    Cloud computing has been a growing field in the IT industry since it was proposed by IBM in 2007. Other companies such as Google, Amazon, and Microsoft provide further cloud computing products. Cloud computing is Internet-based computing that shares resources and information on demand. It provides services such as SaaS, IaaS and PaaS. The services and resources are shared through virtualization, which runs multiple applications on the cloud. This discussion surveys the security challenges and issues in cloud computing and describes some standards and protocols that show how security can be managed.

  13. Cloud computing for geophysical applications (Invited)

    NASA Astrophysics Data System (ADS)

    Zhizhin, M.; Kihn, E. A.; Mishin, D.; Medvedev, D.; Weigel, R. S.

    2010-12-01

    Cloud computing offers a scalable on-demand resource allocation model for the evolving needs of data-intensive geophysical applications, where computational needs in CPU and storage can vary over time depending on the modeling or field campaign. Separate, sometimes incompatible cloud platforms and services are already available from major computing vendors (Amazon AWS, Microsoft Azure, Google App Engine), government agencies (NASA Nebula) and the Open Source community (Eucalyptus). Multiple cloud platforms with layered virtualization patterns (hardware-, platform-, software-, data- or everything-as-a-service) provide a feature-rich environment and encourage experimentation with distributed data modeling, processing and storage. However, application and especially database development in the Cloud is different from the desktop and the compute cluster. In this presentation we will review scientific cloud applications relevant to geophysical research and present our results in building software components and cloud services for a virtual geophysical data center. We will discuss in depth the economy, scalability and reliability of the distributed array and image data stores, synchronous and asynchronous RESTful services to access and model georeferenced data, virtual observatory services for metadata management, and data visualization for web applications in the Cloud.

  14. Trusted computing strengthens cloud authentication.

    PubMed

    Ghazizadeh, Eghbal; Zamani, Mazdak; Ab Manan, Jamalul-lail; Alizadeh, Mojtaba

    2014-01-01

    Cloud computing is a new generation of technology which is designed to provide for commercial necessities, solve IT management issues, and run the appropriate applications. Another entry on the list of cloud functions which has traditionally been handled internally is Identity Access Management (IAM). Companies encounter IAM security challenges as they adopt more technologies. Trusted multi-tenancy and trusted computing based on a Trusted Platform Module (TPM) are great technologies for solving the trust and security concerns in the cloud identity environment. Single sign-on (SSO) and OpenID have been released to solve security and privacy problems for cloud identity. This paper proposes the use of trusted computing, Federated Identity Management, and OpenID Web SSO to solve identity theft in the cloud. In addition, the proposed model has been simulated in a .NET environment. Security analysis, simulation, and the BLP confidentiality model are three ways used to evaluate and analyze our proposed model. PMID:24701149

  15. Trusted Computing Strengthens Cloud Authentication

    PubMed Central

    2014-01-01

    Cloud computing is a new generation of technology which is designed to provide for commercial necessities, solve IT management issues, and run the appropriate applications. Another entry on the list of cloud functions which has traditionally been handled internally is Identity Access Management (IAM). Companies encounter IAM security challenges as they adopt more technologies. Trusted multi-tenancy and trusted computing based on a Trusted Platform Module (TPM) are great technologies for solving the trust and security concerns in the cloud identity environment. Single sign-on (SSO) and OpenID have been released to solve security and privacy problems for cloud identity. This paper proposes the use of trusted computing, Federated Identity Management, and OpenID Web SSO to solve identity theft in the cloud. In addition, the proposed model has been simulated in a .NET environment. Security analysis, simulation, and the BLP confidentiality model are three ways used to evaluate and analyze our proposed model. PMID:24701149

  16. IBM Cloud Computing Powering a Smarter Planet

    NASA Astrophysics Data System (ADS)

    Zhu, Jinzy; Fang, Xing; Guo, Zhe; Niu, Meng Hua; Cao, Fan; Yue, Shuang; Liu, Qin Yu

    With the increasing need for intelligent systems supporting the world's businesses, Cloud Computing has emerged as a dominant trend providing a dynamic infrastructure to make such intelligence possible. This article introduces how to build a smarter planet with cloud computing technology. First, it explains why we need the cloud and traces the evolution of cloud technology. Second, it analyzes the value of cloud computing and how to apply cloud technology. Finally, it predicts the future of the cloud in the smarter planet.

  17. The Community Cloud Atlas - Building an Informed Cloud Watching Community

    NASA Astrophysics Data System (ADS)

    Guy, N.; Rowe, A.

    2014-12-01

    The sky is dynamic, from long lasting cloud systems to ethereal, fleeting formations. After years of observing the sky and growing our personal collections of cloud photos, we decided to take to social media to share pictures, as well as build and educate a community of cloud enthusiasts. We began a Facebook page, the Community Cloud Atlas, described as "...the place to show off your pictures of the sky, identify clouds, and to discuss how specific cloud types form and what they can tell you about current and future weather." Our main goal has been to encourage others to share their pictures, while we describe the scenes from a meteorological perspective and reach out to the general public to facilitate a deeper understanding of the sky. Nearly 16 months later, we have over 1400 "likes," spanning 45 countries with ages ranging from 13 to over 65. We have a consistent stream of submissions; so many that we decided to start a corresponding blog to better organize the photos, provide more detailed explanations, and reach a bigger audience. Feedback from users has been positive in support of not only sharing cloud pictures, but also to "learn the science as well as admiring" the clouds. As one community member stated, "This is not 'just' a place to share some lovely pictures." We have attempted to blend our social media presence with providing an educational resource, and we are encouraged by the response we have received. Our Atlas has been informally implemented into classrooms, ranging from a 6th grade science class to Meteorology courses at universities. NOVA's recent Cloud Lab also made use of our Atlas as a supply of categorized pictures. Our ongoing goal is to not only continue to increase understanding and appreciation of the sky among the public, but to provide an increasingly useful tool for educators. We continue to explore different social media options to interact with the public and provide easier content submission, as well as software options for

  18. Cloud computing in medical imaging.

    PubMed

    Kagadis, George C; Kloukinas, Christos; Moore, Kevin; Philbin, Jim; Papadimitroulas, Panagiotis; Alexakos, Christos; Nagy, Paul G; Visvikis, Dimitris; Hendee, William R

    2013-07-01

    Over the past century technology has played a decisive role in defining, driving, and reinventing procedures, devices, and pharmaceuticals in healthcare. Cloud computing has been introduced only recently but is already one of the major topics of discussion in research and clinical settings. The provision of extensive, easily accessible, and reconfigurable resources such as virtual systems, platforms, and applications with low service cost has caught the attention of many researchers and clinicians. Healthcare researchers are moving their efforts to the cloud, because they need adequate resources to process, store, exchange, and use large quantities of medical data. This Vision 20/20 paper addresses major questions related to the applicability of advanced cloud computing in medical imaging. The paper also considers security and ethical issues that accompany cloud computing. PMID:23822402

  19. Cloud Computing. Technology Briefing. Number 1

    ERIC Educational Resources Information Center

    Alberta Education, 2013

    2013-01-01

    Cloud computing is Internet-based computing in which shared resources, software and information are delivered as a service that computers or mobile devices can access on demand. Cloud computing is already used extensively in education. Free or low-cost cloud-based services are used daily by learners and educators to support learning, social…

  20. The Education Value of Cloud Computing

    ERIC Educational Resources Information Center

    Katzan, Harry, Jr.

    2010-01-01

    Cloud computing is a technique for supplying computer facilities and providing access to software via the Internet. Cloud computing represents a contextual shift in how computers are provisioned and accessed. One of the defining characteristics of cloud software service is the transfer of control from the client domain to the service provider.…

  1. Cloud Computing for Astronomers on Top of EGI Federated Cloud

    NASA Astrophysics Data System (ADS)

    Taffoni, G.; Vuerli, C.; Pasian, F.

    2015-09-01

    EGI Federated Cloud offers a general academic Cloud Infrastructure. We exploit EGI functionalities to address the needs of representative Astronomy and Astrophysics communities through clouds and gateways while respecting commonly used standards. The vision is to offer a novel environment empowering scientists to focus more on experimenting and pitching new ideas to service their needs for scientific discovery.

  2. 'Cloud computing' and clinical trials: report from an ECRIN workshop.

    PubMed

    Ohmann, Christian; Canham, Steve; Danielyan, Edgar; Robertshaw, Steve; Legré, Yannick; Clivio, Luca; Demotes, Jacques

    2015-01-01

    Growing use of cloud computing in clinical trials prompted the European Clinical Research Infrastructures Network, a European non-profit organisation established to support multinational clinical research, to organise a one-day workshop on the topic to clarify potential benefits and risks. The issues that arose in that workshop are summarised and include the following: the nature of cloud computing and the cloud computing industry; the risks in using cloud computing services now; the lack of explicit guidance on this subject, both generally and with reference to clinical trials; and some possible ways of reducing risks. There was particular interest in developing and using a European 'community cloud' specifically for academic clinical trial data. It was recognised that the day-long workshop was only the start of an ongoing process. Future discussion needs to include clarification of trial-specific regulatory requirements for cloud computing and involve representatives from the relevant regulatory bodies. PMID:26220186

  3. Analysis on the security of cloud computing

    NASA Astrophysics Data System (ADS)

    He, Zhonglin; He, Yuhua

    2011-02-01

    Cloud computing is a new technology arising from the fusion of computer technology and the development of the Internet. It will lead a revolution in IT and the information field. However, in cloud computing, data and application software are stored at large data centers, and the management of data and services is not completely trustworthy, resulting in security problems that make it difficult to improve the quality of cloud services. This paper briefly introduces the concept of cloud computing. Considering the characteristics of cloud computing, it constructs a security architecture for cloud computing. At the same time, with an eye toward the security threats cloud computing faces, several corresponding strategies are provided from the perspectives of cloud computing users and service providers.

  4. A Privacy Manager for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Pearson, Siani; Shen, Yun; Mowbray, Miranda

    We describe a privacy manager for cloud computing, which reduces the risk to the cloud computing user of their private data being stolen or misused, and also assists the cloud computing provider to conform to privacy law. We describe different possible architectures for privacy management in cloud computing; give an algebraic description of obfuscation, one of the features of the privacy manager; and describe how the privacy manager might be used to protect the private metadata of online photos.
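
    As a rough illustration of the obfuscation feature mentioned above, the sketch below masks sensitive numeric fields with a key held only by the client before they are handed to the cloud, and unmasks results on the way back. The field names and key handling are assumptions for the example; this is a minimal stand-in, not the paper's algebraic construction.

    ```python
    # Minimal sketch of client-side obfuscation: the provider only ever sees
    # masked values, and only the key-holding client can recover the originals.
    import hmac, hashlib

    SECRET_KEY = b"user-held key"  # never leaves the client

    def _offset(field_name: str, modulus: int = 10_000) -> int:
        digest = hmac.new(SECRET_KEY, field_name.encode(), hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big") % modulus

    def obfuscate(record: dict) -> dict:
        return {k: v + _offset(k) for k, v in record.items()}

    def deobfuscate(record: dict) -> dict:
        return {k: v - _offset(k) for k, v in record.items()}

    clear = {"salary": 52_000, "age": 41}
    cloud_copy = obfuscate(clear)           # what the provider stores/processes
    assert deobfuscate(cloud_copy) == clear
    ```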

  5. Cloud Computing for Mission Design and Operations

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Attiyah, Amy; Beswick, Robert; Gerasimantos, Dimitrios

    2012-01-01

    The space mission design and operations community already recognizes the value of cloud computing and virtualization. However, natural and valid concerns, like security, privacy, up-time, and vendor lock-in, have prevented a more widespread and expedited adoption into official workflows. In the interest of alleviating these concerns, we propose a series of guidelines for internally deploying a resource-oriented hub of data and algorithms. These guidelines provide a roadmap for implementing an architecture inspired by the cloud computing model: associative, elastic, semantical, interconnected, and adaptive. The architecture can be summarized as exposing data and algorithms as resource-oriented Web services, coordinated via messaging, and running on virtual machines; it is simple, and based on widely adopted standards, protocols, and tools. The architecture may help reduce common sources of complexity intrinsic to data-driven, collaborative interactions and, most importantly, it may provide the means for teams and agencies to evaluate the cloud computing model in their specific context, with minimal infrastructure changes, and before committing to a specific cloud services provider.

  6. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations are starting to adopt cloud computing to better utilize computing resources by taking advantage of its scalability, cost reduction, and easy-to-access characteristics. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting the geosciences. This paper provides a comprehensive study of three open-source cloud solutions: OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performances including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating the cloud resources as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in the central processing unit (CPU), memory and I/O of virtual machines created and managed by different solutions, (2) OpenNebula has the fastest internal network while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies, (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula, and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing-intensive applications, and small-scale model simulations without intensive data communication.

  7. Introducing Cloud Computing Topics in Curricula

    ERIC Educational Resources Information Center

    Chen, Ling; Liu, Yang; Gallagher, Marcus; Pailthorpe, Bernard; Sadiq, Shazia; Shen, Heng Tao; Li, Xue

    2012-01-01

    The demand for graduates with exposure in Cloud Computing is on the rise. For many educational institutions, the challenge is to decide on how to incorporate appropriate cloud-based technologies into their curricula. In this paper, we describe our design and experiences of integrating Cloud Computing components into seven third/fourth-year…

  8. Research computing in a distributed cloud environment

    NASA Astrophysics Data System (ADS)

    Fransham, K.; Agarwal, A.; Armstrong, P.; Bishop, A.; Charbonneau, A.; Desmarais, R.; Hill, N.; Gable, I.; Gaudet, S.; Goliath, S.; Impey, R.; Leavett-Brown, C.; Ouellete, J.; Paterson, M.; Pritchet, C.; Penfold-Brown, D.; Podaima, W.; Schade, D.; Sobie, R. J.

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.

  9. Cloud Computing with iPlant Atmosphere.

    PubMed

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-01-01

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. PMID:26270172

  10. Entity resolution using cloud computing

    NASA Astrophysics Data System (ADS)

    James, Alex; Tauer, Gregory; Czerniejewski, Adam; Brown, Ryan M.; Hartloff, Jesse; Chaves, Jillian; Sudit, Moises

    2015-05-01

    Roles and capabilities of analysts are changing as the volume of data grows. Open-source content is abundant and users are becoming increasingly dependent on automated capabilities to sift and correlate information. Entity resolution is one such capability. It is an algorithm that links entities using an arbitrary number of criteria (e.g., identifiers, attributes) from multiple sources. This paper demonstrates a prototype capability, which identifies enriched attributes of individuals stored across multiple sources. Here, the system first completes its processing on a cloud-computing cluster. Then, in a data explorer role, the analyst evaluates whether automated results are correct and whether attribute enrichment improves knowledge discovery.
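
    A minimal, single-machine sketch of the linking step is given below: two records are treated as the same entity when enough identifiers or attributes agree. The criteria, threshold and sample records are illustrative assumptions; the prototype described above performs this kind of matching at scale on a cloud-computing cluster.

    ```python
    # Toy entity resolution: count agreeing criteria between records from two
    # sources and link the pair when the count reaches a threshold.
    def match_score(a: dict, b: dict, criteria=("email", "phone", "name")) -> int:
        return sum(1 for c in criteria
                   if a.get(c) and b.get(c) and a[c].lower() == b[c].lower())

    def resolve(source_a, source_b, threshold=2):
        """Return pairs of records judged to refer to the same entity."""
        return [(a, b) for a in source_a for b in source_b
                if match_score(a, b) >= threshold]

    people_a = [{"name": "J. Smith", "email": "js@example.org", "phone": "555-0101"}]
    people_b = [{"name": "j. smith", "email": "JS@example.org", "city": "Buffalo"}]
    print(resolve(people_a, people_b))  # one linked pair (name and email agree)
    ```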

  11. Airborne Cloud Computing Environment (ACCE)

    NASA Technical Reports Server (NTRS)

    Hardman, Sean; Freeborn, Dana; Crichton, Dan; Law, Emily; Kay-Im, Liz

    2011-01-01

    Airborne Cloud Computing Environment (ACCE) is JPL's internal investment to improve the return on airborne missions by improving the development performance of the data system and the return on the captured science data. The investment is to develop a common science data system capability for airborne instruments that encompasses the end-to-end lifecycle covering planning, provisioning of data system capabilities, and support for scientific analysis in order to improve the quality, cost effectiveness, and capabilities to enable new scientific discovery and research in earth observation.

  12. Enabling Earth Science Through Cloud Computing

    NASA Technical Reports Server (NTRS)

    Hardman, Sean; Riofrio, Andres; Shams, Khawaja; Freeborn, Dana; Springer, Paul; Chafin, Brian

    2012-01-01

    Cloud Computing holds tremendous potential for missions across the National Aeronautics and Space Administration. Several flight missions are already benefiting from an investment in cloud computing for mission critical pipelines and services through faster processing time, higher availability, and drastically lower costs available on cloud systems. However, these processes do not currently extend to general scientific algorithms relevant to earth science missions. The members of the Airborne Cloud Computing Environment task at the Jet Propulsion Laboratory have worked closely with the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to integrate cloud computing into their science data processing pipeline. This paper details the efforts involved in deploying a science data system for the CARVE mission, evaluating and integrating cloud computing solutions with the system and porting their science algorithms for execution in a cloud environment.

  13. Cloud Computing and Its Applications in GIS

    NASA Astrophysics Data System (ADS)

    Kang, Cao

    2011-12-01

    Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud- based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature
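
    As a single-machine baseline for the Euclidean distance operation discussed in the second article, the sketch below computes, for every raster cell, the distance to the nearest source cell using SciPy. The grid and source locations are illustrative; the cloud algorithm in the paper distributes this computation rather than running it in this conventional way.

    ```python
    # Conventional (non-distributed) Euclidean distance for a small raster:
    # each nonzero cell receives its distance to the nearest zero (source) cell.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    raster = np.ones((6, 8), dtype=np.uint8)   # illustrative 6x8 grid
    raster[1, 2] = 0                           # source cells marked with 0
    raster[4, 6] = 0

    distances = distance_transform_edt(raster)
    print(np.round(distances, 2))
    ```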

  14. Implementation of cloud computing in higher education

    NASA Astrophysics Data System (ADS)

    Asniar; Budiawan, R.

    2016-04-01

    Cloud computing research is a new trend in distributed computing, in which people develop service- and SOA (Service Oriented Architecture)-based applications. This technology is very useful to implement, especially for higher education. This research studies the need for and feasibility of cloud computing in higher education and then proposes a model of cloud computing services for higher education in Indonesia that can be implemented to support academic activities. A literature study is used as the research methodology to arrive at a proposed model of cloud computing in higher education. Finally, SaaS and IaaS are the cloud computing services proposed to be implemented in higher education in Indonesia, and a hybrid cloud is the recommended service model.

  15. Exploiting Virtualization and Cloud Computing in ATLAS

    NASA Astrophysics Data System (ADS)

    Harald Barreiro Megino, Fernando; Benjamin, Doug; De, Kaushik; Gable, Ian; Hendrix, Val; Panitkin, Sergey; Paterson, Michael; De Silva, Asoka; van der Ster, Daniel; Taylor, Ryan; Vitillo, Roberto A.; Walker, Rod

    2012-12-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R&D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.

  16. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so application systems can obtain computing power, storage space and software services according to their demand. It can concentrate all the computing resources and manage them automatically through software without human intervention. This frees application providers from tedious details and lets them concentrate on their business, which is advantageous for innovation and cost reduction. The ultimate goal of cloud computing is to provide calculation, services and applications as a public facility, so that people can use computer resources just as they use water, electricity, gas and telephone services. Currently, the understanding of cloud computing is developing and changing constantly, and cloud computing still has no unanimous definition. This paper describes the three main service forms of cloud computing, SaaS, PaaS and IaaS, compares the definitions of cloud computing given by Google, Amazon, IBM and other companies, summarizes the basic characteristics of cloud computing, and emphasizes key technologies such as data storage, data management, virtualization and the programming model.

  17. The Evolution of Cloud Computing in ATLAS

    NASA Astrophysics Data System (ADS)

    Taylor, Ryan P.; Berghaus, Frank; Brasolin, Franco; Domingues Cordeiro, Cristovao Jose; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; LeBlanc, Matthew; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-12-01

    The ATLAS experiment at the LHC has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing Infrastructure as a Service resources are discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, a system for dynamic location-based discovery of caching proxy servers, and the usage of a data federation to unify the worldwide grid of storage elements into a single namespace and access point. The usage of the experiment's high level trigger farm for Monte Carlo production, in a specialized cloud environment, is presented. Finally, we evaluate and compare the performance of commercial clouds using several benchmarks.

  18. Cloud Computing Technologies and Applications

    NASA Astrophysics Data System (ADS)

    Zhu, Jinzy

    In a nutshell, the existing Internet provides us content in the forms of videos, emails and information served up in web pages. With Cloud Computing, the next generation of the Internet will allow us to "buy" IT services from a web portal, drastically expanding the types of merchandise available beyond those on e-commerce sites such as eBay and Taobao. We would be able to rent from a virtual storefront the basic necessities to build a virtual data center, such as CPU, memory and storage, and add on top of that the necessary middleware: web application servers, databases, enterprise service bus, etc., as the platform(s) to support the applications we would like to either rent from an Independent Software Vendor (ISV) or develop ourselves. Together this is what we call "IT as a Service," or ITaaS, bundled to us end users as a virtual data center.
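
    As a hedged illustration of renting such building blocks programmatically, the sketch below provisions one small virtual server and one block-storage volume through the AWS API via boto3. The image ID, instance type, region and volume size are placeholder assumptions and valid credentials would be required; this is not tied to the specific vendor offering described in the article.

    ```python
    # Illustrative "virtual storefront" purchase of compute and storage.
    # Requires AWS credentials and a real AMI ID to actually run.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Rent compute: one small virtual server (CPU + memory).
    server = ec2.run_instances(
        ImageId="ami-00000000000000000",   # placeholder image ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )

    # Rent storage: a 100 GB block volume to attach later.
    volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100)

    print(server["Instances"][0]["InstanceId"], volume["VolumeId"])
    ```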

  19. Research on private cloud computing based on analysis on typical opensource platform: a case study with Eucalyptus and Wavemaker

    NASA Astrophysics Data System (ADS)

    Yu, Xiaoyuan; Yuan, Jian; Chen, Shi

    2013-03-01

    Cloud computing is one of the most popular topics in the IT industry and is now being adopted by many companies. It has four deployment models: public cloud, community cloud, hybrid cloud and private cloud. Among these, a private cloud can be implemented in a private network and delivers some of the benefits of cloud computing without its pitfalls. This paper makes a comparison of typical open-source platforms through which we can implement a private cloud. After this comparison, we choose Eucalyptus and Wavemaker to carry out a case study on the private cloud. We also perform some performance estimation of cloud platform services and develop prototype software as cloud services.

  20. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing. PMID:24941546

  1. An Overview of Cloud Computing in Distributed Systems

    NASA Astrophysics Data System (ADS)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. The cloud plays an important role in large organizations in maintaining huge volumes of data with limited resources. The cloud also helps in resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and some of the basic security issues pertaining to the cloud.

  2. Secure Document Service for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Xu, Jin-Song; Huang, Ru-Cheng; Huang, Wan-Ming; Yang, Geng

    The development of cloud computing is still in its initial stage, and the biggest obstacle is data security. How to guarantee the privacy of user data is a worthwhile subject of study. This paper proposes a secure document service mechanism based on cloud computing. Out of consideration for security, in this mechanism the content and the format of documents are separated prior to handling and storage. In addition, documents can be accessed safely through an optimized method of authorization. This mechanism protects documents stored in the cloud environment from leakage and provides an infrastructure for establishing reliable cloud services.
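
    A minimal sketch of the content/format separation described above follows: a document is split into a content part and a format part that can be stored by separate services and rejoined on access. The field names and join key are illustrative assumptions, not the paper's actual mechanism or authorization scheme.

    ```python
    # Toy split of a document into independently stored content and format
    # pieces, so neither store alone reveals the complete document.
    import uuid

    def split_document(doc: dict):
        doc_id = str(uuid.uuid4())
        content_part = {"doc_id": doc_id, "text": doc["text"]}
        format_part = {"doc_id": doc_id, "styles": doc["styles"]}
        return content_part, format_part      # store in separate services

    def join_document(content_part: dict, format_part: dict) -> dict:
        assert content_part["doc_id"] == format_part["doc_id"]
        return {"text": content_part["text"], "styles": format_part["styles"]}

    original = {"text": "Quarterly report ...", "styles": {"h1": "bold 16pt"}}
    c, f = split_document(original)
    assert join_document(c, f) == original
    ```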

  3. Intelligent Smart Cloud Computing for Smart Service

    NASA Astrophysics Data System (ADS)

    Song, Su-Mi; Yoon, Yong-Ik

    Cloud computing technology attracts much attention in the IT field, and developments using this technology are being pursued actively. Cloud computing is more evolved than existing offerings, yet current cloud computing only responds to user requirements when users state their needs. To adapt to those needs intelligently, this paper suggests an intelligent smart cloud model based on 4S/3R. This model can act intelligently to meet users' needs through steps of collecting user behavior, prospecting, building, delivering, and rendering. Because users always carry mobile devices, including smart phones, user behavior can be collected by sensors mounted on those devices. The proposed service model using intelligent smart cloud computing shows that personalized and customized services are possible in various fields.

  4. Volunteered Cloud Computing for Disaster Management

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster management relies increasingly on interpreting earth observations and running numerical models; which require significant computing capacity - usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity -- if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern / trend detection, or large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context. Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects
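
    The sketch below illustrates the "embarrassingly parallel" pattern this platform targets: a scene is cut into independent tiles, each tile becomes a subtask any worker can pick up, and results are merged afterwards. Here a local process pool stands in for volunteered cloud workers and the tile analysis is a placeholder, so this is only a shape-of-the-problem sketch, not the platform's protocol.

    ```python
    # Cut a large analysis job into independent tile subtasks and farm them out.
    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def analyze_tile(tile_id):
        row, col = tile_id
        # Stand-in for near-real-time image interpretation of one tile.
        return tile_id, (row * col) % 7       # pretend "detection count"

    def run(tiles_per_side=8):
        tile_ids = list(product(range(tiles_per_side), repeat=2))
        with ProcessPoolExecutor() as pool:   # volunteers, in spirit
            results = dict(pool.map(analyze_tile, tile_ids))
        return results

    if __name__ == "__main__":
        detections = run()
        print(sum(detections.values()), "detections across", len(detections), "tiles")
    ```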

  5. Cloud Computing and Its Applications in GIS

    NASA Astrophysics Data System (ADS)

    Kang, Cao

    2011-12-01

    Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud- based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature

  6. Study on global cloud computing research trend

    NASA Astrophysics Data System (ADS)

    Ma, Feicheng; Zhan, Nan

    2014-01-01

    Since "cloud computing" was put forward by Google , it quickly became the most popular concept in IT industry and widely permeated into various areas promoted by IBM, Microsoft and other IT industry giants. In this paper the methods of bibliometric analysis were used to investigate the global cloud computing research trend based on Web of Science (WoS) database and the Engineering Index (EI) Compendex database. In this study, the publication, countries, institutes, keywords of the papers was deeply studied in methods of quantitative analysis, figures and tables are used to describe the production and the development trends of cloud computing.

  7. Cloud Computing for Geosciences--GeoCloud for standardized geospatial service platforms (Invited)

    NASA Astrophysics Data System (ADS)

    Nebert, D. D.; Huang, Q.; Yang, C.

    2013-12-01

    Geoscience in the 21st century faces the challenges of Big Data, spikes in computing requirements (e.g., when natural disasters happen), and sharing resources through cyberinfrastructure across different organizations (Yang et al., 2011). With the flexibility and cost-efficiency of computing resources a primary concern, cloud computing emerges as a promising solution to provide core capabilities to address these challenges. Many governmental and federal agencies are adopting cloud technologies to cut costs and to make federal IT operations more efficient (Huang et al., 2010). However, it is still difficult for geoscientists to take advantage of the benefits of cloud computing to facilitate scientific research and discoveries. This presentation uses GeoCloud to illustrate the process and strategies used in building a common platform for geoscience communities to enable the sharing and integration of geospatial data, information and knowledge across different domains. GeoCloud is an annual incubator project coordinated by the Federal Geographic Data Committee (FGDC) in collaboration with the U.S. General Services Administration (GSA) and the Department of Health and Human Services. It is designed as a staging environment to test and document the deployment of a common GeoCloud community platform that can be implemented by multiple agencies. With these standardized virtual geospatial servers, a variety of government geospatial applications can be quickly migrated to the cloud. In order to achieve this objective, multiple projects are nominated each year by federal agencies as existing public-facing geospatial data services. From the initial candidate projects, a set of common operating system and software requirements was identified as the baseline for platform as a service (PaaS) packages. Based on these developed common platform packages, each project deploys and monitors its web application, develops best practices, and documents cost and performance information. This

  8. InSAR Scientific Computing Environment on the Cloud

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Shams, K. S.; Gurrola, E. M.; George, B. A.; Knight, D. S.

    2012-12-01

    In response to the needs of the international scientific and operational Earth observation communities, spaceborne Synthetic Aperture Radar (SAR) systems are being tasked to produce enormous volumes of raw data daily, with availability to scientists set to increase substantially as more satellites come online and data become more accessible through more open data policies. The availability of these unprecedentedly dense and rich datasets has led to the development of sophisticated algorithms that can take advantage of them. In particular, interferometric time series analysis of SAR data provides insights into the changing earth and requires substantial computational power to process data across large regions and over large time periods. This poses challenges for existing infrastructure, software, and techniques required to process, store, and deliver the results to the global community of scientists. The current state-of-the-art solutions employ traditional data storage and processing applications that require download of data to local repositories before processing. This approach is becoming untenable in light of the enormous volume of data that must be processed in an iterative and collaborative manner. We have analyzed and tested new cloud computing and virtualization approaches to address these challenges within the context of InSAR in the earth science community. Cloud computing is democratizing computational and storage capabilities for science users across the world. The NASA Jet Propulsion Laboratory has been an early adopter of this technology, successfully integrating cloud computing in a variety of production applications ranging from mission operations to downlink data processing. We have ported a new InSAR processing suite called ISCE (InSAR Scientific Computing Environment) to a scalable distributed system running in the Amazon GovCloud to demonstrate the efficacy of cloud computing for this application. We have integrated ISCE with Polyphony to

  9. Private Cloud Communities for Faculty and Students

    ERIC Educational Resources Information Center

    Tomal, Daniel R.; Grant, Cynthia

    2015-01-01

    Massive open online courses (MOOCs) and public and private cloud communities continue to flourish in the field of higher education. However, MOOCs have received criticism in recent years and offer little benefit to students already enrolled at an institution. This article advocates for the collaborative creation and use of institutional, program…

  10. 75 FR 64258 - Cloud Computing Forum & Workshop II

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-19

    ...NIST announces the Cloud Computing Forum & Workshop II to be held on November 4 and 5, 2010. This workshop will provide information on a Cloud Computing Roadmap Strategy as well as provide an updated status on NIST efforts to help develop open standards in interoperability, portability and security in cloud computing. The goals of this workshop are: Public announcement of the Cloud Computing......

  11. The Magellan Final Report on Cloud Computing

    SciTech Connect

    ,; Coghlan, Susan; Yelick, Katherine

    2011-12-21

    The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing in terms of performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  12. Searching for SNPs with cloud computing

    PubMed Central

    2009-01-01

    As DNA sequencing outpaces improvements in computer speed, there is a critical need to accelerate tasks like alignment and SNP calling. Crossbow is a cloud-computing software tool that combines the aligner Bowtie and the SNP caller SOAPsnp. Executing in parallel using Hadoop, Crossbow analyzes data comprising 38-fold coverage of the human genome in three hours using a 320-CPU cluster rented from a cloud computing service for about $85. Crossbow is available from http://bowtie-bio.sourceforge.net/crossbow/. PMID:19930550
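
    As a back-of-the-envelope check of the figures quoted above (a 320-CPU cluster running for roughly three hours at a total cost of about $85), the implied per-CPU-hour rate can be worked out as follows; the rate is an inference from the abstract, not a published price.

      cpus, hours, total_cost = 320, 3, 85.0
      cpu_hours = cpus * hours                      # 960 CPU-hours in total
      print(f"{cpu_hours} CPU-hours at ${total_cost / cpu_hours:.3f} per CPU-hour")
      # -> 960 CPU-hours at $0.089 per CPU-hour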

  13. Searching for SNPs with cloud computing.

    PubMed

    Langmead, Ben; Schatz, Michael C; Lin, Jimmy; Pop, Mihai; Salzberg, Steven L

    2009-01-01

    As DNA sequencing outpaces improvements in computer speed, there is a critical need to accelerate tasks like alignment and SNP calling. Crossbow is a cloud-computing software tool that combines the aligner Bowtie and the SNP caller SOAPsnp. Executing in parallel using Hadoop, Crossbow analyzes data comprising 38-fold coverage of the human genome in three hours using a 320-CPU cluster rented from a cloud computing service for about $85. Crossbow is available from http://bowtie-bio.sourceforge.net/crossbow/. PMID:19930550

  14. Cloud computing approaches to accelerate drug discovery value chain.

    PubMed

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn challenges computer scientists to offer matching hardware and software infrastructure while managing the varying degrees of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them with significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, Discovery-Cloud would be best suited to manage drug discovery and clinical development data generated using advanced HTS techniques, thereby supporting the vision of personalized medicine. PMID:21843145

  15. Argonne's Magellan Cloud Computing Research Project

    ScienceCinema

    Beckman, Pete

    2013-04-19

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html

  16. Argonne's Magellan Cloud Computing Research Project

    SciTech Connect

    Beckman, Pete

    2009-01-01

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html

  17. A European Federated Cloud: Innovative distributed computing solutions by EGI

    NASA Astrophysics Data System (ADS)

    Sipos, Gergely; Turilli, Matteo; Newhouse, Steven; Kacsuk, Peter

    2013-04-01

    The European Grid Infrastructure (EGI) is the result of pioneering work that has, over the last decade, built a collaborative production infrastructure of uniform services through the federation of national resource providers that supports multi-disciplinary science across Europe and around the world. This presentation will provide an overview of the recently established 'federated cloud computing services' that the National Grid Initiatives (NGIs), operators of EGI, offer to scientific communities. The presentation will explain the technical capabilities of the 'EGI Federated Cloud' and the processes whereby earth and space science researchers can engage with it. EGI's resource centres have been providing services for collaborative, compute- and data-intensive applications for over a decade. Besides the well-established 'grid services', several NGIs already offer privately run cloud services to their national researchers. Many of these researchers recently expressed the need to share these cloud capabilities within their international research collaborations - a model similar to the way the grid emerged through the federation of institutional batch computing and file storage servers. To facilitate the setup of a pan-European cloud service from the NGIs' resources, the EGI-InSPIRE project established a Federated Cloud Task Force in September 2011. The Task Force has a mandate to identify and test technologies for a multinational federated cloud that could be provisioned within EGI by the NGIs. A guiding principle for the EGI Federated Cloud is to remain technology neutral and flexible for both resource providers and users: • Resource providers are allowed to use any cloud hypervisor and management technology to join virtualised resources into the EGI Federated Cloud as long as the site is subscribed to the user-facing interfaces selected by the EGI community. • Users can integrate high level services - such as brokers, portals and customised Virtual Research

  18. Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Klems, Markus; Nimis, Jens; Tai, Stefan

    On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability for Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measure costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and to compare these costs to conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real world scenarios.
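
    In the spirit of the valuation framework described above, the short sketch below compares pay-per-use cloud cost with the fixed annual cost of an owned server at different utilization levels; all prices, lifetimes and overheads are hypothetical illustrative values, not figures from the paper.

      def cloud_cost(used_hours, rate_per_hour=1.50):
          # Pay-per-use: cost scales with the hours actually consumed.
          return used_hours * rate_per_hour

      def on_premise_cost_per_year(capex=20_000, amortization_years=3,
                                   operating_cost_per_year=2_500):
          # Owned server: fixed annual cost, independent of utilization.
          return capex / amortization_years + operating_cost_per_year

      yearly_on_premise = on_premise_cost_per_year()      # ~$9,167 per year
      for used_hours in (1_000, 5_000, 8_760):            # up to 24/7 for one year
          c = cloud_cost(used_hours)
          cheaper = "cloud" if c < yearly_on_premise else "on-premise"
          print(f"{used_hours:>5} h/yr  cloud=${c:>7,.0f}  "
                f"on-premise=${yearly_on_premise:>7,.0f}  -> {cheaper} cheaper")

    With these illustrative numbers the break-even point falls at roughly 6,100 hours of use per year; below it the pay-per-use model wins, above it the owned hardware does.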

  19. Spontaneous Ad Hoc Mobile Cloud Computing Network

    PubMed Central

    Lacuesta, Raquel; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. To this end, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal presents good efficiency and network performance even with a high number of nodes. PMID:25202715

  20. Biomedical cloud computing with Amazon Web Services.

    PubMed

    Fusaro, Vincent A; Patil, Prasad; Gafni, Erik; Wall, Dennis P; Tonellato, Peter J

    2011-08-01

    In this overview to biomedical computing in the cloud, we discussed two primary ways to use the cloud (a single instance or cluster), provided a detailed example using NGS mapping, and highlighted the associated costs. While many users new to the cloud may assume that entry is as straightforward as uploading an application and selecting an instance type and storage options, we illustrated that there is substantial up-front effort required before an application can make full use of the cloud's vast resources. Our intention was to provide a set of best practices and to illustrate how those apply to a typical application pipeline for biomedical informatics, but also general enough for extrapolation to other types of computational problems. Our mapping example was intended to illustrate how to develop a scalable project and not to compare and contrast alignment algorithms for read mapping and genome assembly. Indeed, with a newer aligner such as Bowtie, it is possible to map the entire African genome using one m2.2xlarge instance in 48 hours for a total cost of approximately $48 in computation time. In our example, we were not concerned with data transfer rates, which are heavily influenced by the amount of available bandwidth, connection latency, and network availability. When transferring large amounts of data to the cloud, bandwidth limitations can be a major bottleneck, and in some cases it is more efficient to simply mail a storage device containing the data to AWS (http://aws.amazon.com/importexport/). More information about cloud computing, detailed cost analysis, and security can be found in references. PMID:21901085

  1. Exploring Cloud Computing for Distance Learning

    ERIC Educational Resources Information Center

    He, Wu; Cernusca, Dan; Abdous, M'hammed

    2011-01-01

    The use of distance courses in learning is growing exponentially. To better support faculty and students for teaching and learning, distance learning programs need to constantly innovate and optimize their IT infrastructures. The new IT paradigm called "cloud computing" has the potential to transform the way that IT resources are utilized and…

  2. Web Solutions Inspire Cloud Computing Software

    NASA Technical Reports Server (NTRS)

    2013-01-01

    An effort at Ames Research Center to standardize NASA websites unexpectedly led to a breakthrough in open source cloud computing technology. With the help of Rackspace Inc. of San Antonio, Texas, the resulting product, OpenStack, has spurred the growth of an entire industry that is already employing hundreds of people and generating hundreds of millions in revenue.

  3. Cloud Computing Based E-Learning System

    ERIC Educational Resources Information Center

    Al-Zoube, Mohammed; El-Seoud, Samir Abou; Wyne, Mudasser F.

    2010-01-01

    Cloud computing technologies although in their early stages, have managed to change the way applications are going to be developed and accessed. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Microsoft office applications, such as word processing, excel spreadsheet, access database…

  4. Green Cloud Computing: An Experimental Validation

    NASA Astrophysics Data System (ADS)

    Castellar Monteiro, Rogerio; Dantas, M. A. R.; Rodriguez, Martius Vicente Rodriguez y.

    2014-10-01

    Cloud configurations can provide a computational environment with attractive cost efficiency for organizations of many sizes. However, indiscriminately buying servers and network devices does not necessarily yield a corresponding gain in performance. In the academic and commercial literature, several studies highlight that these environments sit idle for long periods. Therefore, energy management is an essential concern for any organization, because energy bills can have a considerable negative impact on costs. In this paper, we present a research effort characterized by an analysis of energy consumption in a private cloud computing environment, considering both computational resources and network devices. This study was motivated by a real case in a large organization. In the first part of the study we carried out empirical experiments; we then used the GreenCloud simulator to explore different configurations. The research achieved its goal of presenting the key issues, for both computational resources and the network, related to the energy consumption of a real private cloud.

  5. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, college information construction mainly involves campus networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing and grid computing in which data are stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and resources can be accessed through all kinds of devices. This article introduces cloud computing and its functions, analyzes the existing problems of college network resource management, and then applies cloud computing technology and methods to the construction of a college information-sharing platform.

  6. Risk in the Clouds?: Security Issues Facing Government Use of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wyld, David C.

    Cloud computing is poised to become one of the most important and fundamental shifts in how computing is consumed and used. Forecasts show that government will play a lead role in adopting cloud computing - for data storage, applications, and processing power, as IT executives seek to maximize their returns on limited procurement budgets in these challenging economic times. After an overview of the cloud computing concept, this article explores the security issues facing public sector use of cloud computing and looks to the risk and benefits of shifting to cloud-based models. It concludes with an analysis of the challenges that lie ahead for government use of cloud resources.

  7. A Community Atmosphere Model with Superparameterized Clouds

    SciTech Connect

    Randall, David; Branson, Mark; Wang, Minghuai; Ghan, Steven J.; Craig, Cheryl; Gettelman, A.; Edwards, Jim

    2013-06-18

    In 1999, National Center for Atmospheric Research (NCAR) scientists Wojciech Grabowski and Piotr Smolarkiewicz created a "multiscale" atmospheric model in which the physical processes associated with clouds were represented by running a simple high-resolution model within each grid column of a low-resolution global model. In idealized experiments, they found that the multiscale model produced promising simulations of organized tropical convection, which other models had struggled to produce. Inspired by their results, Colorado State University (CSU) scientists Marat Khairoutdinov and David Randall created a multiscale version of the Community Atmosphere Model (CAM). They removed the cloud parameterizations of the CAM and replaced them with Khairoutdinov's high-resolution cloud model. They dubbed the embedded cloud model a "super-parameterization," and the modified CAM is now called the "SP-CAM." Over the years since, many scientists from many institutions have explored the ability of the SP-CAM to simulate tropical weather systems, the day-night changes of precipitation, the Asian and African monsoons, and a number of other climate processes. Cristiana Stan of the Center for Ocean-Land-Atmosphere Interactions found that the SP-CAM gives improved results when coupled to an ocean model, and follow-on studies have explored the SP-CAM's utility when used as the atmospheric component of the Community Earth System Model. Much of this research has been performed under the auspices of the Center for Multiscale Modeling of Atmospheric Processes, a National Science Foundation (NSF) Science and Technology Center for which the lead institution is CSU.

  8. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).

  9. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  10. Cloud Computing Test Bed for NASA Earth Observation

    NASA Astrophysics Data System (ADS)

    Klene, S. A.; Murphy, K. J.; Fertetta, M.; Law, E.; Wilson, B. D.; Hua, H.; Huang, T.

    2014-12-01

    In order to develop a deeper understanding of how cloud computing technologies can be used for Earth observation data processing, a test bed was created to ease access to the technology. Users had expressed concerns about accidentally accruing large compute bills while learning to use the technology. The test bed supports NASA efforts such as: developing a Science Data Service platform to handle big Earth data, supporting scalable time and space searches, on-the-fly climatologies, data extraction and data transformation such as re-gridding; multi-sensor climate data fusion, where users can select, merge and cache variables from multiple sensors to compare data over multiple years; and rapid prototyping, providing an infrastructure so that new development efforts do not need to spend time and effort obtaining a platform. Once development succeeds, an application can then scale to a very large platform on larger or commercial clouds. The goals of the test bed are: to provide a greater understanding of cloud computing so informed choices can be made in future efforts to handle the over 15 petabytes of NASA Earth science data; to provide an environment where a set of science tools can be developed and reused by multiple Earth science disciplines; and to develop a Platform as a Service (PaaS) capability for general Earth science use. This talk will present the lessons learned from building a community cloud for Earth science data.

  11. If It's in the Cloud, Get It on Paper: Cloud Computing Contract Issues

    ERIC Educational Resources Information Center

    Trappler, Thomas J.

    2010-01-01

    Much recent discussion has focused on the pros and cons of cloud computing. Some institutions are attracted to cloud computing benefits such as rapid deployment, flexible scalability, and low initial start-up cost, while others are concerned about cloud computing risks such as those related to data location, level of service, and security…

  12. Embracing the Cloud: Six Ways to Look at the Shift to Cloud Computing

    ERIC Educational Resources Information Center

    Ullman, David F.; Haggerty, Blake

    2010-01-01

    Cloud computing is the latest paradigm shift for the delivery of IT services. Where previous paradigms (centralized, decentralized, distributed) were based on fairly straightforward approaches to technology and its management, cloud computing is radical in comparison. The literature on cloud computing, however, suffers from many divergent…

  13. A cloud resolving model as a cloud parameterization in the NCAR Community Climate System Model: Preliminary results

    NASA Astrophysics Data System (ADS)

    Khairoutdinov, Marat F.; Randall, David A.

    Preliminary results of a short climate simulation with a 2-D cloud resolving model (CRM) installed into each grid column of an NCAR Community Climate System Model (CCSM) are presented. The CRM replaces the conventional convective and stratiform cloud parameterizations, and allows for explicit computation of the global cloud fraction distribution for radiation computations. The extreme computational cost of the combined CCSM/CRM model has thus far limited us to a two-month long climate simulation (December-January) using 2.8° × 2.8° resolution. The simulated geographical distributions of the total rainfall, precipitable water, cloud cover, and Earth radiation budget, for the month of January, look very reasonable.

  14. Atmospheric cloud water contains a diverse bacterial community

    SciTech Connect

    Kourtev, P. S.; Hill, Kimberly A.; Shepson, Paul B.; Konopka, Allan

    2011-06-15

    Atmospheric cloud water contains an active microbial community which can impact climate, human health and ecosystem processes in terrestrial and aquatic systems. Most studies on the composition of microbial communities in clouds have been performed with orographic clouds that are typically in direct contact with the ground. We collected water samples from cumulus clouds above the upper U.S. Midwest. The cloud water was analyzed for the diversity of bacterial phylotypes by denaturing gradient gel electrophoresis (DGGE) and sequencing of 16S rRNA gene amplicons. DGGE analyses of bacterial communities detected 17-21 bands per sample. Sequencing confirmed the presence of a diverse bacterial community; sequences from seven bacterial phyla were retrieved. Cloud water bacterial communities appeared to be dominated by members of the cyanobacteria, proteobacteria, actinobacteria and firmicutes.

  15. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
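
    As a loose illustration of the two-part idea above (a demand distribution plus a resource constraint), the minimal discrete-event sketch below simulates exponentially distributed service requests competing for a fixed pool of instances and reports the mean waiting time; all rates and pool sizes are hypothetical and not taken from the paper.

      import heapq
      import random

      def mean_wait(num_servers=4, arrival_rate=3.0, service_rate=1.0,
                    num_requests=10_000, seed=42):
          rng = random.Random(seed)
          free_at = [0.0] * num_servers        # time at which each server next becomes free
          heapq.heapify(free_at)
          clock = total_wait = 0.0
          for _ in range(num_requests):
              clock += rng.expovariate(arrival_rate)       # next request arrives
              earliest = heapq.heappop(free_at)            # server that frees up soonest
              start = max(clock, earliest)                 # queue if all servers are busy
              total_wait += start - clock
              heapq.heappush(free_at, start + rng.expovariate(service_rate))
          return total_wait / num_requests

      for servers in (2, 4, 8):                            # vary the resource constraint
          print(f"{servers} servers -> mean wait {mean_wait(num_servers=servers):.2f} time units")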

  16. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES Beta

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; et al

    2015-06-30

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.

  17. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    DOE PAGES Beta

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; et al

    2015-12-01

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Model computational expense is estimated, and sensitivity to the number of subcolumns is investigated. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in shortwave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation.

  18. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    SciTech Connect

    Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; Goldhaber, Steve; Bogenschutz, Peter; Chen, Chih-Chieh; Morrison, H.; Hoft, Jan; Raut, E.; Griffin, Brian M.; Weber, J. K.; Larson, Vincent E.; Wyant, M. C.; Wang, Minghuai; Guo, Zhun; Ghan, Steven J.

    2015-12-01

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.

  19. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Guo, Z.; Ghan, S. J.

    2015-12-01

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Model computational expense is estimated, and sensitivity to the number of subcolumns is investigated. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in shortwave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation.

  20. A unified parameterization of clouds and turbulence using CLUBB and subcolumns in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Guo, Z.; Ghan, S. J.

    2015-06-01

    Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.

  1. National electronic medical records integration on cloud computing system.

    PubMed

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is an emerging technology that has been used in other industries with great success. Despite its attractive features, cloud computing has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed cloud system applies cloud computing technology to the EHR system to present a comprehensive, integrated EHR environment. PMID:23920993

  2. Uncover the Cloud for Geospatial Sciences and Applications to Adopt Cloud Computing

    NASA Astrophysics Data System (ADS)

    Yang, C.; Huang, Q.; Xia, J.; Liu, K.; Li, J.; Xu, C.; Sun, M.; Bambacus, M.; Xu, Y.; Fay, D.

    2012-12-01

    Cloud computing is emerging as the future infrastructure for providing computing resources to support and enable scientific research, engineering development, and application construction, as well as workforce education. On the other hand, there is considerable doubt about the readiness of cloud computing to support a variety of scientific research, development and education activities. This research, a project funded by NASA SMD, investigates through holistic studies how ready cloud computing is to support the geosciences. Four applications with different computing characteristics, including data, computing, concurrent, and spatiotemporal intensities, are used to test the readiness of cloud computing to support geosciences. Three popular and representative cloud platforms, Amazon EC2, Microsoft Azure, and NASA Nebula, as well as a traditional cluster, are utilized in the study. Results illustrate that the cloud is ready to some degree, but more research needs to be done to fully realize the cloud benefits as advertised by many vendors and defined by NIST. Specifically, 1) most cloud platforms can stand up new computing instances, essentially new computers, in a few minutes as envisioned and are therefore ready to support most computing needs in an on-demand fashion; 2) load balancing and elasticity, a defining characteristic, are ready in some cloud platforms, such as Amazon EC2, to support bigger jobs (e.g., those needing responses within minutes), while others do not yet support elasticity and load balancing well, and all cloud platforms need further research and development to support real-time applications at the sub-minute level; 3) the user interfaces and functionality of cloud platforms vary widely; some are very professional and well supported/documented, such as Amazon EC2, while others need significant improvement before the general public can adopt cloud computing without professional training or knowledge of computing infrastructure; 4) security is a big concern in

  3. Secure medical information sharing in cloud computing.

    PubMed

    Shao, Zhiyi; Yang, Bo; Zhang, Wenzheng; Zhao, Yi; Wu, Zhenqiang; Miao, Meixia

    2015-01-01

    Medical information sharing is one of the most attractive applications of cloud computing, where searchable encryption is a fascinating solution for securely and conveniently sharing medical data among different medical organizers. However, almost all previous works are designed in symmetric key encryption environment. The only works in public key encryption do not support keyword trapdoor security, have long ciphertext related to the number of receivers, do not support receiver revocation without re-encrypting, and do not preserve the membership of receivers. In this paper, we propose a searchable encryption supporting multiple receivers for medical information sharing based on bilinear maps in public key encryption environment. In the proposed protocol, data owner stores only one copy of his encrypted file and its corresponding encrypted keywords on cloud for multiple designated receivers. The keyword ciphertext is significantly shorter and its length is constant without relation to the number of designated receivers, i.e., for n receivers the ciphertext length is only twice the element length in the group. Only the owner knows that with whom his data is shared, and the access to his data is still under control after having been put on the cloud. We formally prove the security of keyword ciphertext based on the intractability of Bilinear Diffie-Hellman problem and the keyword trapdoor based on Decisional Diffie-Hellman problem. PMID:26410315
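
    For context, the two hardness assumptions mentioned above are standard and can be stated roughly as follows (textbook formulations, not the paper's exact notation), for a bilinear map e : G x G -> G_T on a group G of prime order p with generator g:

      \textbf{BDH:}\quad \text{given } (g,\; g^{a},\; g^{b},\; g^{c}) \text{ for random } a,b,c \in \mathbb{Z}_p,\ \text{compute } e(g,g)^{abc}.

      \textbf{DDH:}\quad \text{given } (g,\; g^{a},\; g^{b},\; Z),\ \text{decide whether } Z = g^{ab} \text{ or } Z \text{ is a random element of } G.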

  4. Community-based complex cloud data center

    NASA Astrophysics Data System (ADS)

    Filiposka, Sonja; Juiz, Carlos

    2015-02-01

    The communication infrastructure is a critical component of a large-scale cloud data center. It needs to provide the best performance available while keeping overprovisioning and, lately even more important, power consumption to a minimum. Aiming to provide a unified solution with high performance together with economic benefits and reduced power consumption, in this paper we propose a new community-based scale-free model for data center network architecture. By comparing the proposed model to other similar solutions we show that the performance of the network in terms of average path length, bandwidth and resilience is similar to that of state-of-the-art models. In our detailed analysis of the model properties, the focus is on exploring how heterogeneity, in terms of different types of network equipment, influences the basic network properties. We also present solutions and network metrics that can be used in conjunction with the introduced community structure in order to further increase performance.

  5. Three-dimensional geospatial information service based on cloud computing

    NASA Astrophysics Data System (ADS)

    Zhai, Xi; Yue, Peng; Jiang, Liangcun; Wang, Linnan

    2014-01-01

    Cloud computing technologies can support high-performance geospatial services in various domains, such as smart city and agriculture. Apache Hadoop, an open-source software framework, can be used to build a cloud environment on commodity clusters for storage and large-scale processing of data sets. The Open Geospatial Consortium (OGC) Web 3-D Service (W3DS) is a portrayal service for three-dimensional (3-D) geospatial data. Its performance could be improved by cloud computing technologies. This paper investigates how OGC W3DS could be developed in a cloud computing environment. It adopts the Apache Hadoop as the framework to provide a cloud implementation. The design and implementation of the 3-D geospatial information cloud service is presented. The performance evaluation is performed over data retrieval tests running in a cloud platform built by Hadoop clusters. The evaluation results provide a valuable reference on providing high-performance 3-D geospatial information cloud services.

  6. Securing the Data Storage and Processing in Cloud Computing Environment

    ERIC Educational Resources Information Center

    Owens, Rodney

    2013-01-01

    Organizations increasingly utilize cloud computing architectures to reduce costs and energy consumption both in the data warehouse and on mobile devices by better utilizing the computing resources available. However, the security and privacy issues with publicly available cloud computing infrastructures have not been studied to a sufficient depth…

  7. Search Engine Prototype System Based on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Han, Jinyu; Hu, Min; Sun, Hongwei

    With the development of the Internet, IT support systems need to provide more storage space and faster computing power for Internet applications such as search engines. The emergence of cloud computing can effectively solve these problems. We present a search engine prototype system based on a cloud computing platform in this paper.

  8. A computational- And storage-cloud for integration of biodiversity collections

    USGS Publications Warehouse

    Matsunaga, A.; Thompson, A.; Figueiredo, R. J.; Germain-Aubrey, C.C; Collins, M.; Beeman, R.S; Macfadden, B.J.; Riccardi, G.; Soltis, P.S; Page, L. M.; Fortes, J.A.B

    2013-01-01

    A core mission of the Integrated Digitized Biocollections (iDigBio) project is the building and deployment of a cloud computing environment customized to support the digitization workflow and integration of data from all U.S. nonfederal biocollections. iDigBio chose to use cloud computing technologies to deliver a cyberinfrastructure that is flexible, agile, resilient, and scalable to meet the needs of the biodiversity community. In this context, this paper describes the integration of open source cloud middleware, applications, and third party services using standard formats, protocols, and services. In addition, this paper demonstrates the value of the digitized information from collections in a broader scenario involving multiple disciplines.

  9. Information Security in the Age of Cloud Computing

    ERIC Educational Resources Information Center

    Sims, J. Eric

    2012-01-01

    Information security has been a particularly hot topic since the enhanced internal control requirements of Sarbanes-Oxley (SOX) were introduced in 2002. At about this same time, cloud computing started its explosive growth. Outsourcing of mission-critical functions has always been a gamble for managers, but the advantages of cloud computing are…

  10. 76 FR 13984 - Cloud Computing Forum & Workshop III

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-15

    ...NIST announces the Cloud Computing Forum & Workshop III to be held on April 7 and 8, 2011. The event will include keynotes from the U.S. Chief Information Officer, NIST Under Secretary of Commerce for Standards and Technology, and other key federal officials. This workshop will provide information on the NIST strategic and tactical Cloud Computing program, including progress on the NIST......

  11. A Semantic Based Policy Management Framework for Cloud Computing Environments

    ERIC Educational Resources Information Center

    Takabi, Hassan

    2013-01-01

    Cloud computing paradigm has gained tremendous momentum and generated intensive interest. Although security issues are delaying its fast adoption, cloud computing is an unstoppable force and we need to provide security mechanisms to ensure its secure adoption. In this dissertation, we mainly focus on issues related to policy management and access…

  12. Cloudbus Toolkit for Market-Oriented Cloud Computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.

  13. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of cloud computing platforms, which in essence represent a new model for exchanging and delivering resource services that, after adjustments in multiple aspects, meets users' needs for utilizing different resources. Cloud computing offers advantages in many respects: it not only reduces the difficulty of operating the system but also makes it easy for users to search for, acquire and process resources. In line with this, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in the operation process. The popularization and promotion of computer technology have driven the creation of digital library models, whose core idea is to optimize the management of library resource information through computers and to construct a high-performance inquiry and search platform that allows users to access the necessary information resources at any time. Cloud computing, moreover, distributes computation across a large number of networked computers and hence implements a connection service among multiple machines. Digital libraries, as a typical representative application of cloud computing, can therefore be used to analyze the key technologies of cloud computing.

  14. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    PubMed Central

    Shatil, Anwar S.; Younas, Sohail; Pourreza, Hossein; Figley, Chase R.

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications. PMID:27279746

  15. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    SciTech Connect

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-01-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  16. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-06-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  17. The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt

    2014-05-01

    Cloud computing provides enormous opportunities for the research community. The large public cloud providers offer near-limitless scaling capability. However, adapting the cloud to scientific workloads is not without its problems. The commodity nature of public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models pose additional barriers to more widespread adoption. Alongside the application of the public cloud for scientific applications, a number of private cloud initiatives are underway in the research community, of which the JASMIN Cloud is one example. Here, cloud service models are being effectively superimposed over more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed for the science community coupled with the benefits of a cloud service model. The JASMIN facility, based at the Rutherford Appleton Laboratory, was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5 PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled its success: peta-scale fast disk connected via low-latency networks to compute resources, and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway, funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see a significant expansion of the resources available, with a doubling of disk-based storage to 12 PB and an increase of compute capacity by a factor of ten to over 3000 processing cores. This expansion is accompanied by a broadening in the scope for JASMIN, as a service available to

  18. Bioinformatics on the Cloud Computing Platform Azure

    PubMed Central

    Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  19. Bioinformatics on the cloud computing platform Azure.

    PubMed

    Shanahan, Hugh P; Owen, Anne M; Harrison, Andrew P

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  20. Evaluating the Efficacy of the Cloud for Cluster Computation

    NASA Technical Reports Server (NTRS)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Compute Cloud (EC2), which promise improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
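
    As a quick back-of-the-envelope check of the figures quoted above (2 TFLOPS on 240 cores at 70% of theoretical peak):

      # Sanity check of the quoted HPL figures: 2 TFLOPS at 70% efficiency on 240 cores.
      measured_tflops = 2.0
      efficiency = 0.70
      cores = 240

      theoretical_peak = measured_tflops / efficiency        # ~2.86 TFLOPS
      per_core_gflops = theoretical_peak * 1000 / cores      # ~11.9 GFLOPS per core

      print(f"theoretical peak ~ {theoretical_peak:.2f} TFLOPS")
      print(f"per-core peak    ~ {per_core_gflops:.1f} GFLOPS")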

  1. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offer functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and performance monitoring, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  2. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    NASA Astrophysics Data System (ADS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which has proved successful and still meets its goals. However, Grid technology has not spread much beyond these communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their current computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already used to enable multicore computing, allows us to avoid a static split of the computing resources in the Tier-1 farm while still permitting a fair-share friendly approach. Hosts in a dynamically partitioned farm may be moved into or out of the partition according to suitable policies for the request and release of computing resources. Nodes requested into the partition switch their role and become available to play a different one: in the cloud use case, a host may switch from acting as a worker node in the batch system farm to a cloud compute node made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation and its integration with our current batch system, LSF.
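
    The partitioning policy itself is not spelled out in the abstract; the minimal sketch below, with invented node names and a simplified demand rule, only illustrates the idea of hosts switching roles between the batch farm and the cloud partition.

      # Minimal sketch of dynamic partitioning between a batch farm and a cloud partition.
      # Node names, thresholds and the policy are illustrative, not the CNAF/LSF implementation.
      batch_partition = {"wn01", "wn02", "wn03", "wn04"}
      cloud_partition = set()

      def rebalance(pending_batch_jobs, pending_cloud_requests):
          """Move an idle host towards the partition with unmet demand."""
          if pending_cloud_requests > 0 and pending_batch_jobs == 0 and batch_partition:
              node = batch_partition.pop()      # drain a worker node from the batch system
              cloud_partition.add(node)         # and hand it to the cloud as a compute node
          elif pending_batch_jobs > 0 and pending_cloud_requests == 0 and cloud_partition:
              node = cloud_partition.pop()      # release a cloud node back to the batch farm
              batch_partition.add(node)

      rebalance(pending_batch_jobs=0, pending_cloud_requests=3)
      print(batch_partition, cloud_partition)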

  3. Evaluating the Influence of the Client Behavior in Cloud Computing

    PubMed Central

    Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José

    2016-01-01

    This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or groups of Web services to scenarios where the workload takes the form of bursts. The client entity is included in CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system. PMID:27441559

  4. Evaluating the Influence of the Client Behavior in Cloud Computing.

    PubMed

    Souza Pardo, Mário Henrique; Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José

    2016-01-01

    This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or groups of Web services to scenarios where the workload takes the form of bursts. The client entity is included in CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system. PMID:27441559

  5. Cloud4Psi: cloud computing for 3D protein structure similarity searching

    PubMed Central

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-01-01

    Summary: Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments, such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT), are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed a cloud-based system that allows the similarity searching process to be scaled both vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Availability and implementation: Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. Contact: dariusz.mrozek@polsl.pl PMID:24930141

  6. GPU-accelerated micromagnetic simulations using cloud computing

    NASA Astrophysics Data System (ADS)

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.
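
    The open-source program mentioned above is not reproduced here; the sketch below only illustrates the remote-execution pattern of copying an input script to a cloud GPU instance, running MuMax3 over SSH and fetching the output. The host address, key file and input script are placeholders.

      # Sketch of driving a remote MuMax3 run on a cloud GPU instance over SSH.
      # Host address, key file and input script are placeholders; this is not the authors' program.
      import subprocess

      HOST = "ubuntu@203.0.113.10"         # placeholder public IP of a GPU cloud instance
      KEY = "~/.ssh/cloud_gpu_key.pem"     # placeholder SSH key
      SCRIPT = "standard_problem4.mx3"     # a MuMax3 input script

      subprocess.run(["scp", "-i", KEY, SCRIPT, f"{HOST}:~/"], check=True)
      subprocess.run(["ssh", "-i", KEY, HOST, f"mumax3 {SCRIPT}"], check=True)
      subprocess.run(["scp", "-i", KEY, f"{HOST}:~/standard_problem4.out/table.txt", "."], check=True)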

  7. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source platform, we set up the cloud computing environment for the AstroCloud Project. It consists of five distributed nodes across mainland China. Users can use and analyse data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.

  8. 'Big data', Hadoop and cloud computing in genomics.

    PubMed

    O'Driscoll, Aisling; Daugelaite, Jurate; Sleator, Roy D

    2013-10-01

    Since the completion of the Human Genome project at the turn of the Century, there has been an unprecedented proliferation of genomic sequence data. A consequence of this is that the medical discoveries of the future will largely depend on our ability to process and analyse large genomic data sets, which continue to expand as the cost of sequencing decreases. Herein, we provide an overview of cloud computing and big data technologies, and discuss how such expertise can be used to deal with biology's big data sets. In particular, big data technologies such as the Apache Hadoop project, which provides distributed and parallelised data processing and analysis of petabyte (PB) scale data sets will be discussed, together with an overview of the current usage of Hadoop within the bioinformatics community. PMID:23872175

  9. Challenges and opportunities of cloud computing for atmospheric sciences

    NASA Astrophysics Data System (ADS)

    Pérez Montes, Diego A.; Añel, Juan A.; Pena, Tomás F.; Wallom, David C. H.

    2016-04-01

    Cloud computing is an emerging technological solution widely used in many fields. Initially developed as a flexible way of managing peak demand, it has begun to make its way into scientific research. One of the greatest advantages of cloud computing for scientific research is independence from access to a large cyberinfrastructure when funding or performing a research project. Cloud computing can avoid the maintenance expenses of large supercomputers and has the potential to 'democratize' access to high-performance computing, giving funding bodies flexibility in allocating budgets for the computational costs associated with a project. Two of the most challenging problems in atmospheric sciences are computational cost and uncertainty in meteorological forecasting and climate projections. Both problems are closely related. Usually uncertainty can be reduced with the availability of computational resources to better reproduce a phenomenon or to perform a larger number of experiments. Here we present results of the application of cloud computing resources for climate modeling using cloud computing infrastructures of three major vendors and two climate models. We show how the cloud infrastructure compares in performance to traditional supercomputers and how it provides the capability to complete experiments in shorter periods of time. The associated monetary cost is also analyzed. Finally we discuss the future potential of this technology for meteorological and climatological applications, both from the point of view of operational use and research.

  10. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    NASA Technical Reports Server (NTRS)

    Pham, Long; Chen, Aijun; Kempler, Steven; Lynnes, Christopher; Theobald, Michael; Asghar, Esfandiari; Campino, Jane; Vollmer, Bruce

    2011-01-01

    Cloud Computing has been implemented in several commercial arenas. The NASA Nebula Cloud Computing platform is an Infrastructure as a Service (IaaS) built in 2008 at NASA Ames Research Center and in 2010 at GSFC. Nebula is an open source Cloud platform intended to: a) make NASA realize significant cost savings through efficient resource utilization, reduced energy consumption, and reduced labor costs; b) provide an easier way for NASA scientists and researchers to efficiently explore and share large and complex data sets; c) allow customers to provision, manage, and decommission computing capabilities on an as-needed basis.

  11. A Study on Strategic Provisioning of Cloud Computing Services

    PubMed Central

    Rejaul Karim Chowdhury, Md

    2014-01-01

    Cloud computing is currently emerging as an ever-changing, growing paradigm that models “everything-as-a-service.” Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for customers to choose the best services. Successful service provisioning can guarantee the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified. PMID:25032243

  12. Enhancing Instruction through Constructivism, Cooperative Learning, and Cloud Computing

    ERIC Educational Resources Information Center

    Denton, David W.

    2012-01-01

    Cloud computing technologies, such as Google Docs and Microsoft Office Live, have the potential to enhance instructional methods predicated on constructivism and cooperative learning. Cloud-based application features like file sharing and online publishing are prompting departments of education across the nation to adopt these technologies.…

  13. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing

    PubMed Central

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in a pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging on top of cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also increases cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find suitable hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201
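
    The RUA algorithm is not reproduced in the abstract; the sketch below is a generic remaining-utilization-aware placement heuristic with assumed host loads and an assumed 10% safety margin, intended only to convey the idea of keeping headroom so hosts do not overload after placement.

      # Generic remaining-utilization-aware placement sketch (not the paper's RUA algorithm).
      # Host loads, the VM demand and the 10% safety margin are assumptions.
      hosts = {"h1": 0.60, "h2": 0.25, "h3": 0.80}   # current CPU utilisation per host
      SAFETY_MARGIN = 0.10

      def place(vm_demand):
          candidates = []
          for host, used in hosts.items():
              remaining = 1.0 - used - vm_demand
              if remaining >= SAFETY_MARGIN:         # keep headroom so the host is not overloaded
                  candidates.append((remaining, host))
          if not candidates:
              return None                            # no suitable host; a new one must be powered on
          _, best = min(candidates)                  # tightest fit that still respects the margin
          hosts[best] += vm_demand
          return best

      print(place(0.30))   # with these assumed loads, the tightest feasible fit is h1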

  14. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in a pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging on top of cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also increases cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find suitable hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201

  15. Using a Cloud-Based Computing Environment to Support Teacher Training on Common Core Implementation

    ERIC Educational Resources Information Center

    Robertson, Cory

    2013-01-01

    A cloud-based computing environment, Google Apps for Education (GAFE), has provided the Anaheim City School District (ACSD) a comprehensive and collaborative avenue for creating, sharing, and editing documents, calendars, and social networking communities. With this environment, teachers and district staff at ACSD are able to utilize the deep…

  16. Cloud Computing Value Chains: Understanding Businesses and Value Creation in the Cloud

    NASA Astrophysics Data System (ADS)

    Mohammed, Ashraf Bany; Altmann, Jörn; Hwang, Junseok

    Based on the promising developments in Cloud Computing technologies in recent years, commercial computing resource services (e.g. Amazon EC2) and software-as-a-service offerings (e.g. Salesforce.com) came into existence. However, the relatively weak business exploitation, participation, and adoption of other Cloud Computing services remain the main challenges. Vague value structures seem to be hindering business adoption and the creation of sustainable business models around the technology. Using an extensive analysis of existing Cloud business models, Cloud services, stakeholder relations, market configurations and value structures, this chapter develops a reference model for value chains in the Cloud. Although this model is theoretically based on Porter's value chain theory, the proposed Cloud value chain model is extended to fit the diversity of business service scenarios in Cloud computing markets. Using this model, different service scenarios are explained. Our findings suggest new services, business opportunities, and policy practices for realizing more adoption and value creation paths in the Cloud.

  17. Cloud computing and patient engagement: leveraging available technology.

    PubMed

    Noblin, Alice; Cortelyou-Ward, Kendall; Servan, Rosa M

    2014-01-01

    Cloud computing technology has the potential to transform medical practices and improve patient engagement and quality of care. However, issues such as privacy and security and "fit" can make incorporation of the cloud an intimidating decision for many physicians. This article summarizes the four most common types of clouds and discusses their ideal uses, how they engage patients, and how they improve the quality of care offered. This technology also can be used to meet Meaningful Use requirements 1 and 2; and, if speculation is correct, the cloud will provide the necessary support needed for Meaningful Use 3 as well. PMID:25807597

  18. Capturing and analyzing wheelchair maneuvering patterns with mobile cloud computing.

    PubMed

    Fu, Jicheng; Hao, Wei; White, Travis; Yan, Yuqing; Jones, Maria; Jan, Yih-Kuen

    2013-01-01

    Power wheelchairs have been widely used to provide independent mobility to people with disabilities. Despite great advancements in power wheelchair technology, research shows that wheelchair related accidents occur frequently. To ensure safe maneuverability, capturing wheelchair maneuvering patterns is fundamental to enable other research, such as safe robotic assistance for wheelchair users. In this study, we propose to record, store, and analyze wheelchair maneuvering data by means of mobile cloud computing. Specifically, the accelerometer and gyroscope sensors in smart phones are used to record wheelchair maneuvering data in real-time. Then, the recorded data are periodically transmitted to the cloud for storage and analysis. The analyzed results are then made available to various types of users, such as mobile phone users, traditional desktop users, etc. The combination of mobile computing and cloud computing leverages the advantages of both techniques and extends the smart phone's capabilities of computing and data storage via the Internet. We performed a case study to implement the mobile cloud computing framework using Android smart phones and Google App Engine, a popular cloud computing platform. Experimental results demonstrated the feasibility of the proposed mobile cloud computing framework. PMID:24110214
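
    The record-and-upload pattern described above can be sketched briefly; the endpoint URL and payload fields below are hypothetical and do not reflect the study's actual Google App Engine interface.

      # Sketch of buffering sensor samples and periodically pushing them to a cloud endpoint.
      # The endpoint URL and payload fields are hypothetical.
      import time
      import requests

      CLOUD_ENDPOINT = "https://example-wheelchair-app.appspot.com/api/samples"  # placeholder
      buffer = []

      def record(ax, ay, az, gx, gy, gz):
          buffer.append({"t": time.time(), "accel": [ax, ay, az], "gyro": [gx, gy, gz]})

      def flush():
          if not buffer:
              return
          resp = requests.post(CLOUD_ENDPOINT, json={"samples": buffer}, timeout=10)
          resp.raise_for_status()     # fail loudly if the cloud side rejected the batch
          buffer.clear()

      record(0.02, -0.01, 9.81, 0.001, 0.000, 0.002)
      flush()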

  19. Scientific Data Storage for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Readey, J.

    2014-12-01

    Traditionally, data storage used for geophysical software systems has centered on file-based systems and libraries such as NetCDF and HDF5. In contrast, cloud-based infrastructure providers such as Amazon AWS, Microsoft Azure, and the Google Cloud Platform generally provide storage technologies based on an object-based storage service (for large binary objects) complemented by a database service (for small objects that can be represented as key-value pairs). These systems have been shown to be highly scalable, reliable, and cost effective. We will discuss a proposed system that leverages these cloud-based storage technologies to provide an API-compatible library for traditional NetCDF and HDF5 applications. This system will enable cloud storage suitable for geophysical applications that can scale up to petabytes of data and thousands of users. We'll also cover other advantages of this system such as enhanced metadata search.
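
    A minimal sketch of the chunk-to-object mapping such a library might use is shown below; the bucket name and key layout are assumptions, not the proposed system's actual scheme.

      # Sketch of mapping a chunked HDF5/NetCDF-style dataset onto object storage with boto3.
      # The bucket name and key layout are assumptions.
      import json
      import numpy as np
      import boto3

      s3 = boto3.client("s3")
      BUCKET = "example-geo-data"        # placeholder bucket

      def put_dataset(name, data, chunk_rows=1000):
          meta = {"shape": data.shape, "dtype": str(data.dtype), "chunk_rows": chunk_rows}
          s3.put_object(Bucket=BUCKET, Key=f"{name}/meta.json", Body=json.dumps(meta))
          for i in range(0, data.shape[0], chunk_rows):
              chunk = data[i:i + chunk_rows]
              s3.put_object(Bucket=BUCKET,
                            Key=f"{name}/chunk_{i // chunk_rows:06d}",
                            Body=chunk.tobytes())   # one object per chunk; small items could go to a database

      put_dataset("sea_surface_temp", np.random.rand(5000, 360).astype("float32"))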

  20. Cloud Computing: A Free Technology Option to Promote Collaborative Learning

    ERIC Educational Resources Information Center

    Siegle, Del

    2010-01-01

    In a time of budget cuts and limited funding, purchasing and installing the latest software on classroom computers can be prohibitive for schools. Many educators are unaware that a variety of free software options exist, and some of them do not actually require installing software on the user's computer. One such option is cloud computing. This…

  1. Mobile healthcare information management utilizing Cloud Computing and Android OS.

    PubMed

    Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias

    2010-01-01

    Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice. PMID:21097207

  2. Further developments in cloud statistics for computer simulations

    NASA Technical Reports Server (NTRS)

    Chang, D. T.; Willand, J. H.

    1972-01-01

    This study is a part of NASA's continued program to provide global statistics of cloud parameters for computer simulation. The primary emphasis was on the development of the data bank of the global statistical distributions of cloud types and cloud layers and their applications in the simulation of the vertical distributions of in-cloud parameters such as liquid water content. These statistics were compiled from actual surface observations as recorded in Standard WBAN forms. Data for a total of 19 stations were obtained and reduced. These stations were selected to be representative of the 19 primary cloud climatological regions defined in previous studies of cloud statistics. Using the data compiled in this study, a limited study was conducted of the homogeneity of cloud regions, the latitudinal dependence of cloud-type distributions, the dependence of these statistics on sample size, and other factors in the statistics which are of significance to the problem of simulation. The application of the statistics in cloud simulation was investigated. In particular, the inclusion of the new statistics in an expanded multi-step Monte Carlo simulation scheme is suggested and briefly outlined.
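
    To illustrate how such statistics feed a Monte Carlo simulation step, the sketch below samples a cloud type and an in-cloud parameter; the probabilities and liquid-water-content ranges are invented for illustration and are not values from the study.

      # Sketch of one Monte Carlo step driven by regional cloud-type statistics.
      # The probabilities and liquid-water-content ranges are invented for illustration only.
      import random

      cloud_type_probs = {"stratus": 0.40, "cumulus": 0.35, "cirrus": 0.25}    # per-region statistics
      lwc_range_g_m3 = {"stratus": (0.2, 0.5), "cumulus": (0.3, 1.0), "cirrus": (0.01, 0.05)}

      def sample_cloud():
          types, probs = zip(*cloud_type_probs.items())
          ctype = random.choices(types, weights=probs, k=1)[0]   # draw a cloud type
          lwc = random.uniform(*lwc_range_g_m3[ctype])           # then an in-cloud parameter
          return ctype, lwc

      print(sample_cloud())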

  3. Cloud@Home: A New Enhanced Computing Paradigm

    NASA Astrophysics Data System (ADS)

    Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco

    Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)) and Green computing (a new frontier of ethical computing starting from the assumption that in the near future energy costs will be related to environmental pollution).

  4. CloudMC: a cloud computing application for Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Miras, H.; Jiménez, R.; Miras, C.; Gomà, C.

    2013-04-01

    This work presents CloudMC, a cloud computing application—developed in Windows Azure®, the platform of the Microsoft® cloud—for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based—the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes, and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with an increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
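
    The quoted 48.6 min on 64 instances (a 37x speedup over 30 h of single-instance CPU) can be checked against Amdahl's law by inverting it to estimate the parallelizable fraction:

      # Invert Amdahl's law, S = 1 / ((1 - p) + p/N), to estimate the parallelizable fraction p
      # from the reported 37x speedup on 64 instances.
      N = 64
      S = 37.0

      p = (1 - 1 / S) / (1 - 1 / N)       # parallelizable fraction, ~0.988
      serial_fraction = 1 - p             # ~1.2%

      print(f"parallelizable fraction ~ {p:.3f}")
      print(f"serial fraction         ~ {serial_fraction:.3%}")

    Under these assumptions the implied non-parallelizable fraction is roughly 1%, consistent with the slight deviation from Amdahl's law noted above.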

  5. ProteoCloud: a full-featured open source proteomics cloud computing pipeline.

    PubMed

    Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart

    2013-08-01

    We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy to use and cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com. PMID:23305951

  6. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using commercial cloud computing services. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53 minute long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. As high-performance computing continues to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will become increasingly attractive.
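
    The inverse power relation between runtime and cluster size can be fitted in log space; in the sketch below only the two endpoint timings (53 minutes on one node, 3.11 minutes on a 20-node cluster) come from the abstract, while the intermediate points are made up for illustration.

      # Fit runtime vs. cluster size to t = a * n**b in log space (b is expected to be near -1).
      # Only the endpoints (53 min on 1 node, 3.11 min on 20 nodes) come from the text above;
      # the intermediate timings are made up for illustration.
      import numpy as np

      nodes = np.array([1, 2, 5, 10, 20])
      runtime_min = np.array([53.0, 27.5, 11.8, 6.1, 3.11])

      b, log_a = np.polyfit(np.log(nodes), np.log(runtime_min), 1)
      a = np.exp(log_a)
      print(f"t(n) ~ {a:.1f} * n^({b:.2f})")   # an exponent close to -1 indicates near-ideal scaling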

  7. Snore related signals processing in a private cloud computing system.

    PubMed

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, processing of big SRS data is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to exploit applications in both academia and industry, and it has the potential to enable large-scale work in biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then set up comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure. PMID:25205499

  8. Managing Laboratory Data Using Cloud Computing as an Organizational Tool

    ERIC Educational Resources Information Center

    Bennett, Jacqueline; Pence, Harry E.

    2011-01-01

    One of the most significant difficulties encountered when directing undergraduate research and developing new laboratory experiments is how to efficiently manage the data generated by a number of students. Cloud computing, where both software and computer files reside online, offers a solution to this data-management problem and allows researchers…

  9. The Advance of Computing from the Ground to the Cloud

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  10. Cloud Computing for Comparative Genomics with Windows Azure Platform

    PubMed Central

    Kim, Insik; Jung, Jae-Yoon; DeLuca, Todd F.; Nelson, Tristan H.; Wall, Dennis P.

    2012-01-01

    Cloud computing services have emerged as a cost-effective alternative for cluster systems as the number of genomes and required computation power to analyze them increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services. PMID:23032609

  11. A Novel Green Cloud Computing Framework for Improving System Efficiency

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    As the prevalence of Cloud computing continues to rise, the need for power-saving mechanisms within the Cloud also increases. In this paper we present a novel Green Cloud framework for improving system efficiency in a data center. To demonstrate the potential of our framework, we present new energy-efficient scheduling, VM system image, and image management components that explore new ways to conserve power. Through the research presented in this paper, we have found new ways to save vast amounts of energy while minimally impacting performance.

  12. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road

    PubMed Central

    Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka

    2015-01-01

    Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on “on-demand payment” for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to redress this mismatch, insofar as possible. PMID:26230400

  13. Design and Implementation of a Cloud Computing Adoption Decision Tool: Generating a Cloud Road.

    PubMed

    Bildosola, Iñaki; Río-Belver, Rosa; Cilleruelo, Ernesto; Garechana, Gaizka

    2015-01-01

    Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, the small and medium enterprise is supposed to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap, presenting a real tool already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information to deploy the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows the decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest and low knowledge on this subject and the tool presented aims to redress this mismatch, insofar as possible. PMID:26230400

  14. Open Source Software Reuse in the Airborne Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Khudikyan, S. E.; Hart, A. F.; Hardman, S.; Freeborn, D.; Davoodi, F.; Resneck, G.; Mattmann, C. A.; Crichton, D. J.

    2012-12-01

    Earth science airborne missions play an important role in helping humans understand our climate. A challenge for airborne campaigns in contrast to larger NASA missions is that their relatively modest budgets do not permit the ground-up development of data management tools. These smaller missions generally consist of scientists whose primary focus is on the algorithmic and scientific aspects of the mission, which often leaves data management software and systems to be addressed as an afterthought. The Airborne Cloud Computing Environment (ACCE), developed by the Jet Propulsion Laboratory (JPL) to support Earth Science Airborne Program, is a reusable, multi-mission data system environment for NASA airborne missions. ACCE provides missions with a cloud-enabled platform for managing their data. The platform consists of a comprehensive set of robust data management capabilities that cover everything from data ingestion and archiving, to algorithmic processing, and to data delivery. Missions interact with this system programmatically as well as via browser-based user interfaces. The core components of ACCE are largely based on Apache Object Oriented Data Technology (OODT), an open source information integration framework at the Apache Software Foundation (ASF). Apache OODT is designed around a component-based architecture that allows for selective combination of components to create highly configurable data management systems. The diverse and growing community that currently contributes to Apache OODT fosters on-going growth and maturation of the software. ACCE's key objective is to reduce cost and risks associated with developing data management systems for airborne missions. Software reuse plays a prominent role in mitigating these problems. By providing a reusable platform based on open source software, ACCE enables airborne missions to allocate more resources to their scientific goals, thereby opening the doors to increased scientific discovery.

  15. Genomic cloud computing: legal and ethical points to consider

    PubMed Central

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Burton, Paul; Chisholm, Rex; Fortier, Isabel; Goodwin, Pat; Harris, Jennifer; Hveem, Kristian; Kaye, Jane; Kent, Alistair; Knoppers, Bartha Maria; Lindpaintner, Klaus; Little, Julian; Riegman, Peter; Ripatti, Samuli; Stolk, Ronald; Bobrow, Martin; Cambon-Thomsen, Anne; Dressler, Lynn; Joly, Yann; Kato, Kazuto; Knoppers, Bartha Maria; Rodriguez, Laura Lyman; McPherson, Treasa; Nicolás, Pilar; Ouellette, Francis; Romeo-Casabona, Carlos; Sarin, Rajiv; Wallace, Susan; Wiesner, Georgia; Wilson, Julia; Zeps, Nikolajs; Simkevitz, Howard; De Rienzo, Assunta; Knoppers, Bartha M

    2015-01-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  16. Cloud Computing for Protein-Ligand Binding Site Comparison

    PubMed Central

    2013-01-01

    The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery. PMID:23762824
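
    The MapReduce pattern used to distribute pairwise comparisons can be sketched in plain Python; compare_sites() below is a stand-in for the actual SMAP comparison and the site identifiers are hypothetical.

      # Plain-Python sketch of the MapReduce pattern for pairwise binding-site comparison.
      # compare_sites() is a placeholder for the real SMAP comparison; the IDs are hypothetical.
      from itertools import combinations
      from collections import defaultdict

      def compare_sites(site_a, site_b):
          return float(abs(hash((site_a, site_b))) % 100) / 100.0   # placeholder similarity score

      def map_phase(sites):
          for a, b in combinations(sites, 2):         # each pair becomes an independent task
              yield (a, b), compare_sites(a, b)

      def reduce_phase(mapped):
          grouped = defaultdict(list)
          for (a, _b), score in mapped:
              grouped[a].append(score)                # group scores by query site
          return {site: max(scores) for site, scores in grouped.items()}

      sites = ["1abc_A", "2xyz_B", "3pqr_C"]
      print(reduce_phase(map_phase(sites)))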

  17. Cloud computing for protein-ligand binding site comparison.

    PubMed

    Hung, Che-Lun; Hua, Guan-Jie

    2013-01-01

    The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery. PMID:23762824

  18. Genomic cloud computing: legal and ethical points to consider.

    PubMed

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Knoppers, Bartha M

    2015-10-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  19. COMPUTATIONAL MODELING OF ELECTRON CLOUD FOR MEIC

    SciTech Connect

    S. Ahmed, B. Yunn, J. Dolph, T. Satogata, G.A. Krafft

    2012-07-01

    This work is a continuation of our earlier studies [4] on electron cloud (EC) simulations for the medium energy electron-ion collider (MEIC) envisioned at Jefferson Lab beyond the 12 GeV upgrade of CEBAF. In this paper, we study the EC saturation density with various MEIC operational parameters. The study shows a saturated line density of 1.7 nC/m and a tune shift per unit length of 4.9 × 10^-7 m^-1.

  20. Secure Dynamic access control scheme of PHR in cloud computing.

    PubMed

    Chen, Tzer-Shyong; Liu, Chia-Hui; Chen, Tzer-Long; Chen, Chin-Sheng; Bau, Jian-Guo; Lin, Tzu-Ching

    2012-12-01

    With the development of information technology and medical technology, medical information has evolved from traditional paper records into electronic medical records, which are now widely used. A new style of medical information exchange, the personal health record (PHR), is gradually being developed. A PHR is a health record maintained and recorded by the individual. An ideal personal health record integrates personal medical information from different sources and provides a complete and correct personal health and medical summary through the Internet or portable media, under the requirements of security and privacy. Many personal health records are already in use. The patient-centered PHR information exchange system allows the public to autonomously maintain and manage personal health records. Such management is convenient for storing, accessing, and sharing personal medical records. With the emergence of Cloud computing, PHR services have moved towards storing data on Cloud servers, so that resources can be flexibly utilized and operating costs reduced. Nevertheless, patients face privacy problems when storing PHR data in the Cloud. A secure protection scheme is therefore required to encrypt the medical records of each patient before storing PHRs on a Cloud server. In the encryption process, it is a challenge to achieve accurate access to medical records while remaining flexible and efficient. A new PHR access control scheme for Cloud computing environments is proposed in this study. Using Lagrange interpolation polynomials to establish a secure and effective PHR information access scheme, it allows accurate and secure access to PHRs and is suitable for a large number of users. Moreover, this scheme dynamically supports multiple users in Cloud computing environments with personal privacy and offers authorized parties legal access to PHRs. From security and effectiveness analyses, the proposed PHR access
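
    The abstract invokes Lagrange interpolation polynomials for access control but does not give the construction here; the sketch below therefore shows only the generic Shamir-style building block of reconstructing a key from shares over a prime field, with toy values, and is not the paper's exact scheme.

      # Generic Lagrange-interpolation secret reconstruction (Shamir-style threshold sharing).
      # Toy values throughout; real schemes use large primes and a proper share-generation step.
      P = 2087                      # small prime field for illustration
      SECRET = 1234                 # the key protecting a PHR record

      # Two shares of the degree-1 polynomial f(x) = SECRET + 17*x (threshold 2): points (x, f(x) mod P)
      shares = [(1, (SECRET + 17 * 1) % P), (2, (SECRET + 17 * 2) % P)]

      def reconstruct(points, p):
          """Evaluate the interpolating polynomial at x = 0 over GF(p)."""
          secret = 0
          for i, (xi, yi) in enumerate(points):
              num, den = 1, 1
              for j, (xj, _) in enumerate(points):
                  if i != j:
                      num = (num * -xj) % p            # factor (0 - xj)
                      den = (den * (xi - xj)) % p
              secret = (secret + yi * num * pow(den, -1, p)) % p
          return secret

      assert reconstruct(shares, P) == SECRET
      print(reconstruct(shares, P))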

  1. Using Computers To Design Historical Communities.

    ERIC Educational Resources Information Center

    Mock, April

    1999-01-01

    Describes an activity in which middle school students use the computer software entitled "Community Construction Kit" to design buildings for historical and contemporary communities. Addresses the student benefits of both this activity and the "Community Construction Kit" in general. (CMK)

  2. CloudWF: A Computational Workflow System for Clouds Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; de Sterck, Hans

    This paper describes CloudWF, a scalable and lightweight computational workflow system for clouds on top of Hadoop. CloudWF can run workflow jobs composed of multiple Hadoop MapReduce or legacy programs. Its novelty lies in several aspects: a simple workflow description language that encodes workflow blocks and block-to-block dependencies separately as standalone executable components; a new workflow storage method that uses Hadoop HBase sparse tables to store workflow information internally and reconstruct workflow block dependencies implicitly for efficient workflow execution; transparent file staging with Hadoop DFS; and decentralized workflow execution management relying on the MapReduce framework for task scheduling and fault tolerance. This paper describes the design and implementation of CloudWF.

  3. Companies Reaching for the Clouds for Computing Power

    SciTech Connect

    Madison, Alison L.

    2012-10-07

    By now, we’ve likely all at least heard of cloud computing, and to some extent may grasp what it’s all about. But after delving into a recent article in The New York Times, I came to realize just how big of a deal it is--much bigger than my own limited experience with it had allowed me to see. Cloud computing is the use of hardware or software computing resources that are delivered as a service over a network, typically via the web. The gist of it is, almost anything you can imagine doing with your computer system doesn’t have to physically exist on your system or in your office in order to be accessible to you. You can entrust remote services with your data, software, and computation. It’s easier, and also much less expensive.

  4. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    PubMed

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of the easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete; hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduced the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of the simulation showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
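
    Neither the fitness function nor the SA parameters are reproduced in the abstract; the sketch below illustrates a makespan-plus-imbalance fitness and a Boltzmann acceptance step with an assumed weighting and toy task/VM data.

      # Sketch of a fitness function combining makespan and VM load imbalance, plus a
      # simulated-annealing acceptance step. The weighting and data are assumptions.
      import math
      import random

      def fitness(schedule, task_len, vm_speed):
          """schedule[i] = index of the VM assigned to task i; lower fitness is better."""
          loads = [0.0] * len(vm_speed)
          for task, vm in enumerate(schedule):
              loads[vm] += task_len[task] / vm_speed[vm]
          makespan = max(loads)
          imbalance = (max(loads) - min(loads)) / (sum(loads) / len(loads))   # degree of imbalance
          return makespan + 0.5 * imbalance           # the 0.5 weighting is an arbitrary choice

      def accept(current, candidate, temperature):
          """Accept worse candidates with Boltzmann probability (the SA component)."""
          if candidate <= current:
              return True
          return random.random() < math.exp((current - candidate) / temperature)

      task_len = [4, 8, 2, 6, 5]
      vm_speed = [1.0, 2.0]
      schedule = [0, 1, 0, 1, 0]
      print(fitness(schedule, task_len, vm_speed))
      print(accept(11.2, 11.5, temperature=2.0))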

  5. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment

    PubMed Central

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, thereby adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
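
    A minimal sketch of the hybrid idea summarized in the two records above, assuming a continuous encoding of task-to-VM assignments, a makespan fitness, illustrative task lengths and VM speeds, and a simulated-annealing acceptance rule layered on an SOS-style mutualism move; it is not the paper's exact SASOS formulation.

      import math, random

      random.seed(1)
      task_len = [random.randint(100, 1000) for _ in range(30)]   # task lengths (MI), illustrative
      vm_mips  = [500, 1000, 1500, 2000]                          # VM speeds, illustrative

      def decode(x):            # continuous position -> VM index per task
          return [int(abs(v)) % len(vm_mips) for v in x]

      def makespan(x):
          load = [0.0] * len(vm_mips)
          for t, vm in zip(task_len, decode(x)):
              load[vm] += t / vm_mips[vm]
          return max(load)

      def sos_sa(pop_size=20, iters=200, temp=50.0, cooling=0.97):
          pop = [[random.uniform(0, len(vm_mips)) for _ in task_len]
                 for _ in range(pop_size)]
          best = min(pop, key=makespan)
          for _ in range(iters):
              for i, xi in enumerate(pop):
                  xj = pop[random.randrange(pop_size)]
                  # Mutualism-style move toward the best organism
                  mutual = [(a + b) / 2 for a, b in zip(xi, xj)]
                  cand = [a + random.random() * (b - m)
                          for a, b, m in zip(xi, best, mutual)]
                  # SA acceptance: keep worse candidates with decreasing probability
                  delta = makespan(cand) - makespan(xi)
                  if delta < 0 or random.random() < math.exp(-delta / temp):
                      pop[i] = cand
                  if makespan(pop[i]) < makespan(best):
                      best = list(pop[i])
              temp *= cooling
          return decode(best), makespan(best)

      schedule, span = sos_sa()
      print("makespan (s):", round(span, 2))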

  6. Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.

    PubMed

    Trudgian, David C; Mirzaei, Hamid

    2012-12-01

    We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net. PMID:23088505

  7. Cloud Computing Techniques for Space Mission Design

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems and produce better results, faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  8. Cloud Computing Boosts Business Intelligence of Telecommunication Industry

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Gao, Dan; Deng, Chao; Luo, Zhiguo; Sun, Shaoling

    Business Intelligence has become an attractive topic in today's data-intensive applications, especially in the telecommunication industry. Meanwhile, Cloud Computing, which provides IT supporting infrastructure with excellent scalability, large-scale storage, and high performance, has become an effective way to implement parallel data processing and data mining algorithms. BC-PDM (Big Cloud based Parallel Data Miner) is a new MapReduce-based parallel data mining platform developed by CMRI (China Mobile Research Institute) to meet the urgent requirements of business intelligence in the telecommunication industry. In this paper, the architecture, functionality and performance of BC-PDM are presented, together with the experimental evaluation and case studies of its applications. The evaluation result demonstrates both the usability and the cost-effectiveness of a Cloud Computing based Business Intelligence system in applications of the telecommunication industry.

  9. A Scientific Cloud Computing Platform for Condensed Matter Physics

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Johnson, W.; Vila, F. D.; Rehr, J. J.

    2013-03-01

    Scientific Cloud Computing (SCC) makes possible calculations with high performance computational tools, without the need to purchase or maintain sophisticated hardware and software. We have recently developed an interface dubbed SC2IT that controls on-demand virtual Linux clusters within the Amazon EC2 cloud platform. Using this interface we have developed a more advanced, user-friendly SCC Platform configured especially for condensed matter calculations. This platform contains a GUI, based on a new Java version of SC2IT, that permits calculations of various materials properties. The cloud platform includes Virtual Machines preconfigured for parallel calculations and several precompiled and optimized materials science codes for electronic structure and x-ray and electron spectroscopy. Consequently this SCC makes state-of-the-art condensed matter calculations easy to access for general users. Proof-of-principle performance benchmarks show excellent parallelization and communication performance. Supported by NSF grant OCI-1048052

  10. Above the cloud computing: applying cloud computing principles to create an orbital services model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Mohammad, Atif; Berk, Josh; Nervold, Anders K.

    2013-05-01

    Large satellites and exquisite planetary missions are generally self-contained. They have, onboard, all of the computational, communications and other capabilities required to perform their designated functions. Because of this, the satellite or spacecraft carries hardware that may be utilized only a fraction of the time; however, the full cost of development and launch is still borne by the program. Small satellites do not have this luxury. Due to mass and volume constraints, they cannot afford to carry numerous pieces of barely utilized equipment or large antennas. This paper proposes a cloud-computing model for exposing satellite services in an orbital environment. Under this approach, each satellite with available capabilities broadcasts a service description for each service that it can provide (e.g., general computing capacity, DSP capabilities, specialized sensing capabilities, transmission capabilities, etc.) and its orbital elements. Consumer spacecraft retain a cache of service providers and select one utilizing decision making heuristics (e.g., suitability of performance, opportunity to transmit instructions and receive results - based on the orbits of the two craft). The two craft negotiate service provisioning (e.g., when the service can be available and for how long) based on the operating rules prioritizing use of (and allowing access to) the service on the service provider craft, based on the credentials of the consumer. Service description, negotiation and sample service performance protocols are presented. The required components of each consumer or provider spacecraft are reviewed. These include fully autonomous control capabilities (for provider craft), a lightweight orbit determination routine (to determine when consumer and provider craft can see each other and, possibly, pointing requirements for craft with directional antennas) and an authentication and resource utilization priority-based access decision making subsystem (for provider craft
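
    A hypothetical sketch of the kind of service description a provider satellite might broadcast and a naive consumer-side selection heuristic for the model outlined above; the field names, the scoring rule, and the example values are all assumptions, not the protocols presented in the paper.

      from dataclasses import dataclass

      @dataclass
      class ServiceDescription:
          provider_id: str
          capabilities: dict          # e.g. {"dsp": True, "cpu_mips": 400}
          contact_minutes: float      # predicted visibility window from orbital elements
          priority_class: int         # provider-side access priority (lower is better)

      def select_provider(cache, needed_capability, min_contact=2.0):
          candidates = [s for s in cache
                        if s.capabilities.get(needed_capability)
                        and s.contact_minutes >= min_contact]
          # Prefer longer contact windows, then better (lower) priority class
          return max(candidates,
                     key=lambda s: (s.contact_minutes, -s.priority_class),
                     default=None)

      cache = [
          ServiceDescription("sat-A", {"dsp": True, "cpu_mips": 400}, 4.5, 2),
          ServiceDescription("sat-B", {"dsp": True, "cpu_mips": 250}, 1.5, 1),
      ]
      print(select_provider(cache, "dsp").provider_id)   # -> sat-A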

  11. Cloud computing strategic framework (FY13 - FY15).

    SciTech Connect

    Arellano, Lawrence R.; Arroyo, Steven C.; Giese, Gerald J.; Cox, Philip M.; Rogers, G. Kelly

    2012-11-01

    This document presents an architectural framework (plan) and roadmap for the implementation of a robust Cloud Computing capability at Sandia National Laboratories. It is intended to be a living document and serve as the basis for detailed implementation plans, project proposals and strategic investment requests.

  12. Factors Influencing Cloud-Computing Technology Adoption in Developing Countries

    ERIC Educational Resources Information Center

    Hailu, Alemayehu

    2012-01-01

    Adoption of new technology has complicating components both from the selection, as well as decision-making criteria and process. Although new technology such as cloud computing provides great benefits especially to the developing countries, it has challenges that may complicate the selection decision and subsequent adoption process. This study…

  13. Risk in Enterprise Cloud Computing: Re-Evaluated

    ERIC Educational Resources Information Center

    Funmilayo, Bolonduro, R.

    2016-01-01

    A quantitative study was conducted to get the perspectives of IT experts about risks in enterprise cloud computing. In businesses, these IT experts are often not in positions to prioritize business needs. The business experts commonly known as business managers mostly determine an organization's business needs. Even if an IT expert classified a…

  14. "Cloud computations" for chemical departments of power stations

    NASA Astrophysics Data System (ADS)

    Ochkov, V. F.; Chudova, Yu. V.; Minaeva, E. A.

    2009-07-01

    The notion of “cloud computations” is defined, and examples of such computations carried out at the Moscow Power Engineering Institute are given. Calculations of emissions discharged into the atmosphere from steam and hot-water boilers, as well as other calculations presented on the Internet that are of interest for power stations, are shown.

  15. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability, together with issues including data distribution, software heterogeneity, and ad hoc hardware availability, commonly forces scientists into the simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  16. Polyphony: A Workflow Orchestration Framework for Cloud Computing

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom

    2010-01-01

    Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Laboratory (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We will conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.

  17. Exploring the Universe with WISE and Cloud Computing

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.

    2011-01-01

    WISE is a recently-completed astronomical survey mission that has imaged the entire sky in four infrared wavelength bands. The large quantity of science images returned consists of 2,776,922 individual snapshots in various locations in each band which, along with ancillary data, totals around 110TB of raw, uncompressed data. Making the most use of this data requires advanced computing resources. I will discuss some initial attempts in the use of cloud computing to make this large problem tractable.

  18. Cloud Computing Technologies Facilitate Earth Research

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a Space Act Agreement, NASA partnered with Seattle-based Amazon Web Services to make the agency's climate and Earth science satellite data publicly available on the company's servers. Users can access the data for free, but they can also pay to use Amazon's computing services to analyze and visualize information using the same software available to NASA researchers.

  19. Cloud Computing and the Power to Choose

    ERIC Educational Resources Information Center

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  20. BOINC service for volunteer cloud computing

    NASA Astrophysics Data System (ADS)

    Høimyr, N.; Blomer, J.; Buncic, P.; Giovannozzi, M.; Gonzalez, A.; Harutyunyan, A.; Jones, P. L.; Karneyeu, A.; Marquina, M. A.; Mcintosh, E.; Segal, B.; Skands, P.; Grey, F.; Lombraña González, D.; Zacharov, I.

    2012-12-01

    For the past couple of years, a team at CERN and partners from the Citizen Cyberscience Centre (CCC) have been working on a project that enables general physics simulation programs to run in a virtual machine on volunteer PCs around the world. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework. Based on CERNVM and the job management framework Co-Pilot, this project was made available for public beta-testing in August 2011 with Monte Carlo simulations of LHC physics under the name “LHC@home 2.0” and the BOINC project: “Test4Theory”. At the same time, CERN's efforts on Volunteer Computing for LHC machine studies have been intensified; this project has previously been known as LHC@home, and has been running the “Sixtrack” beam dynamics application for the LHC accelerator, using a classic BOINC framework without virtual machines. CERN-IT has set up a BOINC server cluster, and has provided and supported the BOINC infrastructure for both projects. CERN intends to evolve the setup into a generic BOINC application service that will allow scientists and engineers at CERN to profit from volunteer computing. This paper describes the experience with the two different approaches to volunteer computing as well as the status and outlook of a general BOINC service.

  1. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    PubMed

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (the E. coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E. coli and 53.5% (95% CI: 34.4-72.6) for the human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E. coli and human assemblies, respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE. PMID:25247298

  2. Towards dynamic remote data auditing in computational clouds.

    PubMed

    Sookhak, Mehdi; Akhunzada, Adnan; Gani, Abdullah; Khurram Khan, Muhammad; Anuar, Nor Badrul

    2014-01-01

    Cloud computing is a significant shift in computational paradigm, in which computing as a utility and storing data remotely have great potential. Enterprises and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the data owners' lack of control and physical possession of the data. To address this issue, researchers have focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable to static archive data and cannot audit dynamically updated outsourced data. We propose an effective RDA technique based on algebraic signature properties for cloud storage systems and also present a new data structure capable of efficiently supporting dynamic data operations such as append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable to large-scale data with minimum computation cost. The comparative analysis with the state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server. PMID:25121114

  3. Towards Dynamic Remote Data Auditing in Computational Clouds

    PubMed Central

    Khurram Khan, Muhammad; Anuar, Nor Badrul

    2014-01-01

    Cloud computing is a significant shift in computational paradigm, in which computing as a utility and storing data remotely have great potential. Enterprises and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the data owners' lack of control and physical possession of the data. To address this issue, researchers have focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable to static archive data and cannot audit dynamically updated outsourced data. We propose an effective RDA technique based on algebraic signature properties for cloud storage systems and also present a new data structure capable of efficiently supporting dynamic data operations such as append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable to large-scale data with minimum computation cost. The comparative analysis with the state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server. PMID:25121114
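
    A toy sketch of the algebraic-signature idea behind remote data auditing, assuming a prime field in place of Galois-field arithmetic and omitting the dynamic-update data structure entirely; the modulus, base, block size, and challenge size are arbitrary, and this is not the scheme proposed in the two records above.

      import random

      # Signatures are linear, so a verifier holding only per-block signatures can
      # check a random linear combination of blocks returned by the server.
      P = 2_147_483_647          # prime modulus (illustrative)
      ALPHA = 65_537             # signature base (illustrative)

      def signature(block):      # sig(B) = sum_i B[i] * ALPHA**i  (mod P)
          return sum(b * pow(ALPHA, i, P) for i, b in enumerate(block)) % P

      blocks = [[random.randrange(256) for _ in range(64)] for _ in range(10)]
      sigs = [signature(b) for b in blocks]          # auditor keeps these tags

      # Challenge: auditor picks block indices and random coefficients
      idx = random.sample(range(len(blocks)), 3)
      coeff = [random.randrange(1, P) for _ in idx]

      # Server: returns the combined block, proving it still holds the data
      combined = [sum(c * blocks[j][k] for c, j in zip(coeff, idx)) % P
                  for k in range(64)]

      # Auditor: verifies using signatures only, never the raw blocks
      assert signature(combined) == sum(c * sigs[j] for c, j in zip(coeff, idx)) % P
      print("audit passed")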

  4. Computer tomography of large dust clouds in complex plasmas

    SciTech Connect

    Killer, Carsten; Himpel, Michael; Melzer, André

    2014-10-15

    The dust density is a central parameter of a dusty plasma. Here, a tomography setup for the determination of the three-dimensionally resolved density distribution of spatially extended dust clouds is presented. The dust clouds consist of micron-sized particles confined in a radio frequency argon plasma, where they fill almost the entire discharge volume. First, a line-of-sight integrated dust density is obtained from extinction measurements, where the incident light from an LED panel is scattered and absorbed by the dust. Performing these extinction measurements from many different angles allows the reconstruction of the 3D dust density distribution, analogous to a computer tomography in medical applications.

  5. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    NASA Astrophysics Data System (ADS)

    Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.

    2011-12-01

    Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also some research and application programs being launched in academia and governments to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, which is an Infrastructure as a Service (IaaS) to deliver on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several GES DISC's applications to the Nebula as a proof of concept, including: a) The Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data process workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data process workflow to evaluate the Nebula. The AIRS data process workflow consists of a series of algorithms being used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The result shows that Nebula has significantly

  6. Smart Learning Services Based on Smart Cloud Computing

    PubMed Central

    Kim, Svetlana; Song, Su-Mi; Yoon, Yong-Ik

    2011-01-01

    Context-aware technologies can make e-learning services smarter and more efficient since context-aware services are based on the user’s behavior. To add those technologies into existing e-learning services, a service architecture model is needed to transform the existing e-learning environment, which is situation-aware, into an environment that understands context as well. The context-awareness in e-learning may include the awareness of user profile and terminal context. In this paper, we propose a new notion of service that provides context-awareness to smart learning content in a cloud computing environment. We suggest the elastic four smarts (E4S)—smart pull, smart prospect, smart content, and smart push—concept to the cloud services so that smart learning services become possible. The E4S focuses on meeting the users’ needs by collecting and analyzing users’ behavior, prospecting future services, building corresponding contents, and delivering the contents through the cloud computing environment. Users’ behavior can be collected through mobile devices such as smart phones that have built-in sensors. As a result, the proposed smart e-learning model in a cloud computing environment provides personalized and customized learning services to its users. PMID:22164048

  7. An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center

    NASA Astrophysics Data System (ADS)

    Gleason, J. L.; Little, M. M.

    2013-12-01

    NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead, it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore NASA science computing is a candidate use case for cloud computing where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS desires to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been extensively used by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support the Langley Science Directorate needs to be evaluated by integrating it with real-world operational needs across NASA and the associated maturity that would come with that. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications have been demonstrated during the previous testing. The Langley Office of the Chief Information Officer in partnership with the Atmospheric Sciences Data Center (ASDC) has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level, pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project-level perspective by specifically using a processing scenario involving the Clouds and Earth's Radiant Energy System (CERES) project.

  8. Examining the Relationship between Technological, Organizational, and Environmental Factors and Cloud Computing Adoption

    ERIC Educational Resources Information Center

    Tweel, Abdeneaser

    2012-01-01

    High uncertainties related to cloud computing adoption may hinder IT managers from making solid decisions about adopting cloud computing. The problem addressed in this study was the lack of understanding of the relationship between factors related to the adoption of cloud computing and IT managers' interest in adopting this technology. In…

  9. 77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ... National Institute of Standards and Technology Notice of Public Meeting--Cloud Computing Forum & Workshop V... announces the Cloud Computing Forum & Workshop V to be held on Tuesday, Wednesday and Thursday, June 5, 6... provide information on the U.S. Government (USG) Cloud Computing Technology Roadmap initiative....

  10. 76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...NIST announces the Cloud Computing Forum & Workshop IV to be held on November 2, 3 and 4, 2011. This workshop will provide information on the U.S. Government (USG) Cloud Computing Technology Roadmap initiative. This workshop will also provide an updated status on NIST efforts to help develop open standards in interoperability, portability and security in cloud computing. This event is open to......

  11. 77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ... National Institute of Standards and Technology Notice of Public Meeting--Cloud Computing and Big Data Forum...) announces a Cloud Computing and Big Data Forum and Workshop to be held on Tuesday, January 15, Wednesday... workshop. The NIST Cloud Computing and Big Data Forum and Workshop will bring together leaders...

  12. In the Clouds: The Implications of Cloud Computing for Higher Education Information Technology Governance and Decision Making

    ERIC Educational Resources Information Center

    Dulaney, Malik H.

    2013-01-01

    Emerging technologies challenge the management of information technology in organizations. Paradigm changing technologies, such as cloud computing, have the ability to reverse the norms in organizational management, decision making, and information technology governance. This study explores the effects of cloud computing on information technology…

  13. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, the energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  14. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, the energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  15. Exploring Cloud Computing for Large-scale Scientific Applications

    SciTech Connect

    Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

    2013-06-27

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  16. Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation

    PubMed Central

    Wen, Qiaoyan; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-server scenario. Our main idea is to transform the outsourced data respectively encrypted by different users' public keys to the ones that are encrypted by the same two private keys of the two assisted servers so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result so that all authorized users can recover the desired result while other unauthorized ones including the two servers cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both of the computation and the communication complexities of each user in our solution are independent of the computing function. PMID:24982949

  17. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud.

    PubMed

    Florence, A Paulin; Shanthi, V; Simon, C B Sunil

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a "Pay as you go" basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests are to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this context. In this paper we have devised a methodology which analyzes the behavior of the given cloud request and identifies the associated type of algorithm. Once the type of algorithm is identified, its time complexity is calculated using its asymptotic notation. Using a best-fit strategy, the appropriate host is identified and the incoming job is allocated to the selected host. From the estimated time complexity, the required clock frequency of the host is determined. Accordingly, the CPU frequency is scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of total power consumption. PMID:27239551

  18. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    PubMed Central

    Florence, A. Paulin; Shanthi, V.; Simon, C. B. Sunil

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests are to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this context. In this paper we have devised a methodology which analyzes the behavior of the given cloud request and identifies the associated type of algorithm. Once the type of algorithm is identified, its time complexity is calculated using its asymptotic notation. Using a best-fit strategy, the appropriate host is identified and the incoming job is allocated to the selected host. From the estimated time complexity, the required clock frequency of the host is determined. Accordingly, the CPU frequency is scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of total power consumption. PMID:27239551
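
    A minimal sketch of the flow described in the two records above, assuming an illustrative complexity table, host list, cycles-per-operation factor, and frequency bounds; it shows how a best-fit host and a DVFS clock frequency could be derived from an estimated time complexity, not the paper's exact method.

      import math

      COMPLEXITY = {"O(n)": lambda n: n,
                    "O(n log n)": lambda n: n * math.log2(n),
                    "O(n^2)": lambda n: n ** 2}

      hosts = [{"id": "h1", "free_cores": 2, "f_max_ghz": 3.0},
               {"id": "h2", "free_cores": 8, "f_max_ghz": 2.4}]

      def schedule(request_class, n, deadline_s, cycles_per_op=50, cores_needed=2):
          ops = COMPLEXITY[request_class](n)          # estimated operation count
          cycles = ops * cycles_per_op                # rough cycle estimate
          # Best fit: the smallest host that still satisfies the core requirement
          host = min((h for h in hosts if h["free_cores"] >= cores_needed),
                     key=lambda h: h["free_cores"])
          # Lowest frequency that still meets the deadline, clamped to the host's range
          f_needed_ghz = cycles / deadline_s / 1e9
          f_set = min(max(f_needed_ghz, 0.8), host["f_max_ghz"])
          return host["id"], round(f_set, 2)

      print(schedule("O(n log n)", n=1_000_000, deadline_s=5))   # -> ('h1', 0.8)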

  19. Evolving the Land Information System into a Cloud Computing Service

    SciTech Connect

    Houser, Paul R.

    2015-02-17

    The Land Information System (LIS) was developed to use advanced flexible land surface modeling and data assimilation frameworks to integrate extremely large satellite- and ground-based observations with advanced land surface models to produce continuous high-resolution fields of land surface states and fluxes. The resulting fields are extremely useful for drought and flood assessment, agricultural planning, disaster management, weather and climate forecasting, water resources assessment, and the like. We envisioned transforming the LIS modeling system into a scientific cloud computing-aware web and data service that would allow clients to easily set it up and configure it for use in addressing large water management issues. The focus of this Phase 1 project was to determine the scientific, technical, commercial merit and feasibility of the proposed LIS-cloud innovations that are currently barriers to broad LIS applicability. We (a) quantified the barriers to broad LIS utility and commercialization (high performance computing, big data, user interface, and licensing issues); (b) designed the proposed LIS-cloud web service, model-data interface, database services, and user interfaces; (c) constructed a prototype LIS user interface including abstractions for simulation control, visualization, and data interaction, (d) used the prototype to conduct a market analysis and survey to determine potential market size and competition, (e) identified LIS software licensing and copyright limitations and developed solutions, and (f) developed a business plan for development and marketing of the LIS-cloud innovation. While some significant feasibility issues were found in the LIS licensing, overall a high degree of LIS-cloud technical feasibility was found.

  20. Computer Education and Instructional Technology Teacher Trainees' Opinions about Cloud Computing Technology

    ERIC Educational Resources Information Center

    Karamete, Aysen

    2015-01-01

    This study aims to show the present conditions about the usage of cloud computing in the department of Computer Education and Instructional Technology (CEIT) amongst teacher trainees in School of Necatibey Education, Balikesir University, Turkey. In this study, a questionnaire with open-ended questions was used. 17 CEIT teacher trainees…

  1. cloudPEST - A python module for cloud-computing deployment of PEST, a program for parameter estimation

    USGS Publications Warehouse

    Fienen, Michael N.; Kunicki, Thomas C.; Kester, Daniel E.

    2011-01-01

    This report documents cloudPEST, a Python module with functions to facilitate deployment of the model-independent parameter estimation code PEST in a cloud-computing environment. cloudPEST makes use of low-level, freely available command-line tools that interface with the Amazon Elastic Compute Cloud (EC2™) and that are unlikely to change dramatically. This report describes the preliminary setup for both Python and EC2 tools and subsequently describes the functions themselves. The code and guidelines have been tested primarily on the Windows® operating system but are extensible to Linux®.
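
    A hedged sketch of the kind of low-level EC2 provisioning that a module such as cloudPEST wraps, written here with boto3 rather than cloudPEST's own functions (which are not documented in this record); the AMI ID, key pair, instance type, and worker count are placeholders.

      import boto3

      # Launch a set of worker instances for parallel PEST runs and collect
      # their public addresses; assumes AWS credentials are already configured.
      ec2 = boto3.client("ec2", region_name="us-east-1")

      resp = ec2.run_instances(
          ImageId="ami-xxxxxxxx",          # placeholder AMI with PEST preinstalled
          InstanceType="c5.large",
          MinCount=8, MaxCount=8,          # eight parallel PEST worker nodes
          KeyName="my-keypair",
      )
      ids = [i["InstanceId"] for i in resp["Instances"]]

      ec2.get_waiter("instance_running").wait(InstanceIds=ids)

      desc = ec2.describe_instances(InstanceIds=ids)
      workers = [i.get("PublicIpAddress")
                 for r in desc["Reservations"] for i in r["Instances"]]
      print("PEST workers:", workers)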

  2. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are. PMID:26132225

  3. SC2IT: a cloud computing interface that makes computational science available to non-specialists

    NASA Astrophysics Data System (ADS)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2012-10-01

    Computational work is a vital part of much scientific research. In materials science research in particular, theoretical models are usually needed to understand measurements. There is currently a double barrier that keeps a broad class of researchers from using state-of-the-art materials science (MS) codes: the software typically lacks user-friendliness, and the hardware requirements can demand a significant investment, e.g. the purchase of a Beowulf cluster. Scientific Cloud Computing (SCC) has the potential to breach this barrier and make computational science accessible to a wide class of non-specialist scientists. We present a platform, SC2IT, that enables seamless control of virtual compute clusters in the Amazon EC2 cloud and is designed to be embedded in user-friendly Java GUIs. Thus, users can create powerful High-Performance Computing systems with preconfigured MS codes in the cloud with a single mouse click. We present applications of our SCC platform to the materials science codes FEFF9, WIEN2k, and MEEP-mpi. SC2IT and the paradigm described here are applicable to other fields of research beyond materials science, although the computational performance of Cloud Computing may vary with the characteristics of the calculations.

  4. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    NASA Astrophysics Data System (ADS)

    Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.

    2011-12-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and conclude that it is most cost effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.

  5. Performance Evaluation of Resource Management in Cloud Computing Environments

    PubMed Central

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  6. Performance Evaluation of Resource Management in Cloud Computing Environments.

    PubMed

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  7. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    ERIC Educational Resources Information Center

    Islam, Muhammad Faysal

    2013-01-01

    Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…

  8. Community Building and Computer-Mediated Conferencing

    ERIC Educational Resources Information Center

    Moisey, Susan Darlene; Neu, Candace; Cleveland-Innes, Martha

    2008-01-01

    This study examined the relationship between community cohesion and computer-mediated conferencing (CMC), as well as other variables potentially associated with the development of a learning community. Within the context of a graduate-level course in instructional design (a core course in the Masters of Distance Education program at Athabasca…

  9. Utilizing Cloud Computing to Improve Climate Modeling and Studies

    NASA Astrophysics Data System (ADS)

    Li, Z.; Yang, C.; Liu, K.; Sun, M.; XIA, J.; Huang, Q.

    2013-12-01

    Climate studies have become increasingly important due to global climate change, one of the biggest challenges for humanity in the 21st century. Climate data, not only observational data collected from various sensors but also simulated data generated from diverse climate models, are essential for scientists to explore potential climate change patterns and analyze the complex climate dynamics. Climate modeling and simulation, a critical methodology for simulating the past and predicting future climate conditions, can produce huge amounts of data that contain potentially valuable information for climate studies. However, using modeling methods in climate studies poses at least two challenges for scientists. First, running climate models is a computing-intensive process, which requires large amounts of computation resources. Second, running climate models is also a data-intensive process generating Big geospatial Data (model output), which demands large storage for managing the data and large computing power to process and analyze these data. This presentation introduces a novel framework to tackle the two challenges by 1) running climate models in a cloud environment in an automated fashion, and 2) managing and parallel processing Big model output Data by leveraging cloud computing technologies. A prototype system is developed based on the framework using ModelE as the climate model. Experiment results show that this framework can improve climate modeling in the research cycle by accelerating big data generation (model simulation), big data management (storage and processing) and on-demand big data analytics.

  10. An Application-Based Performance Evaluation of NASA's Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing as it promises great potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  11. Factors Influencing the Adoption of Cloud Computing by Decision Making Managers

    ERIC Educational Resources Information Center

    Ross, Virginia Watson

    2010-01-01

    Cloud computing is a growing field, addressing the market need for access to computing resources to meet organizational computing requirements. The purpose of this research is to evaluate the factors that influence an organization in their decision whether to adopt cloud computing as a part of their strategic information technology planning.…

  12. Change Detection of Mobile LIDAR Data Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Liu, Kun; Boehm, Jan; Alis, Christian

    2016-06-01

    Change detection has long been a challenging problem, although a great deal of research has been conducted in different fields such as remote sensing and photogrammetry, computer vision, and robotics. In this paper, we blend voxel grids and Apache Spark together to propose an efficient method to address the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of the same size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform which provides fault tolerance and in-memory caching. These features can significantly enhance the performance of Apache Spark and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
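
    A minimal PySpark sketch of the voxel-grid approach described above, assuming toy point lists for the two epochs and arbitrary values for the voxel size and change threshold; the paper's full method is not reproduced.

      from pyspark import SparkContext

      sc = SparkContext(appName="voxel-change-detection")
      VOXEL = 0.5          # voxel edge length in metres (illustrative)
      THRESH = 5           # minimum count difference to flag a change (illustrative)

      def voxel_key(p):
          x, y, z = p
          return (int(x // VOXEL), int(y // VOXEL), int(z // VOXEL))

      def voxel_counts(points_rdd):
          # Bin each point into its voxel and count occupancy per voxel
          return points_rdd.map(lambda p: (voxel_key(p), 1)).reduceByKey(lambda a, b: a + b)

      epoch1 = sc.parallelize([(1.2, 3.4, 0.1), (1.3, 3.5, 0.2)] * 10)
      epoch2 = sc.parallelize([(1.2, 3.4, 0.1)] * 2)

      # Compare per-voxel occupancy between epochs; missing voxels count as zero
      changes = (voxel_counts(epoch1)
                 .fullOuterJoin(voxel_counts(epoch2))
                 .mapValues(lambda c: abs((c[0] or 0) - (c[1] or 0)))
                 .filter(lambda kv: kv[1] >= THRESH))

      print(changes.collect())
      sc.stop()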

  13. An Expert Fitness Diagnosis System Based on Elastic Cloud Computing

    PubMed Central

    Tseng, Kevin C.; Wu, Chia-Chuan

    2014-01-01

    This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level based on supervised machine learning techniques. This system is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on the Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to closely track the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure the quality of service. PMID:24723842
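
    A minimal sketch of the forecasting step described above, assuming a simple exponential moving average over past request counts and a hypothetical per-VM capacity (the Poisson-based part of the paper's algorithm is not reproduced here):

      import math

      def ema_forecast(observations, alpha=0.3):
          # Exponential moving average of past request counts; alpha is an assumed smoothing factor.
          ema = observations[0]
          for x in observations[1:]:
              ema = alpha * x + (1 - alpha) * ema
          return ema

      def instances_needed(forecast, requests_per_instance=50):
          # Hypothetical capacity of one virtual machine.
          return max(1, math.ceil(forecast / requests_per_instance))

      requests = [120, 135, 150, 170, 160, 180]            # requests observed per interval
      print(instances_needed(ema_forecast(requests)))      # VMs to allocate for the next interval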

  14. 76 FR 52353 - Assumption Buster Workshop: “Current Implementations of Cloud Computing Indicate a New Approach...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-22

    ... Assumption Buster Workshop: ``Current Implementations of Cloud Computing Indicate a New Approach to Security...: ``Current implementations of cloud computing indicate a new approach to security'' Implementations of cloud computing have provided new ways of thinking about how to secure data and computation. Cloud is a...

  15. Cloud identification using genetic algorithms and massively parallel computation

    NASA Technical Reports Server (NTRS)

    Buckles, Bill P.; Petry, Frederick E.

    1996-01-01

    As a Guest Computational Investigator under the NASA-administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). If one is willing to run the experiment several times (say 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated. Therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user
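
    The subpopulation-with-migration pattern mentioned above can be sketched as follows; this is an illustrative toy (the fitness function simply counts ones), not the AVHRR cloud-identification objective or the MasPar implementation.

      import random

      def fitness(bits):                       # toy objective: maximise the number of 1s
          return sum(bits)

      def evolve(pop, generations, mutate_p=0.01):
          # Simple generational GA: keep the better half, refill with crossed-over, mutated children.
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              parents = pop[: len(pop) // 2]
              children = []
              while len(children) < len(pop) - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, len(a))
                  child = a[:cut] + b[cut:]
                  children.append([1 - g if random.random() < mutate_p else g for g in child])
              pop = parents + children
          return pop

      random.seed(0)
      islands = [[[random.randint(0, 1) for _ in range(32)] for _ in range(20)] for _ in range(4)]
      for epoch in range(5):
          islands = [evolve(pop, generations=10) for pop in islands]   # independent evolution
          for i, pop in enumerate(islands):                            # migrate the best individual
              best = max(pop, key=fitness)
              islands[(i + 1) % len(islands)][-1] = list(best)         # to the neighbouring island

      print("best fitness:", max(fitness(ind) for pop in islands for ind in pop))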

  16. Assessing the Amazon Cloud Suitability for CLARREO's Computational Needs

    NASA Technical Reports Server (NTRS)

    Goldin, Daniel; Vakhnin, Andrei A.; Currey, Jon C.

    2015-01-01

    In this document we compare the performance of the Amazon Web Services (AWS), also known as Amazon Cloud, with the CLARREO (Climate Absolute Radiance and Refractivity Observatory) cluster and assess its suitability for the computational needs of the CLARREO mission. A benchmark executable to process one month and one year of PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) data was used. With the optimal AWS configuration, adequate data-processing times, comparable to the CLARREO cluster, were found. The assessment of alternatives to the CLARREO cluster continues and several options, such as a NASA-based cluster, are being considered.

  17. Equisolid Fisheye Stereovision Calibration and Point Cloud Computation

    NASA Astrophysics Data System (ADS)

    Moreau, J.; Ambellouis, A.; Ruichek, Y.

    2013-10-01

    This paper deals with dense 3D point cloud computation of urban environments around a vehicle. The idea is to use two fisheye views to get 3D coordinates of the surrounding scene's points. The first contribution of this paper is the adaptation of an omnidirectional stereovision self-calibration algorithm to an equisolid fisheye projection model. The second contribution is the description of a new epipolar matching based on a scan-circle principle and a dynamic programming technique adapted for fisheye images. The method is validated using both synthetic images for which ground truth is available and real images of an urban scene.
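
    For reference, the equisolid-angle projection model referenced above maps a ray at angle theta from the optical axis to an image radius r = 2 f sin(theta / 2). A minimal sketch of the forward and inverse mappings, with a placeholder focal length in pixels:

      import math

      def equisolid_project(theta, f=300.0):
          # Radial image distance (pixels) of a ray at angle theta (radians) from the optical axis.
          return 2.0 * f * math.sin(theta / 2.0)

      def equisolid_unproject(r, f=300.0):
          # Inverse mapping: recover the incidence angle from a radial image distance.
          return 2.0 * math.asin(r / (2.0 * f))

      theta = math.radians(80.0)
      r = equisolid_project(theta)
      print(r, math.degrees(equisolid_unproject(r)))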

  18. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been developed to address both the complexity and the heterogeneity of systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  19. Teaching, Learning, and Collaborating in the Cloud: Applications of Cloud Computing for Educators in Post-Secondary Institutions

    ERIC Educational Resources Information Center

    Aaron, Lynn S.; Roche, Catherine M.

    2012-01-01

    "Cloud computing" refers to the use of computing resources on the Internet instead of on individual personal computers. The field is expanding and has significant potential value for educators. This is discussed with a focus on four main functions: file storage, file synchronization, document creation, and collaboration--each of which has…

  20. Proposal for a security management in cloud computing for health care.

    PubMed

    Haufe, Knut; Dzombeta, Srdan; Brandis, Knud

    2014-01-01

    Cloud computing is currently one of the most popular themes of information systems research. Considering the nature of the information processed, health care organizations especially need to assess and treat the specific risks of cloud computing in their information security management system. Therefore, in this paper we propose a framework that includes the most important security processes regarding cloud computing in the health care sector. Starting from a framework of general information security management processes derived from standards of the ISO 27000 family, the most important information security processes for health care organizations using cloud computing are identified, considering the main risks of cloud computing and the type of information processed. The identified processes will help a health care organization using cloud computing to focus on the most important ISMS processes and establish and operate them at an appropriate level of maturity considering limited resources. PMID:24701137

  1. Proposal for a Security Management in Cloud Computing for Health Care

    PubMed Central

    Dzombeta, Srdan; Brandis, Knud

    2014-01-01

    Cloud computing is currently one of the most popular themes of information systems research. Considering the nature of the information processed, health care organizations especially need to assess and treat the specific risks of cloud computing in their information security management system. Therefore, in this paper we propose a framework that includes the most important security processes regarding cloud computing in the health care sector. Starting from a framework of general information security management processes derived from standards of the ISO 27000 family, the most important information security processes for health care organizations using cloud computing are identified, considering the main risks of cloud computing and the type of information processed. The identified processes will help a health care organization using cloud computing to focus on the most important ISMS processes and establish and operate them at an appropriate level of maturity considering limited resources. PMID:24701137

  2. A hierarchical method for molecular docking using cloud computing.

    PubMed

    Kang, Ling; Guo, Quan; Wang, Xicheng

    2012-11-01

    Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small-molecule databases using cloud computing is proposed. This method, called the hierarchical method for molecular docking, can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on the different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs. PMID:23017886

  3. A cloud computing based 12-lead ECG telemedicine service

    PubMed Central

    2012-01-01

    Background Due to the great variability of 12-lead ECG instruments and medical specialists’ interpretation skills, it remains a challenge to deliver rapid and accurate 12-lead ECG reports with senior cardiologists’ decision making support in emergency telecardiology. Methods We create a new cloud and pervasive computing based 12-lead Electrocardiography (ECG) service to realize ubiquitous 12-lead ECG tele-diagnosis. Results This developed service enables ECG to be transmitted and interpreted via mobile phones. That is, tele-consultation can take place while the patient is in the ambulance, between the onsite clinicians and the off-site senior cardiologists, or among hospitals. Most importantly, this developed service is convenient, efficient, and inexpensive. Conclusions This cloud computing based ECG tele-consultation service expands the traditional 12-lead ECG applications onto the collaboration of clinicians at different locations or among hospitals. In short, this service can greatly improve medical service quality and efficiency, especially for patients in rural areas. This service has been evaluated and proved to be useful by cardiologists in Taiwan. PMID:22838382

  4. Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic

    PubMed Central

    Sanduja, S; Jewell, P; Aron, E; Pharai, N

    2015-01-01

    Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources available to expedite analysis and reporting. Cloud-based computing environments can be provisioned in a fraction of the time and with a fraction of the effort required for traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic. PMID:26451333
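
    As an illustration of the first step of such a setup, the following is a minimal sketch (not the tutorial's actual scripts) of launching a handful of EC2 instances with boto3 that could serve as Grid Engine nodes; the AMI ID, key pair, instance type, and region are placeholders.

      import boto3

      ec2 = boto3.resource("ec2", region_name="us-east-1")

      nodes = ec2.create_instances(
          ImageId="ami-0123456789abcdef0",   # placeholder AMI assumed to have NONMEM/PsN/Grid Engine installed
          InstanceType="c5.xlarge",
          MinCount=1,
          MaxCount=4,                        # e.g. one master and three execution hosts
          KeyName="my-keypair",
          TagSpecifications=[{
              "ResourceType": "instance",
              "Tags": [{"Key": "role", "Value": "sge-node"}],
          }],
      )
      print([n.id for n in nodes])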

  5. Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic.

    PubMed

    Sanduja, S; Jewell, P; Aron, E; Pharai, N

    2015-09-01

    Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources available to expedite analysis and reporting. Cloud-based computing environments can be provisioned in a fraction of the time and with a fraction of the effort required for traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic. PMID:26451333

  6. Easy, Collaborative and Engaging--The Use of Cloud Computing in the Design of Management Classrooms

    ERIC Educational Resources Information Center

    Schneckenberg, Dirk

    2014-01-01

    Background: Cloud computing has recently received interest in information systems research and practice as a new way to organise information with the help of an increasingly ubiquitous computer infrastructure. However, the use of cloud computing in higher education institutions and business schools, as well as its potential to create novel…

  7. Migrating Educational Data and Services to Cloud Computing: Exploring Benefits and Challenges

    ERIC Educational Resources Information Center

    Lahiri, Minakshi; Moseley, James L.

    2013-01-01

    "Cloud computing" is currently the "buzzword" in the Information Technology field. Cloud computing facilitates convenient access to information and software resources as well as easy storage and sharing of files and data, without the end users being aware of the details of the computing technology behind the process. This…

  8. Evaluating and improving cloud phase in the Community Atmosphere Model version 5 using spaceborne lidar observations

    NASA Astrophysics Data System (ADS)

    Kay, Jennifer E.; Bourdages, Line; Miller, Nathaniel B.; Morrison, Ariel; Yettella, Vineel; Chepfer, Helene; Eaton, Brian

    2016-04-01

    Spaceborne lidar observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite are used to evaluate cloud amount and cloud phase in the Community Atmosphere Model version 5 (CAM5), the atmospheric component of a widely used state-of-the-art global coupled climate model (Community Earth System Model). By embedding a lidar simulator within CAM5, the idiosyncrasies of spaceborne lidar cloud detection and phase assignment are replicated. As a result, this study makes scale-aware and definition-aware comparisons between model-simulated and observed cloud amount and cloud phase. In the global mean, CAM5 has insufficient liquid cloud and excessive ice cloud when compared to CALIPSO observations. Over the ice-covered Arctic Ocean, CAM5 has insufficient liquid cloud in all seasons. Having important implications for projections of future sea level rise, a liquid cloud deficit contributes to a cold bias of 2-3°C for summer daily maximum near-surface air temperatures at Summit, Greenland. Over the midlatitude storm tracks, CAM5 has excessive ice cloud and insufficient liquid cloud. Storm track cloud phase biases in CAM5 maximize over the Southern Ocean, which also has larger-than-observed seasonal variations in cloud phase. Physical parameter modifications reduce the Southern Ocean cloud phase and shortwave radiation biases in CAM5 and illustrate the power of the CALIPSO observations as an observational constraint. The results also highlight the importance of using a regime-based, as opposed to a geographic-based, model evaluation approach. More generally, the results demonstrate the importance and value of simulator-enabled comparisons of cloud phase in models used for future climate projection.

  9. Data Sets, Ensemble Cloud Computing, and the University Library (Invited)

    NASA Astrophysics Data System (ADS)

    Plale, B. A.

    2013-12-01

    The environmental researcher at the public university has new resources at their disposal to aid in research and publishing. Cloud computing provides compute cycles on demand for analysis and modeling scenarios. Cloud computing is attractive for e-Science because of the ease with which cores can be accessed on demand, and because the virtual machine implementation that underlies cloud computing reduces the cost of porting a numeric or analysis code to a new platform. Many libraries at larger universities are developing the e-Science skills to serve as repositories of record for publishable data sets. But these are confusing times for the publication of data sets from environmental research. The large publishers of scientific literature are advocating a process whereby data sets are tightly tied to a publication. In other words, a paper published in the scientific literature that gives results based on data must have an associated data set accessible that backs up the results. This approach supports reproducibility of results in that publishers maintain a repository for the papers they publish, and the data sets that the papers used. Does such a solution that maps one data set (or subset) to one paper fit the needs of the environmental researcher who, among other things, uses complex models, mines longitudinal data bases, and generates observational results? The second school of thought has emerged out of NSF-, NOAA-, and NASA-funded efforts over time: data sets are held coherently at a single location, such as the National Snow and Ice Data Center (NSIDC). But when a collection is coherent, reproducibility of individual results is more challenging. We argue for a third complementary option: the university repository as a location for data sets produced as a result of university-based research. This location for a repository relies on the expertise developing in the university libraries across the country, and leverages tools, such as are being developed

  10. Analyzing the Applicability of Airline Booking Systems for Cloud Computing Offerings

    NASA Astrophysics Data System (ADS)

    Watzl, Johannes; Felde, Nils Gentschen; Kranzlmuller, Dieter

    This paper introduces revenue management systems for Cloud computing offerings on the Infrastructure as a Service level. One of the main fields revenue management systems are deployed in is the airline industry. At the moment, most Cloud providers use static pricing models. In this work, a mapping of Cloud resources to flights in different categories and classes is presented together with a possible strategy to make use of these models in the emerging area of Cloud computing. The latter part of this work then describes a first step towards an inter-cloud brokering and trading platform by deriving requirements for a potential architectural design.

  11. Security Risks of Cloud Computing and Its Emergence as 5th Utility Service

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    Cloud computing is being promoted by the major IT companies that provide cloud services, such as IBM, Google, Yahoo and Amazon, as a fifth utility in which clients have access to the processing power needed for compute-intensive applications and software projects with huge data capacity requirements, including scientific and engineering research problems as well as e-business and data content network applications. These services for different types of clients are provided under DASM (Direct Access Service Management), based on virtualization of hardware and software and very high bandwidth Internet (Web 2.0) communication. The paper reviews these developments in cloud computing and the hardware/software configuration of the cloud paradigm. The paper also examines the vital aspects of security risks identified by IT industry experts and cloud clients, and highlights the cloud providers' responses to these security risks.

  12. Geometric Data Perturbation-Based Personal Health Record Transactions in Cloud Computing

    PubMed Central

    Balasubramaniam, S.; Kavitha, V.

    2015-01-01

    Cloud computing is a new delivery model for information technology services and it typically involves the provision of dynamically scalable and often virtualized resources over the Internet. However, cloud computing raises concerns on how cloud service providers, user organizations, and governments should handle such information and interactions. Personal health records represent an emerging patient-centric model for health information exchange, and they are outsourced for storage by third parties, such as cloud providers. With these records, it is necessary for each patient to encrypt their own personal health data before uploading them to cloud servers. Current techniques for encryption primarily rely on conventional cryptographic approaches. However, key management issues remain largely unsolved with these cryptographic-based encryption techniques. We propose that personal health record transactions be managed using geometric data perturbation in cloud computing. In our proposed scheme, the personal health record database is perturbed using geometric data perturbation and outsourced to the Amazon EC2 cloud. PMID:25767826
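
    A minimal sketch of the general idea of geometric data perturbation (assumed, not the paper's exact scheme): numeric record attributes are rotated, translated and lightly noised before being outsourced, and the rotation/translation pair is kept by the data owner as the key.

      import numpy as np

      def geometric_perturbation(X, noise_scale=0.01, seed=0):
          # Perturb an (n_records, n_attributes) matrix X as x -> R x + t + noise.
          rng = np.random.default_rng(seed)
          d = X.shape[1]
          Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal (rotation) matrix
          t = rng.normal(size=d)                          # random translation vector
          noise = rng.normal(scale=noise_scale, size=X.shape)
          return X @ Q.T + t + noise, (Q, t)              # perturbed data plus the owner's key

      records = np.array([[34.0, 27.5, 120.0],            # synthetic attributes, e.g. age, BMI, systolic BP
                          [58.0, 31.2, 140.0]])
      perturbed, key = geometric_perturbation(records)
      print(perturbed)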

  13. Geometric data perturbation-based personal health record transactions in cloud computing.

    PubMed

    Balasubramaniam, S; Kavitha, V

    2015-01-01

    Cloud computing is a new delivery model for information technology services and it typically involves the provision of dynamically scalable and often virtualized resources over the Internet. However, cloud computing raises concerns on how cloud service providers, user organizations, and governments should handle such information and interactions. Personal health records represent an emerging patient-centric model for health information exchange, and they are outsourced for storage by third parties, such as cloud providers. With these records, it is necessary for each patient to encrypt their own personal health data before uploading them to cloud servers. Current techniques for encryption primarily rely on conventional cryptographic approaches. However, key management issues remain largely unsolved with these cryptographic-based encryption techniques. We propose that personal health record transactions be managed using geometric data perturbation in cloud computing. In our proposed scheme, the personal health record database is perturbed using geometric data perturbation and outsourced to the Amazon EC2 cloud. PMID:25767826

  14. Cloud Computing Applications in Support of Earth Science Activities at Marshall Space Flight Center

    NASA Astrophysics Data System (ADS)

    Molthan, A.; Limaye, A. S.

    2011-12-01

    Currently, the NASA Nebula Cloud Computing Platform is available to Agency personnel in a pre-release status as the system undergoes a formal operational readiness review. Over the past year, two projects within the Earth Science Office at NASA Marshall Space Flight Center have been investigating the performance and value of Nebula's "Infrastructure as a Service", or "IaaS" concept and applying cloud computing concepts to advance their respective mission goals. The Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique NASA satellite observations and weather forecasting capabilities for use within the operational forecasting community through partnerships with NOAA's National Weather Service (NWS). SPoRT has evaluated the performance of the Weather Research and Forecasting (WRF) model on virtual machines deployed within Nebula and used Nebula instances to simulate local forecasts in support of regional forecast studies of interest to select NWS forecast offices. In addition to weather forecasting applications, rapidly deployable Nebula virtual machines have supported the processing of high resolution NASA satellite imagery to support disaster assessment following the historic severe weather and tornado outbreak of April 27, 2011. Other modeling and satellite analysis activities are underway in support of NASA's SERVIR program, which integrates satellite observations, ground-based data and forecast models to monitor environmental change and improve disaster response in Central America, the Caribbean, Africa, and the Himalayas. Leveraging SPoRT's experience, SERVIR is working to establish a real-time weather forecasting model for Central America. Other modeling efforts include hydrologic forecasts for Kenya, driven by NASA satellite observations and reanalysis data sets provided by the broader meteorological community. Forecast modeling efforts are supplemented by short-term forecasts of convective initiation, determined by

  15. Redefining Tactical Operations for MER Using Cloud Computing

    NASA Technical Reports Server (NTRS)

    Joswig, Joseph C.; Shams, Khawaja S.

    2011-01-01

    The Mars Exploration Rover Mission (MER) includes the twin rovers, Spirit and Opportunity, which have been performing geological research and surface exploration since early 2004. The rovers' durability well beyond their original prime mission (90 sols or Martian days) has allowed them to be a valuable platform for scientific research for well over 2000 sols, but as a by-product it has produced new challenges in providing efficient and cost-effective tactical operational planning. An early-stage process adaptation was the move to distributed operations as mission scientists returned to their places of work in the summer of 2004, but they would still come together via teleconference and connected software to plan rover activities a few times a week. This distributed model has worked well since, but it requires the purchase, operation, and maintenance of a dedicated infrastructure at the Jet Propulsion Laboratory. This server infrastructure is costly to operate and the periodic nature of its usage (typically heavy usage for 8 hours every 2 days) has made moving to a cloud based tactical infrastructure an extremely tempting proposition. In this paper we will review both past and current implementations of the tactical planning application focusing on remote plan saving and discuss the unique challenges present with long-latency, distributed operations. We then detail the motivations behind our move to cloud based computing services as well as our system design and implementation. We will discuss security and reliability concerns and how they were addressed.

  16. Open Source Cloud Computing for Transiting Planet Discovery

    NASA Astrophysics Data System (ADS)

    McCullough, Peter R.; Fleming, Scott W.; Zonca, Andrea; Flowers, Jack; Nguyen, Duy Cuong; Sinkovits, Robert; Machalek, Pavel

    2014-06-01

    We provide an update on the development of the open-source software suite designed to detect exoplanet transits using high-performance and cloud computing resources (https://github.com/openEXO). Our collaboration continues to grow as we are developing algorithms and codes related to the detection of transit-like events, especially in Kepler data, Kepler 2.0 and TESS data when available. Extending the work of Berriman et al. (2010, 2012), we describe our use of the XSEDE-Gordon supercomputer and Amazon EMR cloud to search for aperiodic transit-like events in Kepler light curves. Such events may be caused by circumbinary planets or transiting bodies, either planets or stars, with orbital periods comparable to or longer than the observing baseline such that only one transit is observed. As a bonus, we use the same code to find stellar flares too; whereas transits reduce the flux in a box-shaped profile, flares increase the flux in a fast-rise, exponential-decay (FRED) profile that nevertheless can be detected reliably with a square-wave finder.
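
    The square-wave (box) search mentioned above can be sketched as follows: slide a box of assumed duration over a normalized light curve and score how far the in-box mean drops below the out-of-box mean. This is an illustrative stand-in, not the openEXO code.

      import numpy as np

      def box_filter_snr(flux, duration):
          # For each start index, depth of the in-box mean below the out-of-box mean,
          # expressed in units of the out-of-box standard deviation.
          n = len(flux)
          snr = np.zeros(n - duration)
          for i in range(n - duration):
              inside = flux[i:i + duration]
              outside = np.concatenate([flux[:i], flux[i + duration:]])
              snr[i] = (outside.mean() - inside.mean()) / outside.std()
          return snr

      rng = np.random.default_rng(1)
      flux = 1.0 + rng.normal(scale=1e-3, size=2000)
      flux[900:930] -= 0.005                       # inject one box-shaped, single-transit dip
      snr = box_filter_snr(flux, duration=30)
      print("best candidate starts at sample", int(np.argmax(snr)))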

  17. The fourth International Conference on Information Science and Cloud Computing

    NASA Astrophysics Data System (ADS)

    This book comprises the papers accepted by the fourth International Conference on Information Science and Cloud Computing (ISCC), which was held on 18-19 December 2015 in Guangzhou, China. It contains 70 papers divided into four parts. The first part focuses on Information Theory with 20 papers; the second part, on Machine Learning, contains 21 papers; the third part contains 21 papers in the area of Control Science; and the last part, with 8 papers, is dedicated to Cloud Science. Each part can be used as an excellent reference by engineers, researchers and students who need to build a knowledge base of the most current advances and state-of-practice in the topics covered by the ISCC conference. Special thanks go to Professor Deyu Qi, General Chair of ISCC 2015, for his leadership in supervising the organization of the entire conference; Professor Tinghuai Ma, Program Chair, and the members of the program committee for evaluating all the submissions and ensuring the selection of only the highest quality papers; and the authors for sharing their ideas, results and insights. We sincerely hope that you enjoy reading the papers included in this book.

  18. A Cloud-Based Global Flood Disaster Community Cyber-Infrastructure: Development and Demonstration

    NASA Technical Reports Server (NTRS)

    Wan, Zhanming; Hong, Yang; Khan, Sadiq; Gourley, Jonathan; Flamig, Zachary; Kirschbaum, Dalia; Tang, Guoqiang

    2014-01-01

    Flood disasters have significant impacts on the development of communities globally. This study describes a public cloud-based flood cyber-infrastructure (CyberFlood) that collects, organizes, visualizes, and manages several global flood databases for authorities and the public in real time, providing location-based visualization of flood events as well as statistical analysis and graphing capabilities. In order to expand and update the existing flood inventory, a crowdsourcing data collection methodology is employed that allows the public, using smartphones or the Internet, to report new flood events; it is also intended to engage citizen-scientists so that they may become motivated and educated about the latest developments in satellite remote sensing and hydrologic modeling technologies. Our shared vision is to better serve the global water community with comprehensive flood information, aided by state-of-the-art cloud computing and crowdsourcing technology. CyberFlood presents an opportunity to eventually modernize the existing paradigm used to collect, manage, analyze, and visualize water-related disasters.

  19. Parametric Behaviors of CLUBB in Simulations of Low Clouds in the Community Atmosphere Model (CAM)

    SciTech Connect

    Guo, Zhun; Wang, Minghuai; Qian, Yun; Larson, Vincent E.; Ghan, Steven J.; Ovchinnikov, Mikhail; Bogenschutz, Peter; Gettelman, A.; Zhou, Tianjun

    2015-07-03

    In this study, we investigate the sensitivity of simulated low clouds to 14 selected tunable parameters of Cloud Layers Unified By Binormals (CLUBB), a higher-order closure (HOC) scheme, and 4 parameters of the Zhang-McFarlane (ZM) deep convection scheme in the Community Atmosphere Model version 5 (CAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space and a generalized linear model is applied to study the responses of simulated cloud fields to tunable parameters. Our results show that the variance in simulated low-cloud properties (cloud fraction and liquid water path) can be explained by the selected tunable parameters in two different ways: macrophysics itself and its interaction with microphysics. First, the parameters related to dynamic and thermodynamic turbulent structure and double Gaussian closure are found to be the most influential parameters for simulating low clouds. The spatial distributions of the parameter contributions show clear cloud-regime dependence. Second, because of the coupling between cloud macrophysics and cloud microphysics, the coefficient of the dissipation term in the total water variance equation is influential. This parameter affects the variance of in-cloud cloud water, which further influences microphysical process rates, such as autoconversion, and eventually low-cloud fraction. This study improves understanding of HOC behavior associated with parameter uncertainties and provides valuable insights for the interaction of macrophysics and microphysics.
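
    The sampling-and-attribution workflow described above can be sketched as follows, with a toy stand-in for the CAM5 simulation and hypothetical parameter names and ranges: draw a Sobol quasi-Monte Carlo design, evaluate the model, and fit a linear model to rank parameter influence.

      import numpy as np
      from scipy.stats import qmc

      names = ["c_dissipation", "gamma_coef", "c_k10"]          # hypothetical tunable parameters
      lower = np.array([0.5, 0.1, 0.3])
      upper = np.array([2.0, 0.5, 1.0])

      sampler = qmc.Sobol(d=len(names), scramble=True, seed=0)
      X = qmc.scale(sampler.random_base2(m=7), lower, upper)     # 128 parameter sets

      def toy_model(p):
          # Stand-in for a climate model run returning, e.g., a low-cloud fraction.
          return 0.4 + 0.10 * p[0] - 0.20 * p[1] + 0.05 * p[2] ** 2

      y = np.array([toy_model(p) for p in X])

      # Ordinary least squares on standardized inputs gives a crude sensitivity ranking.
      Xs = (X - X.mean(axis=0)) / X.std(axis=0)
      coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), Xs]), y, rcond=None)
      for n, c in zip(names, coef[1:]):
          print(f"{n}: {c:+.3f}")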

  20. Parametric behaviors of CLUBB in simulations of low clouds in the Community Atmosphere Model (CAM)

    NASA Astrophysics Data System (ADS)

    Guo, Zhun; Wang, Minghuai; Qian, Yun; Larson, Vincent E.; Ghan, Steven; Ovchinnikov, Mikhail; Bogenschutz, Peter A.; Gettelman, Andrew; Zhou, Tianjun

    2015-09-01

    In this study, we investigate the sensitivity of simulated low clouds to 14 selected tunable parameters of Cloud Layers Unified By Binormals (CLUBB), a higher-order closure (HOC) scheme, and four parameters of the Zhang-McFarlane (ZM) deep convection scheme in the Community Atmosphere Model version 5 (CAM5). A Quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space and a generalized linear model is applied to study the responses of simulated cloud fields to tunable parameters. Our results show that the variance in simulated low-cloud properties (cloud fraction and liquid water path) can be explained by the selected tunable parameters in two different ways: macrophysics itself and its interaction with microphysics. First, the parameters related to dynamic and thermodynamic turbulent structure and double Gaussian closure are found to be the most influential parameters for simulating low clouds. The spatial distributions of the parameter contributions show clear cloud-regime dependence. Second, because of the coupling between cloud macrophysics and cloud microphysics, the coefficient of the dissipation term in the total water variance equation is influential. This parameter affects the variance of in-cloud cloud water, which further influences microphysical process rates, such as autoconversion, and eventually low-cloud fraction. This study improves understanding of HOC behavior associated with parameter uncertainties and provides valuable insights for the interaction of macrophysics and microphysics.

  1. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    NASA Astrophysics Data System (ADS)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-12-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  2. A Cloud Computing Based Patient Centric Medical Information System

    NASA Astrophysics Data System (ADS)

    Agarwal, Ankur; Henehan, Nathan; Somashekarappa, Vivek; Pandya, A. S.; Kalva, Hari; Furht, Borko

    This chapter discusses an emerging concept of a cloud computing based Patient Centric Medical Information System framework that will allow various authorized users to securely access patient records from various Care Delivery Organizations (CDOs) such as hospitals, urgent care centers, doctors, laboratories, and imaging centers, among others, from any location. Such a system must seamlessly integrate all patient records, including images such as CT scans and MRIs, which can easily be accessed from any location and reviewed by any authorized user. In such a scenario, the storage and transmission of medical records will have to be conducted in a totally secure and safe environment with a very high standard of data integrity, protecting patient privacy and complying with all Health Insurance Portability and Accountability Act (HIPAA) regulations.

  3. Simple computation of reaction–diffusion processes on point clouds

    PubMed Central

    Macdonald, Colin B.; Merriman, Barry; Ruuth, Steven J.

    2013-01-01

    The study of reaction–diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction–diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction–diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces. PMID:23690616
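
    The following toy sketch illustrates diffusion of a scalar field on an unorganized point set; note that, for brevity, it replaces the closest point method described above with a simple k-nearest-neighbour averaging, so it only conveys the data flow, not the paper's numerics.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      pts = rng.normal(size=(500, 3))
      pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # points scattered on a unit sphere

      tree = cKDTree(pts)
      _, nbrs = tree.query(pts, k=9)                      # each point plus its 8 nearest neighbours

      u = rng.random(len(pts))                            # initial scalar field (e.g. a concentration)
      dt = 0.2
      for _ in range(100):
          local_mean = u[nbrs[:, 1:]].mean(axis=1)
          u = u + dt * (local_mean - u)                   # discrete diffusion toward the neighbourhood mean

      print("field range after smoothing:", u.min(), u.max())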

  4. A Journal for the Astronomical Computing Community?

    NASA Astrophysics Data System (ADS)

    Gray, N.; Mann, R. G.

    2011-07-01

    One of the Birds of a Feather (BoF) discussion sessions at ADASS XX considered whether a new journal is needed to serve the astronomical computing community. In this paper we discuss the nature and requirements of that community, outline the analysis that led us to propose this as a topic for a BoF, and review the discussion from the BoF session itself. We also present the results from a survey designed to assess the suitability of astronomical computing papers of different kinds for publication in a range of existing astronomical and scientific computing journals. The discussion in the BoF session was somewhat inconclusive, and it seems likely that this topic will be debated again at a future ADASS or in a similar forum.

  5. Research on phone contacts online status based on mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Wang, Wen-jing; Ge, Wei

    2013-03-01

    Because of the limited storage space and CPU processing power of mobile phones, it is difficult to realize complex applications on them. With the development of cloud computing, however, computing and storage can be placed in the cloud to provide users with rich cloud services, and helping users complete various functions through the browser has become the trend for future mobile communication. This article takes the online status of mobile phone contacts as an example to analyze the development and application of mobile cloud computing.

  6. Does Cloud Computing in the Atmospheric Sciences Make Sense? A case study of hybrid cloud computing at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Chee, T.; Minnis, P.; Spangenberg, D.; Ayers, J. K.; Palikonda, R.; Vakhnin, A.; Dubois, R.; Murphy, P. R.

    2014-12-01

    The processing, storage and dissemination of satellite cloud and radiation products produced at NASA Langley Research Center are key activities for the Climate Science Branch. A constellation of systems operates in sync to accomplish these goals. Because of the complexity involved with operating such intricate systems, there are both high failure rates and high costs for hardware and system maintenance. Cloud computing has the potential to ameliorate cost and complexity issues. Over time, the cloud computing model has evolved and hybrid systems comprising off-site as well as on-site resources are now common. Towards our mission of providing the highest quality research products to the widest audience, we have explored the use of the Amazon Web Services (AWS) Cloud and Storage and present a case study of our results and efforts. This project builds upon NASA Langley Cloud and Radiation Group's experience with operating large and complex computing infrastructures in a reliable and cost effective manner to explore novel ways to leverage cloud computing resources in the atmospheric science environment. Our case study presents the project requirements and then examines the fit of AWS with the LaRC computing model. We also discuss the evaluation metrics, feasibility, and outcomes and close the case study with the lessons we learned that would apply to others interested in exploring the implementation of the AWS system in their own atmospheric science computing environments.

  7. Assessing Affordances of Selected Cloud Computing Tools for Language Teacher Education in Nigeria

    ERIC Educational Resources Information Center

    Ofemile, Abdulmalik Yusuf

    2015-01-01

    This paper reports part of a study that hoped to understand Teacher Educators' (TE) assessment of the affordances of selected cloud computing tools ranked among the top 100 for the year 2010. Research has shown that ICT, and by extension cloud computing, have positive impacts on daily life, and this informed the Nigerian government's policy to…

  8. CANFAR+Skytree: A Cloud Computing and Data Mining System for Astronomy

    NASA Astrophysics Data System (ADS)

    Ball, N. M.

    2013-10-01

    This is a companion Focus Demonstration article to the CANFAR+Skytree poster (Ball 2013, this volume), demonstrating the usage of the Skytree machine learning software on the Canadian Advanced Network for Astronomical Research (CANFAR) cloud computing system. CANFAR+Skytree is the world's first cloud computing system for data mining in astronomy.

  9. WE-B-BRD-01: Innovation in Radiation Therapy Planning II: Cloud Computing in RT

    SciTech Connect

    Moore, K; Kagadis, G; Xing, L; McNutt, T

    2014-06-15

    As defined by the National Institute of Standards and Technology, cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Despite the omnipresent role of computers in radiotherapy, cloud computing has yet to achieve widespread adoption in clinical or research applications, though the transition to such “on-demand” access is underway. As this transition proceeds, new opportunities for aggregate studies and efficient use of computational resources are set against new challenges in patient privacy protection, data integrity, and management of clinical informatics systems. In this Session, current and future applications of cloud computing and distributed computational resources will be discussed in the context of medical imaging, radiotherapy research, and clinical radiation oncology applications. Learning Objectives: 1. Understand basic concepts of cloud computing. 2. Understand how cloud computing could be used for medical imaging applications. 3. Understand how cloud computing could be employed for radiotherapy research. 4. Understand how clinical radiotherapy software applications would function in the cloud.

  10. State of the Art of Network Security Perspectives in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang

    Cloud computing is now regarded as one of the social phenomena that satisfy customers' needs. Customers' needs and the primary principle of economy - gaining maximum benefit from minimum investment - are arguably reflected in the realization of cloud computing. We are living in a connected society with a flood of information, and without computers connected to the Internet our daily activities and work would be impossible. Cloud computing can provide customers with custom-tailored application software features and user environments based on the customer's needs by adopting on-demand outsourcing of computing resources through the Internet. It also provides cloud computing users with high-end computing power and expensive application software packages, with users accessing their data and application software located on remote systems. As the cloud computing system is connected to the Internet, network security issues of cloud computing must be addressed before real-world services are offered. In this paper, a survey of issues of network security in cloud computing is presented from the perspective of real-world service environments.

  11. Applications integration in a hybrid cloud computing environment: modelling and platform

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds and their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and intra-enterprise ISs. A run-time platform is developed and a cross-computing environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.

  12. Cloud-Based Computational Tools for Earth Science Applications

    NASA Astrophysics Data System (ADS)

    Arendt, A. A.; Fatland, R.; Howe, B.

    2015-12-01

    Earth scientists are increasingly required to think across disciplines and utilize a wide range of datasets in order to solve complex environmental challenges. Although significant progress has been made in distributing data, researchers must still invest heavily in developing computational tools to accommodate their specific domain. Here we document our development of lightweight computational data systems aimed at enabling rapid data distribution, analytics and problem solving tools for Earth science applications. Our goal is for these systems to be easily deployable, scalable and flexible to accommodate new research directions. As an example we describe "Ice2Ocean", a software system aimed at predicting runoff from snow and ice in the Gulf of Alaska region. Our backend components include relational database software to handle tabular and vector datasets, Python tools (NumPy, pandas and xray) for rapid querying of gridded climate data, and an energy and mass balance hydrological simulation model (SnowModel). These components are hosted in a cloud environment for direct access across research teams, and can also be accessed via API web services using a REST interface. This API is a vital component of our system architecture, as it enables quick integration of our analytical tools across disciplines, and can be accessed by any existing data distribution centers. We will showcase several data integration and visualization examples to illustrate how our system has expanded our ability to conduct cross-disciplinary research.
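
    A minimal sketch of the kind of gridded-data query such a backend might serve; the file and variable names are hypothetical, and xarray is the successor of the "xray" package named above.

      import xarray as xr

      ds = xr.open_dataset("gulf_of_alaska_forcing.nc")     # hypothetical gridded temperature forcing
      point = ds["t2m"].sel(lat=60.5, lon=-145.0, method="nearest")   # nearest grid cell to a basin point

      daily = point.resample(time="1D").mean()              # aggregate to daily means
      melt_season = daily.sel(time=slice("2014-05-01", "2014-09-30"))
      print(melt_season.to_pandas().head())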

  13. Cloud Computing for Supporting Earth Sciences: a GMU CISC Practice

    NASA Astrophysics Data System (ADS)

    Yang, C.; Houser, P.

    2009-12-01

    Earth Science advancements in the past decades have produced large amounts of Earth observation data, model simulation results, extracted and derived information, analyses and visualization tools, and decision support knowledge. It is a grand challenge to utilize state-of-the-art data, information, knowledge, and tools to provide transparent services to a wide variety of users, with interactive functions that are essential to their scientific studies, development tasks, and educational purposes, without trapping them in complex system jargon and scientific algorithms that are not their focused interests. This presentation will leverage several projects funded by NASA, NOAA, EPA, FGDC, and several other agencies and companies to demonstrate how a cloud computing approach can help relieve Earth science stakeholders of the time-consuming task of identifying the proper resources for their intended use. The systems, developed based on spatial computing, model simulations, and interoperable data, information, and knowledge, are utilized to demonstrate the ideas described. Water studies are used as the domain for the demonstration.

  14. Exploring the factors influencing the cloud computing adoption: a systematic study on cloud migration.

    PubMed

    Rai, Rashmi; Sahoo, Gadadhar; Mehfuz, Shabana

    2015-01-01

    Today, most organizations rely on their age-old legacy applications to support their business-critical systems. However, there are several critical concerns, such as maintainability and scalability issues, associated with legacy systems. Against this background, cloud services offer a more agile and cost-effective platform to support business applications and IT infrastructure. The adoption of cloud services has been increasing recently, and so has academic research in cloud migration. However, there is a genuine need for secondary studies to further strengthen this research. The primary objective of this paper is to scientifically and systematically identify, categorize and compare the existing research work in the area of legacy-to-cloud migration. The paper also endeavors to consolidate the research on security issues, a prime factor hindering cloud adoption, by classifying the studies on secure cloud migration. An SLR (Systematic Literature Review) of thirty selected papers published from 2009 to 2014 was conducted to properly understand the nuances of the security framework. To categorize the selected studies, the authors propose a conceptual model for cloud migration, which has resulted in a resource base of existing solutions for cloud migration. This study concludes that cloud migration research is at a seminal stage but is also evolving and maturing, with increasing participation from academics and industry alike. The paper also identifies the need for a secure migration model that can fortify an organization's trust in cloud migration and provide the necessary tool support to automate the migration process. PMID:25977891

  15. Reconciliation of the cloud computing model with US federal electronic health record regulations.

    PubMed

    Schweitzer, Eugene J

    2012-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing. PMID:21727204

  16. Reconciliation of the cloud computing model with US federal electronic health record regulations

    PubMed Central

    2011-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing. PMID:21727204

  17. The Open Cloud Testbed: Supporting Open Source Cloud Computing Systems Based on Large Scale High Performance, Dynamic Network Services

    NASA Astrophysics Data System (ADS)

    Grossman, Robert; Gu, Yunhong; Sabala, Michal; Bennet, Colin; Seidman, Jonathan; Mambratti, Joe

    Recently, a number of cloud platforms and services have been developed for data intensive computing, including Hadoop, Sector, CloudStore (formerly KFS), HBase, and Thrift. In order to benchmark the performance of these systems, to investigate their interoperability, and to experiment with new services based on flexible compute node and network provisioning capabilities, we have designed and implemented a large scale testbed called the Open Cloud Testbed (OCT). Currently OCT has 120 nodes in 4 data centers: Baltimore, Chicago (two locations), and San Diego. In contrast to other cloud testbeds, which are in small geographic areas and which are based on commodity Internet services, the OCT is a wide area testbed and the 4 data centers are connected with a high performance 10Gb/s network, based on a foundation of dedicated lightpaths. This testbed can address the requirements of extremely large data streams that challenge other types of distributed infrastructure. We have also developed several utilities to support the development of cloud computing systems and services, including novel node and network provisioning services, a monitoring system, and an RPC system. In this paper, we describe the OCT concepts, architecture, infrastructure, a few benchmarks that were developed for this platform, interoperability studies, and results.

  18. Exploration of cloud computing late start LDRD #149630 : Raincoat. v. 2.1.

    SciTech Connect

    Echeverria, Victor T.; Metral, Michael David; Leger, Michelle A.; Gabert, Kasimir Georg; Edgett, Patrick Garrett; Thai, Tan Q.

    2010-09-01

This report contains documentation from an interoperability study conducted under the Late Start LDRD 149630, Exploration of Cloud Computing. A small late-start LDRD from last year resulted in a study (Raincoat) on using Virtual Private Networks (VPNs) to enhance security in a hybrid cloud environment. Raincoat initially explored the use of OpenVPN on IPv4 and demonstrated that it is possible to secure the communication channel between two small 'test' clouds (a few nodes each) at New Mexico Tech and Sandia. We extended the Raincoat study to add IPSec support via Vyatta routers, to interface with a public cloud (Amazon Elastic Compute Cloud (EC2)), and to be significantly more scalable than the previous iteration. The study contributed to our understanding of interoperability in a hybrid cloud.

  19. Above the cloud computing orbital services distributed data model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-05-01

Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.

  20. Emergency healthcare process automation using mobile computing and cloud services.

    PubMed

    Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G

    2012-10-01

Emergency care is basically concerned with the provision of pre-hospital and in-hospital medical and/or paramedical services and it typically involves a wide variety of interdependent and distributed activities that can be interconnected to form emergency care processes within and between Emergency Medical Service (EMS) agencies and hospitals. Hence, in developing an information system for emergency care processes, it is essential to support individual process activities and to satisfy collaboration and coordination needs by providing ready access to patient and operational information regardless of location and time. Filling this information gap by enabling the provision of the right information, to the right people, at the right time poses new challenges, including the specification of a common information format, the interoperability among heterogeneous institutional information systems or the development of new, ubiquitous trans-institutional systems. This paper is concerned with the development of an integrated computer support for emergency care processes by evolving and cross-linking institutional healthcare systems. To this end, an integrated EMS cloud-based architecture has been developed that allows authorized users to access emergency case information in standardized document form, as proposed by the Integrating the Healthcare Enterprise (IHE) profile, uses the Organization for the Advancement of Structured Information Standards (OASIS) standard Emergency Data Exchange Language (EDXL) Hospital Availability Exchange (HAVE) for exchanging operational data with hospitals and incorporates an intelligent module that supports triaging and selecting the most appropriate ambulances and hospitals for each case. PMID:22205383

  1. Access Control of Cloud Service Based on UCON

    NASA Astrophysics Data System (ADS)

    Danwei, Chen; Xiuli, Huang; Xunyi, Ren

Cloud computing is an emerging computing paradigm, and cloud service is also becoming increasingly relevant. Many research communities have recently embarked on research in this area, and challenges arise in every aspect of it. This paper mainly discusses cloud service security. Cloud service is based on Web Services, and it will face all kinds of security problems, including those that Web Services face. The development of cloud service closely relates to its security, so the research of cloud service security is a very important theme. This paper first introduces cloud computing and cloud service, then gives a cloud service access control model based on UCON and negotiation technologies, and also designs the negotiation module.
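
    As a rough illustration only (not code from the paper), the sketch below shows how a UCON-style pre-authorization decision for a cloud service request might combine subject and object attributes, an obligation, and an environmental condition; all names, attributes and thresholds are hypothetical.

```python
# Hypothetical sketch of a UCON-style pre-authorization check for a cloud
# service request: access is granted only if the subject/object attributes
# satisfy the authorization rule, the obligation has been fulfilled, and the
# environmental condition holds. Names and rules are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Subject:
    role: str
    trust_level: int          # mutable attribute, may be updated during usage

@dataclass
class CloudService:
    min_trust: int
    allowed_roles: set = field(default_factory=set)

def pre_authorize(subject, service, obligation_done, system_load):
    authorized = (subject.role in service.allowed_roles
                  and subject.trust_level >= service.min_trust)
    obligation = obligation_done            # e.g. accepted the usage policy
    condition = system_load < 0.9           # e.g. service not overloaded
    return authorized and obligation and condition

if __name__ == "__main__":
    user = Subject(role="tenant", trust_level=3)
    storage = CloudService(min_trust=2, allowed_roles={"tenant", "admin"})
    print(pre_authorize(user, storage, obligation_done=True, system_load=0.4))
```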

  2. 78 FR 54453 - Notice of Public Meeting-Intersection of Cloud Computing and Mobility Forum and Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... National Institute of Standards and Technology Notice of Public Meeting--Intersection of Cloud Computing...-mobility.cfm . SUPPLEMENTARY INFORMATION: NIST hosted six prior Cloud Computing Forum & Workshop events in..., portability, and security, discuss the Federal Government's experience with cloud computing, report on...

  3. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    PubMed

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-01

The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. PMID:25753841
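
    The cost-effectiveness argument can be illustrated with a back-of-the-envelope comparison of per-job cost on a cloud instance versus an amortized in-house machine; the sketch below uses made-up prices, lifetimes and job counts, not figures from the paper.

```python
# Back-of-the-envelope comparison of per-job cost on a commercial cloud
# instance versus an amortized in-house workstation. All prices, lifetimes
# and job counts are hypothetical; they are not taken from the paper.
def cloud_cost_per_job(hourly_rate, hours_per_job):
    return hourly_rate * hours_per_job

def inhouse_cost_per_job(purchase_price, lifetime_years, jobs_per_year,
                         annual_power_and_admin):
    total = purchase_price + lifetime_years * annual_power_and_admin
    return total / (lifetime_years * jobs_per_year)

if __name__ == "__main__":
    print("cloud  : $%.2f" % cloud_cost_per_job(0.50, 6))         # $0.50/h, 6 h job
    print("inhouse: $%.2f" % inhouse_cost_per_job(6000, 4, 400, 500))
```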

  4. Science in the clouds: UAVs and cloud computing methods for spatial diffuse pollution risk assessment (Invited)

    NASA Astrophysics Data System (ADS)

    Reaney, S. M.

    2010-12-01

For example, information on changes in the direction of plough lines and the timing of canopy closure will give extra insight into the export of nutrients from the landscape. The extraction of the amount of vegetation cover from the images has been done through the use of a custom web based image processing service. Basing the analysis in a cloud computing framework enables greater collaboration within the project consortium and the effective dissemination of images and results to stakeholders. This presentation will discuss the results of the first four months of the UAV helicopter images and how the information has been extracted from the images. This work is part of the Defra Demonstration Test Catchments project and the NERC Pilot Virtual Observatory project.

  5. Cloud Computing Applications in Support of Earth Science Activities at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Limaye, Ashutosh S.; Srikishen, Jayanthi

    2011-01-01

Currently, the NASA Nebula Cloud Computing Platform is available to Agency personnel in a pre-release status as the system undergoes a formal operational readiness review. Over the past year, two projects within the Earth Science Office at NASA Marshall Space Flight Center have been investigating the performance and value of Nebula's "Infrastructure as a Service", or "IaaS" concept and applying cloud computing concepts to advance their respective mission goals. The Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique NASA satellite observations and weather forecasting capabilities for use within the operational forecasting community through partnerships with NOAA's National Weather Service (NWS). SPoRT has evaluated the performance of the Weather Research and Forecasting (WRF) model on virtual machines deployed within Nebula and used Nebula instances to simulate local forecasts in support of regional forecast studies of interest to select NWS forecast offices. In addition to weather forecasting applications, rapidly deployable Nebula virtual machines have supported the processing of high resolution NASA satellite imagery to support disaster assessment following the historic severe weather and tornado outbreak of April 27, 2011. Other modeling and satellite analysis activities are underway in support of NASA's SERVIR program, which integrates satellite observations, ground-based data and forecast models to monitor environmental change and improve disaster response in Central America, the Caribbean, Africa, and the Himalayas. Leveraging SPoRT's experience, SERVIR is working to establish a real-time weather forecasting model for Central America. Other modeling efforts include hydrologic forecasts for Kenya, driven by NASA satellite observations and reanalysis data sets provided by the broader meteorological community. Forecast modeling efforts are supplemented by short-term forecasts of convective initiation, determined by

  6. Accelerating Astronomy & Astrophysics in the New Era of Parallel Computing: GPUs, Phi and Cloud Computing

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.; Dindar, Saleh; Peters, Jorg

    2015-08-01

The realism of astrophysical simulations and statistical analyses of astronomical data are set by the available computational resources. Thus, astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism, rather than increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities and network characteristics. I will provide an astronomer's overview of the opportunities and challenges provided by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield more than order of magnitude speed-ups and which problems are unlikely to parallelize sufficiently efficiently to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer
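
    The question of which problems can yield more than an order-of-magnitude speed-up can be made concrete with Amdahl's law, which the abstract does not mention explicitly but which captures the limiting role of the serial fraction; the sketch below is a generic illustration with example fractions.

```python
# Amdahl's law: speedup on n parallel units when a fraction p of the work is
# parallelizable. It illustrates why only highly parallel problems yield
# order-of-magnitude gains on GPUs/Phi; the fractions chosen are examples.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    for p in (0.5, 0.9, 0.99):
        print(f"p={p:4.2f}  n=256  speedup={amdahl_speedup(p, 256):6.1f}x")
```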

  7. Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus

    NASA Astrophysics Data System (ADS)

    Baun, Christian; Kunze, Marcel

Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate the research on data center management and cloud services the OpenCirrus™ research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, Open Source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale and at the applications layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.

  8. Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid

    SciTech Connect

    2012-02-08

    GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud’s data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell’s GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.

  9. Community-driven computational biology with Debian Linux

    PubMed Central

    2010-01-01

    Background The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. Results The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Conclusions Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers. PMID:21210984

  10. Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions.

    PubMed

    Williams, Daniel R; Tang, Yinshan

    2013-05-01

Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The developed model in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end user device. Comparable products from each suite were selected and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized allowing the power consumption of cloud services to be directly measured for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than the standalone counterpart. The power consumption of the cloud based Outlook (8%) and Excel (17%) was lower than their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third mixed access method was also measured for Word which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package into the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research. PMID:23548097
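
    A toy version of the three-stage energy model described above (data center, network, end-user device, converted to GHG with an emission factor) is sketched below; all energy values and the emission factor are invented for illustration and are not the confidential figures used in the study.

```python
# Toy three-stage energy model: total energy per task = data centre +
# network + end-user device, converted to greenhouse gas with an emission
# factor. All numbers are invented and only illustrate the bookkeeping.
EMISSION_FACTOR = 0.5  # kg CO2e per kWh (hypothetical grid average)

def task_emissions(datacentre_kwh, network_kwh, device_kwh):
    return (datacentre_kwh + network_kwh + device_kwh) * EMISSION_FACTOR

if __name__ == "__main__":
    cloud = task_emissions(0.010, 0.004, 0.020)
    standalone = task_emissions(0.0, 0.0, 0.040)
    change = 100.0 * (cloud - standalone) / standalone
    print(f"cloud vs standalone: {change:+.1f}% GHG")
```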

  11. Evaluating the Acceptance of Cloud-Based Productivity Computer Solutions in Small and Medium Enterprises

    ERIC Educational Resources Information Center

    Dominguez, Alfredo

    2013-01-01

    Cloud computing has emerged as a new paradigm for on-demand delivery and consumption of shared IT resources over the Internet. Research has predicted that small and medium organizations (SMEs) would be among the earliest adopters of cloud solutions; however, this projection has not materialized. This study set out to investigate if behavior…

  12. Using Cloud-Computing Applications to Support Collaborative Scientific Inquiry: Examining Pre-Service Teachers' Perceived Barriers to Integration

    ERIC Educational Resources Information Center

    Donna, Joel D.; Miller, Brant G.

    2013-01-01

    Technology plays a crucial role in facilitating collaboration within the scientific community. Cloud-computing applications, such as Google Drive, can be used to model such collaboration and support inquiry within the secondary science classroom. Little is known about pre-service teachers' beliefs related to the envisioned use of…

  13. Evaluating Cloud Computing in the Proposed NASA DESDynI Ground Data System

    NASA Technical Reports Server (NTRS)

    Tran, John J.; Cinquini, Luca; Mattmann, Chris A.; Zimdars, Paul A.; Cuddy, David T.; Leung, Kon S.; Kwoun, Oh-Ig; Crichton, Dan; Freeborn, Dana

    2011-01-01

    The proposed NASA Deformation, Ecosystem Structure and Dynamics of Ice (DESDynI) mission would be a first-of-breed endeavor that would fundamentally change the paradigm by which Earth Science data systems at NASA are built. DESDynI is evaluating a distributed architecture where expert science nodes around the country all engage in some form of mission processing and data archiving. This is compared to the traditional NASA Earth Science missions where the science processing is typically centralized. What's more, DESDynI is poised to profoundly increase the amount of data collection and processing well into the 5 terabyte/day and tens of thousands of job range, both of which comprise a tremendous challenge to DESDynI's proposed distributed data system architecture. In this paper, we report on a set of architectural trade studies and benchmarks meant to inform the DESDynI mission and the broader community of the impacts of these unprecedented requirements. In particular, we evaluate the benefits of cloud computing and its integration with our existing NASA ground data system software called Apache Object Oriented Data Technology (OODT). The preliminary conclusions of our study suggest that the use of the cloud and OODT together synergistically form an effective, efficient and extensible combination that could meet the challenges of NASA science missions requiring DESDynI-like data collection and processing volumes at reduced costs.

  14. Now and Next-Generation Sequencing Techniques: Future of Sequence Analysis Using Cloud Computing

    PubMed Central

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed “cloud computing”) has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows. PMID:23248640

  15. Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application

    NASA Astrophysics Data System (ADS)

    Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.

    2013-12-01

    The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni Time-Series analysis of aerosol absorption optical depth (388nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and local system to avoid data transfer delays. The 3-, 6-, 12-, and 24-month data were used for analysis on the Cloud and local system respectively, and the processing times for the analysis were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of: computing, storage, data requests, and data transfer in/out. The Cloud computing cost is calculated based on the hourly rate, and the storage cost is calculated based on the rate of Gigabytes per month. Cost for incoming data transfer is free, and for data transfer out, the cost is based on the rate in Gigabytes. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating cost. The results showed that the Cloud platform had a 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
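
    The cloud cost components listed above (compute billed per hour, storage per Gigabyte-month, free inbound transfer, per-Gigabyte outbound transfer) can be combined into a simple monthly cost model; the rates in the sketch below are hypothetical examples, not the actual rates used in the study.

```python
# Simple monthly cost model following the cost components listed in the
# abstract: compute billed per hour, storage per GB-month, data transfer out
# per GB (inbound transfer free). All rates are hypothetical examples.
def monthly_cloud_cost(compute_hours, hourly_rate,
                       storage_gb, storage_rate_per_gb_month,
                       egress_gb, egress_rate_per_gb):
    compute = compute_hours * hourly_rate
    storage = storage_gb * storage_rate_per_gb_month
    egress = egress_gb * egress_rate_per_gb
    return compute + storage + egress

if __name__ == "__main__":
    print("$%.2f" % monthly_cloud_cost(720, 0.25, 500, 0.03, 200, 0.09))
```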

  16. CANFAR+Skytree: A Cloud Computing and Data Mining System for Astronomy

    NASA Astrophysics Data System (ADS)

    Ball, N. M.

    2013-10-01

To date, computing systems have allowed either sophisticated analysis of small datasets, as exemplified by most astronomy software, or simple analysis of large datasets, such as database queries. At the Canadian Astronomy Data Centre, we have combined our cloud computing system, the Canadian Advanced Network for Astronomical Research (CANFAR), with the world's most advanced machine learning software, Skytree, to create the world's first cloud computing system for data mining in astronomy. CANFAR provides a generic environment for the storage and processing of large datasets, removing the requirement for an individual or project to set up and maintain a computing system when implementing an extensive undertaking such as a survey pipeline. 500 processor cores and several hundred terabytes of persistent storage are currently available to users, and both the storage and processing infrastructure are expandable. The storage is implemented via the International Virtual Observatory Alliance's VOSpace protocol, and is available as a mounted filesystem accessible both interactively, and to all processing jobs. The user interacts with CANFAR by utilizing virtual machines, which appear to them as equivalent to a desktop. Each machine is replicated as desired to perform large-scale parallel processing. Such an arrangement enables the user to immediately install and run the same astronomy code that they already utilize, in the same way as on a desktop. In addition, unlike many cloud systems, batch job scheduling is handled for the user on multiple virtual machines by the Condor job queueing system. Skytree is installed and run just as any other software on the system, and thus acts as a library of command line data mining functions that can be integrated into one's wider analysis. Thus we have created a generic environment for large-scale analysis by data mining, in the same way that CANFAR itself has done for storage and processing. Because Skytree scales to large data in

  17. Computation of Concentric Shell Particle Scattering Effects in Jovian Clouds

    NASA Astrophysics Data System (ADS)

    Fry, Patrick M.; Sromovsky, Lawrence A.

    2014-11-01

From analysis of NIMS and ISO spectra of Jupiter, Sromovsky and Fry (2010, Icarus 210, 211-229; 2010, Icarus 210, 230-257) concluded that both NH3 and NH4SH were present near the visible cloud tops, probably in the form of composite particles. Composite particles were also suggested from analysis of VIMS spectra of Saturn's Great Storm of 2010-2011 by Sromovsky et al. (2013, Icarus 226, 402-418), in this case concentric shells of H2O, NH4SH, and NH3. These results and suggestions that coatings of various materials might be capable of hiding NH3 spectral features on Jupiter, such as by Atreya et al. (2005, Planet. Space Sci. 53, 498-507), have raised interest in and a need for modeling of scattering properties of complex composite particles. Since many of the particle sizes inferred for composite particles are below or close to the range near 1 μm where particle shape has less impact on near IR spectral features (Clapp and Miller, 1993, Icarus 105, 529-536), concentric shell codes have considerable relevance to modeling of composite particles. Here we report on two codes: one fast code (Toon and Ackerman, 1981, Applied Optics 20, No. 20, 3657-3660) that is capable of handling a core and shell of different materials, and a slower code (Pena and Pal, 2009, Computer Physics Comm., 180, 2348-2354) that can handle an arbitrary number of layers. Typical times to calculate a phase function for a wide size distribution (gamma distribution with normalized variance of 0.1) for the faster core/shell code are about 0.75 seconds per wavelength. The newer, more versatile code runs about 10X slower, and will typically double or triple the execution time of our multiple scattering code when it is incorporated. Optimizing integration over particle size distributions to achieve suitable accuracy can minimize computational costs; we have therefore determined a rule for the number of intervals in the size distribution. Sample calculations will be presented to show effects
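
    As an illustration of the kind of size-distribution averaging that the "rule for the number of intervals" addresses, the sketch below averages a stand-in property over a Hansen-type gamma distribution with effective variance 0.1 using a fixed number of intervals; it is not the authors' Mie or concentric-shell code, and the distribution form and the property are assumptions made only for this example.

```python
# Averaging an optical property over a gamma size distribution with a fixed
# number of quadrature intervals. A Hansen-type gamma distribution
# n(r) ~ r**((1-3b)/b) * exp(-r/(a*b)), with effective radius a and effective
# variance b (0.1 here), is assumed; f(r) is a toy stand-in property.
import numpy as np

def size_average(f, a=1.0, b=0.1, r_max=5.0, n_intervals=100):
    # midpoint rule over n_intervals equal bins of the size distribution
    edges = np.linspace(1e-3, r_max, n_intervals + 1)
    r = 0.5 * (edges[:-1] + edges[1:])
    w = r ** ((1 - 3 * b) / b) * np.exp(-r / (a * b))   # unnormalized n(r)
    return np.sum(f(r) * w) / np.sum(w)

if __name__ == "__main__":
    area_weighted = lambda r: np.pi * r ** 2            # toy property
    for n in (10, 40, 160):
        print(n, round(size_average(area_weighted, n_intervals=n), 4))
```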

  18. Integrating Cloud Processes in the Community Atmosphere Model, Version 5.

    SciTech Connect

    Park, S.; Bretherton, Christopher S.; Rasch, Philip J.

    2014-09-15

This paper provides a description of the parameterizations of the global cloud system in CAM5. Compared to the previous versions, CAM5 cloud parameterization has the following unique characteristics: (1) a transparent cloud macrophysical structure that has horizontally non-overlapped deep cumulus, shallow cumulus and stratus in each grid layer, each of which has its own cloud fraction, mass and number concentrations of cloud liquid droplets and ice crystals, (2) stratus-radiation-turbulence interaction that allows CAM5 to simulate marine stratocumulus solely from grid-mean RH without relying on the stability-based empirical empty stratus, (3) prognostic treatment of the number concentrations of stratus liquid droplets and ice crystals with activated aerosols and detrained in-cumulus condensates as the main sources and evaporation-sedimentation-precipitation of stratus condensate as the main sinks, and (4) radiatively active cumulus. By imposing consistency between diagnosed stratus fraction and prognosed stratus condensate, CAM5 is free from empty or highly-dense stratus at the end of stratus macrophysics. CAM5 also prognoses mass and number concentrations of various aerosol species. Thanks to the aerosol activation and the parameterizations of the radiation and stratiform precipitation production as a function of the droplet size, CAM5 simulates various aerosol indirect effects associated with stratus as well as direct effects, i.e., aerosol controls both the radiative and hydrological budgets. Detailed analysis of various simulations revealed that CAM5 is much better than CAM3/4 in the global performance as well as the physical formulation. However, several problems were also identified, which can be attributed to inappropriate regional tuning, inconsistency between various physics parameterizations, and incomplete model physics. Continuous efforts are going on to further improve CAM5.

  19. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    PubMed

    Liao, Wen-Hwa; Qiu, Wan-Li

    2016-01-01

    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture. PMID:27441149
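
    The analytic hierarchy process step described above (deriving priorities for decision factors from pairwise comparisons) can be sketched as follows; the example comparison matrix for cost effectiveness, software design and system architecture is hypothetical and not taken from the survey results.

```python
# Minimal analytic hierarchy process (AHP) step: derive priority weights for
# decision factors from a reciprocal pairwise-comparison matrix via its
# principal eigenvector, and compute a consistency index. The example matrix
# (cost effectiveness vs software design vs system architecture) is made up.
import numpy as np

def ahp_weights(pairwise):
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lam_max = eigvals[k].real
    n = pairwise.shape[0]
    ci = (lam_max - n) / (n - 1)          # consistency index
    return w, ci

if __name__ == "__main__":
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    weights, ci = ahp_weights(A)
    print("weights:", np.round(weights, 3), "CI:", round(ci, 3))
```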

  20. Cloud computing for energy management in smart grid - an application survey

    NASA Astrophysics Data System (ADS)

    Naveen, P.; Kiing Ing, Wong; Kobina Danquah, Michael; Sidhu, Amandeep S.; Abu-Siada, Ahmed

    2016-03-01

The smart grid is an emerging energy system in which information technology, tools and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To focus on these requirements, we provide an in-depth survey on different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for smart grid.
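
    For readers unfamiliar with economic power dispatch, a generic (non-cloud) illustration is sketched below: with quadratic generator costs and no generation limits, the optimum equalizes marginal costs across units. The generator data are made up, and this is not the paper's proposed cloud-based model.

```python
# Generic economic dispatch illustration: for quadratic generator costs
# C_i(P) = a_i + b_i*P + c_i*P**2 and total demand D, the unconstrained
# optimum equalizes marginal costs, giving
#   lambda = (D + sum(b_i/(2*c_i))) / sum(1/(2*c_i)),  P_i = (lambda - b_i)/(2*c_i).
def economic_dispatch(gens, demand):
    inv = sum(1.0 / (2 * c) for _, b, c in gens)
    lam = (demand + sum(b / (2 * c) for _, b, c in gens)) / inv
    return lam, [(lam - b) / (2 * c) for _, b, c in gens]

if __name__ == "__main__":
    gens = [("G1", 20.0, 0.05), ("G2", 25.0, 0.10), ("G3", 18.0, 0.08)]  # (name, b, c)
    lam, outputs = economic_dispatch(gens, demand=300.0)
    print("lambda =", round(lam, 2), "MW outputs:", [round(p, 1) for p in outputs])
```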

  1. Fast methods of computing bulk radiative properties of inhomogeneous clouds illuminated by solar radiation

    SciTech Connect

    Gabriel, P.

    1995-09-01

    The use of cloud fraction as a means of incorporating horizontal cloud inhomogeneity in radiative transfer calculations is widespread in the atmospheric science community. This research attempts to bypass the use of cloud fraction in radiative transfer modeling for two-dimensional media. Gabriel describes two approximation techniques useful in calculating the domain averaged bulk radiative properties such as albedo, flux divergence and mean radiance that dispense with the need to use cloud fraction as a specifier of cloud inhomogeneity. The results suggest that the variability of the medium can largely be accounted for through the pseudo-source term, offering hope of parameterizing the equation of transfer in terms of the statistical properties of the medium. 1 fig.

  2. Confidentiality Protection of Digital Health Records in Cloud Computing.

    PubMed

    Chen, Shyh-Wei; Chiang, Dai Lun; Liu, Chia-Hui; Chen, Tzer-Shyong; Lai, Feipei; Wang, Huihui; Wei, Wei

    2016-05-01

Electronic medical records containing confidential information were uploaded to the cloud. The cloud allows medical crews to access and manage the data and the integration of medical records easily. This data system provides relevant information to medical personnel and facilitates and improves electronic medical record management and data transmission. A structure for a cloud-based and patient-centered personal health record (PHR) is proposed in this study. This technique helps patients to manage their health information, such as appointment dates with doctors, health reports, and a complete understanding of their own health conditions. It helps create in patients a positive attitude toward maintaining their health. Patients decide on their own who has access to their records and over what span of time. Storing data in the cloud environment can reduce costs and enhance the sharing of information, but the potential threat to information security should be taken into consideration. This study proposes a cloud-based secure transmission mechanism suitable for multiple users (such as nurse aides, patients, and family members). PMID:27059737

  3. The monitoring and managing application of cloud computing based on Internet of Things.

    PubMed

    Luo, Shiliang; Ren, Bin

    2016-07-01

Cloud computing and the Internet of Things are two hot topics in the Internet application field. The application of these two new technologies is under active discussion and research, but much less so in the field of medical monitoring and management. Thus, in this paper, we study and analyze the application of cloud computing and the Internet of Things in the medical field, and we combine the two techniques for medical monitoring and management. The model architecture for a remote monitoring cloud platform of healthcare information (RMCPHI) was established first. Then the RMCPHI architecture was analyzed. Finally, an efficient PSOSAA algorithm was proposed for the medical monitoring and managing application of cloud computing. Simulation results showed that our proposed scheme can improve efficiency by about 50%. PMID:27208530

  4. Computers in Communications and Education at Coast Community College District.

    ERIC Educational Resources Information Center

    Luskin, Bernard J.; Ruth, Monty W.

Coast Community College District in Orange County, California is a leader among community colleges in the instructional use of computers. The district's hardware consists of an IBM system 370 model 155 computer, over 80 typewriter terminals, 12 cathode ray tubes (CRT), and several microfiche image projection devices. Better than 700 computer-assisted…

  5. Predicting Cloud Droplet Number Concentration in Community Atmosphere Model (CAM)-Oslo

    SciTech Connect

    Storelvmo, Trude; Kristjansson, J. E.; Ghan, Steven J.; Kirkevag, A.; Seland, O.; Iversen, T.

    2006-12-22

    A continuity equation for cloud droplet number concentration is implemented in an extended version of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model version 2.0.1 (CAM-2.0.1). The new continuity equation for cloud droplet number concentration consists of a nucleation term and several microphysical sink terms. The nucleation term is calculated based on a parameterization of activation of cloud condensation nuclei. A sub-grid distribution of vertical velocity is used to determine the range of supersaturations found within each model grid box. This supersaturation combined with the hygroscopicity of the aerosols present will determine the number of Cloud Condensation Nuclei (CCN) activated into cloud droplets. The aerosol types considered in this study are sea salt, sulfate, black carbon, organic carbon and mineral dust. The horizontal and vertical distributions of sulfate and carbonaceous aerosols are calculated based on AEROCOM (http://nansen.ipsl.jussieu.fr/AEROCOM) sources. These are combined with the background aerosols, which are a combination of sea salt, mineral dust and sulfate dependent on soil type, wind speed and location (Arctic, Antarctic, maritime, desert or continental). The resulting aerosol size distributions are multimodal, allowing sulfate, black carbon and organic carbon to be both internally and externally mixed with the background aerosols. Microphysical sink terms for cloud droplets are obtained from a prognostic cloud water scheme, assuming a direct proportionality between loss of cloud water and loss of cloud droplets. Based on the framework described above, the cloud droplet number concentration and cloud droplet effective radius can be determined. The resulting cloud radiative forcings (CRF) can hereafter be calculated. By comparing the CRF for two different model runs, one with pre-industrial aerosol sources and the other with sources corresponding to present day, the indirect effect of aerosols can be
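
    Schematically, and only as an illustration consistent with the description above (an activation source plus microphysical sinks taken proportional to cloud-water loss), the prognostic droplet-number equation can be written as:

```latex
\frac{\partial N_d}{\partial t}
  \;=\; S_{\mathrm{nuc}}\!\left(w',\,\mathrm{CCN}\right)
  \;-\; \frac{N_d}{q_c}\sum_i L_i ,
```

    where S_nuc is the activation source determined by the sub-grid vertical velocity and the activated CCN, q_c is the cloud water mixing ratio, and L_i ≥ 0 are the microphysical cloud-water loss rates (evaporation, sedimentation, precipitation). Transport terms are omitted here, and the exact source and sink formulations used in CAM-Oslo are those given in the paper.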

  6. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation is of work being conducted by the Applied Sciences Program Office at NASA-Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source process code developed on a local prototype platform, and then transitioning this code with associated environment requirements into an analogous, but memory and processor enhanced, cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar type data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions, and then
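
    The band-ratio products named above follow standard definitions, NDVI = (NIR − Red)/(NIR + Red) and NDMI = (NIR − SWIR)/(NIR + SWIR); the NumPy sketch below computes them on tiny synthetic arrays and is not the project's cloud processing code.

```python
# Standard normalized-difference band ratios on small synthetic band arrays:
#   NDVI = (NIR - Red)  / (NIR + Red)
#   NDMI = (NIR - SWIR) / (NIR + SWIR)
import numpy as np

def normalized_difference(a, b):
    a = a.astype("float64")
    b = b.astype("float64")
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where((a + b) != 0, (a - b) / (a + b), 0.0)

if __name__ == "__main__":
    nir = np.array([[0.45, 0.50], [0.40, 0.55]])
    red = np.array([[0.10, 0.12], [0.20, 0.08]])
    swir = np.array([[0.25, 0.22], [0.30, 0.18]])
    print("NDVI:\n", normalized_difference(nir, red))
    print("NDMI:\n", normalized_difference(nir, swir))
```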

  7. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
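
    The reported timing figures imply the following speed-up and parallel efficiency; the short calculation below simply restates the numbers given in the abstract.

```python
# Speed-up and parallel efficiency implied by the abstract's figures:
# 2.58 h single-threaded versus 3.3 min on 100 cloud nodes.
serial_minutes = 2.58 * 60          # 154.8 min
cloud_minutes = 3.3
nodes = 100

speedup = serial_minutes / cloud_minutes          # ~47x, as reported
efficiency = speedup / nodes                      # fraction of ideal scaling
print(f"speed-up ~{speedup:.0f}x, parallel efficiency ~{efficiency:.0%}")
```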

  8. Toward Real-Time Monte Carlo Simulation Using a Commercial Cloud Computing Infrastructure+

    PubMed Central

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-01-01

    Purpose Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. Methods We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the Message Passing Interface (MPI), and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. Results The output of the cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 hour on a local computer can be executed in 3.3 minutes on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Conclusion Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. PMID:21841211

  9. An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.

    2015-07-01

Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Providing cloud users with an effective way to access and analyse these massive spatiotemporal data from web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end-user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed based on an open source distributed file system. In it, massive remote sensing data are stored as public data, while the intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL, and Grass GIS is deployed. Users can write scripts in the IPython Notebook web page through the web browser to process data, and the scripts will be submitted to the IPython kernel to be executed. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines and physical machines respectively, we can conclude that the cloud computing environment built by Docker makes the greatest use of the host system resources, and can handle more concurrent spatial-temporal computing tasks. Docker technology provides resource isolation mechanisms for IO, CPU, and memory, which offers a security guarantee when processing remote sensing data in the IPython Notebook. Users can write

  10. Dynamic Integration of Mobile JXTA with Cloud Computing for Emergency Rural Public Health Care

    PubMed Central

    Rajkumar, Rajasekaran; Sriman Narayana Iyengar, Nallani Chackravatula

    2013-01-01

    Objectives The existing processes of health care systems where data collection requires a great deal of labor with high-end tasks to retrieve and analyze information, are usually slow, tedious, and error prone, which restrains their clinical diagnostic and monitoring capabilities. Research is now focused on integrating cloud services with P2P JXTA to identify systematic dynamic process for emergency health care systems. The proposal is based on the concepts of a community cloud for preventative medicine, to help promote a healthy rural community. We investigate the approaches of patient health monitoring, emergency care, and an ambulance alert alarm (AAA) under mobile cloud-based telecare or community cloud controller systems. Methods Considering permanent mobile users, an efficient health promotion method is proposed. Experiments were conducted to verify the effectiveness of the method. The performance was evaluated from September 2011 to July 2012. A total of 1,856,454 cases were transported and referred to hospital, identified with health problems, and were monitored. We selected all the peer groups and the control server N0 which controls N1, N2, and N3 proxied peer groups. The hospital cloud controller maintains the database of the patients through a JXTA network. Results Among 1,856,454 transported cases with beneficiaries of 1,712,877 cases there were 1,662,834 lives saved and 8,500 cases transported per day with 104,530 transported cases found to be registered in a JXTA network. Conclusion The registered case histories were referred from the Hospital community cloud (HCC). SMS messages were sent from node N0 to the relay peers which connected to the N1, N2, and N3 nodes, controlled by the cloud controller through a JXTA network. PMID:24298441

  11. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit of automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.

  12. Parallel optimization of pixel purity index algorithm for massive hyperspectral images in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Chen, Yufeng; Wu, Zebin; Sun, Le; Wei, Zhihui; Li, Yonglong

    2016-04-01

    With the gradual increase in the spatial and spectral resolution of hyperspectral images, the size of image data becomes larger and larger, and the complexity of processing algorithms is growing, which poses a big challenge to efficient massive hyperspectral image processing. Cloud computing technologies distribute computing tasks to a large number of computing resources for handling large data sets without the limitation of memory and computing resource of a single machine. This paper proposes a parallel pixel purity index (PPI) algorithm for unmixing massive hyperspectral images based on a MapReduce programming model for the first time in the literature. According to the characteristics of hyperspectral images, we describe the design principle of the algorithm, illustrate the main cloud unmixing processes of PPI, and analyze the time complexity of serial and parallel algorithms. Experimental results demonstrate that the parallel implementation of the PPI algorithm on the cloud can effectively process big hyperspectral data and accelerate the algorithm.
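
    For orientation, the serial core of the pixel purity index is sketched below: pixel spectra are projected onto random unit vectors ("skewers") and the pixels at the extremes of each projection accumulate purity counts. The paper distributes this counting with MapReduce over a cluster; the plain NumPy version here only illustrates the per-skewer work and uses synthetic data.

```python
# Serial core of the pixel purity index (PPI): project every pixel spectrum
# onto random unit vectors ("skewers") and count how often each pixel falls
# at an extreme of a projection. Pixels with high counts are endmember
# candidates. Synthetic data; the paper distributes this with MapReduce.
import numpy as np

def ppi_scores(pixels, n_skewers=1000, seed=0):
    # pixels: (n_pixels, n_bands) array of spectra
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = pixels.shape
    scores = np.zeros(n_pixels, dtype=np.int64)
    skewers = rng.normal(size=(n_skewers, n_bands))
    skewers /= np.linalg.norm(skewers, axis=1, keepdims=True)
    for sk in skewers:
        proj = pixels @ sk
        scores[np.argmax(proj)] += 1
        scores[np.argmin(proj)] += 1
    return scores

if __name__ == "__main__":
    data = np.random.default_rng(1).random((5000, 50))
    print("top purity counts:", np.sort(ppi_scores(data, 200))[-5:])
```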

  13. Opportunities and challenges of cloud computing to improve health care services.

    PubMed

    Kuo, Alex Mu-Hsing

    2011-01-01

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed. PMID:21937354

  14. Opportunities and Challenges of Cloud Computing to Improve Health Care Services

    PubMed Central

    2011-01-01

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed. PMID:21937354

  15. Development and clinical study of mobile 12-lead electrocardiography based on cloud computing for cardiac emergency.

    PubMed

    Fujita, Hideo; Uchimura, Yuji; Waki, Kayo; Omae, Koji; Takeuchi, Ichiro; Ohe, Kazuhiko

    2013-01-01

To improve emergency services for accurate diagnosis of cardiac emergency, we developed a new low-cost mobile electrocardiography system, "Cloud Cardiology®", based upon cloud computing for prehospital diagnosis. This comprises a compact 12-lead ECG unit equipped with Bluetooth and an Android smartphone with an application for transmission. The cloud server enables us to share ECGs simultaneously inside and outside the hospital. We evaluated the clinical effectiveness by conducting a clinical trial with historical comparison to evaluate this system in a rapid response car in real emergency service settings. We found that this system has the ability to shorten the onset-to-balloon time of patients with acute myocardial infarction, resulting in better clinical outcomes. Here we propose that cloud-computing-based simultaneous data sharing could be a powerful solution for emergency cardiology services, along with its significant clinical outcome. PMID:23920851

  16. Above-Campus Services: Shaping the Promise of Cloud Computing for Higher Education

    ERIC Educational Resources Information Center

    Wheeler, Brad; Waggener, Shelton

    2009-01-01

    The concept of today's cloud computing may date back to 1961, when John McCarthy, retired Stanford professor and Turing Award winner, delivered a speech at MIT's Centennial. In that speech, he predicted that in the future, computing would become a "public utility." Yet for colleges and universities, the recent growth of pervasive, very high speed…

  17. An Analysis of the Use of Cloud Computing among University Lecturers: A Case Study in Zimbabwe

    ERIC Educational Resources Information Center

    Musungwini, Samuel; Mugoniwa, Beauty; Furusa, Samuel Simbarashe; Rebanowako, Taurai George

    2016-01-01

    Cloud computing is a novel model of computing that may bring extensive benefits to users, institutions, businesses and academics, while at the same time also giving rise to new risks and challenges. This study looked at the benefits of Google Docs for researchers and academics and analysed the factors affecting the adoption and use of the…

  18. Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"

    ERIC Educational Resources Information Center

    Romiszowski, Alexander J.

    2012-01-01

    "Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This somewhat…

  19. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    NASA Astrophysics Data System (ADS)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
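
    As a hedged illustration of the parallelisation strategy described above, the sketch below splits a crude parallel-beam forward projector across "map" tasks by view angle and merges the partial sinograms in a "reduce" step. The rotate-and-sum projector, the toy phantom, and the chunking into six workers are assumptions for the sketch and are far simpler than the statistical reconstruction code used in the paper.

      # Toy rotate-and-sum forward projector, partitioned by view angle in the
      # spirit of the MapReduce parallelisation described above.
      import numpy as np
      from scipy.ndimage import rotate

      def map_project(image, angles_chunk):
          """Map task: compute the projection for each view angle in one chunk."""
          return {float(a): rotate(image, a, reshape=False, order=1).sum(axis=0)
                  for a in angles_chunk}

      def reduce_sinogram(partials, all_angles):
          """Reduce task: merge per-chunk projections into a full sinogram."""
          merged = {}
          for part in partials:
              merged.update(part)
          return np.stack([merged[float(a)] for a in all_angles])

      image = np.zeros((128, 128))
      image[48:80, 48:80] = 1.0                                   # toy phantom
      angles = np.linspace(0.0, 180.0, 90, endpoint=False)
      chunks = np.array_split(angles, 6)                          # six "map" workers
      sinogram = reduce_sinogram([map_project(image, c) for c in chunks], angles)
      print(sinogram.shape)                                       # (90, 128)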

  20. Directly executable formal models of middleware for MANET and Cloud Networking and Computing

    NASA Astrophysics Data System (ADS)

    Pashchenko, D. V.; Sadeq Jaafar, Mustafa; Zinkin, S. A.; Trokoz, D. A.; Pashchenko, T. U.; Sinev, M. P.

    2016-04-01

    The article considers some “directly executable” formal models that are suitable for specifying computing and networking in the cloud environment and in other networks similar to wireless mobile ad hoc networks (MANETs). These models can be easily programmed and implemented on computer networks.

  1. A Novel Cost Based Model for Energy Consumption in Cloud Computing

    PubMed Central

    Horri, A.; Dastghaibyfard, Gh.

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining the required QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. In the proposed model, cache interference costs were considered. These costs were based upon the size of the data. The proposed model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with different parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716

  2. A novel cost based model for energy consumption in cloud computing.

    PubMed

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining the required QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. In the proposed model, cache interference costs were considered. These costs were based upon the size of the data. The proposed model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with different parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716
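
    The sketch below is a loose, illustrative reading of the kind of model described in the two records above: a linear host power curve plus a cache interference penalty that grows with the data size and the number of co-resident VMs under a time-shared policy. The power figures, the penalty term, and the parameter names are invented for the sketch and are not the paper's calibrated CloudSim model.

      # A minimal, illustrative energy estimate for a time-shared host; the linear
      # power curve and cache interference term are assumptions, not the paper's model.
      def host_energy(util, hours, n_vms, data_gb,
                      p_idle=100.0, p_max=250.0, cache_penalty_w_per_gb=0.5):
          """Return energy in Wh for one host.

          util     -- average CPU utilisation in [0, 1]
          hours    -- wall-clock duration
          n_vms    -- VMs sharing the host under the time-shared policy
          data_gb  -- working-set size driving cache interference
          """
          base_power = p_idle + util * (p_max - p_idle)           # linear power model
          interference = max(n_vms - 1, 0) * data_gb * cache_penalty_w_per_gb
          return (base_power + interference) * hours

      # One host, 70% utilised for 24 h, 4 VMs sharing it, 8 GB working set
      print(f"{host_energy(0.7, 24, 4, 8):.1f} Wh")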

  3. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    PubMed Central

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the Cloud Computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  4. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    PubMed

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the Cloud Computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  5. iSPHERE - A New Approach to Collaborative Research and Cloud Computing

    NASA Astrophysics Data System (ADS)

    Al-Ubaidi, T.; Khodachenko, M. L.; Kallio, E. J.; Harry, A.; Alexeev, I. I.; Vázquez-Poletti, J. L.; Enke, H.; Magin, T.; Mair, M.; Scherf, M.; Poedts, S.; De Causmaecker, P.; Heynderickx, D.; Congedo, P.; Manolescu, I.; Esser, B.; Webb, S.; Ruja, C.

    2015-10-01

    The project iSPHERE (integrated Scientific Platform for HEterogeneous Research and Engineering) that has been proposed for Horizon 2020 (EINFRA-9- 2015, [1]) aims at creating a next generation Virtual Research Environment (VRE) that embraces existing and emerging technologies and standards in order to provide a versatile platform for scientific investigations and collaboration. The presentation will introduce the large project consortium, provide a comprehensive overview of iSPHERE's basic concepts and approaches and outline general user requirements that the VRE will strive to satisfy. An overview of the envisioned architecture will be given, focusing on the adapted Service Bus concept, i.e. the "Scientific Service Bus" as it is called in iSPHERE. The bus will act as a central hub for all communication and user access, and will be implemented in the course of the project. The agile approach [2] that has been chosen for detailed elaboration and documentation of user requirements, as well as for the actual implementation of the system, will be outlined and its motivation and basic structure will be discussed. The presentation will show which user communities will benefit and which concrete problems, scientific investigations are facing today, will be tackled by the system. Another focus of the presentation is iSPHERE's seamless integration of cloud computing resources and how these will benefit scientific modeling teams by providing a reliable and web based environment for cloud based model execution, storage of results, and comparison with measurements, including fully web based tools for data mining, analysis and visualization. Also the envisioned creation of a dedicated data model for experimental plasma physics will be discussed. It will be shown why the Scientific Service Bus provides an ideal basis to integrate a number of data models and communication protocols and to provide mechanisms for data exchange across multiple and even multidisciplinary platforms.

  6. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    SciTech Connect

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S.

    2015-03-10

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. To save energy, there is no need to operate many servers unnecessarily under light loads, so they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be kept at an acceptable level by dynamically adding or removing servers. Significant server setup costs and activation times should also be taken into account. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases in load. That is the main motivation for using queuing systems with hysteresis to model cloud computing systems. In the paper, we model the cloud computing system as a multi-server, threshold-based, infinite-capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.
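
    A toy discrete-time simulation can illustrate the behaviour that the queuing model above captures analytically: servers are requested when the queue exceeds a high threshold, released below a low threshold, and only become active after a setup delay. The arrival and service probabilities, thresholds, and setup delay below are made-up parameters for illustration.

      # Toy discrete-time simulation of threshold-based scaling with hysteresis
      # and noninstantaneous server activation (all parameters are illustrative).
      import random

      def simulate(steps=10_000, arrival_p=0.6, service_p=0.25,
                   high=8, low=2, max_servers=4, setup_delay=20, seed=1):
          random.seed(seed)
          queue, active, pending = 0, 1, []        # pending holds activation timers
          busy = 0
          for _ in range(steps):
              if random.random() < arrival_p:                      # arrival
                  queue += 1
              served = sum(1 for _ in range(min(active, queue))    # each active server
                           if random.random() < service_p)         # may finish one job
              queue -= served
              if queue > high and active + len(pending) < max_servers:
                  pending.append(setup_delay)                      # request a server
              elif queue < low and active > 1:
                  active -= 1                                      # release a server
              pending = [t - 1 for t in pending]                   # servers warming up
              active += sum(t <= 0 for t in pending)
              pending = [t for t in pending if t > 0]
              busy += active
          return queue, active, busy / steps

      print(simulate())   # final queue length, active servers, mean servers on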

  7. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEASURES grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be

  8. New Literacies at the Digital Divide: American Indian Community Computing

    ERIC Educational Resources Information Center

    Betts, J. David

    2009-01-01

    This study is about a community computing lab established by a U.S. Department of Commerce grant to bridge the Digital Divide in a rural Arizona American Indian community, a project called "Native Connection" (a pseudonym). This paper describes the process of integrating new literacies associated with a high-tech computer lab into the life of a…

  9. Implementation of a solution Cloud Computing with MapReduce model

    NASA Astrophysics Data System (ADS)

    Baya, Chalabi

    2014-10-01

    In recent years, large-scale computer systems have emerged to meet the demands of high-volume storage, supercomputing, and applications using very large data sets. The emergence of Cloud Computing offers the potential for analysis and processing of large data sets. MapReduce is the most popular programming model used to support the development of such applications. It was initially designed by Google, operating at very large scale across its datacenters, to provide Web search services with rapid response and high availability. In this paper we test the K-means clustering algorithm in a Cloud Computing environment. The algorithm is implemented on MapReduce and was chosen because its characteristics are representative of many iterative data analysis algorithms. We then modify the CloudSim framework to simulate the MapReduce execution of K-means clustering on different Cloud Computing platforms, depending on their size and characteristics. The experiments show that the implementation of K-means clustering gives good results, especially for large data sets, and that the Cloud infrastructure has an influence on these results.
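
    The sketch below expresses one K-means iteration as map and reduce steps on a single machine, mirroring the MapReduce formulation tested above (the paper's experiments additionally simulate the execution in a modified CloudSim, not on this code). The data, the number of clusters, and the four-way chunking are placeholders.

      # Compact, single-machine illustration of one K-means iteration expressed
      # as map and reduce steps.
      import numpy as np
      from collections import defaultdict

      def map_assign(points_chunk, centroids):
          """Map: emit (nearest-centroid-id, (point_sum, count)) partials."""
          out = defaultdict(lambda: [np.zeros(centroids.shape[1]), 0])
          for p in points_chunk:
              k = int(np.argmin(((centroids - p) ** 2).sum(axis=1)))
              out[k][0] += p
              out[k][1] += 1
          return out

      def reduce_update(partials, centroids):
          """Reduce: merge partial sums per centroid and recompute the means."""
          sums = defaultdict(lambda: [np.zeros(centroids.shape[1]), 0])
          for part in partials:
              for k, (s, c) in part.items():
                  sums[k][0] += s
                  sums[k][1] += c
          new = centroids.copy()
          for k, (s, c) in sums.items():
              if c:
                  new[k] = s / c
          return new

      rng = np.random.default_rng(0)
      data = rng.normal(size=(5000, 2))
      centroids = data[rng.choice(len(data), 3, replace=False)]
      for _ in range(10):                                   # 10 MapReduce rounds
          partials = [map_assign(chunk, centroids) for chunk in np.array_split(data, 4)]
          centroids = reduce_update(partials, centroids)
      print(centroids)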

  10. Computer generated hologram from point cloud using graphics processor.

    PubMed

    Chen, Rick H-Y; Wilkinson, Timothy D

    2009-12-20

    Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique. PMID:20029585

  11. Heart beats in the cloud: distributed analysis of electrophysiological ‘Big Data’ using cloud computing for epilepsy clinical research

    PubMed Central

    Sahoo, Satya S; Jayapandian, Catherine; Garg, Gaurav; Kaffashi, Farhad; Chung, Stephanie; Bozorgi, Alireza; Chen, Chien-Hun; Loparo, Kenneth; Lhatoo, Samden D; Zhang, Guo-Qiang

    2014-01-01

    Objective: The rapidly growing volume of multimodal electrophysiological signal data is playing a critical role in patient care and clinical research across multiple disease domains, such as epilepsy and sleep medicine. To facilitate secondary use of these data, there is an urgent need to develop novel algorithms and informatics approaches using new cloud computing technologies as well as ontologies for collaborative multicenter studies. Materials and methods: We present the Cloudwave platform, which (a) defines parallelized algorithms for computing cardiac measures using the MapReduce parallel programming framework, (b) supports real-time interaction with large volumes of electrophysiological signals, and (c) features signal visualization and querying functionalities using an ontology-driven web-based interface. Cloudwave is currently used in the multicenter National Institute of Neurological Diseases and Stroke (NINDS)-funded Prevention and Risk Identification of SUDEP (sudden unexplained death in epilepsy) Mortality (PRISM) project to identify risk factors for sudden death in epilepsy. Results: Comparative evaluations of Cloudwave with traditional desktop approaches to compute cardiac measures (eg, QRS complexes, RR intervals, and instantaneous heart rate) on epilepsy patient data show one order of magnitude improvement for single-channel ECG data and 20 times improvement for four-channel ECG data. This enables Cloudwave to support real-time user interaction with signal data, which is semantically annotated with a novel epilepsy and seizure ontology. Discussion: Data privacy is a critical issue in using cloud infrastructure, and cloud platforms, such as Amazon Web Services, offer features to support Health Insurance Portability and Accountability Act standards. Conclusion: The Cloudwave platform is a new approach to leveraging large-scale electrophysiological data for advancing multicenter clinical research. PMID:24326538
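
    As a stand-alone illustration of the cardiac measures named above (QRS peak times, RR intervals, instantaneous heart rate), the sketch below detects peaks in a toy ECG trace and derives the measures; Cloudwave distributes this kind of per-segment work with MapReduce, which is only hinted at here by the offset parameter that lets independent segments be processed separately. The sampling rate, peak threshold, and synthetic signal are assumptions.

      # Illustrative computation of QRS peak times, RR intervals and instantaneous
      # heart rate from a toy ECG trace (not the Cloudwave implementation).
      import numpy as np
      from scipy.signal import find_peaks

      FS = 250  # sampling rate in Hz (assumed)

      def cardiac_measures(ecg_segment, offset_samples=0):
          """Return QRS peak times (s), RR intervals (s) and instantaneous HR (bpm)."""
          peaks, _ = find_peaks(ecg_segment, height=0.5, distance=int(0.3 * FS))
          t = (peaks + offset_samples) / FS
          rr = np.diff(t)
          hr = 60.0 / rr
          return t, rr, hr

      # Toy ECG: 1 Hz train of sharp "QRS" spikes plus noise
      sig = np.random.default_rng(0).normal(0, 0.05, FS * 10)
      sig[::FS] += 1.0
      t, rr, hr = cardiac_measures(sig)
      print(rr.round(2), hr.round(1))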

  12. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures to provide efficient Web based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher order data products, as well as user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high performance disk storage (SSD) for the hot areas and less expensive slower disk for the cold ones, thereby optimizing price to performance. From a compute perspective, OT is looking at cloud based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute intensive workloads like parallel computation of hydrologic routing on high resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user

  13. Enabling Water Quality Management Decision Support and Public Outreach Using Cloud-Computing Services

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.; Scanlon, B. R.; Uhlman, K.

    2013-12-01

    Watershed management is a participatory process that requires collaboration among multiple groups of people. Environmental decision support systems (EDSS) have long been used to support such co-management and co-learning processes in watershed management. However, implementing and maintaining EDSS in-house can be a significant burden to many water agencies because of budget, technical, and policy constraints. Based on experience from several web-GIS environmental management projects in Texas, we showcase how cloud-computing services can help shift the design and hosting of EDSS from traditional client-server platforms to simple clients of cloud-computing services.

  14. Improvements of top-of-atmosphere and surface irradiance computations with CALIPSO-, CloudSat-, and MODIS-derived cloud and aerosol properties

    NASA Astrophysics Data System (ADS)

    Kato, Seiji; Rose, Fred G.; Sun-Mack, Sunny; Miller, Walter F.; Chen, Yan; Rutan, David A.; Stephens, Graeme L.; Loeb, Norman G.; Minnis, Patrick; Wielicki, Bruce A.; Winker, David M.; Charlock, Thomas P.; Stackhouse, Paul W., Jr.; Xu, Kuan-Man; Collins, William D.

    2011-10-01

    One year of instantaneous top-of-atmosphere (TOA) and surface shortwave and longwave irradiances are computed using cloud and aerosol properties derived from instruments on the A-Train Constellation: the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite, the CloudSat Cloud Profiling Radar (CPR), and the Aqua Moderate Resolution Imaging Spectrometer (MODIS). When modeled irradiances are compared with those computed with cloud properties derived from MODIS radiances by a Clouds and the Earth's Radiant Energy System (CERES) cloud algorithm, the global and annual mean of modeled instantaneous TOA irradiances decreases by 12.5 W m-2 (5.0%) for reflected shortwave and 2.5 W m-2 (1.1%) for longwave irradiances. As a result, the global annual mean of instantaneous TOA irradiances agrees better with CERES-derived irradiances to within 0.5 W m-2 (out of 237.8 W m-2) for reflected shortwave and 2.6 W m-2 (out of 240.1 W m-2) for longwave irradiances. In addition, the global annual mean of instantaneous surface downward longwave irradiances increases by 3.6 W m-2 (1.0%) when CALIOP- and CPR-derived cloud properties are used. The global annual mean of instantaneous surface downward shortwave irradiances also increases by 8.6 W m-2 (1.6%), indicating that the net surface irradiance increases when CALIOP- and CPR-derived cloud properties are used. Increasing the surface downward longwave irradiance is caused by larger cloud fractions (the global annual mean by 0.11, 0.04 excluding clouds with optical thickness less than 0.3) and lower cloud base heights (the global annual mean by 1.6 km). The increase of the surface downward longwave irradiance in the Arctic exceeds 10 W m-2 (~4%) in winter because CALIOP and CPR detect more clouds in comparison with the cloud detection by the CERES cloud algorithm during polar night. The global annual mean surface downward longwave irradiance of

  15. Behavior Life Style Analysis for Mobile Sensory Data in Cloud Computing through MapReduce

    PubMed Central

    Hussain, Shujaat; Bang, Jae Hun; Han, Manhyung; Ahmed, Muhammad Idris; Amin, Muhammad Bilal; Lee, Sungyoung; Nugent, Chris; McClean, Sally; Scotney, Bryan; Parr, Gerard

    2014-01-01

    Cloud computing has revolutionized healthcare in today's world, as it can be seamlessly integrated with mobile applications and sensor devices. The sensory data are then transferred from these devices to public and private clouds. In this paper, a hybrid and distributed environment is built which is capable of collecting data from the mobile phone application and storing it in the cloud. We developed an activity recognition application and transferred the data to the cloud for further processing. The big data technology Hadoop MapReduce is employed to analyze the data and create a timeline of the user's activities. These activities are visualized to find useful health analytics and trends. In this paper a big data solution is proposed to analyze the sensory data and give insights into user behavior and lifestyle trends. PMID:25420151

  16. Behavior life style analysis for mobile sensory data in cloud computing through MapReduce.

    PubMed

    Hussain, Shujaat; Bang, Jae Hun; Han, Manhyung; Ahmed, Muhammad Idris; Amin, Muhammad Bilal; Lee, Sungyoung; Nugent, Chris; McClean, Sally; Scotney, Bryan; Parr, Gerard

    2014-01-01

    Cloud computing has revolutionized healthcare in today's world, as it can be seamlessly integrated with mobile applications and sensor devices. The sensory data are then transferred from these devices to public and private clouds. In this paper, a hybrid and distributed environment is built which is capable of collecting data from the mobile phone application and storing it in the cloud. We developed an activity recognition application and transferred the data to the cloud for further processing. The big data technology Hadoop MapReduce is employed to analyze the data and create a timeline of the user's activities. These activities are visualized to find useful health analytics and trends. In this paper a big data solution is proposed to analyze the sensory data and give insights into user behavior and lifestyle trends. PMID:25420151
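
    The sketch below is a minimal local analogue of the Hadoop MapReduce job described in the two records above: each (user, timestamp, activity) sensor record is mapped to an hourly key, and the reduce step aggregates the counts into a per-hour activity timeline. The record fields and activity labels are invented for illustration.

      # Local stand-in for a MapReduce job that builds an hourly activity timeline.
      from collections import defaultdict
      from datetime import datetime

      records = [                      # (user, ISO timestamp, recognised activity)
          ("u1", "2014-03-01T08:05:00", "walking"),
          ("u1", "2014-03-01T08:40:00", "sitting"),
          ("u1", "2014-03-01T09:10:00", "walking"),
      ]

      def map_record(user, ts, activity):
          """Map: key each record by user, hour bucket and activity."""
          hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
          return ((user, hour, activity), 1)

      def reduce_counts(pairs):
          """Reduce: sum the counts for each key."""
          counts = defaultdict(int)
          for key, value in pairs:
              counts[key] += value
          return counts

      timeline = reduce_counts(map_record(*r) for r in records)
      for (user, hour, activity), n in sorted(timeline.items()):
          print(user, hour, activity, n)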

  17. A Comprehensive Review on Adaptability of Network Forensics Frameworks for Mobile Cloud Computing

    PubMed Central

    Abdul Wahab, Ainuddin Wahid; Han, Qi; Bin Abdul Rahman, Zulkanain

    2014-01-01

    Network forensics enables investigation and identification of network attacks through the retrieved digital content. The proliferation of smartphones and cost-effective universal data access through the cloud have made Mobile Cloud Computing (MCC) a natural target for network attacks. However, carrying out forensics in MCC is constrained by the autonomous cloud hosting companies and their policies restricting access to the digital content on the back-end cloud platforms. This implies that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to the MCC. Explicitly, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC. PMID:25097880

  18. APFA: Asynchronous Parallel Finite Automaton for Deep Packet Inspection in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Li, Yang; Li, Zheng; Yu, Nenghai; Ma, Ke

    Security in cloud computing has become increasingly important. Besides passive defenses such as encryption, it is necessary to implement real-time active monitoring, detection and defense in the cloud. According to published research, DPI (deep packet inspection) is the most effective technology for active inspection and defense. However, most recent DPI work aims at space reduction and cannot meet the cloud's demands for high speed and stability. It is therefore important to improve conventional DPI methods to make them more suitable for cloud computing. In this paper, an asynchronous parallel finite automaton named APFA is proposed; by introducing asynchronous parallelization and a heuristic forecast mechanism, it significantly decreases matching time while still reducing the memory required. Moreover, APFA is immune to the overlapping problem, which further enhances stability. The evaluation results show that APFA achieves higher stability and better time and memory performance. In short, APFA is more suitable for cloud computing.
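
    The real APFA is an asynchronous parallel automaton with a forecast mechanism; the sketch below only illustrates the simpler underlying idea of chunk-parallel signature scanning and the boundary ("overlapping") problem it must handle, by letting chunks overlap by one byte less than the longest signature. The signatures, the thread pool, and the owner rule are assumptions for the sketch.

      # Simplified chunk-parallel signature scanning for DPI, with overlap to
      # avoid losing matches that span a chunk boundary (not the APFA design).
      from concurrent.futures import ThreadPoolExecutor

      SIGNATURES = [b"evil.exe", b"DROP TABLE", b"/etc/passwd"]   # toy rule set

      def scan_chunk(payload, start, end, overlap):
          """Scan payload[start:end+overlap] and report absolute match offsets."""
          window = payload[start:end + overlap]
          hits = []
          for sig in SIGNATURES:
              pos = window.find(sig)
              while pos != -1:
                  if start + pos < end:            # owner rule: avoid double counting
                      hits.append((start + pos, sig))
                  pos = window.find(sig, pos + 1)
          return hits

      def parallel_scan(payload, n_workers=4):
          overlap = max(map(len, SIGNATURES)) - 1
          size = max(1, len(payload) // n_workers)
          ranges = [(i, min(i + size, len(payload))) for i in range(0, len(payload), size)]
          with ThreadPoolExecutor(n_workers) as pool:
              futures = [pool.submit(scan_chunk, payload, s, e, overlap) for s, e in ranges]
              return sorted(h for f in futures for h in f.result())

      print(parallel_scan(b"GET /etc/passwd HTTP/1.1 ... payload DROP TABLE users"))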

  19. A cloud computing based platform for sleep behavior and chronic diseases collaborative research.

    PubMed

    Kuo, Mu-Hsing; Borycki, Elizabeth; Kushniruk, Andre; Huang, Yueh-Min; Hung, Shu-Hui

    2014-01-01

    The objective of this study is to propose a Cloud Computing based platform for sleep behavior and chronic disease collaborative research. The platform consists of two main components: (1) a sensing bed sheet with textile sensors to automatically record a patient's sleep behaviors and vital signs, and (2) a service-oriented cloud computing architecture (SOCCA) that provides a data repository and allows for sharing and analysis of collected data. Also, we describe our systematic approach to implementing the SOCCA. We believe that the new cloud-based platform can provide nursing and other health professional researchers in different geographic locations with a cost-effective, flexible, secure and privacy-preserving research environment. PMID:24943526

  20. A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.

    PubMed

    Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo

    2015-01-01

    The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, Cloud Computing arises as an extensive, low-cost, and reconfigurable resource. However, medical images contain patient information that cannot be made available in a public cloud. Therefore, a mechanism to anonymize these images is needed. This poster presents a solution for this issue by taking digital images from PACS, converting the information contained in each image file to a NoSQL database, and using cloud computing to store digital images. PMID:26262231

  1. Cloud computing in pharmaceutical R&D: business risks and mitigations.

    PubMed

    Geiger, Karl

    2010-05-01

    Cloud computing provides information processing power and business services, delivering these services over the Internet from centrally hosted locations. Major technology corporations aim to supply these services to every sector of the economy. Deploying business processes 'in the cloud' requires special attention to the regulatory and business risks assumed when running on both hardware and software that are outside the direct control of a company. The identification of risks at the correct service level allows a good mitigation strategy to be selected. The pharmaceutical industry can take advantage of existing risk management strategies that have already been tested in the finance and electronic commerce sectors. In this review, the business risks associated with the use of cloud computing are discussed, and mitigations achieved through knowledge from securing services for electronic commerce and from good IT practice are highlighted. PMID:20443161

  2. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook; Fetzer, Eric

    2014-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map-reduce-based algorithms. However, these problems are Data Intensive computing so the data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in a hybrid Cloud (private eucalyptus & public Amazon). Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the

  3. Privacy and Data Security under Cloud Computing Arrangements: The Legal Framework and Practical Do's and Don'ts

    ERIC Educational Resources Information Center

    Buckman, Joel; Gold, Stephanie

    2012-01-01

    This article outlines privacy and data security compliance issues facing postsecondary education institutions when they utilize cloud computing and concludes with a practical list of do's and don'ts. Cloud computing does not change an institution's privacy and data security obligations. It does involve reliance on a third party, which requires an…

  4. Application of Cloud Computing at KTU: MS Live@Edu Case

    ERIC Educational Resources Information Center

    Miseviciene, Regina; Budnikas, Germanas; Ambraziene, Danute

    2011-01-01

    Cloud computing is a significant alternative in today's educational landscape. The technology gives students and teachers the opportunity to quickly access various application platforms and resources through web pages on demand. Unfortunately, not all educational institutions have the ability to take full advantage of the newest…

  5. Cloud Computing and Validated Learning for Accelerating Innovation in IoT

    ERIC Educational Resources Information Center

    Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus

    2015-01-01

    Innovation in Internet of Things (IoT) requires more than just creation of technology and use of cloud computing or big data platforms. It requires accelerated commercialization or aptly called go-to-market processes. To successfully accelerate, companies need a new type of product development, the so-called validated learning process.…

  6. A Quantitative Study of the Relationship between Leadership Practice and Strategic Intentions to Use Cloud Computing

    ERIC Educational Resources Information Center

    Castillo, Alan F.

    2014-01-01

    The purpose of this quantitative correlational cross-sectional research study was to examine a theoretical model consisting of leadership practice, attitudes of business process outsourcing, and strategic intentions of leaders to use cloud computing and to examine the relationships between each of the variables respectively. This study…

  7. A City Parking Integration System Combined with Cloud Computing Technologies and Smart Mobile Devices

    ERIC Educational Resources Information Center

    Yeh, Her-Tyan; Chen, Bing-Chang; Wang, Bo-Xun

    2016-01-01

    The current study applied cloud computing technology and smart mobile devices combined with a streaming server for parking lots to plan a city parking integration system. It is also equipped with a parking search system, parking navigation system, parking reservation service, and car retrieval service. With this system, users can quickly find…

  8. The Benefits & Drawbacks of Integrating Cloud Computing and Interactive Whiteboards in Teacher Preparation

    ERIC Educational Resources Information Center

    Blue, Elfreda; Tirotta, Rose

    2011-01-01

    Twenty-first century technology has changed the way tools are used to support and enhance learning and instruction. Cloud computing and interactive whiteboards make it possible for learners to interact, simulate, collaborate, and document learning experiences and real-world problem-solving. This article discusses how various technologies (blogs,…

  9. Factors Influencing F/OSS Cloud Computing Software Product Success: A Quantitative Study

    ERIC Educational Resources Information Center

    Letort, D. Brian

    2012-01-01

    Cloud Computing introduces a new business operational model that allows an organization to shift information technology consumption from traditional capital expenditure to operational expenditure. This shift introduces challenges from both the adoption and creation vantage. This study evaluates factors that influence Free/Open Source Software…

  10. Risks and Crises for Healthcare Providers: The Impact of Cloud Computing

    PubMed Central

    Glasberg, Ronald; Hartmann, Michael; Tamm, Gerrit

    2014-01-01

    We analyze risks and crises for healthcare providers and discuss the impact of cloud computing in such scenarios. The analysis is conducted in a holistic way, taking into account organizational and human aspects, clinical, IT-related, and utilities-related risks as well as incorporating the view of the overall risk management. PMID:24707207

  11. Risks and crises for healthcare providers: the impact of cloud computing.

    PubMed

    Glasberg, Ronald; Hartmann, Michael; Draheim, Michael; Tamm, Gerrit; Hessel, Franz

    2014-01-01

    We analyze risks and crises for healthcare providers and discuss the impact of cloud computing in such scenarios. The analysis is conducted in a holistic way, taking into account organizational and human aspects, clinical, IT-related, and utilities-related risks as well as incorporating the view of the overall risk management. PMID:24707207

  12. Cloud object store for archive storage of high performance computing data using decoupling middleware

    SciTech Connect

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
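
    The sketch below is a heavily simplified stand-in for the idea in the record above: take checkpoint files produced by a parallel job, turn each into a named object, and hand the objects to a cloud object store. The in-memory dictionary stands in for an S3-style service, and the directory layout and key scheme are assumptions; the actual system interposes a log-structured file system (PLFS) middleware, optionally on burst buffer nodes.

      # Toy middleware step: convert per-rank checkpoint files into named objects
      # and hand them to a (stand-in) cloud object store.
      import os, hashlib

      object_store = {}   # stand-in for the cloud object storage system

      def archive_checkpoints(checkpoint_dir, job_id):
          """Convert each checkpoint file into an object keyed by job and file name."""
          for name in sorted(os.listdir(checkpoint_dir)):
              path = os.path.join(checkpoint_dir, name)
              with open(path, "rb") as f:
                  data = f.read()
              key = f"{job_id}/{name}"
              object_store[key] = {"data": data,
                                   "etag": hashlib.md5(data).hexdigest()}
          return list(object_store)

      # Usage (assuming ./ckpt holds per-rank checkpoint files such as rank_0.ckpt):
      # print(archive_checkpoints("./ckpt", job_id="job-42"))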

  13. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  14. Seamless personal health information system in cloud computing.

    PubMed

    Chung, Wan-Young; Fong, Ee May

    2014-01-01

    Noncontact ECG measurement has gained popularity because it is noninvasive and convenient for daily use. This approach does not require any direct contact between the patient's skin and the sensor for physiological signal measurement. The noncontact ECG measurement is integrated with a mobile healthcare system for health status monitoring. The mobile phone acts as the personal health information system, displaying health status and tracking body mass index (BMI). It also plays an important role as medical guidance, providing a medical knowledge database including a symptom checker and health and fitness guidance. At the same time, the system features some unique medical functions that cater to the daily needs of patients or users, including regular medication reminders, alert alarms, medical guidance, and appointment scheduling. Lastly, we demonstrate the mobile healthcare system with a web application for extended use, so that health data are stored in the cloud on a web server and web database. This allows easy remote health status monitoring and thereby promotes a cost-effective personal healthcare system. PMID:25570784

  15. Lost in Cloud

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Shetye, Sandeep D.; Chilukuri, Sri; Sturken, Ian

    2012-01-01

    Cloud computing can reduce cost significantly because businesses can share computing resources. In recent years Small and Medium Businesses (SMBs) have used the Cloud effectively for cost saving and for sharing IT expenses. With the success of SMBs, many perceive that larger enterprises ought to move into the Cloud environment as well. Government agencies' stove-piped environments are being considered as candidates for potential use of the Cloud, either as an enterprise entity or as pockets of small communities. Cloud Computing is the delivery of computing as a service rather than as a product, whereby shared resources, software, and information are provided to computers and other devices as a utility over a network. Underneath the offered services there exists a modern infrastructure, the cost of which is often spread across its services or its investors. As NASA is considered an enterprise-class organization, a shift has been occurring, as in other enterprises, toward perceiving its IT services as candidates for Cloud services. This paper discusses market trends in cloud computing from an enterprise angle and then addresses the topic of Cloud Computing for NASA in two possible forms. First, in the form of a public Cloud to support it as an enterprise, as well as to share it with commercial users and the public at large. Second, as a private Cloud wherein the infrastructure is operated solely for NASA, whether managed internally or by a third party and hosted internally or externally. The paper addresses the strengths and weaknesses of both the public and private Cloud paradigms, in both internally and externally operated settings. The content of the paper is from a NASA perspective but is applicable to any large enterprise with thousands of employees and contractors.

  16. EduCloud: PaaS versus IaaS Cloud Usage for an Advanced Computer Science Course

    ERIC Educational Resources Information Center

    Vaquero, L. M.

    2011-01-01

    The cloud has become a widely used term in academia and the industry. Education has not remained unaware of this trend, and several educational solutions based on cloud technologies are already in place, especially for software as a service cloud. However, an evaluation of the educational potential of infrastructure and platform clouds has not…

  17. A Tale of Two Clouds

    ERIC Educational Resources Information Center

    Gray, Terry

    2010-01-01

    The University of Washington (UW) adopted a dual-provider cloud-computing strategy, focusing initially on software as a service. The original project--to replace an obsolete alumni e-mail system--resulted in a cloud solution that soon grew to encompass the entire campus community. The policies and contract terms UW developed, focusing on…

  18. Computer Simulations as a Teaching Tool in Community Colleges

    ERIC Educational Resources Information Center

    Grimm, Floyd M., III

    1978-01-01

    Describes the implementation of a computer assisted instruction program at Harford Community College. Eight different biology simulation programs are used covering topics in ecology, genetics, biochemistry, and sociobiology. (MA)

  19. Canadian Community College Computer Usage Survey, May 1983.

    ERIC Educational Resources Information Center

    Gee, Michael Dennis

    This survey was conducted to provide information on the level of computer usage in Canadian community colleges. A 19-question form was mailed to the deans of instruction in 175 Canadian public community colleges identified as such by Statistics Canada. Of these, 111 colleges returned their surveys (a 63% response rate), and the results were…

  20. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    NASA Astrophysics Data System (ADS)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to a TV audience with various video format requirements, with minimal usage of resources for transcoding both at the reporter's end and at the cloud infrastructure end.
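
    A toy version of the benchmark-driven allocation described above is sketched below: look up a measured transcoding speed for an (input profile, output profile) pair and size the vCPU allocation so the job keeps up with real time. The benchmark figures, profile names, and headroom factor are invented placeholders, not the paper's measurements.

      # Toy benchmark-driven sizing of a real-time transcoding allocation.
      import math

      # measured transcode speed in frames/second per vCPU for each profile pair
      BENCHMARK = {
          ("1080p_h264", "720p_h264"): 45.0,
          ("1080p_h264", "480p_h264"): 80.0,
          ("4k_hevc",    "1080p_h264"): 12.0,
      }

      def vcpus_needed(src, dst, input_fps, headroom=1.2):
          """Smallest vCPU count that transcodes in real time with some headroom."""
          per_vcpu = BENCHMARK[(src, dst)]
          return math.ceil(input_fps * headroom / per_vcpu)

      print(vcpus_needed("4k_hevc", "1080p_h264", input_fps=30))   # -> 3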

  1. Survey of Storage and Fault Tolerance Strategies Used in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Ericson, Kathleen; Pallickara, Shrideep

    Cloud computing has gained significant traction in recent years. Companies such as Google, Amazon and Microsoft have been building massive data centers over the past few years. Spanning geographic and administrative domains, these data centers tend to be built out of commodity desktops with the total number of computers managed by these companies being in the order of millions. Additionally, the use of virtualization allows a physical node to be presented as a set of virtual nodes resulting in a seemingly inexhaustible set of computational resources. By leveraging economies of scale, these data centers can provision cpu, networking, and storage at substantially reduced prices which in turn underpins the move by many institutions to host their services in the cloud.

  2. The Frontier Newspaper Community: A Computer Profile.

    ERIC Educational Resources Information Center

    Cloud, Barbara

    Relying on Daniel Boorstin's argument that the newspaper was at least one of the first institutions in a frontier community, a study examined the 1880 United States Census of eight western territories to determine the number of newspapers and the ratio of newspapers to population in frontier counties. Three groups were examined: counties that had…

  3. Acceleration and novelty: community restoration speeds recovery and transforms species composition in Andean cloud forest.

    PubMed

    Wilson, Sarah Jane; Rhemtulla, Jeanine M

    2016-01-01

    Community-based tropical forest restoration projects, often promoted as a win-win solution for local communities and the environment, have increased dramatically in number in the past decade. Many such projects are underway in Andean cloud forests, which, given their extremely high biodiversity and history of extensive clearing, are understudied. This study investigates the efficacy of community-based tree-planting projects to accelerate cloud forest recovery, as compared to unassisted natural regeneration. This study takes place in northwest Andean Ecuador, where the majority of the original, highly diverse cloud forests have been cleared, in five communities that initiated tree-planting projects to restore forests in 2003. In 2011, we identified tree species along transects in planted forests (n = 5), naturally regenerating forests (n = 5), and primary forests (n = 5). We also surveyed 120 households about their restoration methods, tree preferences, and forest uses. We found that tree diversity was higher in planted than in unplanted secondary forest, but both were less diverse than primary forests. Ordination analysis showed that all three forests had distinct species compositions, although planted forests shared more species with primary forests than did unplanted forests. Planted forests also contained more animal-dispersed species in both the planted canopy and in the unplanted, regenerating understory than unplanted forests, and contained the highest proportion of species with use value for local people. While restoring forest increased biodiversity and accelerated forest recovery, restored forests may also represent novel ecosystems that are distinct from the region's previous ecosystems and, given their usefulness to people, are likely to be more common in the future. PMID:27039520

  4. The thinking of Cloud computing in the digital construction of the oil companies

    NASA Astrophysics Data System (ADS)

    CaoLei, Qizhilin; Dengsheng, Lei

    In order to speed up the digital construction of oil companies and to enhance productivity and decision-support capabilities, while avoiding the waste and duplicated development and investment of the original approach to digitalization, this paper presents a cloud-based model for the digital construction of oil companies. National oil companies connect, over a private network, their cloud data and service center equipment into a single integrated cloud system; each department can then provision its own virtual service center according to its needs, providing strong services and computing power for the oil companies.

  5. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems are Data Intensive computing so the data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will

  6. Designing a Curriculum for Computer Students in the Community College.

    ERIC Educational Resources Information Center

    Kolatis, Maria

    An overview is provided of the institutional and technological factors to be considered in designing or updating a computer science curriculum at the community college level. After underscoring the importance of the computer in today's society, the paper identifies and discusses the following considerations in curriculum design: (1) the mission of…

  7. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2013-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems are Data Intensive computing so the data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will

  8. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt automated reverse engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, cloud computing is advocated as a promising solution; the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms that infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with the desired behaviors and that the computation time can be greatly reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel
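
    The toy sketch below illustrates, in Python, the parallel pattern described in this record, not the paper's hybrid GA-PSO or its Hadoop implementation: the expensive fitness evaluations of a population-based search are farmed out as a map step and the selection is done as a reduce step. The linear "network" model, population size and mutation scale are all invented for the example.

      # A toy illustration (not the paper's GA-PSO) of the parallel pattern it
      # describes: the fitness of each candidate network parameterisation is
      # evaluated in a map step and the best candidates are kept in a reduce step.
      from multiprocessing import Pool
      import numpy as np

      rng = np.random.default_rng(0)
      TRUE_W = rng.normal(size=(4, 4))              # hypothetical interaction weights
      X = rng.normal(size=(50, 4))                  # toy expression profiles
      Y = X @ TRUE_W                                # "observed" responses

      def fitness(candidate):
          # Map step: mean squared error of one candidate weight matrix.
          w = candidate.reshape(4, 4)
          return float(((X @ w - Y) ** 2).mean())

      def evolve(pop, scores, keep=10):
          # Reduce/selection step: keep the best candidates and mutate them.
          parents = pop[np.argsort(scores)[:keep]]
          return parents[rng.integers(0, keep, size=len(pop))] \
                 + rng.normal(scale=0.1, size=pop.shape)

      if __name__ == "__main__":
          pop = rng.normal(size=(100, 16))          # population of flattened matrices
          with Pool() as pool:
              for generation in range(20):
                  scores = np.asarray(pool.map(fitness, list(pop)))
                  pop = evolve(pop, scores)
              best = min(pool.map(fitness, list(pop)))
          print("best fitness after 20 generations:", best)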

  9. Compute unified device architecture (CUDA)-based parallelization of WRF Kessler cloud microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Wang, Jun; Allen Huang, H.-L.; Goldberg, Mitchell D.

    2013-03-01

    In recent years, graphics processing units (GPUs) have emerged as a low-cost, low-power and a very high performance alternative to conventional central processing units (CPUs). The latest GPUs offer a speedup of two-to-three orders of magnitude over CPU for various science and engineering applications. The Weather Research and Forecasting (WRF) model is the latest-generation numerical weather prediction model. It has been designed to serve both operational forecasting and atmospheric research needs. It proves useful for a broad spectrum of applications for domain scales ranging from meters to hundreds of kilometers. WRF computes an approximate solution to the differential equations which govern the air motion of the whole atmosphere. Kessler microphysics module in WRF is a simple warm cloud scheme that includes water vapor, cloud water and rain. Microphysics processes which are modeled are rain production, fall and evaporation. The accretion and auto-conversion of cloud water processes are also included along with the production of cloud water from condensation. In this paper, we develop an efficient WRF Kessler microphysics scheme which runs on Graphics Processing Units (GPUs) using the NVIDIA Compute Unified Device Architecture (CUDA). The GPU-based implementation of Kessler microphysics scheme achieves a significant speedup of 70× over its CPU based single-threaded counterpart. When a 4 GPU system is used, we achieve an overall speedup of 132× as compared to the single thread CPU version.
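
    The GPU code itself is not reproduced here; as a rough guide to what the Kessler warm-rain processes named above look like, the following vectorized NumPy sketch applies the commonly quoted autoconversion and accretion rate forms to a column of mixing ratios. The coefficients (k1, k2, the autoconversion threshold) are the textbook values and should be treated as assumptions, and condensation, evaporation and sedimentation are omitted.

      # A simplified, vectorized sketch of the warm-rain part of a Kessler-type
      # scheme (autoconversion of cloud water to rain, and accretion of cloud water
      # by rain). This is an illustration, not the WRF or CUDA code; the rate
      # coefficients and threshold are assumed textbook values.
      import numpy as np

      def kessler_warm_rain_step(qc, qr, dt, k1=1.0e-3, k2=2.2, qc0=0.5e-3):
          """Advance cloud (qc) and rain (qr) water mixing ratios [kg/kg] by dt [s]."""
          auto = k1 * np.maximum(qc - qc0, 0.0)            # autoconversion above threshold
          accr = k2 * qc * np.maximum(qr, 0.0) ** 0.875    # rain collecting cloud water
          dq = np.minimum((auto + accr) * dt, qc)          # never remove more qc than exists
          return qc - dq, qr + dq

      if __name__ == "__main__":
          # One grid column with a cloudy layer; values are illustrative only.
          qc = np.array([0.0, 0.8e-3, 1.2e-3, 0.9e-3, 0.0])
          qr = np.zeros_like(qc)
          for _ in range(60):                              # one minute with dt = 1 s
              qc, qr = kessler_warm_rain_step(qc, qr, dt=1.0)
          print("qc:", qc)
          print("qr:", qr)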

  10. A Cloud-Computing Service for Environmental Geophysics and Seismic Data Processing

    NASA Astrophysics Data System (ADS)

    Heilmann, B. Z.; Maggi, P.; Piras, A.; Satta, G.; Deidda, G. P.; Bonomi, E.

    2012-04-01

    Cloud computing is establishing itself worldwide as a new high-performance computing paradigm that offers formidable possibilities to industry and science. The presented cloud-computing portal, part of the Grida3 project, provides an innovative approach to seismic data processing by combining open-source, state-of-the-art processing software and cloud-computing technology, making possible the effective use of distributed computation and data management with administratively distant resources. We replaced the demanding user-side hardware and software requirements with remote access to high-performance grid-computing facilities. As a result, data processing can be done quasi in real time, controlled ubiquitously via the Internet through a user-friendly web-browser interface. Besides the obvious advantages over locally installed seismic-processing packages, the presented cloud-computing solution creates completely new possibilities for scientific education, collaboration, and presentation of reproducible results. The web-browser interface of our portal is based on the commercially supported grid portal EnginFrame, an open framework based on Java, XML, and Web Services. We selected the hosted applications with the objective of allowing the construction of typical 2D time-domain seismic-imaging workflows as used for environmental studies and, originally, for hydrocarbon exploration. For data visualization and pre-processing, we chose the free software package Seismic Un*x. We ported tools for trace balancing, amplitude gaining, muting, frequency filtering, dip filtering, deconvolution and rendering, with a customized choice of options, as services onto the cloud-computing portal. For structural imaging and velocity-model building, we developed a grid version of the Common-Reflection-Surface stack, a data-driven imaging method that requires no user interaction at run time, such as manual picking in prestack volumes or velocity spectra. Due to its high level of automation, CRS stacking

  11. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    PubMed Central

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

  12. Optimization of knowledge sharing through multi-forum using cloud computing architecture

    NASA Astrophysics Data System (ADS)

    Madapusi Vasudevan, Sriram; Sankaran, Srivatsan; Muthuswamy, Shanmugasundaram; Ram, N. Sankar

    2011-12-01

    Knowledge sharing is done through various knowledge-sharing forums, which requires multiple logins through multiple browser instances. Here a single multi-forum knowledge-sharing concept is introduced that requires only one login session, enabling the user to connect to multiple forums and display the data in a single browser window. A few optimization techniques are also introduced to speed up access time using a cloud computing architecture.

  13. Cloud computing technology applied in healthcare for developing large scale flexible solutions.

    PubMed

    Lupşe, Oana Sorina; Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara

    2012-01-01

    The healthcare domain is an extremely important area in which vital information is needed in different locations. In healthcare there is a substantial exchange of information, since there are many departments to which a patient can be sent for investigation. In this regard, cloud computing is a technology that could really help by supporting flexibility, seamless care and cost reductions. PMID:22491119

  14. Use of cloud computing technology in natural hazard assessment and emergency management

    NASA Astrophysics Data System (ADS)

    Webley, P. W.; Dehn, J.

    2015-12-01

    During a natural hazard event, the most up-to-date data need to be in the hands of those on the front line. Decision support tools can be developed to provide access to pre-made outputs for quickly assessing the hazard and potential risk. However, with the ever-growing availability of new satellite data, as well as ground and airborne data generated in real time, there is a need to analyze these large volumes of data in an easy-to-access and effective environment. With the growth of cloud computing, where the analysis and visualization system can scale with the needs of the user, such facilities can be used to provide this real-time analysis. Think of a central command center uploading the data to the cloud computing system while researchers in the field connect to a web-based tool to view the newly acquired data. New data can be added by any user and then viewed instantly by anyone else in the organization through the cloud computing interface. This provides an ideal tool for collaborative data analysis, hazard assessment and decision making. We present the rationale for developing a cloud computing system and illustrate how such a tool can be developed for use in real-time environments. Users would have access to an interactive online image analysis tool without needing specific remote sensing software on their local system, increasing their understanding of the ongoing hazard and helping to mitigate its impact on the surrounding region.

  15. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    NASA Astrophysics Data System (ADS)

    Pratx, Guillem; Xing, Lei

    2011-12-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
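
    The MC321/Hadoop port is not reproduced here; the sketch below shows, in Python, the same map/reduce decomposition on a much simpler problem: each map task simulates a batch of photon histories in a homogeneous absorbing and isotropically scattering medium and scores absorbed weight into radial bins, and the reduce task sums the bins. The optical properties, bin sizes and photon counts are illustrative only.

      # A minimal sketch of the MapReduce decomposition described above (not the
      # MC321/Hadoop code): map tasks simulate independent photon histories and
      # score absorbed weight into radial bins; the reduce task sums the bins.
      from multiprocessing import Pool
      import numpy as np

      MU_A, MU_S = 1.0, 10.0          # absorption / scattering coefficients [1/cm] (assumed)
      N_BINS, DR = 50, 0.01           # radial bins of 0.01 cm

      def map_photons(seed, n_photons=2_000):
          # Map task: launch photons at the origin, propagate with implicit capture,
          # and deposit the absorbed weight of each step into a radial histogram.
          rng = np.random.default_rng(seed)
          absorbed = np.zeros(N_BINS)
          albedo = MU_S / (MU_A + MU_S)
          for _ in range(n_photons):
              pos, direction, weight = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
              while weight > 1e-4:
                  step = -np.log(rng.random()) / (MU_A + MU_S)
                  pos = pos + step * direction
                  r_bin = min(int(np.linalg.norm(pos) / DR), N_BINS - 1)
                  absorbed[r_bin] += weight * (1.0 - albedo)
                  weight *= albedo
                  # isotropic scattering (a simplification of the real phase function)
                  cos_t = 2.0 * rng.random() - 1.0
                  phi = 2.0 * np.pi * rng.random()
                  sin_t = np.sqrt(1.0 - cos_t ** 2)
                  direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
          return absorbed

      def reduce_histograms(partials):
          # Reduce task: sum the per-task absorption histograms.
          return np.sum(partials, axis=0)

      if __name__ == "__main__":
          with Pool() as pool:
              partials = pool.map(map_photons, range(8))   # 8 independent map tasks
          total = reduce_histograms(partials)
          print("absorbed weight in first 5 radial bins:", total[:5])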

  16. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.

    PubMed

    Pratx, Guillem; Xing, Lei

    2011-12-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916

  17. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    PubMed Central

    Pratx, Guillem; Xing, Lei

    2011-01-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916

  18. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    PubMed Central

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs so that resources are shared effectively. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable constraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, and their independent tasks may execute on multiple VMs or on multiple cores of the same VM. The jobs also arrive during the run time of the server at varying random intervals and under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling, making cloud computing more efficient and improving user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparison with existing methods. PMID:26955656
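
    The sketch below illustrates the core idea named in this record, not the paper's exact algorithm: dependent, nonpreemptive tasks are placed, in dependency order, on the VM that yields the earliest expected completion time given each VM's capacity and each task's length. All VM speeds, task lengths and dependencies are invented for the example.

      # A minimal sketch (not the paper's algorithm) of capability-weighted placement
      # of nonpreemptive, dependent tasks: each task goes to the VM with the earliest
      # expected completion time given its processing capacity.
      from collections import deque

      vms = {"vm1": 1000, "vm2": 1500, "vm3": 500}      # capacity in MIPS (assumed)
      tasks = {                                          # length in MI, list of predecessors
          "t1": (4000, []), "t2": (6000, ["t1"]),
          "t3": (6000, ["t1"]), "t4": (3000, ["t2", "t3"]),
      }

      def schedule(tasks, vms):
          vm_free = {v: 0.0 for v in vms}                # time each VM becomes free
          finish = {}                                    # finish time of each task
          order = deque(sorted(tasks, key=lambda t: len(tasks[t][1])))  # simple layer order
          while order:
              t = order.popleft()
              length, preds = tasks[t]
              if any(p not in finish for p in preds):    # predecessors not done yet
                  order.append(t)
                  continue
              ready = max([finish[p] for p in preds], default=0.0)
              # pick the VM that finishes this task earliest, weighting by its capacity
              best = min(vms, key=lambda v: max(vm_free[v], ready) + length / vms[v])
              start = max(vm_free[best], ready)
              finish[t] = start + length / vms[best]
              vm_free[best] = finish[t]
              print(f"{t} -> {best}: start {start:.2f}s, finish {finish[t]:.2f}s")
          return max(finish.values())

      if __name__ == "__main__":
          print(f"makespan: {schedule(tasks, vms):.2f}s")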

  19. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    PubMed

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs so that resources are shared effectively. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable constraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, and their independent tasks may execute on multiple VMs or on multiple cores of the same VM. The jobs also arrive during the run time of the server at varying random intervals and under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling, making cloud computing more efficient and improving user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparison with existing methods. PMID:26955656

  20. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
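
    As an illustration of what exposing an EC2-compatible API buys, the snippet below shows standard boto3 client code pointed at a private-cloud endpoint; the endpoint URL, credentials, image ID and instance type are placeholders and are not the INFN-Torino configuration.

      # A minimal sketch of consuming an EC2-compatible endpoint (such as one a
      # private cloud can expose) with boto3. All identifiers below are placeholders.
      import boto3

      ec2 = boto3.client(
          "ec2",
          endpoint_url="https://cloud.example.org:8788/",   # hypothetical private-cloud endpoint
          region_name="site-local",
          aws_access_key_id="EXAMPLE_KEY",
          aws_secret_access_key="EXAMPLE_SECRET",
      )

      # Start one worker-node VM from a pre-built, contextualized image.
      response = ec2.run_instances(
          ImageId="ami-00000001",            # placeholder image identifier
          InstanceType="m1.medium",
          MinCount=1,
          MaxCount=1,
      )
      print("launched:", response["Instances"][0]["InstanceId"])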

  1. Mobile cloud-computing-based healthcare service by noncontact ECG monitoring.

    PubMed

    Fong, Ee-May; Chung, Wan-Young

    2013-01-01

    The noncontact electrocardiogram (ECG) measurement technique has gained popularity owing to its noninvasive nature and its convenience in daily life. This paper presents mobile cloud computing for a healthcare system in which a noncontact ECG measurement method is employed to capture biomedical signals from users. The healthcare service continuously collects biomedical signals from multiple locations. To observe and analyze the ECG signals in real time, a mobile device is used as a mobile monitoring terminal. In addition, a personalized healthcare assistant is installed on the mobile device; several healthcare features such as health status summaries, medication QR code scanning, and reminders are integrated into the mobile application. Health data are synchronized to the healthcare cloud computing service (Web server system and Web server dataset) to ensure seamless healthcare monitoring wherever a network connection is available. Together with a Web page application, medical data are easily accessed by medical professionals or family members. Web page performance evaluation was conducted to ensure minimal Web server latency. The system demonstrates better availability of off-site and up-to-the-minute patient data, which can help detect health problems early and keep elderly patients out of the emergency room, thus providing a better and more comprehensive healthcare cloud computing service. PMID:24316562

  2. Mobile Cloud-Computing-Based Healthcare Service by Noncontact ECG Monitoring

    PubMed Central

    Fong, Ee-May; Chung, Wan-Young

    2013-01-01

    The noncontact electrocardiogram (ECG) measurement technique has gained popularity owing to its noninvasive nature and its convenience in daily life. This paper presents mobile cloud computing for a healthcare system in which a noncontact ECG measurement method is employed to capture biomedical signals from users. The healthcare service continuously collects biomedical signals from multiple locations. To observe and analyze the ECG signals in real time, a mobile device is used as a mobile monitoring terminal. In addition, a personalized healthcare assistant is installed on the mobile device; several healthcare features such as health status summaries, medication QR code scanning, and reminders are integrated into the mobile application. Health data are synchronized to the healthcare cloud computing service (Web server system and Web server dataset) to ensure seamless healthcare monitoring wherever a network connection is available. Together with a Web page application, medical data are easily accessed by medical professionals or family members. Web page performance evaluation was conducted to ensure minimal Web server latency. The system demonstrates better availability of off-site and up-to-the-minute patient data, which can help detect health problems early and keep elderly patients out of the emergency room, thus providing a better and more comprehensive healthcare cloud computing service. PMID:24316562

  3. Cloud forest restoration for erosion control in a Kichwa community of the Ecuadorian central Andes Mountains

    NASA Astrophysics Data System (ADS)

    Backus, L.; Giordanengo, J.; Sacatoro, I.

    2013-12-01

    The Denver Professional Chapter of Engineers Without Borders (EWB) has begun conducting erosion control projects in the Kichwa communities of Malingua Pamba in the Andes Mountains south of Quito, Ecuador. In many high elevation areas in this region, erosion of volcanic soils on steep hillsides (i.e., slopes > 40%) is severe and often associated with roads, water supply systems, and loss of native cloud forests followed by burning and cultivation of food crops. Following a 2011 investigation of over 75 erosion sites, the multidisciplinary Erosion Control team traveled to Malingua Pamba in October 2012 to conduct final design and project implementation at 5 sites. In partnership with the local communities, we installed woody cloud forest species, grass (sig-sig) contour hedges, erosion matting, and rock structures (toe walls, plunge pools, bank armoring, cross vanes, contour infiltration ditches, etc.) to reduce incision rates and risk of slump failures, facilitate aggradation, and hasten revegetation. In keeping with the EWB goal of project sustainability, we used primarily locally available resources. High school students of the community grew 5000 native trees and some naturalized shrubs in a nursery started by the school principal, hand weavers produced jute erosion mats, and rocks were provided by a nearby quarry. Where possible, local rock was harvested from landslide areas and other local erosion features. Based on follow-up reports and photographs from the community and EWB travelers, the approach of using locally available materials installed by the community is successful; plants are growing well and erosion control structures have remained in place throughout the November to April rainy season. The community has continued planting native vegetation at several additional erosion sites. Formal monitoring will be conducted in October 2013, followed by analysis of data to determine if induced meandering and other low-maintenance erosion control techniques are working

  4. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    PubMed Central

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
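
    The hybrid PSO itself is not reproduced here; the short sketch below illustrates the DVFS trade-off underlying this kind of energy-aware scheduling (a task run at a lower voltage/frequency level takes longer but uses less dynamic energy) and the weighted-sum fitness such schedulers optimize. The voltage/frequency pairs, capacitance and weights are invented for the example, and task dependencies are ignored.

      # A minimal sketch of the DVFS trade-off behind energy-aware scheduling:
      # dynamic power scales roughly with C * V^2 * f, so lower levels trade run
      # time for energy. Constants and the weighted-sum fitness are illustrative.
      DVFS_LEVELS = [          # (voltage [V], relative frequency), hypothetical processor
          (1.2, 1.00),
          (1.0, 0.80),
          (0.8, 0.60),
      ]
      CAPACITANCE = 1.0        # effective switched capacitance (arbitrary units)

      def task_cost(base_time, level):
          volt, freq = DVFS_LEVELS[level]
          time = base_time / freq                         # slower clock -> longer run time
          energy = CAPACITANCE * volt ** 2 * freq * time  # dynamic energy over the run
          return time, energy

      def fitness(task_times, levels, w_time=0.5, w_energy=0.5):
          # Weighted-sum objective over a set of independent tasks (a simplification
          # of a full workflow with dependencies).
          costs = [task_cost(t, l) for t, l in zip(task_times, levels)]
          makespan = sum(t for t, _ in costs)
          energy = sum(e for _, e in costs)
          return w_time * makespan + w_energy * energy

      if __name__ == "__main__":
          tasks = [10.0, 20.0, 5.0]
          for levels in [(0, 0, 0), (1, 1, 1), (0, 2, 1)]:
              print(levels, "->", round(fitness(tasks, levels), 2))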

  5. On Using Home Networks and Cloud Computing for a Future Internet of Things

    NASA Astrophysics Data System (ADS)

    Niedermayer, Heiko; Holz, Ralph; Pahl, Marc-Oliver; Carle, Georg

    In this position paper we state four requirements for a Future Internet and sketch our initial concept. The requirements are: (1) more comfort, (2) integration of home networks, (3) resources such as service clouds in the network, and (4) access anywhere on any machine. The Future Internet needs future quality and future comfort. There need to be new possibilities for everyone. Our focus is on the higher layers and is related to the many overlay proposals; we consider them to run on top of a basic Future Internet core. A new user experience means including all user devices. Home networks and services should be a fundamental part of the Future Internet. Home networks extend access and allow interaction with the environment. Cloud Computing can provide reliable resources beyond local boundaries. For access anywhere, we also need secure storage for data and profiles in the network, in particular for access with non-personal devices (Internet terminal, ticket machine, ...).

  6. Two-Level Verification of Data Integrity for Data Storage in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping

    Data storage in cloud computing can save capital expenditure and relieve the users' burden of storage management. As stored files may be lost or corrupted, many researchers focus on the verification of data integrity. However, massive numbers of users often bring large numbers of verification tasks for the auditor. Moreover, users also need to pay an extra fee for these verification tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is that users routinely verify the data integrity, and the auditor arbitrates challenges between the user and the cloud provider according to the MACs and ϕ values. Extensive performance simulations show that the proposed scheme markedly decreases the auditor's verification tasks and the ratio of wrong arbitrations.
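
    The paper's MAC and ϕ construction is not specified in this record; the sketch below only illustrates the first (user-side) level of such a scheme with standard HMACs: per-block tags are computed before upload, retrieved blocks are re-checked routinely, and only mismatches are escalated as challenges for the auditor to arbitrate. All names are illustrative.

      # A minimal sketch of the user-side verification level described above:
      # per-block HMACs are computed before upload and re-checked on retrieval,
      # and only mismatches are escalated as challenges for the auditor.
      import hmac, hashlib, os

      KEY = os.urandom(32)                       # user's secret MAC key

      def tag(block: bytes) -> bytes:
          return hmac.new(KEY, block, hashlib.sha256).digest()

      def upload(blocks):
          # Store blocks "in the cloud" and keep only the small tags locally.
          cloud = list(blocks)                   # stand-in for remote storage
          tags = [tag(b) for b in blocks]
          return cloud, tags

      def routine_check(cloud, tags):
          # First level: the user verifies retrieved blocks against local tags.
          challenges = [i for i, (b, t) in enumerate(zip(cloud, tags))
                        if not hmac.compare_digest(tag(b), t)]
          return challenges                      # second level: auditor arbitrates these

      if __name__ == "__main__":
          blocks = [f"block-{i}".encode() for i in range(5)]
          cloud, tags = upload(blocks)
          cloud[3] = b"corrupted"                # simulate loss/corruption at the provider
          print("blocks to challenge:", routine_check(cloud, tags))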

  7. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.

    PubMed

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). The algorithm applies a predecessor-task layer priority strategy to handle the constraint relations among task nodes. The strategy assigns a priority value to every task node based on its scheduling order as determined by the constraint relations among task nodes, and the task node list is generated from these priority values. To address the scheduling order of task nodes that have the same priority value, a dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduled task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node with the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing times of task nodes on the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meets a high-quality performance objective. PMID:27490901
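
    The sketch below illustrates, in Python, the two ordering ideas named in this record rather than the DDEP algorithm itself: tasks are ranked by predecessor layer, and ties are broken by the longest downstream path of computation plus communication cost (a static stand-in for the paper's dynamically recomputed essential path). The task graph and costs are invented for the example.

      # A minimal sketch of layer-then-essential-path ordering (not the DDEP code).
      from functools import lru_cache

      comp = {"a": 3, "b": 2, "c": 4, "d": 1, "e": 2}        # computation costs (made up)
      succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"], "e": []}
      comm = {("a", "b"): 1, ("a", "c"): 2, ("b", "d"): 1, ("c", "d"): 3, ("d", "e"): 1}

      preds = {t: [] for t in comp}                          # invert the successor lists
      for u, vs in succ.items():
          for v in vs:
              preds[v].append(u)

      def layer(task):
          # Predecessor-task layer: 0 for entry tasks, else 1 + max layer of predecessors.
          ps = preds[task]
          return 0 if not ps else 1 + max(layer(p) for p in ps)

      @lru_cache(maxsize=None)
      def essential_path(task):
          # Longest downstream path of computation + communication cost from this task.
          branches = [comm[(task, s)] + essential_path(s) for s in succ[task]]
          return comp[task] + (max(branches) if branches else 0)

      if __name__ == "__main__":
          order = sorted(comp, key=lambda t: (layer(t), -essential_path(t)))
          print("schedule order:", order)
          for t in order:
              print(f"{t}: layer {layer(t)}, essential path {essential_path(t)}")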

  8. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path

    PubMed Central

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). The algorithm applies a predecessor-task layer priority strategy to handle the constraint relations among task nodes. The strategy assigns a priority value to every task node based on its scheduling order as determined by the constraint relations among task nodes, and the task node list is generated from these priority values. To address the scheduling order of task nodes that have the same priority value, a dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduled task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node with the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing times of task nodes on the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meets a high-quality performance objective. PMID:27490901

  9. Imprinting Community College Computer Science Education with Software Engineering Principles

    ERIC Educational Resources Information Center

    Hundley, Jacqueline Holliday

    2012-01-01

    Although the two-year curriculum guide includes coverage of all eight software engineering core topics, the computer science courses taught in Alabama community colleges limit student exposure to the programming, or coding, phase of the software development lifecycle and offer little experience in requirements analysis, design, testing, and…

  10. Master Plan for the Virginia Community College System Computing Services.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Community Colleges, Richmond.

    This master plan sets forth a general strategy for providing administrative and academic computing services and satisfying the data processing requirements for the Virginia Community College System (VCCS) during the 1980's. Following an executive summary, chapter 1 sets forth the purpose of the plan and outlines the planning processes used.…

  11. Urban Senegal and Rural Gambia: Computer and Community Education Programs.

    ERIC Educational Resources Information Center

    Pagano, Alicia I.

    1986-01-01

    Describes two educational programs in West Africa: a (LOGO) computer education program in Senegal, and a "self-improvement" community education program in The Gambia. While these two programs are diverse in geographic location, population, and curriculum materials, both are action-oriented, learner controlled, and consider the learner's broader…

  12. Imprinting Community College Computer Science Education with Software Engineering Principles

    NASA Astrophysics Data System (ADS)

    Hundley, Jacqueline Holliday

    Although the two-year curriculum guide includes coverage of all eight software engineering core topics, the computer science courses taught in Alabama community colleges limit student exposure to the programming, or coding, phase of the software development lifecycle and offer little experience in requirements analysis, design, testing, and maintenance. We proposed that some software engineering principles can be incorporated into the introductory-level of the computer science curriculum. Our vision is to give community college students a broader exposure to the software development lifecycle. For those students who plan to transfer to a baccalaureate program subsequent to their community college education, our vision is to prepare them sufficiently to move seamlessly into mainstream computer science and software engineering degrees. For those students who plan to move from the community college to a programming career, our vision is to equip them with the foundational knowledge and skills required by the software industry. To accomplish our goals, we developed curriculum modules for teaching seven of the software engineering knowledge areas within current computer science introductory-level courses. Each module was designed to be self-supported with suggested learning objectives, teaching outline, software tool support, teaching activities, and other material to assist the instructor in using it.

  13. Computer-Assisted Community Planning and Decision Making.

    ERIC Educational Resources Information Center

    College of the Atlantic, Bar Harbor, ME.

    The College of the Atlantic (COA) developed a broad-based, interdisciplinary curriculum in ecological policy and community planning and decision-making that incorporates two primary computer-based tools: ARC/INFO Geographic Information System (GIS) and STELLA, a systems-dynamics modeling tool. Students learn how to use and apply these tools…

  14. Astronomy and Computing: A New Journal for the Astronomical Computing Community

    NASA Astrophysics Data System (ADS)

    Mann, R. G.; Accomazzi, A.; Budavári, T.; Fluke, C.; Gray, N.; O'Mullane, W.; Wicenec, A.; Wise, M.

    2013-10-01

    We introduce Astronomy and Computing (A&C), a new, peer-reviewed journal for the expanding community of people whose work focuses on the application of computer science and information technology within astronomy, rather than on astronomical research per se. A&C arose from a BoF discussion at the ADASS XX conference in Boston, and from the ADASS community will come many of the people who will write, referee and read the papers published in A&C. In this paper, we outline the aims and scope of A&C, together with a summary of the types of paper we envisage it publishing and the criteria that will be used to referee them, and we invite the ADASS community to help us develop these in more detail and to shape a journal that serves the astronomical computing community well.

  15. An Assessment of Security Vulnerabilities Comprehension of Cloud Computing Environments: A Quantitative Study Using the Unified Theory of Acceptance and Use

    ERIC Educational Resources Information Center

    Venkatesh, Vijay P.

    2013-01-01

    The current computing landscape owes its roots to the birth of hardware and software technologies from the 1940s and 1950s. Since then, the advent of mainframes, miniaturized computing, and internetworking has given rise to the now prevalent cloud computing era. In the past few months just after 2010, cloud computing adoption has picked up pace…

  16. DInSAR time series generation within a cloud computing environment: from ERS to Sentinel-1 scenario

    NASA Astrophysics Data System (ADS)

    Casu, Francesco; Elefante, Stefano; Imperatore, Pasquale; Lanari, Riccardo; Manunta, Michele; Zinno, Ivana; Mathot, Emmanuel; Brito, Fabrice; Farres, Jordi; Lengert, Wolfgang

    2013-04-01

    One of the techniques that will strongly benefit from the advent of the Sentinel-1 system is Differential SAR Interferometry (DInSAR), which has been demonstrated to be an effective tool for detecting and monitoring ground displacements with centimetre accuracy. The geoscience communities (volcanology, seismicity, …), as well as those related to hazard monitoring and risk mitigation, make extensive use of the DInSAR technique and will take advantage of the huge amount of SAR data acquired by Sentinel-1. Indeed, such information will permit the generation of Earth's surface displacement maps and time series over both large areas and long time spans. However, the issue of managing, processing and analysing the large Sentinel data stream is envisaged by the scientific community to be a major bottleneck, particularly during crisis phases. The emerging need to create a common ecosystem in which data, results and processing tools are shared is envisaged to be a successful way to address this problem and to contribute to the spreading of information and knowledge. The Supersites initiative as well as the ESA SuperSites Exploitation Platform (SSEP) and the ESA Cloud Computing Operational Pilot (CIOP) projects provide effective answers to this need and are pushing towards the development of such an ecosystem. It is clear that all current and existing tools for querying, processing and analysing SAR data are required not only to be updated to manage the large data stream of the Sentinel-1 satellite, but also to be reorganized to reply quickly to simultaneous and highly demanding user requests, mainly during emergency situations. This translates into the automatic and unsupervised processing of large amounts of data as well as the availability of scalable, widely accessible and high performance computing capabilities. The cloud computing environment permits all of these objectives to be achieved, particularly in case of spike and peak

  17. Comparison of ice cloud properties simulated by the Community Atmosphere Model (CAM5) with in-situ observations

    NASA Astrophysics Data System (ADS)

    Eidhammer, T.; Morrison, H.; Bansemer, A.; Gettelman, A.; Heymsfield, A. J.

    2014-09-01

    Detailed measurements of ice crystals in cirrus clouds were used to compare with results from the Community Atmospheric Model Version 5 (CAM5) global climate model. The observations are from two different field campaigns with contrasting conditions: Atmospheric Radiation Measurements Spring Cloud Intensive Operational Period in 2000 (ARM-IOP), which was characterized primarily by midlatitude frontal clouds and cirrus, and Tropical Composition, Cloud and Climate Coupling (TC4), which was dominated by anvil cirrus. Results show that the model typically overestimates the slope parameter of the exponential size distributions of cloud ice and snow, while the variation with temperature (height) is comparable. The model also overestimates the ice/snow number concentration (0th moment of the size distribution) and underestimates higher moments (2nd through 5th), but compares well with observations for the 1st moment. Overall the model shows better agreement with observations for TC4 than for ARM-IOP with regard to the moments. The mass-weighted terminal fall speed is lower in the model compared to observations for both ARM-IOP and TC4, which is partly due to the overestimation of the size distribution slope parameter. Sensitivity tests with modification of the threshold size for cloud ice to snow autoconversion (Dcs) do not show noticeable improvement in modeled moments, slope parameter and mass-weighted fall speed compared to observations. Further, there is considerable sensitivity of the cloud radiative forcing to Dcs, consistent with previous studies, but no value of Dcs improves modeled cloud radiative forcing compared to measurements. Since the autoconversion of cloud ice to snow using the threshold size Dcs has little physical basis, a future improvement combining cloud ice and snow into a single category, eliminating the need for autoconversion, is suggested.
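
    As a short numerical aside on why an overestimated slope parameter depresses the higher moments discussed above: for an exponential size distribution N(D) = N0 exp(-λD), the nth moment is M_n = N0 n! / λ^(n+1), so larger λ reduces high-order moments much more strongly than low-order ones. The sketch below checks this closed form against numerical integration; the N0 and λ values are illustrative only.

      # Moments of an exponential size distribution N(D) = N0 * exp(-lam * D):
      # M_n = N0 * n! / lam**(n+1), so doubling lam cuts M_0 by 2x but M_5 by 64x.
      # Values are illustrative, not CAM5 or campaign numbers.
      import math
      import numpy as np

      def moment_closed_form(n, N0, lam):
          return N0 * math.factorial(n) / lam ** (n + 1)

      def moment_numerical(n, N0, lam):
          D = np.linspace(0.0, 50.0 / lam, 200_000)
          f = D ** n * N0 * np.exp(-lam * D)
          return float(np.sum((f[:-1] + f[1:]) * np.diff(D)) / 2.0)   # trapezoid rule

      if __name__ == "__main__":
          N0 = 1.0e6                                  # intercept parameter (illustrative)
          for lam in (2.0e3, 4.0e3):                  # slope parameter [1/m]
              moments = [moment_closed_form(n, N0, lam) for n in range(6)]
              print(f"lambda = {lam:.0f}/m:", ["%.3e" % m for m in moments])
          # sanity check of the closed form against numerical integration
          check = moment_numerical(3, N0, 2.0e3) / moment_closed_form(3, N0, 2.0e3)
          assert abs(check - 1.0) < 1e-3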

  18. Cloud Computing: Short Term Impacts of 1:1 Computing in the Sixth Grade

    ERIC Educational Resources Information Center

    Bebell, Damian; Clarkson, Apryl; Burraston, James

    2014-01-01

    Many parents, educators, and policy makers see great potential for leveraging tools like laptop computers, tablets, and smartphones in the classrooms of the world. Under budget constraints and shared access to equipment for students and teachers, the impacts have been irregular but hint at greater possibilities in 1:1 student computing settings.…

  19. Cloud-based serviced-orientated data systems for ocean observational data - an example from the coral reef community

    NASA Astrophysics Data System (ADS)

    Bainbridge, S.

    2012-04-01

    to deliver results, in context, to their preferred medium. The paper contrasts what has been achieved within a small community with well-defined issues against what it would take to build equivalent systems holding a wide range of cross-community observational data and addressing a wider range of potential issues. The roles of discoverability, quality control, uncertainty, conformity and metadata are investigated, along with a brief discussion of existing and emerging standards in this area. The elements of such a system are described, along with the role of modelling and scenario tools in delivering a higher level of outputs that link what may have already occurred (event detection) with what may potentially occur (scenarios). The development of service-based, cloud computing open data systems, coupled with complex event detection systems delivering through social media and other channels and linked into model and scenario systems, represents one vision for delivering value from the increasing store of ocean observations, most of which lie unknown, unused and unloved.

  20. A Telemetric system for electromagnetic measurements based on Internet technologies and cloud computing

    NASA Astrophysics Data System (ADS)

    Tassoulas, E.; Vereses, A.; Agiakatsikas, D.; Koulouras, Gr.; Nomicos, C.

    2010-05-01

    A few years ago, real-time communication, data collection and transmission from a field station measuring electromagnetic variations in the middle of nowhere was a very expensive accomplishment. Nowadays, wireless communications and Internet access reach end users much more easily and at lower cost. WIFI, GPRS, 3G or satellite Internet connections make this possible even in the most remote areas of our world, where no cables can easily reach, and at low cost. Besides their extended effective range, these communication technologies can also provide high-speed, constant and low-cost Internet access. As Internet access speeds grow, a new term comes to the foreground: Cloud Computing. The term Cloud Computing refers to a wide set of Internet technologies whose clients: A) Do not need to store any valuable information on any physical infrastructure owned by themselves. B) Consume on-line resources from a third-party provider, enabling them to focus on their productivity without having to worry about their data or any possible local hardware failure. C) Collaborate and share with associates faster and more easily, as they can access their work from anywhere, provided Internet access exists. This telemetric system relies on Cloud Computing to deliver the collected data from the field station to on-line storage. Collaborators and scientists can synchronize with the on-line storage, make changes and synchronize back. Local storage at the field-station end is only needed in the case of an Internet connection failure, so that the data can be stored until the Internet connection is regained. Local storage on the user's side is optional but desirable, giving the ability to work off-line and synchronize the changes again when going back on-line.
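
    The sketch below illustrates the store-and-forward behaviour this record describes: each field-station reading is pushed to on-line storage, buffered locally if the Internet connection is down, and the buffer is flushed once connectivity returns. The endpoint URL and payload format are placeholders, not the system's actual interface.

      # A minimal sketch of store-and-forward delivery of field readings to cloud
      # storage, with local buffering during connection outages. The endpoint and
      # payload are placeholders.
      import json
      import urllib.request
      from collections import deque

      ENDPOINT = "https://storage.example.org/upload"    # hypothetical cloud storage endpoint
      local_buffer = deque()                             # holds readings during outages

      def push(reading: dict) -> bool:
          data = json.dumps(reading).encode()
          req = urllib.request.Request(ENDPOINT, data=data,
                                       headers={"Content-Type": "application/json"})
          try:
              with urllib.request.urlopen(req, timeout=5):
                  return True
          except OSError:                                # covers URLError and socket timeouts
              return False

      def submit(reading: dict):
          # Queue the new reading, then flush as much of the backlog as the link allows,
          # oldest first, so the remote store stays in chronological order.
          local_buffer.append(reading)
          while local_buffer and push(local_buffer[0]):
              local_buffer.popleft()

      if __name__ == "__main__":
          submit({"station": "demo", "field_nT": 43512.7, "t": "2010-05-01T12:00:00Z"})
          print("readings waiting for connectivity:", len(local_buffer))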

  1. Mobile, cloud, and big data computing: contributions, challenges, and new directions in telecardiology.

    PubMed

    Hsieh, Jui-Chien; Li, Ai-Hsien; Yang, Chung-Chi

    2013-11-01

    Many studies have indicated that computing technology can enable off-site cardiologists to read patients' electrocardiograph (ECG), echocardiography (ECHO), and relevant images via smart phones during pre-hospital, in-hospital, and post-hospital teleconsultation, which not only identifies emergency cases in need of immediate treatment, but also prevents unnecessary re-hospitalizations. Meanwhile, several studies have combined cloud computing and mobile computing to facilitate better storage, delivery, retrieval, and management of medical files for telecardiology. In the future, the aggregated ECG and images from hospitals worldwide will become big data, which should be used to develop an e-consultation program helping on-site practitioners deliver appropriate treatment. With information technology, real-time tele-consultation and tele-diagnosis of ECG and images can be practiced via an e-platform for clinical, research, and educational purposes. While working to promote the application of information technology in telecardiology, we need to resolve several issues: (1) data confidentiality in the cloud, (2) data interoperability among hospitals, and (3) network latency and accessibility. If these challenges are overcome, tele-consultation will be ubiquitous, easy to perform, inexpensive, and beneficial. Most importantly, these services will increase global collaboration and advance clinical practice, education, and scientific research in cardiology. PMID:24232290

  2. Using Cloud Computing To Create A Multi-Wavelength Atlas Of The Galactic Plane

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Good, J.; Rynge, M.; Juve, G.; Deelman, E.; Kinney, J.; Merrihew, A.

    2014-01-01

    We describe by example how to optimize cloud-computing resources offered by Amazon Web Services (AWS) to create and curate new datasets at scale. We are producing a co-registered atlas of the Galactic Plane at 16 wavelengths from 1 micron to 24 microns with a spatial sampling of 1 arcsec. The atlas is being created by using the Montage mosaic engine to generate co-registered mosaics of images released by the major surveys WISE, 2MASS, ADASS, GLIMPSE and MIPSGAL. The Atlas, when complete, will be 45 TB in size, composed of over 9,600 5 deg x 5 deg tiles with one degree overlap between them. The dataset will be housed on Amazon S3, designed for at-scale storage with access via web protocols. It will be publicly accessible through an API that will support access to the data and creation of cutouts according to the users’ specifications. The processing, which is estimated to require 340,000 compute hours for completion, has exploited virtual clusters created and managed on AWS platforms through the Pegasus workflow management system. We will describe the optimization methods, compute time and processing costs, as a guide for others wishing to exploit cloud platforms for processing and data creation.

  3. Mobile, Cloud, and Big Data Computing: Contributions, Challenges, and New Directions in Telecardiology

    PubMed Central

    Hsieh, Jui-Chien; Li, Ai-Hsien; Yang, Chung-Chi

    2013-01-01

    Many studies have indicated that computing technology can enable off-site cardiologists to read patients’ electrocardiograph (ECG), echocardiography (ECHO), and relevant images via smart phones during pre-hospital, in-hospital, and post-hospital teleconsultation, which not only identifies emergency cases in need of immediate treatment, but also prevents unnecessary re-hospitalizations. Meanwhile, several studies have combined cloud computing and mobile computing to facilitate better storage, delivery, retrieval, and management of medical files for telecardiology. In the future, the aggregated ECG and images from hospitals worldwide will become big data, which should be used to develop an e-consultation program helping on-site practitioners deliver appropriate treatment. With information technology, real-time tele-consultation and tele-diagnosis of ECG and images can be practiced via an e-platform for clinical, research, and educational purposes. While working to promote the application of information technology in telecardiology, we need to resolve several issues: (1) data confidentiality in the cloud, (2) data interoperability among hospitals, and (3) network latency and accessibility. If these challenges are overcome, tele-consultation will be ubiquitous, easy to perform, inexpensive, and beneficial. Most importantly, these services will increase global collaboration and advance clinical practice, education, and scientific research in cardiology. PMID:24232290

  4. AGM: A DSL for mobile cloud computing based on directed graph

    NASA Astrophysics Data System (ADS)

    Tanković, Nikola; Grbac, Tihana Galinac

    2016-06-01

    This paper summarizes a novel approach for consuming a domain specific language (DSL) by transforming it to a directed graph representation persisted by a graph database. Using such specialized database enables advanced navigation trough the stored model exposing only relevant subsets of meta-data to different involved services and components. We applied this approach in a mobile cloud computing system and used it to model several mobile applications in retail, supply chain management and merchandising domain. These application are distributed in a Software-as-a-Service (SaaS) fashion and used by thousands of customers in Croatia. We report on lessons learned and propose further research on this topic.

  5. Cloud computing for context-aware enhanced m-Health services.

    PubMed

    Fernandez-Llatas, Carlos; Pileggi, Salvatore F; Ibañez, Gema; Valero, Zoe; Sala, Pilar

    2015-01-01

    m-Health services are increasing its presence in our lives due to the high penetration of new smartphone devices. This new scenario proposes new challenges in terms of information accessibility that require new paradigms which enable the new applications to access the data in a continuous and ubiquitous way, ensuring the privacy required depending on the kind of data accessed. This paper proposes an architecture based on cloud computing paradigms in order to empower new m-Health applications to enrich their results by providing secure access to user data. PMID:25417085

  6. Cloudy confidentiality: clinical and legal implications of cloud computing in health care.

    PubMed

    Klein, Carolina A

    2011-01-01

    The Internet has grown into a world of its own, and its ethereal space now offers capabilities that could aid physicians in their duties in numerous ways. In recent years software functions have moved from the individual's local hardware to a central server that operates from a remote location. This centralization is called cloud computing. Privacy laws that speak to the protection of patient confidentiality are complex and often difficult to understand in the context of an ever-growing cloud-based technology. This article is a review of the legal background of protected health records, as well as cloud technology and physician applications. An attempt is made to integrate both concepts and examine Health Insurance Portability and Accountability Act (HIPAA) compliance for each of the examples discussed. The legal regulations that may inform care and standards of practice are reviewed, and the difficulties that arise in assessment and monitoring of the current situation are analyzed. For forensic psychiatrists who may be asked to provide expert opinions regarding malpractice situations pertaining to confidentiality standards, it is important to become acquainted with the new digital language from which these questions may arise. PMID:22159987

  7. Ion Clouds in the Inductively Coupled Plasma Torch: A Closer Look through Computations.

    PubMed

    Aghaei, Maryam; Lindner, Helmut; Bogaerts, Annemie

    2016-08-16

    We have computationally investigated the introduction of copper elemental particles in an inductively coupled plasma torch connected to a sampling cone, including for the first time the ionization of the sample. The sample is inserted as liquid particles, which are followed inside the entire torch, i.e., from the injector inlet up to the ionization and reaching the sampler. The spatial position of the ion clouds inside the torch as well as detailed information on the copper species fluxes at the position of the sampler orifice and the exhausts of the torch are provided. The effect of on- and off-axis injection is studied. We clearly show that the ion clouds of on-axis injected material are located closer to the sampler with less radial diffusion. This guarantees a higher transport efficiency through the sampler cone. Moreover, our model reveals the optimum ranges of applied power and flow rates, which ensure the proper position of ion clouds inside the torch, i.e., close enough to the sampler to increase the fraction that can enter the mass spectrometer and with minimum loss of material toward the exhausts as well as a sufficiently high plasma temperature for efficient ionization. PMID:27457191

  8. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based and designed to work on a single computer, which is a major limitation in many ways: limited processing and storage power, accessibility, availability, etc. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first (VM1) runs on Amazon Web Services (AWS) and the second (VM2) runs on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. It presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is: available all the time, accessible from everywhere, scalable, works in a distributed computer environment, creates a real-time multiuser collaboration platform, its programming-language code and components are interoperable, and it is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), 3) user management. The web services run on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state

  9. Leveraging the Cloud to Deliver Scalable Capabilities to Earth Science Community

    NASA Astrophysics Data System (ADS)

    Law, E.; Crichton, D. J.

    2012-12-01

    Instrument technologies for making science observations have advanced considerably, with datasets now in the petabyte range for Earth science. The scale and complexity of increasing science data, commonly referred to as "Big Data", pose challenges in many aspects for science data systems. For instance, today's image files managed by science data systems range from a few gigabytes to hundreds of gigabytes in size, with new data arriving every day. Despite this ever-increasing amount of data, science data systems must make the data readily available in a timely manner for users to view and analyze. This talk describes these challenges and provides examples of how the NASA/JPL Earth Science Data Systems program is leveraging the cloud to deliver scalable capabilities to our science community.

  10. Proposed Use of the NASA Ames Nebula Cloud Computing Platform for Numerical Weather Prediction and the Distribution of High Resolution Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi

    2010-01-01

    The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.

  11. A Computer Simulation of Community Pharmacy Practice for Educational Use

    PubMed Central

    Ling, Tristan; Bereznicki, Luke; Westbury, Juanita; Chalmers, Leanne; Peterson, Gregory; Ollington, Robert

    2014-01-01

    Objective. To provide a computer-based learning method for pharmacy practice that is as effective as paper-based scenarios, but more engaging and less labor-intensive. Design. We developed a flexible and customizable computer simulation of community pharmacy. Using it, the students would be able to work through scenarios which encapsulate the entirety of a patient presentation. We compared the traditional paper-based teaching method to our computer-based approach using equivalent scenarios. The paper-based group had 2 tutors while the computer group had none. Both groups were given a prescenario and postscenario clinical knowledge quiz and survey. Assessment. Students in the computer-based group had generally greater improvements in their clinical knowledge score, and third-year students using the computer-based method also showed more improvements in history taking and counseling competencies. Third-year students also found the simulation fun and engaging. Conclusion. Our simulation of community pharmacy provided an educational experience as effective as the paper-based alternative, despite the lack of a human tutor. PMID:26056406

  12. 42 CFR 417.594 - Computation of adjusted community rate (ACR).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Computation of adjusted community rate (ACR). 417... community rate (ACR). (a) Basic rule. Each HMO or CMP must compute its basic rate as follows: (1) Compute an... must compute its initial rate using either of the following systems: (i) A community rating system...

  13. 42 CFR 417.594 - Computation of adjusted community rate (ACR).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Computation of adjusted community rate (ACR). 417... community rate (ACR). (a) Basic rule. Each HMO or CMP must compute its basic rate as follows: (1) Compute an... must compute its initial rate using either of the following systems: (i) A community rating system...

  14. Predicting Cloud Computing Technology Adoption by Organizations: An Empirical Integration of Technology Acceptance Model and Theory of Planned Behavior

    ERIC Educational Resources Information Center

    Ekufu, ThankGod K.

    2012-01-01

    Organizations are finding it difficult in today's economy to implement the vast information technology infrastructure required to effectively conduct their business operations. Despite the fact that some of these organizations are leveraging on the computational powers and the cost-saving benefits of computing on the Internet cloud, others…

  15. Scientific Services on the Cloud

    NASA Astrophysics Data System (ADS)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications for parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and by far the easiest high performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  16. An innovative privacy preserving technique for incremental datasets on cloud computing.

    PubMed

    Aldeen, Yousra Abdul Alsahib S; Salleh, Mazleena; Aljeroudi, Yazan

    2016-08-01

    Cloud computing (CC) is a service-based delivery model offering enormous computer processing power and data storage across connected communications channels. It has given an overwhelming technological impetus to the internet (web)-mediated IT industry, where users can easily share private data for further analysis and mining. Furthermore, user-friendly CC services make it economical to deploy sundry applications. Meanwhile, simple data sharing has invited various phishing attacks and malware-assisted security threats. Some privacy-sensitive applications, such as health services on the cloud, that are built with several economic and operational benefits necessitate enhanced security. Thus, absolute cyberspace security and mitigation against phishing attacks have become mandatory to protect overall data privacy. Typically, diverse application datasets are anonymized to give owners better privacy, but without meeting all secrecy requirements for newly added records. Some proposed techniques address this issue by re-anonymizing the datasets from scratch. The utmost privacy protection over incremental datasets on CC is far from being achieved, and the distribution of huge data volumes across multiple storage nodes further limits privacy preservation. In this view, we propose a new anonymization technique to attain better privacy protection with high data utility over distributed and incremental datasets on CC. The proficiency of data privacy preservation and improved confidentiality requirements is demonstrated through performance evaluation. PMID:27369566
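
    The paper's own algorithm is not reproduced in the record above; purely as an illustration of the general idea (generalize quasi-identifiers, release only groups of at least k records, and let newly arriving records join existing groups instead of re-anonymizing from scratch), here is a minimal Python sketch. All names, the generalization rule, and the sample records are hypothetical.

    ```python
    from collections import defaultdict

    K = 3  # minimum group size for k-anonymity (illustrative)

    def generalize(record):
        """Map quasi-identifiers to coarser buckets (hypothetical rule)."""
        age_bucket = (record["age"] // 10) * 10      # e.g. 37 -> 30
        zip_prefix = record["zip"][:3] + "**"        # e.g. 94110 -> 941**
        return (age_bucket, zip_prefix)

    def anonymize(records, groups=None):
        """Assign records to generalized groups; release only groups >= K.

        Passing the existing `groups` dict lets newly arriving records join
        previously built groups instead of re-anonymizing from scratch.
        """
        groups = defaultdict(list) if groups is None else groups
        for rec in records:
            groups[generalize(rec)].append(rec)
        released = {key: rows for key, rows in groups.items() if len(rows) >= K}
        return released, groups

    if __name__ == "__main__":
        batch1 = [{"age": 34, "zip": "94110", "diag": "flu"},
                  {"age": 36, "zip": "94112", "diag": "cold"},
                  {"age": 38, "zip": "94117", "diag": "flu"}]
        released, groups = anonymize(batch1)
        # Incremental update: new records join existing groups where possible.
        batch2 = [{"age": 33, "zip": "94118", "diag": "asthma"}]
        released, groups = anonymize(batch2, groups)
        for key, rows in released.items():
            print(key, len(rows), "records released")
    ```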

  17. Elucidating Ligand-Modulated Conformational Landscape of GPCRs Using Cloud-Computing Approaches.

    PubMed

    Shukla, Diwakar; Lawrenz, Morgan; Pande, Vijay S

    2015-01-01

    G-protein-coupled receptors (GPCRs) are a versatile family of membrane-bound signaling proteins. Despite the recent successes in obtaining crystal structures of GPCRs, much needs to be learned about the conformational changes associated with their activation. Furthermore, the mechanism by which ligands modulate the activation of GPCRs has remained elusive. Molecular simulations provide a way of obtaining a detailed atomistic description of GPCR activation dynamics. However, simulating GPCR activation is challenging due to the long timescales involved and the associated challenge of gaining insights from the "Big" simulation datasets. Here, we demonstrate how cloud-computing approaches have been used to tackle these challenges and obtain insights into the activation mechanism of GPCRs. In particular, we review the use of Markov state model (MSM)-based sampling algorithms for sampling milliseconds of dynamics of a major drug target, the G-protein-coupled receptor β2-AR. MSMs of agonist and inverse agonist-bound β2-AR reveal multiple activation pathways and how ligands function via modulation of the ensemble of activation pathways. We target this ensemble of conformations with computer-aided drug design approaches, with the goal of designing drugs that interact more closely with diverse receptor states, for overall increased efficacy and specificity. We conclude by discussing how cloud-based approaches present a powerful and broadly available tool for routinely studying complex biological systems. PMID:25950981
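
    The authors' MSM pipeline and simulation data are not shown here; as a rough illustration of the Markov state model machinery the abstract refers to, the sketch below estimates a row-normalized transition matrix and its stationary distribution from already-discretized trajectories. The toy trajectories and the three-state discretization are invented for the example.

    ```python
    import numpy as np

    def transition_matrix(dtrajs, n_states, lag=1):
        """Row-normalized MSM transition matrix from discretized trajectories.

        dtrajs: list of integer arrays, each a trajectory of state indices.
        lag:    lag time in trajectory frames.
        """
        counts = np.zeros((n_states, n_states))
        for dtraj in dtrajs:
            for i, j in zip(dtraj[:-lag], dtraj[lag:]):
                counts[i, j] += 1
        counts += 1e-8                      # avoid empty rows
        return counts / counts.sum(axis=1, keepdims=True)

    # Hypothetical discretized trajectories (state indices per frame).
    dtrajs = [np.array([0, 0, 1, 2, 2, 1, 0]), np.array([2, 2, 1, 1, 0, 0, 0])]
    T = transition_matrix(dtrajs, n_states=3, lag=1)
    # Stationary distribution from the leading left eigenvector of T.
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi /= pi.sum()
    print(T.round(3), pi.round(3))
    ```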

  18. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  19. An Improved Clustering Algorithm of Tunnel Monitoring Data for Cloud Computing

    PubMed Central

    Zhong, Luo; Tang, KunHao; Li, Lin; Yang, Guang; Ye, JingJing

    2014-01-01

    With the rapid development of urban construction, the number of urban tunnels is increasing and the data they produce become more and more complex. As a result, traditional clustering algorithms cannot handle the mass of tunnel monitoring data. To solve this problem, an improved parallel clustering algorithm based on k-means has been proposed. It is a clustering algorithm that uses MapReduce within cloud computing to process the data. It not only handles mass data but is also more efficient. Moreover, it is able to compute the average dissimilarity degree of each cluster in order to clean the abnormal data. PMID:24982971
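
    The improved algorithm itself runs on Hadoop MapReduce and is not reproduced in the record; the following single-process Python sketch only illustrates the map/reduce decomposition of one k-means pass (map: assign points to the nearest centroid; reduce: average each group) plus the per-cluster average dissimilarity the abstract mentions for cleaning abnormal data. Function names and the random test points are illustrative.

    ```python
    import numpy as np

    def map_assign(points, centroids):
        """Map step: emit (nearest-centroid index, point) pairs."""
        for p in points:
            idx = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
            yield idx, p

    def reduce_update(pairs, centroids):
        """Reduce step: average the points assigned to each centroid."""
        k, dim = centroids.shape
        sums, counts = np.zeros((k, dim)), np.zeros(k)
        for idx, p in pairs:
            sums[idx] += p
            counts[idx] += 1
        new = centroids.copy()                  # empty clusters keep old centroid
        nonempty = counts > 0
        new[nonempty] = sums[nonempty] / counts[nonempty, None]
        return new

    def kmeans(points, k, iters=10, seed=0):
        rng = np.random.default_rng(seed)
        centroids = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            pairs = list(map_assign(points, centroids))
            centroids = reduce_update(pairs, centroids)
        labels = np.array([idx for idx, _ in map_assign(points, centroids)])
        # Average dissimilarity per cluster, usable to flag abnormal data.
        avg_diss = [float(np.linalg.norm(points[labels == c] - centroids[c], axis=1).mean())
                    if np.any(labels == c) else 0.0 for c in range(k)]
        return centroids, labels, avg_diss

    points = np.random.default_rng(1).normal(size=(200, 2))
    centroids, labels, avg_diss = kmeans(points, k=3)
    print(centroids.round(2), [round(d, 2) for d in avg_diss])
    ```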

  20. Integration of drug dosing data with physiological data streams using a cloud computing paradigm.

    PubMed

    Bressan, Nadja; James, Andrew; McGregor, Carolyn

    2013-01-01

    Many drugs are used during the provision of intensive care for the preterm newborn infant. Recommendations for drug dosing in newborns depend upon data from population based pharmacokinetic research. There is a need to be able to modify drug dosing in response to the preterm infant's response to the standard dosing recommendations. The real-time integration of physiological data with drug dosing data would facilitate individualised drug dosing for these immature infants. This paper proposes the use of a novel computational framework that employs real-time, temporal data analysis for this task. Deployment of the framework within the cloud computing paradigm will enable widespread distribution of individualized drug dosing for newborn infants. PMID:24110652
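
    The framework described above is not publicly specified in the record; as a minimal sketch of the kind of temporal alignment it implies, the example below uses pandas.merge_asof to attach the most recent physiological sample to each dosing event. Timestamps, drugs, and values are invented for illustration.

    ```python
    import pandas as pd

    # Hypothetical physiological stream (e.g., heart rate sampled every minute)
    # and drug dosing events; all values are illustrative only.
    physio = pd.DataFrame({
        "time": pd.to_datetime(["2013-01-01 00:00", "2013-01-01 00:01",
                                "2013-01-01 00:02", "2013-01-01 00:03"]),
        "heart_rate": [152, 158, 149, 145],
    })
    doses = pd.DataFrame({
        "time": pd.to_datetime(["2013-01-01 00:01:30", "2013-01-01 00:02:45"]),
        "drug": ["caffeine citrate", "gentamicin"],
        "dose_mg_per_kg": [20.0, 4.5],
    })

    # Align each dose with the most recent physiological sample at or before it,
    # so the infant's state around each dose can be analyzed together.
    merged = pd.merge_asof(doses.sort_values("time"), physio.sort_values("time"),
                           on="time", direction="backward")
    print(merged)
    ```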

  1. Basis Set Exchange: A Community Database for Computational Sciences

    SciTech Connect

    Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd O.; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared M.; Li, Jun; Windus, Theresa L.

    2007-05-01

    Basis sets are one of the most important input data for computational models in the chemistry, materials, biology and other science domains that utilize computational quantum mechanics methods. Providing a shared, web accessible environment where researchers can not only download basis sets in their required format, but browse the data, contribute new basis sets, and ultimately curate and manage the data as a community will facilitate growth of this resource and encourage sharing both data and knowledge. We describe the Basis Set Exchange (BSE), a web portal that provides advanced browsing and download capabilities, facilities for contributing basis set data, and an environment that incorporates tools to foster development and interaction of communities. The BSE leverages and enables continued development of the basis set library originally assembled at the Environmental Molecular Sciences Laboratory.

  2. A community-based study of asthenopia in computer operators

    PubMed Central

    Choudhary, Sushilkumar; Doshi, Vikas G

    2008-01-01

    Context: There is growing body of evidence that use of computers can adversely affect the visual health. Considering the rising number of computer users in India, computer-related asthenopia might take an epidemic form. In view of that, this study was undertaken to find out the magnitude of asthenopia in computer operators and its relationship with various personal and workplace factors. Aims: To study the prevalence of asthenopia among computer operators and its association with various epidemiological factors. Settings and Design: Community-based cross-sectional study of 419 subjects who work on computer for varying period of time. Materials and Methods: Four hundred forty computer operators working in different institutes were selected randomly. Twenty-one did not participate in the study, making the nonresponse rate 4.8%. Rest of the subjects (n = 419) were asked to fill a pre-tested questionnaire, after obtaining their verbal consent. Other relevant information was obtained by personal interview and inspection of workstation. Statistical Analysis Used: Simple proportions and Chi-square test. Results: Among the 419 subjects studied, 194 (46.3%) suffered from asthenopia during or after work on computer. Marginally higher proportion of asthenopia was noted in females compared to males. Occurrence of asthenopia was significantly associated with age of starting use of computer, presence of refractive error, viewing distance, level of top of the computer screen with respect to eyes, use of antiglare screen and adjustment of contrast and brightness of monitor screen. Conclusions: Prevalence of asthenopia was noted to be quite high among computer operators, particularly in those who started its use at an early age. Individual as well as work-related factors were found to be predictive of asthenopia. PMID:18158404

  3. Astronomy and Computing: A new journal for the astronomical computing community

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Budavári, Tamás; Fluke, Christopher; Gray, Norman; Mann, Robert G.; O'Mullane, William; Wicenec, Andreas; Wise, Michael

    2013-02-01

    We introduce Astronomy and Computing, a new journal for the growing population of people working in the domain where astronomy overlaps with computer science and information technology. The journal aims to provide a new communication channel within that community, which is not well served by current journals, and to help secure recognition of its true importance within modern astronomy. In this inaugural editorial, we describe the rationale for creating the journal, outline its scope and ambitions, and seek input from the community in defining in detail how the journal should work towards its high-level goals.

  4. Resources and Costs for Microbial Sequence Analysis Evaluated Using Virtual Machines and Cloud Computing

    PubMed Central

    Angiuoli, Samuel V.; White, James R.; Matalka, Malcolm; White, Owen; Fricke, W. Florian

    2011-01-01

    Background The widespread popularity of genomic applications is threatened by the “bioinformatics bottleneck” resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly. Results We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers. Conclusions Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer) invested in 16S r
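
    As a back-of-envelope illustration of how such per-analysis dollar costs arise (not the paper's measured numbers), a cloud bill is roughly instances x billed hours x hourly price; the prices and runtimes below are placeholders.

    ```python
    import math

    def run_cost(n_instances, hours, price_per_instance_hour):
        """Rough cloud-cost estimate with per-instance hourly billing."""
        # Most providers bill partial hours as full hours; round up.
        billed_hours = math.ceil(hours)
        return n_instances * billed_hours * price_per_instance_hour

    # e.g. a 15-instance cluster (8 cores each, ~120 CPUs) running 18 hours
    # at an assumed $0.50 per instance-hour:
    print(f"${run_cost(15, 18, 0.50):.2f}")  # -> $135.00
    ```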

  5. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    SciTech Connect

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-09-30

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running, it can make progress, whereas as soon as one MPI node fails, the whole analysis job fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  6. Optimizing the Use of Storage Systems Provided by Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H.; Potter, N.; Byrne, D. A.; Ogata, J.; Relph, J.

    2013-12-01

    Cloud computing systems present a set of features that include familiar computing resources (albeit augmented to support dynamic scaling of processing power) bundled with a mix of conventional and unconventional storage systems. The linux base on which many Cloud environments (e.g., Amazon) are built makes it tempting to assume that any Unix software will run efficiently in this environment without change. OPeNDAP and NODC collaborated on a short project to explore how the S3 and Glacier storage systems provided by the Amazon Cloud Computing infrastructure could be used with a data server developed primarily to access data stored in a traditional Unix file system. Our work used the Amazon cloud system, but we strived for designs that could be adapted easily to other systems like OpenStack. Lastly, we evaluated different architectures from a computer security perspective. We found that there are considerable issues associated with treating S3 as if it were a traditional file system, even though doing so is conceptually simple. These issues include performance penalties: using a software tool that emulates a traditional file system to store data in S3 performs poorly compared to storing data directly in S3. We also found there are important benefits beyond performance to ensuring that data written to S3 can be directly accessed without relying on a specific software tool. To provide a hierarchical organization to the data stored in S3, we wrote 'catalog' files, using XML. These catalog files map discrete files to S3 access keys. Like a traditional file system's directories, the catalogs can also contain references to other catalogs, providing a simple but effective hierarchy overlaid on top of S3's flat storage space. An added benefit of these catalogs is that they can be viewed in a web browser; our storage scheme provides both efficient access for the data server and access via a web browser. We also looked at the Glacier storage system and
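
    The OPeNDAP/NODC catalog format is not reproduced here; the sketch below only illustrates the general pattern of an XML catalog that maps logical file names to S3 keys, can reference child catalogs to form a hierarchy, and is uploaded as a plain-XML object so a browser can view it. It assumes boto3 and a hypothetical bucket name.

    ```python
    import boto3
    import xml.etree.ElementTree as ET

    BUCKET = "example-data-bucket"   # hypothetical bucket name

    def build_catalog(entries, child_catalogs=()):
        """Build an XML catalog mapping logical file names to S3 keys."""
        root = ET.Element("catalog")
        for name, key in entries.items():
            ET.SubElement(root, "file", name=name, key=key)
        for key in child_catalogs:   # hierarchy: catalogs referencing catalogs
            ET.SubElement(root, "catalog_ref", key=key)
        return ET.tostring(root, encoding="unicode")

    def upload_catalog(xml_text, key):
        s3 = boto3.client("s3")
        # Browsers can render the catalog directly because it is plain XML.
        s3.put_object(Bucket=BUCKET, Key=key, Body=xml_text.encode(),
                      ContentType="application/xml")

    catalog = build_catalog(
        {"sst_2010.nc": "data/sst_2010.nc", "sst_2011.nc": "data/sst_2011.nc"},
        child_catalogs=["catalogs/2012.xml"],
    )
    upload_catalog(catalog, "catalogs/root.xml")
    ```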

  7. Distributed Network, Wireless and Cloud Computing Enabled 3-D Ultrasound; a New Medical Technology Paradigm

    PubMed Central

    Meir, Arie; Rubinsky, Boris

    2009-01-01

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example; the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low end ultrasound transducer designed for 2-D, through a mobile device and wireless connection link between them. Producing high-end 3D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and a more accurate diagnosis, effectively saving the lives of people. PMID:19936236

  8. Distributed network, wireless and cloud computing enabled 3-D ultrasound; a new medical technology paradigm.

    PubMed

    Meir, Arie; Rubinsky, Boris

    2009-01-01

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example; the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low end ultrasound transducer designed for 2-D, through a mobile device and wireless connection link between them. Producing high-end 3D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and a more accurate diagnosis, effectively saving the lives of people. PMID:19936236

  9. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report was aimed at designing the architecture and measuring the performance of a parallel computing environment for Monte Carlo simulation of particle therapy planning, using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed an approximately 28 times faster speed than seen with a single-thread architecture, combined with improved stability. A study of methods for optimizing the system operations also indicated lower cost. PMID:23877155
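
    The report's Monte Carlo engine is a full particle-transport code that is not part of the record; the sketch below shows only the embarrassingly parallel pattern (independent batches of histories farmed out to worker processes and summed), with a toy estimator standing in for the physics.

    ```python
    import multiprocessing as mp
    import random

    def simulate_batch(args):
        """Toy Monte Carlo batch: fraction of sampled 'particles' landing in a
        target region (a stand-in for real particle transport)."""
        n_histories, seed = args
        rng = random.Random(seed)
        hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                   for _ in range(n_histories))
        return hits

    if __name__ == "__main__":
        n_workers, histories_per_worker = 8, 250_000
        jobs = [(histories_per_worker, seed) for seed in range(n_workers)]
        with mp.Pool(n_workers) as pool:        # one process per (virtual) core
            hits = sum(pool.map(simulate_batch, jobs))
        total = n_workers * histories_per_worker
        print("pi estimate from parallel MC:", 4 * hits / total)
    ```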

  10. Computer Literacy in Pennsylvania Community Colleges. Competencies in a Beginning Level College Computer Literacy Course.

    ERIC Educational Resources Information Center

    Tortorelli, Ann Eichorn

    A study was conducted at the 14 community colleges (17 campuses) in Pennsylvania to assess the perceptions of faculty about the relative importance of course content items in a beginning credit course in computer literacy, and to survey courses currently being offered. A detailed questionnaire consisting of 96 questions based on MECC (Minnesota…

  11. Improved Low-cloud Simulation from the Community Atmosphere Model with an Advanced Third-order Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Cheng, A.; Xu, K.

    2013-12-01

    This presentation describes the implementation and testing of an advanced third-order turbulence closure, an intermediately-prognostic higher-order turbulence closure (IPHOC), in the Community Atmosphere Model version 5 (CAM5). The third-order turbulence closure introduces a joint double-Gaussian distribution of liquid water potential temperature, total water mixing ratio, and vertical velocity to represent the subgrid scale variations including skewed turbulence circulations. The distribution is inferred from the first-, second-, and third-order moments of the variables given above and is used to diagnose cloud fraction and grid-mean liquid water mixing ratio, as well as the buoyancy term and fourth-order terms in the equations describing the evolution of the second- and third-order moments. In addition, a diagnostic planetary boundary layer (PBL) height approach has been incorporated in IPHOC in order to resolve the strong inversion above the PBL given the coarse general circulation model (GCM) vertical grid-spacing. The IPHOC replaces PBL, shallow convection, and cloud macrophysics parameterizations in CAM5. The coupling of CAM5 with IPHOC (CAM5-IP) represents a more unified treatment of boundary layer and shallow convective processes. Results from global climate simulations are presented and suggest that CAM5-IP can provide a better treatment of boundary layer clouds and processes when compared to CAM5. The global annual mean low cloud fraction and precipitation are compared among CAM5, CAM5-IP, and a multi-scale modeling framework model with IPHOC (MMF-IP). The low cloud amounts near the west coast of the subtropical continents are well produced in CAM5-IP and are more abundant than in the other two models. The global mean liquid water path is the closest to the SSM/I observation. The cloud structures from CAM5-IP, represented by the cloud fraction and cloud water content at the 15°S transect, compare well with the CloudSat/CALIPSO observations. The shallow cumulus
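
    The IPHOC code itself is not shown in the record; as a hedged illustration of how a (double-)Gaussian subgrid distribution can be used to diagnose cloud fraction and liquid water, the sketch below applies the standard analytic result for the saturation excess s = qt - qs: each Gaussian component contributes a*Phi(mu/sigma) to cloud fraction and a*(mu*Phi + sigma*phi) to liquid water. The mixture parameters are invented, not taken from the scheme.

    ```python
    from math import erf, exp, pi, sqrt

    def component_cloud(mean_s, sigma_s):
        """Cloud fraction and mean liquid water for one Gaussian component of
        the saturation excess s = qt - qs (standard analytic result)."""
        q = mean_s / sigma_s
        cdf = 0.5 * (1.0 + erf(q / sqrt(2.0)))          # P(s > 0)
        pdf = exp(-0.5 * q * q) / sqrt(2.0 * pi)        # standard normal density
        ql = mean_s * cdf + sigma_s * pdf               # E[max(s, 0)]
        return cdf, ql

    def mixture_cloud(weights, means, sigmas):
        cf = ql = 0.0
        for a, m, s in zip(weights, means, sigmas):
            c, l = component_cloud(m, s)
            cf += a * c
            ql += a * l
        return cf, ql

    # Hypothetical double-Gaussian parameters (kg/kg) for a cloudy layer.
    cf, ql = mixture_cloud(weights=[0.6, 0.4],
                           means=[2.0e-4, -1.0e-4],
                           sigmas=[3.0e-4, 2.0e-4])
    print(f"cloud fraction = {cf:.2f}, liquid water = {ql:.2e} kg/kg")
    ```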

  12. A Security-Awareness Virtual Machine Management Scheme Based on Chinese Wall Policy in Cloud Computing

    PubMed Central

    Gui, Xiaolin; Lin, Jiancai; Tian, Feng; Zhao, Jianqiang; Dai, Min

    2014-01-01

    Cloud computing is receiving increasing attention for its capacity to free developers from infrastructure management tasks. However, recent works reveal that side channel attacks can lead to privacy leakage in the cloud. Enhancing isolation between users is an effective solution to eliminate such attacks. In this paper, to eliminate side channel attacks, we investigate an isolation enhancement scheme from the perspective of virtual machine (VM) management. The security-awareness VM management scheme (SVMS), a VM isolation enhancement scheme to defend against side channel attacks, is proposed. First, we use the aggressive conflict of interest relation (ACIR) and aggressive in ally with relation (AIAR) to describe user constraint relations. Second, based on the Chinese wall policy, we put forward four isolation rules. Third, VM placement and migration algorithms are designed to enforce VM isolation between conflicting users. Finally, based on the normal distribution, we conduct a series of experiments to evaluate SVMS. The experimental results show that SVMS is efficient in guaranteeing isolation between VMs owned by conflicting users, while the resource utilization rate decreases but not by much. PMID:24688434
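
    The SVMS placement and migration algorithms are not given in the record; the toy sketch below only illustrates the Chinese-wall flavor of the isolation rules: a VM is never placed on a host that already runs a VM owned by a user in the same conflict-of-interest class. The conflict classes and host names are made up.

    ```python
    # Toy Chinese-wall placement check: two users conflict if they belong to the
    # same conflict-of-interest class; conflicting users never share a host.
    CONFLICT_CLASSES = [{"bankA", "bankB"}, {"oilX", "oilY"}]   # hypothetical

    def conflicts(user_a, user_b):
        return any(user_a in c and user_b in c and user_a != user_b
                   for c in CONFLICT_CLASSES)

    def place_vm(hosts, owner):
        """Return the first host with no VM owned by a conflicting user."""
        for host, owners in hosts.items():
            if not any(conflicts(owner, o) for o in owners):
                owners.append(owner)
                return host
        return None   # would trigger a migration or a new host in a real scheme

    hosts = {"host1": ["bankA"], "host2": []}
    print(place_vm(hosts, "bankB"))   # -> host2 (bankA and bankB conflict)
    print(place_vm(hosts, "oilX"))    # -> host1 (no conflict with bankA)
    ```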

  13. Open Science in the Cloud: Towards a Universal Platform for Scientific and Statistical Computing

    NASA Astrophysics Data System (ADS)

    Chine, Karim

    The UK, through the e-Science program, the US through the NSF-funded cyber infrastructure and the European Union through the ICT Calls aimed to provide "the technological solution to the problem of efficiently connecting data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge".1 The Grid (Foster, 2002; Foster; Kesselman, Nick, & Tuecke, 2002), foreseen as a major accelerator of discovery, didn't meet the expectations it had excited at its beginnings and was not adopted by the broad population of research professionals. The Grid is a good tool for particle physicists and it has allowed them to tackle the tremendous computational challenges inherent to their field. However, as a technology and paradigm for delivering computing on demand, it doesn't work and it can't be fixed. On one hand, "the abstractions that Grids expose - to the end-user, to the deployers and to application developers - are inappropriate and they need to be higher level" (Jha, Merzky, & Fox), and on the other hand, academic Grids are inherently economically unsustainable. They can't compete with a service outsourced to the Industry whose quality and price would be driven by market forces. The virtualization technologies and their corollary, the Infrastructure-as-a-Service (IaaS) style cloud, hold the promise to enable what the Grid failed to deliver: a sustainable environment for computational sciences that would lower the barriers for accessing federated computational resources, software tools and data; enable collaboration and resources sharing and provide the building blocks of a ubiquitous platform for traceable and reproducible computational research.

  14. Cloud Computing Application for Hotspot Clustering Using Recursive Density Based Clustering (RDBC)

    NASA Astrophysics Data System (ADS)

    Santoso, Aries; Khiyarin Nisa, Karlina

    2016-01-01

    Indonesia has vast areas of tropical forest, but these are often burned, causing extensive damage to property and human life. Monitoring hotspots can be part of forest fire management. Each hotspot is recorded in a dataset so that it can be processed and analyzed. This research aims to build a cloud computing application which visualizes hotspot clustering. The application uses the R programming language with the Shiny web framework and implements the Recursive Density Based Clustering (RDBC) algorithm. Clustering is done on hotspot datasets of Kalimantan Island and South Sumatra Province to find the spread pattern of hotspots. The clustering results are evaluated using the Silhouette Coefficient (SC), which yields a best value of 0.3220798 for the Kalimantan dataset. Clustering patterns are displayed in the form of web pages so that they can be widely accessed and become a reference for fire occurrence prediction.
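
    The application above is written in R with Shiny and uses RDBC; as a rough Python analogue only (assuming scikit-learn is available), the sketch below re-clusters each dense DBSCAN cluster with a tighter radius and scores the result with the silhouette coefficient, the same evaluation metric the abstract cites. The synthetic hotspot coordinates are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.metrics import silhouette_score

    def recursive_dbscan(points, eps, min_samples=5, depth=2):
        """Cluster points, then re-cluster each dense cluster with a tighter eps
        (a rough stand-in for the Recursive Density Based Clustering idea)."""
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        if depth <= 1:
            return labels
        refined = labels.copy()
        next_label = labels.max() + 1
        for c in set(labels) - {-1}:
            mask = labels == c
            sub = recursive_dbscan(points[mask], eps / 2, min_samples, depth - 1)
            sub = np.where(sub == -1, -1, sub + next_label)
            refined[mask] = sub
            next_label = max(next_label, sub.max() + 1)
        return refined

    # Hypothetical hotspot coordinates (longitude, latitude).
    rng = np.random.default_rng(0)
    hotspots = np.vstack([rng.normal([114.0, -2.0], 0.2, size=(100, 2)),
                          rng.normal([104.0, -3.0], 0.2, size=(100, 2))])
    labels = recursive_dbscan(hotspots, eps=0.3)
    mask = labels != -1
    print("silhouette:", silhouette_score(hotspots[mask], labels[mask]))
    ```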

  15. BioPig: Developing Cloud Computing Applications for Next-Generation Sequence Analysis

    SciTech Connect

    Bhatia, Karan; Wang, Zhong

    2011-03-22

    Next-generation sequencing is producing ever larger data sizes with a growth rate outpacing Moore's Law. The data deluge has made many of the current sequence analysis tools obsolete because they do not scale with data. Here we present BioPig, a collection of cloud computing tools to scale data analysis and management. Pig is a flexible data scripting language that uses Apache's Hadoop data structure and MapReduce framework to process very large data files in parallel and combine the results. BioPig extends Pig with sequence analysis capability. We will show the performance of BioPig on a variety of bioinformatics tasks, including screening sequence contaminants, Illumina QA/QC, and gene discovery from metagenome data sets using the Rumen metagenome as an example.

  16. A Sensitivity Analysis of Cloud Properties to CLUBB Parameters in the Single-Column Community Atmosphere Model (SCAM5)

    SciTech Connect

    Guo, Zhun; Wang, Minghuai; Qian, Yun; Larson, Vincent E.; Ghan, Steven J.; Ovchinnikov, Mikhail; Bogenschutz, Peter; Zhao, Chun; Lin, Guang; Zhou, Tianjun

    2014-09-01

    In this study, we investigate the sensitivity of simulated shallow cumulus and stratocumulus clouds to selected tunable parameters of Cloud Layers Unified by Binormals (CLUBB) in the single column version of the Community Atmosphere Model version 5 (SCAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space and a generalized linear model is used to study the responses of simulated cloud fields to tunable parameters. One stratocumulus and two shallow convection cases are configured at both coarse and fine vertical resolutions in this study. Our results show that most of the variance in simulated cloud fields can be explained by a small number of tunable parameters. The parameters related to Newtonian and buoyancy-damping terms of total water flux are found to be the most influential parameters for stratocumulus. For shallow cumulus, the most influential parameters are those related to skewness of vertical velocity, reflecting the strong coupling between cloud properties and dynamics in this regime. The influential parameters in the stratocumulus case are sensitive to the choice of the vertical resolution while little sensitivity is found for the shallow convection cases, as eddy mixing length (or dissipation time scale) plays a more important role and depends more strongly on the vertical resolution in stratocumulus than in shallow convection. The influential parameters remain almost unchanged when the number of tunable parameters increases from 16 to 35. This study improves understanding of the CLUBB behavior associated with parameter uncertainties.
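
    The actual SCAM5/CLUBB experiment cannot be reproduced here; the sketch below (assuming SciPy >= 1.7 for scipy.stats.qmc) only illustrates the workflow the abstract describes: draw a quasi-Monte Carlo design over tunable-parameter ranges, run a model (a placeholder function here), and use a linear fit to see which parameters explain most of the variance. Parameter names, ranges, and the toy response are invented.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Hypothetical tunable-parameter ranges (the real CLUBB parameters differ).
    names = ["c_k", "c_n", "gamma", "beta"]
    lower = np.array([0.1, 0.5, 0.1, 1.0])
    upper = np.array([1.0, 2.0, 0.5, 2.5])

    sampler = qmc.Sobol(d=len(names), scramble=True, seed=0)
    unit = sampler.random(n=256)                       # quasi-Monte Carlo design
    params = qmc.scale(unit, lower, upper)

    def toy_model(p):
        """Placeholder for a SCAM5 run returning, e.g., liquid water path."""
        noise = np.random.default_rng(1).normal(0, 1, len(p))
        return 50 + 30 * p[:, 0] - 10 * p[:, 2] + noise

    y = toy_model(params)

    # Generalized-linear-model style attribution: standardized regression
    # coefficients indicate how strongly each parameter drives the response.
    X = (params - params.mean(0)) / params.std(0)
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
    for name, c in zip(names, coef[1:]):
        print(f"{name}: {c:+.2f}")
    ```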

  17. Consumer Security Perceptions and the Perceived Influence on Adopting Cloud Computing: A Quantitative Study Using the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Paquet, Katherine G.

    2013-01-01

    Cloud computing may provide cost benefits for organizations by eliminating the overhead costs of software, hardware, and maintenance (e.g., license renewals, upgrading software, servers and their physical storage space, administration along with funding a large IT department). In addition to the promised savings, the organization may require…

  18. 76 FR 67418 - Request for Comments on NIST Special Publication 500-293, US Government Cloud Computing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ...The National Institute of Standards and Technology (NIST) publishes this notice to seek public comments on the first draft of Special Publication 500-293, US Government Cloud Computing Technology Roadmap, Release 1.0 (Draft). This document is intended to be the mechanism to define and communicate interoperability, portability, and security requirement priorities that must be met in terms of......

  19. Computational opportunities for remote collaboration and capacity building afforded by Web 2.0 and cloud computing

    PubMed Central

    Olson, Wilma K.; dos Remedios, Cristobal G.

    2012-01-01

    In this paper, we state our aims and aspirations for building a global network of likeminded people interested in developing and encouraging students in the field of computational biophysics (CB). Global capacity building efforts have uncovered local computational talent in virtually every community regardless of where the students reside. Our vision is to discover and encourage these aspiring investigators by suggesting ways that they and other “garage scientists” can participate in new science even if they have no access to sophisticated research infrastructure. We argue that participatory computing in the “cloud” is particularly suitable for CB and available to any budding computational biophysicist if he or she is provided with open-minded mentors who have the necessary skills and generosity. We recognize that there are barriers to the development of such remote collaborations, and we discuss possible pathways to overcome these barriers. We point out that this Special Issue of Biophysical Reviews provides a much-needed forum for the development of several specific applications of CB. PMID:23066431

  20. CyberPsychological Computation on Social Community of Ubiquitous Learning.

    PubMed

    Zhou, Xuan; Dai, Genghui; Huang, Shuang; Sun, Xuemin; Hu, Feng; Hu, Hongzhi; Ivanović, Mirjana

    2015-01-01

    Under the modern network environment, ubiquitous learning has been a popular way for people to study knowledge, exchange ideas, and share skills in the cyberspace. Existing research findings indicate that the learners' initiative and community cohesion play vital roles in the social communities of ubiquitous learning, and therefore how to stimulate the learners' interest and participation willingness so as to improve their enjoyable experiences in the learning process should be the primary consideration on this issue. This paper aims to explore an effective method to monitor the learners' psychological reactions based on their behavioral features in cyberspace and therefore provide useful references for adjusting the strategies in the learning process. In doing so, this paper firstly analyzes the psychological assessment of the learners' situations as well as their typical behavioral patterns and then discusses the relationship between the learners' psychological reactions and their observable features in cyberspace. Finally, this paper puts forward a CyberPsychological computation method to estimate the learners' psychological states online. Considering the diversity of learners' habitual behaviors in the reactions to their psychological changes, a BP-GA neural network is proposed for the computation based on their personalized behavioral patterns. PMID:26557846

  1. CyberPsychological Computation on Social Community of Ubiquitous Learning

    PubMed Central

    Zhou, Xuan; Dai, Genghui; Huang, Shuang; Sun, Xuemin; Hu, Feng; Hu, Hongzhi; Ivanović, Mirjana

    2015-01-01

    Under the modern network environment, ubiquitous learning has been a popular way for people to study knowledge, exchange ideas, and share skills in the cyberspace. Existing research findings indicate that the learners' initiative and community cohesion play vital roles in the social communities of ubiquitous learning, and therefore how to stimulate the learners' interest and participation willingness so as to improve their enjoyable experiences in the learning process should be the primary consideration on this issue. This paper aims to explore an effective method to monitor the learners' psychological reactions based on their behavioral features in cyberspace and therefore provide useful references for adjusting the strategies in the learning process. In doing so, this paper firstly analyzes the psychological assessment of the learners' situations as well as their typical behavioral patterns and then discusses the relationship between the learners' psychological reactions and their observable features in cyberspace. Finally, this paper puts forward a CyberPsychological computation method to estimate the learners' psychological states online. Considering the diversity of learners' habitual behaviors in the reactions to their psychological changes, a BP-GA neural network is proposed for the computation based on their personalized behavioral patterns. PMID:26557846

  2. Computational meta'omics for microbial community studies

    PubMed Central

    Segata, Nicola; Boernigen, Daniela; Tickle, Timothy L; Morgan, Xochitl C; Garrett, Wendy S; Huttenhower, Curtis

    2013-01-01

    Complex microbial communities are an integral part of the Earth's ecosystem and of our bodies in health and disease. In the last two decades, culture-independent approaches have provided new insights into their structure and function, with the exponentially decreasing cost of high-throughput sequencing resulting in broadly available tools for microbial surveys. However, the field remains far from reaching a technological plateau, as both computational techniques and nucleotide sequencing platforms for microbial genomic and transcriptional content continue to improve. Current microbiome analyses are thus starting to adopt multiple and complementary meta'omic approaches, leading to unprecedented opportunities to comprehensively and accurately characterize microbial communities and their interactions with their environments and hosts. This diversity of available assays, analysis methods, and public data is in turn beginning to enable microbiome-based predictive and modeling tools. We thus review here the technological and computational meta'omics approaches that are already available, those that are under active development, their success in biological discovery, and several outstanding challenges. PMID:23670539

  3. Toward a web-based real-time radiation treatment planning system in a cloud computing environment.

    PubMed

    Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei

    2013-09-21

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm(2)) from the Varian TrueBeam(TM) STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are

  4. Toward a web-based real-time radiation treatment planning system in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Hum Na, Yong; Suh, Tae-Suk; Kapp, Daniel S.; Xing, Lei

    2013-09-01

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an ‘on-demand’ basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer’s constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm2) from the Varian TrueBeamTM STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are identical

  5. The Choice Not To Use Computers: A Case Study of Community College Faculty Who Do Not Use Computers in Teaching.

    ERIC Educational Resources Information Center

    Stocker, Bradford R.

    This dissertation studies the motives of community college faculty who decide not to use computers in teaching. For the purpose of the study, non-adoption of computers in teaching is defined as not using computers for more than word processing. In spite of the fact that many of the environmental blocks that inhibit the use of computers have been…

  6. Delivering Unidata Technology via the Cloud

    NASA Astrophysics Data System (ADS)

    Fisher, Ward; Oxelson Ganter, Jennifer

    2016-04-01

    Over the last two years, Docker has emerged as the clear leader in open-source containerization. Containerization technology provides a means by which software can be pre-configured and packaged into a single unit, i.e. a container. This container can then be easily deployed either on local or remote systems. Containerization is particularly advantageous when moving software into the cloud, as it simplifies the process. Unidata is adopting containerization as part of our commitment to migrate our technologies to the cloud. We are using a two-pronged approach in this endeavor. In addition to migrating our data-portal services to a cloud environment, we are also exploring new and novel ways to use cloud-specific technology to serve our community. This effort has resulted in several new cloud/Docker-specific projects at Unidata: "CloudStream," "CloudIDV," and "CloudControl." CloudStream is a docker-based technology stack for bringing legacy desktop software to new computing environments, without the need to invest significant engineering/development resources. CloudStream helps make it easier to run existing software in a cloud environment via a technology called "Application Streaming." CloudIDV is a CloudStream-based implementation of the Unidata Integrated Data Viewer (IDV). CloudIDV serves as a practical example of application streaming, and demonstrates how traditional software can be easily accessed and controlled via a web browser. Finally, CloudControl is a web-based dashboard which provides administrative controls for running docker-based technologies in the cloud, as well as providing user management. In this work we will give an overview of these three open-source technologies and the value they offer to our community.
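
    Unidata's deployment scripts are not part of the record; as a minimal sketch of the basic containerization step behind services like CloudIDV, the example below uses the Docker SDK for Python to pull an image and run it detached with a port mapping. The image and container names are placeholders.

    ```python
    import docker  # Docker SDK for Python (pip install docker)

    def launch(image, host_port, container_port=80):
        """Pull an image and run it detached with a simple port mapping."""
        client = docker.from_env()
        client.images.pull(image)
        container = client.containers.run(
            image,
            detach=True,
            ports={f"{container_port}/tcp": host_port},
            name="demo-service",
        )
        return container

    if __name__ == "__main__":
        # Hypothetical image name; substitute the container you actually deploy.
        c = launch("nginx:latest", host_port=8080)
        print(c.id, c.status)
    ```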

  7. ABrIL - Advanced Brain Imaging Lab : a cloud based computation environment for cooperative neuroimaging projects.

    PubMed

    Neves Tafula, Sérgio M; Moreira da Silva, Nádia; Rozanski, Verena E; Silva Cunha, João Paulo

    2014-01-01

    Neuroscience is an increasingly multidisciplinary and highly cooperative field where neuroimaging plays an important role. The rapid evolution of neuroimaging demands a growing amount of computing resources and skills that need to be put in place at every lab. Typically, each group tries to set up its own servers and workstations to support its neuroimaging needs, having to learn everything from operating system management to the details of specific neuroscience software tools before any results can be obtained from each setup. This setup and learning process is replicated in every lab, even when a strong collaboration among several groups is going on. In this paper we present a new cloud service model - Brain Imaging Application as a Service (BiAaaS) - and one of its implementations - the Advanced Brain Imaging Lab (ABrIL) - in the form of a ubiquitous virtual desktop remote infrastructure that offers a set of neuroimaging computational services in an interactive neuroscientist-friendly graphical user interface (GUI). This remote desktop has been used for several multi-institution cooperative projects with different neuroscience objectives that have already achieved important results, such as the contribution to a high impact paper published in the January issue of the NeuroImage journal. The ABrIL system has shown its applicability in several neuroscience projects with a relatively low cost, promoting truly collaborative actions and speeding up project results and their clinical applicability. PMID:25570014

  8. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction

    PubMed Central

    Nezarat, Amin; Dastghaibifard, GH

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability and, on the other hand, users expect to have the best resources at their disposal considering budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is based on economic methods, using such methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game theory mechanism and holding a repetitive game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used and the proposed model is simulated in CloudSim and the results are compared with previous work. In the end, it is concluded that this method converges to a response in a shorter time, provides the lowest service level agreement violations and the most utility to the provider. PMID:26431035

  9. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction.

    PubMed

    Nezarat, Amin; Dastghaibifard, G H

    2015-01-01

    One of the most complex issues in the cloud computing environment is resource allocation: on one hand, the cloud provider expects the highest profitability and, on the other hand, users expect the best resources at their disposal given their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since this environment is economic in nature, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game theory mechanism and holding a repetitive game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bids for the resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the fewest service-level-agreement violations, and yields the most utility to the provider. PMID:26431035

  10. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment

    PubMed Central

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-01-01

    Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. Methods: In this work, we accelerated the Feldkamp–Davis–Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Results: Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10⁻⁷. Our study also proved that cloud computing with MapReduce is fault tolerant: the
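
    As a toy illustration of the Map/Reduce split described above (mappers filter and backproject subsets of projections, a reducer sums the partial volumes), the Python sketch below mimics the pattern outside Hadoop using a simplified 2-D parallel-beam geometry rather than the cone-beam FDK geometry; the array shapes, unwindowed ramp filter, and rotation-based backprojector are assumptions for illustration only.

      # Toy Map/Reduce-style filtered backprojection (2-D parallel beam, not FDK).
      from functools import reduce
      import numpy as np
      from scipy.ndimage import rotate

      def ramp_filter(proj):
          """Apply a 1-D ramp filter in frequency space (simplified, no windowing)."""
          freqs = np.fft.fftfreq(proj.size)
          return np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))

      def map_task(chunk):
          """Mapper: filter and backproject one subset of (projection, angle) pairs."""
          size = chunk[0][0].size
          partial = np.zeros((size, size))
          for proj, angle_deg in chunk:
              smear = np.tile(ramp_filter(proj), (size, 1))       # replicate filtered row
              partial += rotate(smear, angle_deg, reshape=False)  # smear back through volume
          return partial

      def reduce_task(vol_a, vol_b):
          """Reducer: aggregate partial backprojections into the whole volume."""
          return vol_a + vol_b

      # Usage with synthetic data: 8 'workers', each handling a slice of the angles.
      angles = np.linspace(0.0, 180.0, 64, endpoint=False)
      projections = [np.random.rand(128) for _ in angles]         # stand-in sinogram rows
      pairs = list(zip(projections, angles))
      chunks = [pairs[i::8] for i in range(8)]
      volume = reduce(reduce_task, map(map_task, chunks))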

  11. New framework for extending cloud chemistry in the Community Multiscale Air Quality (CMAQ) modeling

    EPA Science Inventory

    Clouds and fogs significantly impact the amount, composition, and spatial distribution of gas and particulate atmospheric species, not least through the chemistry that occurs in cloud droplets. Atmospheric sulfate is an important component of fine aerosol mass and in an...

  12. Proposed Use of the NASA Ames Nebula Cloud Computing Platform for Numerical Weather Prediction and the Distribution of High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Limaye, A.; Molthan, A.

    2010-12-01

    The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the “infrastructure as a service” concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.

  13. 42 CFR 417.594 - Computation of adjusted community rate (ACR).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Computation of adjusted community rate (ACR). 417... adjusted community rate (ACR). (a) Basic rule. Each HMO or CMP must compute its basic rate as follows: (1... community rating system as defined in § 417.104(b); or (ii) A system, approved by CMS, under which the...

  14. 42 CFR 417.594 - Computation of adjusted community rate (ACR).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Computation of adjusted community rate (ACR). 417... adjusted community rate (ACR). (a) Basic rule. Each HMO or CMP must compute its basic rate as follows: (1... community rating system as defined in § 417.104(b); or (ii) A system, approved by CMS, under which the...

  15. 42 CFR 417.594 - Computation of adjusted community rate (ACR).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Computation of adjusted community rate (ACR). 417... adjusted community rate (ACR). (a) Basic rule. Each HMO or CMP must compute its basic rate as follows: (1... community rating system as defined in § 417.104(b); or (ii) A system, approved by CMS, under which the...

  16. SciServer Compute brings Analysis to Big Data in the Cloud

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally – but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts
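
    A minimal notebook-style sketch of the "bring the analysis to the data" workflow, assuming the SciServer Python package and its CasJobs.executeQuery helper are available inside a Compute container; the SQL, the context name, and the column names below are placeholders, not a real schema.

      # Hypothetical server-side query from a SciServer Compute notebook.
      # Assumes the SciServer Python package is installed in the container;
      # query text and context name are placeholders.
      from SciServer import CasJobs

      sql = """
      SELECT TOP 1000 objid, ra, dec, petroMag_r
      FROM PhotoObj
      WHERE petroMag_r BETWEEN 15 AND 18
      """

      # Run the query next to the data; recent package versions return a DataFrame.
      result = CasJobs.executeQuery(sql, context="DR12")   # context name is an assumption
      print(result.head() if hasattr(result, "head") else result)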

  17. Integrating Remote Sensing Data, Hybrid-Cloud Computing, and Event Notifications for Advanced Rapid Imaging & Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making from better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets also presents a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over 3-years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, integrating event notifications from USGS to improve the timely processing for response, as well as providing browse results for quick looks with other tools for integrative analysis.

  18. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    SciTech Connect

    Wang, Z; Gao, M

    2014-06-01

    Purpose: Monte Carlo (MC) simulation plays an important role in the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirements, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and worker nodes were created as spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulations. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate PBS MC studies, especially for newly established proton centers or individual researchers.
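
    The quoted cluster costs imply a simple linear cost model (one on-demand master plus spot-priced workers). A back-of-the-envelope check follows; the hourly prices are assumptions chosen only to roughly reproduce the quoted figures, not published rates.

      # Rough cost check: one on-demand master plus (n - 1) spot workers.
      # Prices below are assumptions, not official EC2 rates.
      ON_DEMAND_MASTER_PER_HOUR = 0.087   # assumed m1.medium on-demand price, $/h
      SPOT_WORKER_PER_HOUR = 0.013        # assumed m1.medium spot price, $/h

      def hourly_cluster_cost(n_nodes):
          return ON_DEMAND_MASTER_PER_HOUR + (n_nodes - 1) * SPOT_WORKER_PER_HOUR

      for n in (40, 100):
          print(f"{n:3d} nodes: ~${hourly_cluster_cost(n):.2f}/hour")
      # ~$0.59/hour for 40 nodes and ~$1.37/hour for 100 nodes with these assumed
      # prices, in the same ballpark as the $0.63 and $1.41 quoted in the abstract.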

  19. The cloud services innovation platform - enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Service-oriented architectures allow modelling engines to be hosted over the Internet, abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on users' personal computers (PCs). Migration ...

  20. Advancing global marine biogeography research with open-source GIS software and cloud-computing

    USGS Publications Warehouse

    Fujioka, Ei; Vanden Berghe, Edward; Donnelly, Ben; Castillo, Julio; Cleary, Jesse; Holmes, Chris; McKnight, Sean; Halpin, Patrick

    2012-01-01

    Across many scientific domains, the ability to aggregate disparate datasets enables more meaningful global analyses. Within marine biology, the Census of Marine Life served as the catalyst for such a global data aggregation effort. Under the Census framework, the Ocean Biogeographic Information System was established to coordinate an unprecedented aggregation of global marine biogeography data. The OBIS data system now contains 31.3 million observations, freely accessible through a geospatial portal. The challenges of storing, querying, disseminating, and mapping a global data collection of this complexity and magnitude are significant. In the face of declining performance and expanding feature requests, a redevelopment of the OBIS data system was undertaken. Following an Open Source philosophy, the OBIS technology stack was rebuilt using PostgreSQL, PostGIS, GeoServer and OpenLayers. This approach has markedly improved the performance and online user experience while maintaining a standards-compliant and interoperable framework. Due to the distributed nature of the project and increasing needs for storage, scalability and deployment flexibility, the entire hardware and software stack was built on a Cloud Computing environment. The flexibility of the platform, combined with the power of the application stack, enabled rapid re-development of the OBIS infrastructure, and ensured complete standards-compliance.
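
    As a hypothetical illustration of the kind of spatial query a rebuilt PostgreSQL/PostGIS stack serves, the sketch below counts records per species inside a bounding box; the table name, columns, and connection details are placeholders, not the actual OBIS schema.

      # Hypothetical PostGIS query against an occurrence table; names are placeholders.
      import psycopg2

      BBOX_WKT = "POLYGON((-80 20, -60 20, -60 40, -80 40, -80 20))"  # lon/lat box

      conn = psycopg2.connect("dbname=obis user=reader host=localhost")
      with conn, conn.cursor() as cur:
          cur.execute(
              """
              SELECT species_name, COUNT(*)
              FROM occurrences
              WHERE ST_Within(geom, ST_GeomFromText(%s, 4326))
              GROUP BY species_name
              ORDER BY COUNT(*) DESC
              LIMIT 10;
              """,
              (BBOX_WKT,),
          )
          for name, n in cur.fetchall():
              print(f"{name}: {n} records")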

  1. The Einstein Toolkit: a community computational infrastructure for relativistic astrophysics

    NASA Astrophysics Data System (ADS)

    Löffler, Frank; Faber, Joshua; Bentivegna, Eloisa; Bode, Tanja; Diener, Peter; Haas, Roland; Hinder, Ian; Mundim, Bruno C.; Ott, Christian D.; Schnetter, Erik; Allen, Gabrielle; Campanelli, Manuela; Laguna, Pablo

    2012-06-01

    We describe the Einstein Toolkit, a community-driven, freely accessible computational infrastructure intended for use in numerical relativity, relativistic astrophysics, and other applications. The toolkit, developed by a collaboration involving researchers from multiple institutions around the world, combines a core set of components needed to simulate astrophysical objects such as black holes, compact objects, and collapsing stars, as well as a full suite of analysis tools. The Einstein Toolkit is currently based on the Cactus framework for high-performance computing and the Carpet adaptive mesh refinement driver. It implements spacetime evolution via the BSSN evolution system and general relativistic hydrodynamics in a finite-volume discretization. The toolkit is under continuous development and contains many new code components that have been publicly released for the first time and are described in this paper. We discuss the motivation behind the release of the toolkit, the philosophy underlying its development, and the goals of the project. A summary of the implemented numerical techniques is included, as are results of numerical tests covering a variety of sample astrophysical problems.

  2. Sensitivity Studies of Dust Ice Nuclei Effect on Cirrus Clouds with the Community Atmosphere Model CAM5

    NASA Technical Reports Server (NTRS)

    Liu, Xiaohong; Zhang, Kai; Jensen, Eric J.; Gettelman, Andrew; Barahona, Donifan; Nenes, Athanasios; Lawson, Paul

    2012-01-01

    In this study the effect of dust aerosol on upper tropospheric cirrus clouds through heterogeneous ice nucleation is investigated in the Community Atmospheric Model version 5 (CAM5) with two ice nucleation parameterizations. Both parameterizations consider homogeneous and heterogeneous nucleation and the competition between the two mechanisms in cirrus clouds, but differ significantly in the number concentration of heterogeneous ice nuclei (IN) from dust. Heterogeneous nucleation on dust aerosol reduces the occurrence frequency of homogeneous nucleation and thus the ice crystal number concentration in the Northern Hemisphere (NH) cirrus clouds compared to simulations with pure homogeneous nucleation. Global and annual mean shortwave and longwave cloud forcing are reduced by up to 2.0 ± 0.1 W m⁻² (1σ uncertainty) and 2.4 ± 0.1 W m⁻², respectively due to the presence of dust IN, with the net cloud forcing change of -0.40 ± 0.20 W m⁻². Comparison of model simulations with in situ aircraft data obtained in NH mid-latitudes suggests that homogeneous ice nucleation may play an important role in the ice nucleation at these regions with temperatures of 205-230 K. However, simulations overestimate observed ice crystal number concentrations in the tropical tropopause regions with temperatures of 190-205 K, and overestimate the frequency of occurrence of high ice crystal number concentration (greater than 200 L⁻¹) and underestimate the frequency of low ice crystal number concentration (less than 30 L⁻¹) at NH mid-latitudes. These results highlight the importance of quantifying the number concentrations and properties of heterogeneous IN (including dust aerosol) in the upper troposphere from the global perspective.

  3. Sensitivity Studies of Dust Ice Nuclei Effect on Cirrus Clouds with the Community Atmosphere Model CAM5

    SciTech Connect

    Liu, Xiaohong; Shi, Xiangjun; Zhang, Kai; Jensen, Eric; Gettelman, A.; Barahona, Donifan; Nenes, Athanasios; Lawson, Paul

    2012-12-19

    In this study the effect of dust aerosol on upper tropospheric cirrus clouds through heterogeneous ice nucleation is investigated in the Community Atmospheric Model version 5 (CAM5) with two ice nucleation parameterizations. Both parameterizations consider homogeneous and heterogeneous nucleation and the competition between the two mechanisms in cirrus clouds, but differ significantly in the number concentration of heterogeneous ice nuclei (IN) from dust. Heterogeneous nucleation on dust aerosol reduces the occurrence frequency of homogeneous nucleation and thus the ice crystal number concentration in the Northern Hemisphere (NH) cirrus clouds compared to simulations with pure homogeneous nucleation. Global and annual mean shortwave and longwave cloud forcing are reduced by up to 2.0 ± 0.1 W m⁻² (1σ uncertainty) and 2.4 ± 0.1 W m⁻², respectively due to the presence of dust IN, with the net cloud forcing change of -0.40 ± 0.20 W m⁻². Comparison of model simulations with in situ aircraft data obtained in NH mid-latitudes suggests that homogeneous ice nucleation may play an important role in the ice nucleation at these regions with temperatures of 205–230 K. However, simulations overestimate observed ice crystal number concentrations in the tropical tropopause regions with temperatures of 190–205 K, and overestimate the frequency of occurrence of high ice crystal number concentration (> 200 L⁻¹) and underestimate the frequency of low ice crystal number concentration (< 30 L⁻¹) at NH mid-latitudes. These results highlight the importance of quantifying the number concentrations and properties of heterogeneous IN (including dust aerosol) in the upper troposphere from the global perspective.

  4. Cloud Cover

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2012-01-01

    This article features a major statewide initiative in North Carolina that is showing how a consortium model can minimize risks for districts and help them exploit the advantages of cloud computing. Edgecombe County Public Schools in Tarboro, North Carolina, intends to exploit a major cloud initiative being refined in the state and involving every…

  5. Leveraging On-premise and Public Cloud Computing to Enable Advanced Rapid Imaging & Analysis for Monitoring Hazards (Invited)

    NASA Astrophysics Data System (ADS)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Moore, A. W.; Fielding, E. J.; Agram, P.; Manipon, G.; Simons, M.; Rosen, P. A.; Stough, T. M.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Many of the fundamental processes underlying hazards such as earthquakes and volcanoes are poorly understood. Hazard systems are difficult to replicate in lab environments, and so we need to observe them in 'natural laboratories'. The global coverage offered by satellite-based SAR missions and rapidly expanding GPS networks can provide orders of magnitude more observations. These combined geodetic data products will enable greater understanding of processes leading up to, during, and after natural and man-made disasters. However, a science data system is needed that can efficiently monitor and analyze the voluminous data and provide users the tools to access the data products. In the interpretation process from observations to decision-making, data from observations are first used to improve the understanding of the physical processes, which then leads to more informed knowledge. However, the need to handle high data volumes and the required processing expertise are often bottlenecks to providing the data product streams needed for improved decision-making. To help address lower-latency and high-data-volume needs for monitoring and response to globally distributed hazards, we leveraged a hybrid-cloud computing approach that utilizes a seamless mixture of an on-premise Eucalyptus-based cloud computing environment with public Amazon Web Services (AWS) cloud computing resources. We will present some findings on the automation of geodetic processing, use of hybrid-cloud computing to address on-premise resource constraint issues, scalability issues in processing latency and data movement, as well as data discovery, access, and integration for other tools for location analytics.

  6. A Cloud Computing Approach to Personal Risk Management: The Open Hazards Group

    NASA Astrophysics Data System (ADS)

    Graves, W. R.; Holliday, J. R.; Rundle, J. B.

    2010-12-01

    According to the California Earthquake Authority, only about 12% of current California residences are covered by any form of earthquake insurance, down from about 30% in 1996 following the 1994 M6.7 Northridge earthquake. Part of the reason for this decreasing rate of insurance uptake is the high deductible, either 10% or 15% of the value of the structure, and the relatively high cost of the premiums, as much as thousands of dollars per year. The earthquake insurance industry is composed of the CEA, a public-private partnership; modeling companies that produce damage and loss models similar to the FEMA HAZUS model; and financial companies such as the insurance, reinsurance, and investment banking companies in New York, London, the Cayman Islands, Zurich, Dubai, Singapore, and elsewhere. In setting earthquake insurance rates, financial companies rely on models like HAZUS that calculate risk and exposure. In California, the process begins with an official earthquake forecast by the Working Group on California Earthquake Probabilities. Modeling companies use these 30-year earthquake probabilities as inputs to their attenuation and damage models to estimate the possible damage factors from scenario earthquakes. Economic loss is then estimated from processes such as structural failure, lost economic activity, demand surge, and fire following the earthquake. Once the potential losses are known, rates can be set so that a target ruin probability of less than 1% or so can be assured. Open Hazards Group was founded with the idea that the global public might be interested in a personal estimate of earthquake risk, computed using data supplied by the public, with models running in a cloud computing environment. These models process data from the ANSS catalog, updated at least daily, to produce rupture forecasts that are backtested with standard Reliability/Attributes and Receiver Operating Characteristic tests, among others. Models for attenuation and structural damage

  7. Cloud Based Applications and Platforms (Presentation)

    SciTech Connect

    Brodt-Giles, D.

    2014-05-15

    Presentation to the Cloud Computing East 2014 Conference, where we are highlighting our cloud computing strategy, describing the platforms on the cloud (including Smartgrid.gov), and defining our process for implementing cloud based applications.

  8. Towards a Low-Cost Real-Time Photogrammetric Landslide Monitoring System Utilising Mobile and Cloud Computing Technology

    NASA Astrophysics Data System (ADS)

    Chidburee, P.; Mills, J. P.; Miller, P. E.; Fieber, K. D.

    2016-06-01

    Close-range photogrammetric techniques offer a potentially low-cost approach in terms of implementation and operation for initial assessment and monitoring of landslide processes over small areas. In particular, the Structure-from-Motion (SfM) pipeline is now extensively used to help overcome many constraints of traditional digital photogrammetry, offering increased user-friendliness to nonexperts, as well as lower costs. However, a landslide monitoring approach based on the SfM technique also presents some potential drawbacks due to the difficulty in managing and processing a large volume of data in real-time. This research addresses the aforementioned issues by attempting to combine a mobile device with cloud computing technology to develop a photogrammetric measurement solution as part of a monitoring system for landslide hazard analysis. The research presented here focusses on (i) the development of an Android mobile application; (ii) the implementation of SfM-based open-source software in the Amazon cloud computing web service, and (iii) performance assessment through a simulated environment using data collected at a recognized landslide test site in North Yorkshire, UK. Whilst the landslide monitoring mobile application is under development, this paper describes experiments carried out to ensure effective performance of the system in the future. Investigations presented here describe the initial assessment of a cloud-implemented approach, which is developed around the well-known VisualSFM algorithm. Results are compared to point clouds obtained from alternative SfM 3D reconstruction approaches considering a commercial software solution (Agisoft PhotoScan) and a web-based system (Autodesk 123D Catch). Investigations demonstrate that the cloud-based photogrammetric measurement system is capable of providing results of centimeter-level accuracy, evidencing its potential to provide an effective approach for quantifying and analyzing landslide hazard at a local-scale.
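
    One plausible way to quantify agreement between point clouds from different SfM reconstructions, of the kind compared above, is a nearest-neighbour cloud-to-cloud distance; the sketch below uses synthetic data and is not the assessment procedure actually used in the paper.

      # Illustrative cloud-to-cloud comparison using nearest-neighbour distances.
      import numpy as np
      from scipy.spatial import cKDTree

      def cloud_to_cloud_distance(reference_xyz, test_xyz):
          """Median and 95th-percentile nearest-neighbour distance (units of the input)."""
          tree = cKDTree(reference_xyz)
          distances, _ = tree.query(test_xyz, k=1)
          return np.median(distances), np.percentile(distances, 95)

      # Synthetic stand-ins for point clouds from two reconstructions (metres).
      rng = np.random.default_rng(0)
      reference = rng.uniform(0, 10, size=(5000, 3))
      test = reference + rng.normal(scale=0.02, size=reference.shape)  # ~2 cm per-axis noise

      median_d, p95_d = cloud_to_cloud_distance(reference, test)
      print(f"median: {median_d*100:.1f} cm, 95th percentile: {p95_d*100:.1f} cm")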

  9. Thunderstorm-associated cloud motions as computed from 5-minute SMS pictures. [Synchronous Meteorological Satellite

    NASA Technical Reports Server (NTRS)

    Tecson, J. J.; Umenhofer, T. A.; Fujita, T. T.

    1977-01-01

    The five-minute rapid-scan imagery from the Synchronous Meteorological Satellite is employed to study cloud motions associated with the Omaha tornado of May 6, 1975. Cloud-motion vectors derived from automated and man-machine interactive systems provide an account of the mesoscale phenomena. In addition to the geostationary satellite data, aerial photography obtained during a cloud-truth mission is used in the severe storm investigation. For tracking overland cumuli with short half-lives, a three-minute scan interval appears necessary for the satellite imagery.

  10. Climate Simulations Using the Community Atmosphere Model Coupled with a Multi-Variate PDF-Based Cloud Scheme

    NASA Astrophysics Data System (ADS)

    Bogenschutz, P.; Gettelman, A.; Larson, V. E.; Morrison, H.; Chen, C. C.; Thayer-Calder, K.; Craig, C.

    2014-12-01

    Supported by funding through a Climate Process Team (CPT), we have implemented a multi-variate probability density function (PDF) cloud and turbulence scheme into NCAR's Community Atmosphere Model (CAM). The parameterization is known as Cloud Layers Unified by Binormals (CLUBB) and is an incomplete third-order turbulence closure centered around a double-Gaussian assumed PDF. CLUBB replaces the existing planetary boundary layer, shallow convection, and cloud macrophysics schemes in CAM with a unified parameterization that drives one double-moment microphysics scheme. This presentation documents the performance of CAM-CLUBB for both prescribed sea surface temperature (SST) and coupled simulations. We will discuss improvements in the mean-state climate relative to CAM5, such as improved stratocumulus-to-cumulus transitions. In addition, CAM-CLUBB is able to improve many long-standing issues that general circulation models (GCMs) struggle to simulate realistically, such as the Madden-Julian Oscillation (MJO), the diurnal cycle of precipitation, and the frequency and intensity of precipitation. We will also discuss preliminary work being done to use CLUBB as a deep convection scheme in CAM.

  11. Academic/Instructional Computing in the Community and Junior College: Its Role and Its Institutional Implications.

    ERIC Educational Resources Information Center

    Creutz, Alan

    With the dramatic growth in the use of computers in recent years, questions have been raised concerning the role of computer education in community colleges. Four principal reasons can be advanced for implementing an academic/instructional computing program: (1) to increase computer awareness among students to prepare them for the growing number…

  12. Integration of cloud-based storage in BES III computing environment

    NASA Astrophysics Data System (ADS)

    Wang, L.; Hernandez, F.; Deng, Z.

    2014-06-01

    We present an on-going work that aims to evaluate the suitability of cloud-based storage as a supplement to the Lustre file system for storing experimental data for the BES III physics experiment and as a backend for storing files belonging to individual members of the collaboration. In particular, we discuss our findings regarding the support of cloud-based storage in the software stack of the experiment. We report on our development work that improves the support of CERN's ROOT data analysis framework and allows efficient remote access to data through several cloud storage protocols. We also present our efforts providing the experiment with efficient command line tools for navigating and interacting with cloud storage-based data repositories both from interactive sessions and grid jobs.
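
    The experiment's own cloud-storage protocol plugins and command-line tools are not reproduced in the record. As a generic illustration of remote file access from ROOT via PyROOT only, with a placeholder URL and an assumed tree name:

      # Generic PyROOT remote-file access sketch; URL and tree name are placeholders.
      import ROOT

      url = "https://example.org/bes3/sample.root"   # placeholder remote location
      f = ROOT.TFile.Open(url)                       # ROOT selects a protocol handler
      if f and not f.IsZombie():
          tree = f.Get("events")                     # assumed tree name
          print("entries:", tree.GetEntries() if tree else 0)
          f.Close()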

  13. [Construction and analysis of a monitoring system with remote real-time multiple physiological parameters based on cloud computing].

    PubMed

    Zhu, Lingyun; Li, Lianjie; Meng, Chunyan

    2014-12-01

    Existing multiple-physiological-parameter real-time monitoring systems suffer from problems such as insufficient server capacity for physiological data storage and analysis (so that data consistency cannot be guaranteed), poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new solution for multiple-physiological-parameter monitoring with clustered background data storage and processing based on cloud computing. In our studies, batch processing for longitudinal analysis of patients' historical data was introduced. The work covered the resource virtualization of the IaaS layer of the cloud platform, the construction of a real-time computing platform in the PaaS layer, the reception and analysis of data streams in the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage, and analysis of large amounts of physiological data. The simulation test results showed that the remote multiple-physiological-parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved problems that exist in traditional remote medical services, including long turnaround time, poor real-time analysis performance, and lack of extensibility. Technical support was thus provided for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode moving towards home-based multiple-physiological-parameter wireless health monitoring. PMID:25868263

  14. Computer Training for Seniors: An Academic-Community Partnership

    ERIC Educational Resources Information Center

    Sanders, Martha J.; O'Sullivan, Beth; DeBurra, Katherine; Fedner, Alesha

    2013-01-01

    Computer technology is integral to information retrieval, social communication, and social interaction. However, only 47% of seniors aged 65 and older use computers. The purpose of this study was to determine the impact of a client-centered computer program on computer skills, attitudes toward computer use, and generativity in novice senior…

  15. Leveraging Cloud Technology to Provide a Responsive, Reliable and Scalable Backend for the Virtual Ice Sheet Laboratory Using the Ice Sheet System Model and Amazon's Elastic Compute Cloud

    NASA Astrophysics Data System (ADS)

    Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.

    2015-12-01

    The Virtual Ice Sheet Laboratory (VISL) is a Cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students, while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on-demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly devaluing hardware.
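
    As a generic illustration of the kind of on-demand provisioning such a backend could use (not VISL's actual code), a boto3 sketch follows; the AMI ID, instance type, region, and tags are placeholders.

      # Hypothetical on-demand EC2 provisioning with boto3; identifiers are placeholders.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-west-2")
      response = ec2.run_instances(
          ImageId="ami-0123456789abcdef0",   # placeholder image with the model pre-installed
          InstanceType="c5.xlarge",          # placeholder instance type
          MinCount=1,
          MaxCount=1,
          TagSpecifications=[{
              "ResourceType": "instance",
              "Tags": [{"Key": "project", "Value": "visl-demo"}],
          }],
      )
      print("launched:", response["Instances"][0]["InstanceId"])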

  16. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  17. A method of extracting ontology module using concept relations for sharing knowledge in mobile cloud computing environment.

    PubMed

    Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won

    2014-01-01

    In a mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all members employed in the cooperation group need to share knowledge for mutual understanding. Although an ontology can be the right tool for this goal, there are several issues in building the right ontology. As the cost and complexity of managing knowledge increase with the scale of the knowledge, reducing the size of the ontology is one of the critical issues. In this paper, we propose a method of extracting an ontology module to increase the utility of knowledge. For a given signature, this method extracts an ontology module that is semantically self-contained and fulfills the needs of the service, by considering the syntactic structure and semantic relations of concepts. By employing this module instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of the shared knowledge. PMID:25250374
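
    A toy sketch of signature-driven module extraction, treating concept relations as a graph and taking the reachability closure of the signature; the paper's actual syntactic and semantic locality criteria are richer than this, and the example ontology below is made up.

      # Toy module extraction: starting from the signature concepts, follow concept
      # relations until the sub-ontology is self-contained (plain reachability closure).
      from collections import deque

      def extract_module(relations, signature):
          """relations: dict mapping a concept to the set of directly related concepts."""
          module, queue = set(signature), deque(signature)
          while queue:
              concept = queue.popleft()
              for related in relations.get(concept, ()):
                  if related not in module:      # pull in everything the service needs
                      module.add(related)
                      queue.append(related)
          return module

      ontology = {
          "BloodPressureService": {"BloodPressure", "Patient"},
          "BloodPressure": {"Measurement"},
          "Patient": {"Person"},
          "ECGService": {"ECG", "Patient"},      # unrelated branch stays out of the module
      }
      print(sorted(extract_module(ontology, {"BloodPressureService"})))
      # -> ['BloodPressure', 'BloodPressureService', 'Measurement', 'Patient', 'Person']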

  18. A Cloud Computing Workflow for Scalable Integration of Remote Sensing and Social Media Data in Urban Studies

    NASA Astrophysics Data System (ADS)

    Soliman, A.; Soltani, K.; Yin, J.; Subramaniam, B.; Liu, Y.; Padmanabhan, A.; Riteau, P.; Keahey, K.; Wang, S. W.

    2015-12-01

    Urban ecosystems are unique earth environments because both their physical and social components contribute to the overall dynamics of the system. To date, remote sensing data (e.g., optical and LiDAR) have allowed researchers to monitor the development of impervious surfaces; however, they are not adequate to detect the associated social dynamics. Geo-located social media (e.g., Twitter) provide a data source to detect population dynamics and understand the interaction of people with their physical environment. However, integrating social media with remote sensing data has been hindered by large volumes of data and the lack of models for integrating remote sensing products with unstructured social media data. In this research work, we leveraged the NSF Chameleon cloud computing platform to provide virtual clusters and elastic auto-scaling of the resources needed for the synthesis of landuse and geo-located Twitter data. In this context, data synthesis was used to address research questions related to population dynamics in major metropolitan areas. We provide an overview of a cloud computing workflow composed of a set of coupled, scalable synthesis modules for: a) preprocessing data, which includes storage and query of heterogeneous data streams; b) spatial data integration, which matches geo-located Twitter data with user-defined landuse maps based on a conceptual model of human mobility; and c) visualization of urban mobility patterns. Our results demonstrate the flexibility of connecting data, synthesis methods, and computing resources using cloud computing, which would otherwise be very difficult for untrained scientists to set up and control. Furthermore, we demonstrate the capabilities of the CyberGIS-based workflow using the case study of comparing commuting distances across major US cities from 2013 through the present. We demonstrate how our workflow will support discoveries in urban ecological studies as well as linking human and physical dimensions in environmental
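
    As a hypothetical, single-process illustration of the spatial-integration step (assigning geo-located tweets to land-use polygons), a small shapely sketch follows; the coordinates and land-use classes are made up, and the project's scalable implementation runs on distributed cloud resources rather than one process.

      # Point-in-polygon matching of geo-located tweets to land-use zones (illustrative only).
      from shapely.geometry import Point, Polygon

      landuse = {
          "residential": Polygon([(0, 0), (5, 0), (5, 5), (0, 5)]),
          "commercial":  Polygon([(5, 0), (10, 0), (10, 5), (5, 5)]),
      }

      tweets = [
          {"id": 1, "lon": 2.5, "lat": 1.0},
          {"id": 2, "lon": 7.0, "lat": 4.0},
          {"id": 3, "lon": 12.0, "lat": 1.0},   # falls outside both polygons
      ]

      for tweet in tweets:
          point = Point(tweet["lon"], tweet["lat"])
          zone = next((name for name, poly in landuse.items() if poly.contains(point)), None)
          print(tweet["id"], "->", zone)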

  19. Computation of the Effects of Inhomogeneous Clouds on Retrieval of Remotely Sensed Properties

    NASA Technical Reports Server (NTRS)

    Chambers, Lin H.

    1998-01-01

    Current and future earth observation programs depend on satellite measurements of radiance to retrieve the properties of clouds on a global basis. At present, this retrieval is made assuming that the clouds in the instrument field of view are plane parallel and independent of adjacent pixels. While this assumption is known to be false except in very limited cases, its impact can be evaluated, and if possible corrected, based on emerging theoretical techniques. In this study, the Spherical Harmonic Discrete Ordinate Method (SHDOM, Evans, 1996) has been used to assess the sensitivity of the retrieval to a variety of cloud parameters. SHDOM allows the plane parallel assumption to be relaxed and makes 2D and even 3D radiative solutions practical. A previous study (Chambers et al., 1996) assessed the effect of horizontal inhomogeneity in 45 LANDSAT scenes of boundary layer clouds over ocean. The four scenes studied here represent overcast, broken, scattered and strongly thermally forced cloud fields and are used to perform sensitivity studies to a wider variety of parameters. Comparisons are made at three solar zenith angles (θ₀ = 0, 49, and 63 degrees) to avoid ambiguity in the results due to solar zenith angle.

  20. Jump for the Clouds: An Innovative Strategy Connecting Youth to Communities

    ERIC Educational Resources Information Center

    Kantor, Debra

    2012-01-01

    Many communities are searching for ways to help youth identify successful career options and strengthen the local economy. Based in a rural community that reflects Maine's decline in jobs in manufacturing and natural resource-based industries, the project described here provided youth with an opportunity to increase their aspirations by…

  1. Testing ice microphysics parameterizations in the NCAR Community Atmospheric Model Version 3 using Tropical Warm Pool-International Cloud Experiment data

    DOE PAGESBeta

    Wang, Weiguo; Liu, Xiaohong; Xie, Shaocheng; Boyle, Jim; McFarlane, Sally A.

    2009-07-23

    Here, cloud properties have been simulated with a new double-moment microphysics scheme under the framework of the single-column version of NCAR Community Atmospheric Model version 3 (CAM3). For comparison, the same simulation was made with the standard single-moment microphysics scheme of CAM3. Results from both simulations compared favorably with observations during the Tropical Warm Pool–International Cloud Experiment by the U.S. Department of Energy Atmospheric Radiation Measurement Program in terms of the temporal variation and vertical distribution of cloud fraction and cloud condensate. Major differences between the two simulations are in the magnitude and distribution of ice water content within the mixed-phase cloud during the monsoon period, though the total frozen water (snow plus ice) contents are similar. The ice mass content in the mixed-phase cloud from the new scheme is larger than that from the standard scheme, and ice water content extends 2 km further downward, which is in better agreement with observations. The dependence of the frozen water mass fraction on temperature from the new scheme is also in better agreement with available observations. Outgoing longwave radiation (OLR) at the top of the atmosphere (TOA) from the simulation with the new scheme is, in general, larger than that with the standard scheme, while the surface downward longwave radiation is similar. Sensitivity tests suggest that different treatments of the ice crystal effective radius contribute significantly to the difference in the calculations of TOA OLR, in addition to cloud water path. Numerical experiments show that cloud properties in the new scheme can respond reasonably to changes in the concentration of aerosols and emphasize the importance of correctly simulating aerosol effects in climate models for aerosol-cloud interactions. Further evaluation, especially for ice cloud properties based on in-situ data, is needed.

  2. Testing ice microphysics parameterizations in the NCAR Community Atmospheric Model Version 3 using Tropical Warm Pool-International Cloud Experiment data

    SciTech Connect

    Wang, Weiguo; Liu, Xiaohong; Xie, Shaocheng; Boyle, Jim; McFarlane, Sally A.

    2009-07-23

    Here, cloud properties have been simulated with a new double-moment microphysics scheme under the framework of the single-column version of NCAR Community Atmospheric Model version 3 (CAM3). For comparison, the same simulation was made with the standard single-moment microphysics scheme of CAM3. Results from both simulations compared favorably with observations during the Tropical Warm Pool–International Cloud Experiment by the U.S. Department of Energy Atmospheric Radiation Measurement Program in terms of the temporal variation and vertical distribution of cloud fraction and cloud condensate. Major differences between the two simulations are in the magnitude and distribution of ice water content within the mixed-phase cloud during the monsoon period, though the total frozen water (snow plus ice) contents are similar. The ice mass content in the mixed-phase cloud from the new scheme is larger than that from the standard scheme, and ice water content extends 2 km further downward, which is in better agreement with observations. The dependence of the frozen water mass fraction on temperature from the new scheme is also in better agreement with available observations. Outgoing longwave radiation (OLR) at the top of the atmosphere (TOA) from the simulation with the new scheme is, in general, larger than that with the standard scheme, while the surface downward longwave radiation is similar. Sensitivity tests suggest that different treatments of the ice crystal effective radius contribute significantly to the difference in the calculations of TOA OLR, in addition to cloud water path. Numerical experiments show that cloud properties in the new scheme can respond reasonably to changes in the concentration of aerosols and emphasize the importance of correctly simulating aerosol effects in climate models for aerosol-cloud interactions. Further evaluation, especially for ice cloud properties based on in-situ data, is needed.

  3. Community College Career Counseling and Computer Use. Sabbatical Report, Spring 1976.

    ERIC Educational Resources Information Center

    Norris, Jim

    The author stresses the benefits of utilizing computer information in career planning at the community college level. He cites Tondow's rationale for computer use: (1) exponential increase in information; (2) exponential increase in dissemination capabilities; and (3) accelerated curve of change. Computers should never supplant the counselor;…

  4. Administrators' Perceptions of Community College Students' Computer Literacy Skills in Beginner Courses

    ERIC Educational Resources Information Center

    Ragin, Tracey B.

    2013-01-01

    Fundamental computer skills are vital in the current technology-driven society. The purpose of this study was to investigate the development needs of students at a rural community college in the Southeast who lacked the computer literacy skills required in a basic computer course. Guided by Greenwood's pragmatic approach as a reformative force in…

  5. Google Earth Engine: a new cloud-computing platform for global-scale earth observation data and analysis

    NASA Astrophysics Data System (ADS)

    Moore, R. T.; Hansen, M. C.

    2011-12-01

    Google Earth Engine is a new technology platform that enables monitoring and measurement of changes in the earth's environment, at planetary scale, on a large catalog of earth observation data. The platform offers intrinsically-parallel computational access to thousands of computers in Google's data centers. Initial efforts have focused primarily on global forest monitoring and measurement, in support of REDD+ activities in the developing world. The intent is to put this platform into the hands of scientists and developing world nations, in order to advance the broader operational deployment of existing scientific methods, and strengthen the ability for public institutions and civil society to better understand, manage and report on the state of their natural resources. Earth Engine currently hosts online nearly the complete historical Landsat archive of L5 and L7 data collected over more than twenty-five years. Newly-collected Landsat imagery is downloaded from USGS EROS Center into Earth Engine on a daily basis. Earth Engine also includes a set of historical and current MODIS data products. The platform supports generation, on-demand, of spatial and temporal mosaics, "best-pixel" composites (for example to remove clouds and gaps in satellite imagery), as well as a variety of spectral indices. Supervised learning methods are available over the Landsat data catalog. The platform also includes a new application programming framework, or "API", that allows scientists access to these computational and data resources, to scale their current algorithms or develop new ones. Under the covers of the Google Earth Engine API is an intrinsically-parallel image-processing system. Several forest monitoring applications powered by this API are currently in development and expected to be operational in 2011. Combining science with massive data and technology resources in a cloud-computing framework can offer advantages of computational speed, ease-of-use and collaboration, as
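
    A small sketch with the Earth Engine Python API of the kind of server-side composite and spectral index described above; the collection ID and band names are assumptions, and an authenticated Earth Engine account is required for this to run.

      # Server-side "best-pixel"-style composite and a spectral index with the
      # Earth Engine Python API; collection ID and band names are assumptions.
      import ee

      ee.Initialize()   # requires prior authentication with an Earth Engine account

      landsat = (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")   # assumed collection ID
                 .filterDate("2010-01-01", "2010-12-31")
                 .filterBounds(ee.Geometry.Point(-122.26, 37.87)))

      composite = landsat.median()                               # simple cloud/gap-reducing composite
      ndvi = composite.normalizedDifference(["SR_B4", "SR_B3"])  # assumed NIR and red band names

      print(ndvi.getInfo()["bands"][0]["id"])                    # triggers the server-side computation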

  6. Implementing a New Cloud Computing Library Management Service: A Symbiotic Approach

    ERIC Educational Resources Information Center

    Dula, Michael; Jacobsen, Lynne; Ferguson, Tyler; Ross, Rob

    2012-01-01

    This article presents the story of how Pepperdine University migrated its library management functions to the cloud using what is now known as OCLC's WorldShare Management Services (WMS). The story of implementing this new service is told from two vantage points: (1) that of the library; and (2) that of the service provider. The authors were the…

  7. Fostering an Informal Learning Community of Computer Technologies at School

    ERIC Educational Resources Information Center

    Xiao, Lu; Carroll, John M.

    2007-01-01

    Computer technologies develop at a challenging fast pace. Formal education should not only teach students basic computer skills to meet current computer needs, but also foster student development of informal learning ability for a lifelong learning process. On the other hand, students growing up in the digital world are often more skilled with…

  8. High Tech Programmers in Low-Income Communities: Creating a Computer Culture in a Community Technology Center

    NASA Astrophysics Data System (ADS)

    Kafai, Yasmin B.; Peppler, Kylie A.; Chiu, Grace M.

    For the last twenty years, issues of the digital divide have driven efforts around the world to address the lack of access to computers and the Internet, pertinent and language-appropriate content, and technical skills in low-income communities (Schuler & Day, 2004a and b). The title of our paper makes reference to a milestone publication (Schon, Sanyal, & Mitchell, 1998) that showcased some of the early work and thinking in this area. Schon, Sanyal, and Mitchell's edited volume included an article outlining the Computer Clubhouse, a type of community technology center model developed to create opportunities for youth in low-income communities to become creators and designers of technologies (1998). The model has been very successful scaling up, with over 110 Computer Clubhouses now in existence worldwide.

  9. How much does sea spray aerosol organic matter impact clouds and radiation? Sensitivity studies in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Burrows, S. M.; Liu, X.; Elliott, S.; Easter, R. C.; Singh, B.; Rasch, P. J.

    2015-12-01

    Submicron marine aerosol particles are frequently observed to contain substantial fractions of organic material, hypothesized to enter the atmosphere as part of the primary sea spray aerosol formed through bubble bursting. This organic matter in sea spray aerosol may affect cloud condensation nuclei and ice nuclei concentrations in the atmosphere, particularly in remote marine regions. Members of our team have developed a new, mechanistic representation of the enrichment of sea spray aerosol with organic matter, the OCEANFILMS parameterization (Burrows et al., 2014). This new representation uses fields from an ocean biogeochemistry model to predict properties of the emitted aerosol. We have recently implemented the OCEANFILMS representation of sea spray aerosol composition into the Community Atmosphere Model (CAM), and performed sensitivity experiments and comparisons with alternate formulations. Early results from these sensitivity simulations will be shown, including impacts on aerosols, clouds, and radiation. References: Burrows, S. M., Ogunro, O., Frossard, A. A., Russell, L. M., Rasch, P. J., and Elliott, S. M.: A physically based framework for modeling the organic fractionation of sea spray aerosol from bubble film Langmuir equilibria, Atmos. Chem. Phys., 14, 13601-13629, doi:10.5194/acp-14-13601-2014, 2014.

  10. Robotic disaster recovery efforts with ad-hoc deployable cloud computing

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Marsh, Ronald; Mohammad, Atif F.

    2013-06-01

    Autonomous operation of search and rescue (SaR) robots is an ill-posed problem, complicated by the dynamic disaster recovery environment. In a typical SaR response scenario, responder robots will require different levels of processing capabilities during various parts of the response effort and will need to utilize multiple algorithms. Placing these capabilities onboard the robot is a mediocre solution that precludes algorithm-specific performance optimization and results in mediocre performance. An architecture for an ad-hoc, deployable cloud environment suitable for use in a disaster response scenario is presented. Under this model, each service provider is optimized for its task and maintains a database of situation-relevant information. This service-oriented architecture (SOA 3.0) compliant framework also serves as an example of the efficient use of SOA 3.0 in an actual cloud application.

  11. Cloud Infrastructure & Applications - CloudIA

    NASA Astrophysics Data System (ADS)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and for small- and medium-sized enterprises, Hochschule Furtwangen University has established a new project, called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and presents our early experiences in building a private cloud using an existing infrastructure.

  12. Development of a High Resolution Weather Forecast Model for Mesoamerica Using the NASA Nebula Cloud Computing Environment

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Case, Jonathan L.; Venner, Jason; Moreno-Madrinan, Max. J.; Delgado, Francisco

    2012-01-01

    Over the past two years, scientists in the Earth Science Office at NASA's Marshall Space Flight Center (MSFC) have explored opportunities to apply cloud computing concepts to support near real-time weather forecast modeling via the Weather Research and Forecasting (WRF) model. Collaborators at NASA's Short-term Prediction Research and Transition (SPoRT) Center and the SERVIR project at Marshall Space Flight Center have established a framework that provides high resolution, daily weather forecasts over Mesoamerica through use of the NASA Nebula Cloud Computing Platform at Ames Research Center. Supported by experts at Ames, staff at SPoRT and SERVIR have established daily forecasts complete with web graphics and a user interface that allows SERVIR partners access to high resolution depictions of weather in the next 48 hours, useful for monitoring and mitigating meteorological hazards such as thunderstorms, heavy precipitation, and tropical weather that can lead to other disasters such as flooding and landslides. This presentation will describe the framework for establishing and providing WRF forecasts, example applications of output provided via the SERVIR web portal, and early results of forecast model verification against available surface- and satellite-based observations.

  13. The application of data mining and cloud computing techniques in data-driven models for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Khazaeli, S.; Ravandi, A. G.; Banerji, S.; Bagchi, A.

    2016-04-01

    Recently, data-driven models for Structural Health Monitoring (SHM) have been of great interest among many researchers. In data-driven models, the sensed data are processed to determine the structural performance and evaluate the damages of an instrumented structure without necessitating the mathematical modeling of the structure. A framework of data-driven models for online assessment of the condition of a structure has been developed here. The developed framework is intended for automated evaluation of the monitoring data and structural performance by the Internet technology and resources. The main challenges in developing such framework include: (a) utilizing the sensor measurements to estimate and localize the induced damage in a structure by means of signal processing and data mining techniques, and (b) optimizing the computing and storage resources with the aid of cloud services. The main focus in this paper is to demonstrate the efficiency of the proposed framework for real-time damage detection of a multi-story shear-building structure in two damage scenarios (change in mass and stiffness) in various locations. Several features are extracted from the sensed data by signal processing techniques and statistical methods. Machine learning algorithms are deployed to select damage-sensitive features as well as classifying the data to trace the anomaly in the response of the structure. Here, the cloud computing resources from Amazon Web Services (AWS) have been used to implement the proposed framework.
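
    The workflow sketched in this abstract (damage-sensitive features extracted from sensor signals, then a machine-learning classifier that flags anomalies) can be illustrated with a minimal example. The statistics and the scikit-learn classifier below are illustrative assumptions, not the authors' implementation, and the data are synthetic.

```python
# Minimal sketch: extract simple statistical features from accelerometer
# windows and classify them as "healthy" vs "damaged".
# Feature choices and the classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(window):
    """Damage-sensitive features from one sensor window (1-D array)."""
    return [window.mean(), window.std(), np.abs(window).max(),
            np.sqrt(np.mean(window ** 2))]  # RMS

rng = np.random.default_rng(0)
# Synthetic stand-in for sensed data: healthy (label 0) vs stiffness change (label 1)
healthy = [extract_features(rng.normal(0, 1.0, 1024)) for _ in range(200)]
damaged = [extract_features(rng.normal(0, 1.3, 1024)) for _ in range(200)]
X = np.array(healthy + damaged)
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```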

  14. Analysis of real-time Earth magnetosphere simulation for space weather using space weather cloud computing system

    NASA Astrophysics Data System (ADS)

    Watari, S.; Tsubouchi, K.; Kato, H.; Tanaka, T.; Shinagawa, H.; Murata, K. T.

    2011-12-01

    The Earth magnetosphere simulation runs continuously in real time for space weather purposes at the National Institute of Information and Communications Technology (NICT). The code for this simulation was originally developed by Tanaka (JGR, 1995) and was implemented as one of the NICT real-time space weather simulations by Den et al. (Space Weather, 2006). The space weather cloud computing system, which has a distributed large storage system and a data analysis system, has been constructed at NICT. Using this space weather cloud computing system, it becomes possible to preserve the results of the real-time magnetosphere simulation, which enables analysis of the response of the magnetosphere to various solar wind conditions. There are several previous works based on the real-time simulation, covering the AE index (Kitamura et al., JGR, 2008), the polar cap potential (Kunitake et al., Journal of NICT, 2009), and the plasma environment at geostationary orbit (Nakamura et al., Journal of NICT, 2009). In this analysis, we focused on the magnetic variation at geostationary orbit and the location of the magnetopause. At geostationary orbit, there are continuous magnetic field observations by the GOES satellites. For the magnetopause location, there is an empirical model, the Shue model, which takes into account the dynamic pressure and southward IMF of the solar wind. We compared the results of the real-time simulation with magnetic field variations observed by the GOES satellites and with the magnetopause location calculated by the Shue model. We will report the results of this study.
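
    The magnetopause comparison above relies on the empirical Shue model. A minimal sketch of the widely used Shue et al. (1998) functional form is given below; the coefficients follow that paper and are an assumption about the exact formulation the authors used.

```python
# Sketch of the Shue et al. (1998) magnetopause model: standoff distance r0 (in
# Earth radii) and flaring exponent alpha from solar-wind dynamic pressure Dp
# (nPa) and IMF Bz (nT). Coefficients follow the 1998 paper; verify before use.
import numpy as np

def shue_magnetopause(theta, dp_npa, bz_nt):
    r0 = (10.22 + 1.29 * np.tanh(0.184 * (bz_nt + 8.14))) * dp_npa ** (-1.0 / 6.6)
    alpha = (0.58 - 0.007 * bz_nt) * (1.0 + 0.024 * np.log(dp_npa))
    return r0 * (2.0 / (1.0 + np.cos(theta))) ** alpha

# Subsolar standoff (theta = 0) for nominal solar wind: Dp = 2 nPa, Bz = -5 nT
print(shue_magnetopause(0.0, 2.0, -5.0))  # roughly 9-10 Earth radii
```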

  15. Computer-Based Education in a Developing University for a Developing Community.

    ERIC Educational Resources Information Center

    Sinclair, A. J. L.; Dennis, J. Richard

    1982-01-01

    Presents a discussion of university role in an economically disadvantaged community in South Africa, and offers five recommendations on how to implement and maintain computer based instruction in a university outreach program. (MER)

  16. ICT Oriented toward Nyaya: Community Computing in India's Slums

    ERIC Educational Resources Information Center

    Byker, Erik J.

    2014-01-01

    In many schools across India, access to information and communication technology (ICT) is still a rare privilege. While the Annual Status of Education Report in India (2013) showed a marginal uptick in the amount of computers, the opportunities for children to use those computers have remained stagnant. The lack of access to ICT is especially…

  17. Computers: New Avenues for Engineering Students at the Community College.

    ERIC Educational Resources Information Center

    Brillhart, Lia; Shawhan, Douglas

    At Triton College, student involvement with computers as a continued, multifaceted process is considered a primary objective of the physics and engineering department. To that end, the department implemented a low cost diverse program. The computer uses presented include labs, algorithms, testing, individual projects, drills, graphics, simulation,…

  18. A Communities of Practice Perspective on Educational Computer Games

    ERIC Educational Resources Information Center

    Reese, Curt

    2008-01-01

    Educational computer games provide an environment in which interactions among students, teachers, and texts differ non-trivially from those of the traditional classroom. In order to build and research computer games effectively, it is important to provide a theoretical background that adequately describes and explains learning and interactions in…

  19. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use case is indexed separately in ElasticSearch, and we set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through a purpose-built RESTful web service. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication to the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII
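
    As a rough illustration of the transfer step described above (heterogeneous accounting records being pushed into ElasticSearch for indexing), the sketch below posts a single record to an ElasticSearch index over its REST API. The host, index name, and fields are placeholders; the production path in the paper uses a custom Logstash plugin rather than this direct call.

```python
# Minimal sketch: index one accounting record in ElasticSearch via its REST API.
# Host, index name, and fields are illustrative placeholders.
import json
import urllib.request

record = {
    "tenant": "alice-tier2",        # hypothetical tenant name
    "cpu_hours": 123.4,
    "timestamp": "2015-06-01T12:00:00Z",
}
req = urllib.request.Request(
    "http://localhost:9200/iaas-accounting/record/",   # index/type endpoint, auto ID
    data=json.dumps(record).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```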

  20. Validation of cloud forcing simulated by the National Center for Atmospheric Research Community Climate Model using observations from the Earth Radiation Budget Experiment

    NASA Technical Reports Server (NTRS)

    Soden, B. J.

    1992-01-01

    Satellite measurements of the effect of clouds on the top of atmosphere radiative energy budget are used to validate model simulations from the National Center for Atmospheric Research Community Climate Model (NCAR CCM). The ability of the NCAR CCM to reproduce the monthly mean global distribution and temporal variability on both daily and seasonal time scales is assessed. The comparison reveals several deficiencies in the CCM cloud representation. Most notable are the difficulties in properly simulating the effect of clouds on the planetary albedo. This problem arises from discrepancies in the model's portrayal of low-level cloudiness and leads to significant errors in the absorbed solar radiation simulated by the model. The CCM performs much better in simulating the effect of clouds on the longwave radiation emitted to space, indicating its relative success in capturing the vertical distribution of cloudiness. The daily variability of the radiative effects of clouds in both the shortwave and longwave spectral regions is systematically overestimated. Analysis of the seasonal variations illustrates a distinct lack of coupling in the seasonal changes in the radiative effects of cloudiness between the tropics and mid-latitudes and between the Northern and Southern Hemisphere. Much of this problem also arises from difficulties in simulating low-level cloudiness, placing further emphasis on the need for better model parameterizations of boundary layer clouds.
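
    The cloud forcing being validated here is conventionally diagnosed as the difference between clear-sky and all-sky top-of-atmosphere fluxes. A minimal sketch follows, using CAM-style history variable names (FSNT/FSNTC and FLNT/FLNTC); these names and the file are assumptions for illustration, not a statement of the validation procedure used in the paper.

```python
# Sketch: diagnose shortwave and longwave cloud forcing from top-of-atmosphere
# fluxes. FSNT/FSNTC are net shortwave all-sky/clear-sky; FLNT/FLNTC are
# outgoing longwave all-sky/clear-sky. Variable names and the file are
# illustrative assumptions about the model output.
import xarray as xr

ds = xr.open_dataset("ccm_history.nc")        # hypothetical model history file
swcf = ds["FSNT"] - ds["FSNTC"]               # shortwave cloud forcing (usually negative)
lwcf = ds["FLNTC"] - ds["FLNT"]               # longwave cloud forcing (usually positive)
print(float(swcf.mean()), float(lwcf.mean()))
```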

  1. Computer-Aided Instruction in Mathematics Remediation at a Community College

    ERIC Educational Resources Information Center

    Brocato, Mary Anne

    2009-01-01

    Over the past ten years, traditional lecture style delivery has given way to computer-aided instruction (CAI) in post-secondary education. Developmental mathematics courses have been one of the most widely used applications. At a small community college in the Mississippi Delta, a computer assisted version of Intermediate Algebra was implemented.…

  2. Use of Computer Kiosks for Breast Cancer Education in Five Community Settings

    ERIC Educational Resources Information Center

    Kreuter, Matthew W.; Black, Wynona J.; Friend, LaBraunna; Booker, Angela C.; Klump, Paula; Bobra, Sonal; Holt, Cheryl L.

    2006-01-01

    Finding ways to bring effective computer-based behavioral interventions to those with limited access to technology is a continuing challenge for health educators. Computer kiosks placed in community settings may help reach such populations. The "Reflections of You" kiosk generates individually tailored magazines on breast cancer and mammography…

  3. Academic Computing at the Community College of Baltimore: A Case Study.

    ERIC Educational Resources Information Center

    Hunter, Beverly; Kearsley, Greg

    Part of a series of case studies on successful academic computing programs at minority institutions, this monograph focuses on the Community College of Baltimore (CCB). Sections I and II outline the purpose and background of the case study project, focusing on the 11 computing activities the case studies are designed to facilitate, the need for…

  4. Learning Concurrency as an Entry Point to the Community of Computer Science Practitioners

    ERIC Educational Resources Information Center

    Kolikant, Yifat Ben-David

    2004-01-01

    Computer Science formal education brings together old-timers from two computer-literate cultures. The curriculum is oriented toward the academic community, whose interest is in the abstraction, solution, and proofs of algorithmic problems, whereas many students are technology users, whose main interest is the manipulation of the products of…

  5. The Use of Computer Data Systems in Academic Counseling: Outcomes for Community College Students. ERIC Digest.

    ERIC Educational Resources Information Center

    McKinney, Kristen

    This Digest discusses computer assisted advisory practices currently in use in community colleges, outlining the types of data collected and how they are used, including the use of tracking to plan interventions for at-risk students. Enhanced computer technology has improved the effectiveness of academic advising by enabling more thorough and…

  6. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    NASA Astrophysics Data System (ADS)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products; rather, it comprises a set of capabilities virtually within any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active grid computing application field is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  7. Restricting 32-128 km horizontal scales hardly affects the MJO in the Superparameterized Community Atmosphere Model v.3.0 but the number of cloud-resolving grid columns constrains vertical mixing

    NASA Astrophysics Data System (ADS)

    Pritchard, Michael S.; Bretherton, Christopher S.; DeMott, Charlotte A.

    2014-09-01

    The effects of artificially restricting the 32-128 km horizontal scale regime on MJO dynamics in the Superparameterized Community Atmosphere Model v.3.0 have been explored by reducing the extent of its embedded cloud resolving model (CRM) arrays. Two- and four-fold reductions in CRM extent (from 128 to 64 km and 32 km) produce statistical composite MJO signatures with spatial scale, zonal phase speed, and intrinsic wind-convection anomaly structure that are all remarkably similar to the standard SPCAM's MJO. This suggests that the physics of mesoscale convective organization on 32-128 km scales is not critical to MJO dynamics in SPCAM and that reducing CRM extent may be a viable strategy for 400% more computationally efficient analysis of superparameterized MJO dynamics. However, several unexpected basic-state responses caution that extreme CRM domain reduction can lead to systematic mean-state issues in superparameterized models. We hypothesize that an artificial limit on the efficiency of vertical updraft mixing is set by the number of grid columns available for compensating subsidence in the embedded CRM arrays. This can lead to reduced moisture ventilation, supporting too much liquid cloud and thus an overly strong cloud shortwave radiative forcing, particularly in regions of deep convection.

  8. WELCOME – innovative integrated care platform using wearable sensing and smart cloud computing for COPD patients with comorbidities.

    PubMed

    Chouvarda, Ioanna; Philip, Nada Y; Natsiavas, Pantelis; Kilintzis, Vasilis; Sobnath, Drishty; Kayyali, Reem; Henriques, Jorge; Paiva, Rui Pedro; Raptopoulos, Andreas; Chételat, Olivier; Maglaveras, Nicos

    2014-01-01

    We propose WELCOME, an innovative integrated care platform using wearable sensors and smart cloud computing for Chronic Obstructive Pulmonary Disease (COPD) patients with co-morbidities. WELCOME aims to bring about a change in the reactive nature of the management of chronic diseases and their comorbidities, in particular through the development of a patient-centred and proactive approach to COPD management. The aim of WELCOME is to support healthcare services in the early detection of complications (potentially reducing hospitalisations) and in the prevention and mitigation of comorbidities (heart failure, diabetes, anxiety and depression). The system incorporates a patient hub, which interacts with the patient via a lightweight vest containing a large number of non-invasive chest sensors for monitoring various relevant parameters. In addition, interactive applications to monitor and manage diabetes, anxiety and lifestyle issues will be provided to the patient. Informal carers will also be supported in dealing with their patients. The WELCOME smart cloud platform is the heart of the proposed system, where all the medical records and the monitoring data are managed and processed via the decision support system. Healthcare professionals will be able to securely access the WELCOME applications to monitor and manage the patient's conditions and respond to alerts on a personalized level. PMID:25570666

  9. The Influence of Computer-Mediated Communication Systems on Community

    ERIC Educational Resources Information Center

    Rockinson-Szapkiw, Amanda J.

    2012-01-01

    As higher education institutions enter the intense competition of the rapidly growing global marketplace of online education, the leaders within these institutions are challenged to identify factors critical for developing and for maintaining effective online courses. Computer-mediated communication (CMC) systems are considered critical to…

  10. Rhetoric, Civility, and Community: Political Debate on Computer Bulletin Boards.

    ERIC Educational Resources Information Center

    Benson, Thomas W.

    1996-01-01

    Indicates that political debates on computer bulletin boards (primarily USENET) are characterized by aggressiveness, angry assertion, insult, and the attempt to humiliate opponents; but that they also display a high degree of formal regularity and are robust exercises in free speech, virtuosic in argument and language, and rare opportunities for…

  11. Microphysical Simulation of Polar Stratospheric Clouds Within the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Zhu, Yunqian

    Polar stratospheric clouds (PSCs) are critical elements for polar ozone depletion. A new PSC model coupling stratospheric chemistry, microphysics and climate is constructed, and the formation of STS (supercooled ternary solution) and NAT (nitric acid trihydrate) PSCs is explored. STS particle properties are dominated by thermodynamics. Simulations of particle volumes and size distributions are generally within the observational error bars. STS particles are not in equilibrium with their environment when the particle surface area is smaller than 4 μm²/cm³. A new nucleation rate equation for NAT is derived based on observed denitrification in the 2010-2011 Arctic winter. The homogeneous nucleation scheme leads to supermicron NAT particles, as observed. The simulated lidar backscatter and denitrification are generally within observational error bars. However, the simulations are very sensitive to temperature. Using the same STS and NAT schemes, as well as a prognostic treatment for ice PSC formation and dehydration, the PSCs are simulated during the Antarctic winter of 2010. The current model correctly simulates large NAT particles and denitrification, but cannot produce the NAT with high backscattering ratio/number density sometimes observed by CALIPSO. However, our simulated ice has similar backscatter and depolarization, which is often attributed to NAT by CALIPSO. Possibly the CALIPSO algorithm misclassifies ice as NAT when the stratosphere is denitrified or dehydrated. STS and NAT form near the pole in May and June, but form a ring outside 80°S later in the winter when polar HNO3 is depleted. Ice always forms in the coldest area, but becomes less abundant later in the winter. The model is missing some processes that form NAT, such as gravity waves or evaporating ice. These processes should be added to the model in the future.

  12. A Computer Science Educational Program for Establishing an Entry Point into the Computing Community of Practice

    ERIC Educational Resources Information Center

    Haberman, Bruria; Yehezkel, Cecile

    2008-01-01

    The rapid evolvement of the computing domain has posed challenges in attempting to bridge the gap between school and the contemporary world of computing, which is related to content, learning culture, and professional norms. We believe that the interaction of high-school students who major in computer science or software engineering with leading…

  13. A preliminary study of a cloud-computing model for chronic illness self-care support in an underdeveloped country

    PubMed Central

    Piette, John D.; Mendoza-Avelares, Milton O.; Ganser, Martha; Mohamed, Muhima; Marinec, Nicolle; Krishnan, Sheila

    2013-01-01

    Background Although interactive voice response (IVR) calls can be an effective tool for chronic disease management, many regions of the world lack the infrastructure to provide these services. Objective This study evaluated the feasibility and potential impact of an IVR program using a cloud-computing model to improve diabetes management in Honduras. Methods A single group, pre-post study was conducted between June and August 2010. The telecommunications infrastructure was maintained on a U.S. server, and calls were directed to patients’ cell phones using VoIP. Eighty-five diabetes patients in Honduras received weekly IVR disease management calls for six weeks, with automated follow-up emails to clinicians, and voicemail reports to family caregivers. Patients completed interviews at enrollment and a six week follow-up. Other measures included patients’ glycemic control (A1c) and data from the IVR calling system. Results 55% of participants completed the majority of their IVR calls and 33% completed 80% or more. Higher baseline blood pressures, greater diabetes burden, greater distance from the clinic, and better adherence were related to higher call completion rates. Nearly all participants (98%) reported that because of the program, they improved in aspects of diabetes management such as glycemic control (56%) or foot care (89%). Mean A1c’s decreased from 10.0% at baseline to 8.9% at follow-up (p<.01). Most participants (92%) said that if the service were available in their clinic they would use it again. Conclusions Cloud computing is a feasible strategy for providing IVR services globally. IVR self-care support may improve self-care and glycemic control for patients in under-developed countries. PMID:21565655

  14. Computing and Representing Sea Ice Trends: Toward a Community Consensus

    NASA Technical Reports Server (NTRS)

    Wohlleben, T.; Tivy, A.; Stroeve, J.; Meier, Walter N.; Fetterer, F.; Wang, J.; Assel, R.

    2013-01-01

    Estimates of the recent decline in Arctic Ocean summer sea ice extent can vary due to differences in sea ice data sources, in the number of years used to compute the trend, and in the start and end years used in the trend computation. Compounding such differences, estimates of the relative decline in sea ice cover (given in percent change per decade) can further vary due to the choice of reference value (the initial point of the trend line, a climatological baseline, etc.). Further adding to the confusion, very often when relative trends are reported in research papers, the reference values used are not specified or made clear. This can lead to confusion when trend studies are cited in the press and public reports.
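
    The dependence of relative trends on the choice of reference value can be made concrete with a short sketch: the same fitted slope yields different percent-per-decade numbers depending on whether it is normalized by the first-year value, the fitted value at the start year, or a climatological baseline. The data below are synthetic and purely illustrative.

```python
# Sketch: the same linear trend expressed in %/decade depends on the reference value.
import numpy as np

years = np.arange(1979, 2013)
extent = 7.5 - 0.05 * (years - 1979) + np.random.default_rng(1).normal(0, 0.2, years.size)

slope, intercept = np.polyfit(years, extent, 1)   # slope in 10^6 km^2 per year
trend_per_decade = slope * 10

for name, ref in [("first-year value", extent[0]),
                  ("fitted value at start year", intercept + slope * years[0]),
                  ("1981-2010 climatology", extent[(years >= 1981) & (years <= 2010)].mean())]:
    print(f"{name:28s}: {100 * trend_per_decade / ref:+.1f} % per decade")
```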

  15. A novel approach for discovering condition-specific correlations of gene expressions within biological pathways by using cloud computing technology.

    PubMed

    Chang, Tzu-Hao; Wu, Shih-Lin; Wang, Wei-Jen; Horng, Jorng-Tzong; Chang, Cheng-Wei

    2014-01-01

    Microarrays are widely used to assess gene expressions. Most microarray studies focus primarily on identifying differential gene expressions between conditions (e.g., cancer versus normal cells), for discovering the major factors that cause diseases. Because previous studies have not identified the correlations of differential gene expression between conditions, crucial but abnormal regulations that cause diseases might have been disregarded. This paper proposes an approach for discovering the condition-specific correlations of gene expressions within biological pathways. Because analyzing gene expression correlations is time consuming, an Apache Hadoop cloud computing platform was implemented. Three microarray data sets of breast cancer were collected from the Gene Expression Omnibus, and pathway information from the Kyoto Encyclopedia of Genes and Genomes was applied for discovering meaningful biological correlations. The results showed that adopting the Hadoop platform considerably decreased the computation time. Several correlations of differential gene expressions were discovered between the relapse and nonrelapse breast cancer samples, and most of them were involved in cancer regulation and cancer-related pathways. The results showed that breast cancer recurrence might be highly associated with the abnormal regulations of these gene pairs, rather than with their individual expression levels. The proposed method was computationally efficient and reliable, and stable results were obtained when different data sets were used. The proposed method is effective in identifying meaningful biological regulation patterns between conditions. PMID:24579087
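
    A minimal sketch of the core idea, computing the correlation of a gene pair separately under two conditions and flagging large differences, is given below. The data are synthetic, and the single-pair calculation stands in for the pathway-wide, Hadoop-parallelized computation described in the abstract.

```python
# Sketch: correlation of one gene pair computed separately under two conditions
# (e.g., relapse vs non-relapse samples); a large difference flags a
# condition-specific correlation. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 60
gene_a_relapse = rng.normal(size=n)
gene_b_relapse = 0.8 * gene_a_relapse + rng.normal(scale=0.5, size=n)   # correlated
gene_a_normal = rng.normal(size=n)
gene_b_normal = rng.normal(size=n)                                      # uncorrelated

r_relapse = np.corrcoef(gene_a_relapse, gene_b_relapse)[0, 1]
r_normal = np.corrcoef(gene_a_normal, gene_b_normal)[0, 1]
print(f"r(relapse) = {r_relapse:.2f}, r(non-relapse) = {r_normal:.2f}, "
      f"difference = {r_relapse - r_normal:.2f}")
```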

  16. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed Central

    Palmer, T. N.

    2014-01-01

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only. PMID:24842038
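
    The precision argument can be illustrated with a toy comparison of single- and double-precision arithmetic; this demonstrates only the difference in representable detail, not the stochastic closure scheme itself.

```python
# Toy illustration: single precision carries roughly 7 decimal digits, double
# roughly 16. The paper's argument is that small-scale, stochastically
# perturbed variables do not need the full double-precision budget.
import numpy as np

x64 = np.float64(1.0) + np.float64(1e-12)
x32 = np.float32(1.0) + np.float32(1e-12)
print(x64 - 1.0)                 # the 1e-12 increment survives in double precision
print(x32 - np.float32(1.0))     # it rounds away entirely in single precision
```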

  17. A Novel Approach for Discovering Condition-Specific Correlations of Gene Expressions within Biological Pathways by Using Cloud Computing Technology

    PubMed Central

    Chang, Tzu-Hao; Wu, Shih-Lin; Wang, Wei-Jen; Horng, Jorng-Tzong; Chang, Cheng-Wei

    2014-01-01

    Microarrays are widely used to assess gene expressions. Most microarray studies focus primarily on identifying differential gene expressions between conditions (e.g., cancer versus normal cells), for discovering the major factors that cause diseases. Because previous studies have not identified the correlations of differential gene expression between conditions, crucial but abnormal regulations that cause diseases might have been disregarded. This paper proposes an approach for discovering the condition-specific correlations of gene expressions within biological pathways. Because analyzing gene expression correlations is time consuming, an Apache Hadoop cloud computing platform was implemented. Three microarray data sets of breast cancer were collected from the Gene Expression Omnibus, and pathway information from the Kyoto Encyclopedia of Genes and Genomes was applied for discovering meaningful biological correlations. The results showed that adopting the Hadoop platform considerably decreased the computation time. Several correlations of differential gene expressions were discovered between the relapse and nonrelapse breast cancer samples, and most of them were involved in cancer regulation and cancer-related pathways. The results showed that breast cancer recurrence might be highly associated with the abnormal regulations of these gene pairs, rather than with their individual expression levels. The proposed method was computationally efficient and reliable, and stable results were obtained when different data sets were used. The proposed method is effective in identifying meaningful biological regulation patterns between conditions. PMID:24579087

  18. CTserver: A Computational Thermodynamics Server for the Geoscience Community

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2006-12-01

    The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser- based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed

  19. Robust effects of cloud superparameterization on simulated daily rainfall intensity statistics across multiple versions of the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Kooperman, Gabriel J.; Pritchard, Michael S.; Burt, Melissa A.; Branson, Mark D.; Randall, David A.

    2016-03-01

    This study evaluates several important statistics of daily rainfall based on frequency and amount distributions as simulated by a global climate model whose precipitation does not depend on convective parameterization—Super-Parameterized Community Atmosphere Model (SPCAM). Three superparameterized and conventional versions of CAM, coupled within the Community Earth System Model (CESM1 and CCSM4), are compared against two modern rainfall products (GPCP 1DD and TRMM 3B42) to discriminate robust effects of superparameterization that emerge across multiple versions. The geographic pattern of annual-mean rainfall is mostly insensitive to superparameterization, with only slight improvements in the double-ITCZ bias. However, unfolding intensity distributions reveal several improvements in the character of rainfall simulated by SPCAM. The rainfall rate that delivers the most accumulated rain (i.e., amount mode) is systematically too weak in all versions of CAM relative to TRMM 3B42 and does not improve with horizontal resolution. It is improved by superparameterization though, with higher modes in regions of tropical wave, Madden-Julian Oscillation, and monsoon activity. Superparameterization produces better representations of extreme rates compared to TRMM 3B42, without sensitivity to horizontal resolution seen in CAM. SPCAM produces more dry days over land and fewer over the ocean. Updates to CAM's low cloud parameterizations have narrowed the frequency peak of light rain, converging toward SPCAM. Poleward of 50°, where more rainfall is produced by resolved-scale processes in CAM, few differences discriminate the rainfall properties of the two models. These results are discussed in light of their implication for future rainfall changes in response to climate forcing.
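
    The frequency and amount distributions referred to above, and the amount mode (the rain-rate bin that delivers the most accumulated rain), can be computed from daily rain rates as in the sketch below; the synthetic data and logarithmic bins are illustrative choices, not the GPCP/TRMM analysis.

```python
# Sketch: rainfall frequency and amount distributions from daily rain rates,
# and the "amount mode" (rate bin delivering the most accumulated rain).
# Synthetic data and logarithmic bins are illustrative choices.
import numpy as np

rng = np.random.default_rng(7)
rain = rng.lognormal(mean=0.5, sigma=1.0, size=10_000)      # daily rates, mm/day
bins = np.logspace(-1, 2.5, 40)

freq, _ = np.histogram(rain, bins=bins)                      # how often each rate occurs
amount, _ = np.histogram(rain, bins=bins, weights=rain)      # how much rain each rate delivers
centers = np.sqrt(bins[:-1] * bins[1:])

print("frequency mode:", centers[freq.argmax()], "mm/day")
print("amount mode:   ", centers[amount.argmax()], "mm/day")
```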

  20. A hybrid optical switch architecture to integrate IP into optical networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor J.

    2013-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.

  1. An Analysis of Information Technology Managers' and Executives' Security Concerns on Willingness to Adopt Cloud Computing Solutions

    ERIC Educational Resources Information Center

    Tanque, Marcus M.

    2012-01-01

    The research conducted in this study inquires about Information Technology (IT) managers' and executives' attitudes, beliefs, and knowledge on Cloud Computing (CC) security. The study evaluated how these factors affect IT managers' and executives' willingness to adopt CC solutions in their organizations. Confidentiality,…

  2. Leveraging Cloud Computing to Improve Storage Durability, Availability, and Cost for MER Maestro

    NASA Technical Reports Server (NTRS)

    Chang, George W.; Powell, Mark W.; Callas, John L.; Torres, Recaredo J.; Shams, Khawaja S.

    2012-01-01

    The Maestro for MER (Mars Exploration Rover) software is the premiere operation and activity planning software for the Mars rovers, and it is required to deliver all of the processed image products to scientists on demand. These data span multiple storage arrays sized at 2 TB, and a backup scheme ensures data is not lost. In a catastrophe, these data would currently recover at 20 GB/hour, taking several days for a restoration. A seamless solution provides access to highly durable, highly available, scalable, and cost-effective storage capabilities. This approach also employs a novel technique that enables storage of the majority of data on the cloud and some data locally. This feature is used to store the most recent data locally in order to guarantee utmost reliability in case of an outage or disconnect from the Internet. This also obviates any changes to the software that generates the most recent data set, as it still has the same interface to the file system as it did before the updates.

  3. Influence of host species on ectomycorrhizal communities associated with two co-occurring oaks (Quercus spp.) in a tropical cloud forest.

    PubMed

    Morris, Melissa H; Pérez-Pérez, Miguel A; Smith, Matthew E; Bledsoe, Caroline S

    2009-08-01

    Interactions between host tree species and ectomycorrhizal fungi are important in structuring ectomycorrhizal communities, but there are only a few studies on host influence of congeneric trees. We investigated ectomycorrhizal community assemblages on roots of deciduous Quercus crassifolia and evergreen Quercus laurina in a tropical montane cloud forest, one of the most endangered tropical forest ecosystems. Ectomycorrhizal fungi were identified by sequencing internal transcribed spacer and partial 28S rRNA gene. We sampled 80 soil cores and documented high ectomycorrhizal diversity with a total of 154 taxa. Canonical correspondence analysis indicated that oak host was significant in explaining some of the variation in ectomycorrhizal communities, despite the fact that the two Quercus species belong to the same red oak lineage (section Lobatae). A Tuber species, found in 23% of the soil cores, was the most frequent taxon. Similar to oak-dominated ectomycorrhizal communities in temperate forests, Thelephoraceae, Russulaceae and Sebacinales were diverse and dominant. PMID:19508503

  4. Farm Management Support on Cloud Computing Platform: A System for Cropland Monitoring Using Multi-Source Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Coburn, C. A.; Qin, Y.; Zhang, J.; Staenz, K.

    2015-12-01

    Food security is one of the most pressing issues facing humankind. Recent estimates predict that over one billion people do not have enough food to meet their basic nutritional needs. The ability of remote sensing tools to monitor and model crop production and predict crop yield is essential for providing governments and farmers with vital information to ensure food security. Google Earth Engine (GEE) is a cloud computing platform, which integrates storage and processing algorithms for massive remotely sensed imagery and vector data sets. By providing the capabilities of storing and analyzing the data sets, it provides an ideal platform for the development of advanced analytic tools for extracting key variables used in regional and national food security systems. With the high-performance computing and storage capabilities of GEE, a cloud-computing based system for near real-time cropland monitoring was developed using multi-source remotely sensed data over large areas. The system is able to process and visualize the MODIS time series NDVI profile in conjunction with Landsat 8 image segmentation for crop monitoring. With multi-temporal Landsat 8 imagery, the crop fields are extracted using the image segmentation algorithm developed by Baatz et al. [1]. The MODIS time series NDVI data are modeled by TIMESAT [2], a software package developed for analyzing time series of satellite data. Seasonality metrics of the MODIS time series data, for example the start date of the growing season, the length of the growing season, and the NDVI peak, are obtained at the field level for evaluating crop-growth conditions. The system fuses MODIS time series NDVI data and Landsat 8 imagery to provide information on near real-time crop-growth conditions through the visualization of MODIS NDVI time series and comparison of multi-year NDVI profiles. Stakeholders, i.e., farmers and government officers, are able to obtain crop-growth information at the crop-field level online. This unique utilization of GEE in
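
    A hedged sketch of the MODIS NDVI time-series component using the Earth Engine Python API is shown below; the dataset ID, band name, and region are assumptions for illustration, and the Landsat 8 segmentation and TIMESAT modeling steps are not reproduced here.

```python
# Sketch: field-averaged MODIS NDVI time series with the Earth Engine Python API.
# Dataset ID, band name, dates, and region are illustrative assumptions.
import ee

ee.Initialize()
region = ee.Geometry.Rectangle([-113.5, 49.5, -112.5, 50.5])   # hypothetical cropland area
ndvi = (ee.ImageCollection("MODIS/006/MOD13Q1")
        .filterDate("2015-04-01", "2015-10-01")
        .select("NDVI"))

def mean_over_region(img):
    # Reduce each 16-day composite to its mean NDVI over the region of interest.
    stat = img.reduceRegion(ee.Reducer.mean(), region, scale=250)
    return ee.Feature(None, {"date": img.date().format("YYYY-MM-dd"),
                             "ndvi": stat.get("NDVI")})

profile = ee.FeatureCollection(ndvi.map(mean_over_region))
print(profile.aggregate_array("ndvi").getInfo())
```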

  5. Implementation of a Message Passing Interface into a Cloud-Resolving Model for Massively Parallel Computing

    NASA Technical Reports Server (NTRS)

    Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve

    2004-01-01

    The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design for the MPP with MPI uses the concept of maintaining similar code structure between the whole domain as well as the portions after decomposition. Hence the model follows the same integration for single and multiple tasks (CPUs). Also, it provides for minimal changes to the original code, so it is easily modified and/or managed by the model developers and users who have little knowledge of MPP. The entire model domain can be sliced into a one- or two-dimensional decomposition with a halo regime, which is overlaid on the partial domains. The halo regime requires that no data be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for the spectral transform (Fast Fourier Transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions have speedups of about 99% up to 256 tasks but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computation than the compressible version; 3) equal or approximately equal numbers of slices between the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation for computation.
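
    The halo-exchange pattern described above (compute on a decomposed subdomain, then update ghost cells via MPI before the next stage) is sketched below with mpi4py for a one-dimensional decomposition. This is a generic illustration, not the GCE model code.

```python
# Sketch of a 1-D domain decomposition with a one-cell halo, updated between
# computational stages via MPI send/receive (mpi4py). Generic illustration only.
# Run with, e.g.: mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

n_local = 10
u = np.full(n_local + 2, float(rank))   # interior cells plus two halo cells

# Exchange halos: send the rightmost interior cell right and receive the left
# halo, then the mirror-image exchange for the other side.
comm.Sendrecv(sendbuf=u[n_local:n_local + 1], dest=right,
              recvbuf=u[0:1], source=left)
comm.Sendrecv(sendbuf=u[1:2], dest=left,
              recvbuf=u[n_local + 1:], source=right)

# The next computational stage can now use u[0] and u[n_local + 1] without
# any further communication.
u[1:n_local + 1] = 0.5 * (u[0:n_local] + u[2:n_local + 2])
```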

  6. A Heuristic Placement Selection of Live Virtual Machine Migration for Energy-Saving in Cloud Computing Environment

    PubMed Central

    Zhao, Jia; Hu, Liang; Ding, Yan; Xu, Gaochao; Hu, Ming

    2014-01-01

    The field of live VM (virtual machine) migration has been a hot topic in green cloud computing. The live VM migration problem is divided into two research aspects: the live VM migration mechanism and the live VM migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach, PS-ES, is presented. Its main idea includes two parts. One is that it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global search ability. The other is that it uses probability theory and mathematical statistics, and once again utilizes the SA idea, to process the data obtained from the improved PSO-based stage and reach the final solution. Thus the whole approach achieves a long-term optimization for energy saving, as it considers not only the optimization of the current problem scenario but also that of future ones. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migration compared with random migration and optimal migration. As a result, the proposed PS-ES approach is able to make the results of live VM migration events more effective and valuable. PMID:25251339
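
    The hybrid idea, a PSO-style search whose acceptance of candidate placements is tempered by a simulated-annealing criterion, can be sketched on a toy objective as below. This is only an illustration of combining the two heuristics, not the authors' PS-ES algorithm.

```python
# Toy sketch: a PSO-style search with a simulated-annealing acceptance step,
# minimizing a stand-in "incremental energy" objective. Illustrative only.
import math
import random

def energy(x):          # hypothetical incremental-energy cost of a placement
    return (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)

random.seed(0)
particles = [random.uniform(-10, 10) for _ in range(20)]
velocities = [0.0] * 20
pbest = particles[:]                     # personal bests
gbest = min(particles, key=energy)       # global best
temperature = 1.0

for step in range(200):
    for i, x in enumerate(particles):
        r1, r2 = random.random(), random.random()
        velocities[i] = (0.7 * velocities[i]
                         + 1.5 * r1 * (pbest[i] - x)
                         + 1.5 * r2 * (gbest - x))
        candidate = x + velocities[i]
        delta = energy(candidate) - energy(pbest[i])
        # SA-style acceptance: always take improvements, occasionally accept worse moves.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            pbest[i] = candidate
        particles[i] = candidate
    gbest = min(pbest, key=energy)
    temperature *= 0.98                  # cooling schedule

print("best placement:", gbest, "energy:", energy(gbest))
```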

  7. A heuristic placement selection of live virtual machine migration for energy-saving in cloud computing environment.

    PubMed

    Zhao, Jia; Hu, Liang; Ding, Yan; Xu, Gaochao; Hu, Ming

    2014-01-01

    The field of live VM (virtual machine) migration has been a hot topic in green cloud computing. The live VM migration problem is divided into two research aspects: the live VM migration mechanism and the live VM migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach, PS-ES, is presented. Its main idea includes two parts. One is that it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global search ability. The other is that it uses probability theory and mathematical statistics, and once again utilizes the SA idea, to process the data obtained from the improved PSO-based stage and reach the final solution. Thus the whole approach achieves a long-term optimization for energy saving, as it considers not only the optimization of the current problem scenario but also that of future ones. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migration compared with random migration and optimal migration. As a result, the proposed PS-ES approach is able to make the results of live VM migration events more effective and valuable. PMID:25251339

  8. A high performance cloud computing platform for mRNA analysis.

    PubMed

    Lin, Feng-Seng; Shen, Chia-Ping; Sung, Hsiao-Ya; Lam, Yan-Yu; Lin, Jeng-Wei; Lai, Feipei

    2013-01-01

    Multiclass classification is an important technique for many complex bioinformatics problems. However, its performance is limited by available computation power. Based on the Apache Hadoop design framework, this study proposes a two-layer architecture that exploits the inherent parallelism of GA-SVM classification to speed up the work. Performance evaluations on an mRNA benchmark cancer dataset reduced the feature set by 86.55% and raised accuracy from 97.53% to 98.03%. With a user-friendly web interface, the system provides researchers an easy way to investigate the unrevealed secrets in the fast-growing repository of bioinformatics data. PMID:24109986

  9. OpenID connect as a security service in Cloud-based diagnostic imaging systems

    NASA Astrophysics Data System (ADS)

    Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter

    2015-03-01

    The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained to their own physical facilities. However, privacy and security concerns have consistently been regarded as the major obstacle to the adoption of cloud computing in healthcare domains. Furthermore, traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. REST is an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards to potentially become the de facto standard for securing cloud computing and mobile applications, and has been regarded as the "Kerberos of the Cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow this technology to be incorporated within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among a DI-r (Diagnostic Imaging Repository), heterogeneous PACS (Picture Archiving and Communication Systems), and mobile clients in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, deploying DI-r and PACS to private or community clouds should achieve a security level equivalent to the traditional computing model.
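
    A minimal sketch of the OpenID Connect authorization-code token exchange that such a service builds on is shown below. The endpoints, client credentials, and field names are placeholders, and a production DI system would use a certified OIDC client library with full ID-token validation rather than raw HTTP calls.

```python
# Minimal sketch of the OpenID Connect authorization-code token exchange.
# All endpoints and credentials are placeholders; real deployments should use
# a certified OIDC client library and validate the ID token signature.
import requests

TOKEN_ENDPOINT = "https://idp.example.org/oauth2/token"      # hypothetical provider
code = "authorization-code-from-redirect"                    # obtained after user login

resp = requests.post(TOKEN_ENDPOINT, data={
    "grant_type": "authorization_code",
    "code": code,
    "redirect_uri": "https://di-r.example.org/callback",
    "client_id": "dir-client",
    "client_secret": "s3cret",
})
tokens = resp.json()
# The ID token (a JWT) asserts the user's identity; the access token authorizes
# calls to the image-sharing API.
print(tokens.get("token_type"), list(tokens.keys()))
```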

  10. Dropping Out of Computer Science: A Phenomenological Study of Student Lived Experiences in Community College Computer Science

    NASA Astrophysics Data System (ADS)

    Gilbert-Valencia, Daniel H.

    California community colleges contribute alarmingly few computer science degree or certificate earners. While the literature shows clear K-12 impediments to CS matriculation in higher education, very little is known about the experiences of those who overcome initial impediments to CS yet do not persist through to program completion. This phenomenological study explores insights into that specific experience by interviewing underrepresented, low income, first-generation college students who began community college intending to transfer to 4-year institutions majoring in CS but switched to another field and remain enrolled or graduated. This study explores the lived experiences of students facing barriers, their avenues for developing interest in CS, and the persistence support systems they encountered, specifically looking at how students constructed their academic choice from these experiences. The growing diversity within California's population necessitates that experiences specific to underrepresented students be considered as part of this exploration. Ten semi-structured interviews and observations were conducted, transcribed and coded. Artifacts supporting student experiences were also collected. Data was analyzed through a social-constructivist lens to provide insight into experiences and how they can be navigated to create actionable strategies for community college computer science departments wishing to increase student success. Three major themes emerged from this research: (1) students shared pre-college characteristics; (2) faced similar challenges in college CS courses; and (3) shared similar reactions to the "work" of computer science. Results of the study included (1) CS interest development hinged on computer ownership in the home; (2) participants shared characteristics that were ideal for college success but not CS success; and (3) encounters in CS departments produced unique challenges for participants. Though CS interest was and remains

  11. Transnational Computer Use in Urban Latino Immigrant Communities: Implications for Schooling

    ERIC Educational Resources Information Center

    Sanchez, Patricia; Salazar, Malena

    2012-01-01

    This article examines the ways in which transnational Latino immigrants in urban communities use computer technology. Drawing from a 3-year ethnographic study, it focuses on three second-generation transnational female youth, their families, and members of their respective immigrant networks. Data were collected in both the United States and…

  12. Women in Community College: Factors Related to Intentions to Pursue Computer Science

    ERIC Educational Resources Information Center

    Denner, Jill; Werner, Linda; O'Connor, Lisa

    2015-01-01

    Community colleges (CC) are obvious places to recruit more women into computer science. Enrollment at CCs has grown in response to a struggling economy, and students are more likely to be from underrepresented groups than students enrolled in 4-year universities (National Center for Education Statistics, 2008). However, we know little about why so…

  13. Evaluating How the Computer-Supported Collaborative Learning Community Fosters Critical Reflective Practices

    ERIC Educational Resources Information Center

    Ma, Ada W.W.

    2013-01-01

    In recent research, little attention has been paid to issues of methodology and analysis methods to evaluate the quality of the collaborative learning community. To address such issues, an attempt is made to adopt the Activity System Model as an analytical framework to examine the relationship between computer supported collaborative learning…

  14. An Open Letter to the Professional Communities of Australian Council for Computers in Education (ACCE)

    ERIC Educational Resources Information Center

    Williams, Michelle

    2005-01-01

    This article presents an open letter to the Professional Communities of Australian Council for Computers in Education (ACCE). In preparing for this article, the author looked back over the contributions of other fellows in publications and ACCE Minutes, and recognized that each had led the ACCE family in this endeavor. Creating direction was the…

  15. The Power of Computer-aided Tomography to Investigate Marine Benthic Communities

    EPA Science Inventory

    Utilization of Computer-aided-Tomography (CT) technology is a powerful tool to investigate benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications to marine benthic co...

  16. Evaluation of the PLATO IV Computer-based Education System in the Community College. Final Report.

    ERIC Educational Resources Information Center

    Murphy, Richard T.; Appel, Lola Rhea

    PLATO IV (Programmed Logic for Automatic Teaching Operations) is the fourth generation of a computer assisted instructional system developed at the University of Illinois. The use of PLATO IV at five community colleges, and an evaluation of its educational impact on participating students, instructors, and colleges are described. The PLATO system…

  17. Negotiating Knowledge Contribution to Multiple Discourse Communities: A Doctoral Student of Computer Science Writing for Publication

    ERIC Educational Resources Information Center

    Li, Yongyan

    2006-01-01

    Despite the rich literature on disciplinary knowledge construction and multilingual scholars' academic literacy practices, little is known about how novice scholars are engaged in knowledge construction in negotiation with various target discourse communities. In this case study, with a focused analysis of a Chinese computer science doctoral…

  18. State College Scavenger: Evaluating the Perspectives of Mobile Computing Interactions within Community Spaces

    ERIC Educational Resources Information Center

    Hoffman, Blaine

    2013-01-01

    This work focuses on the impact of mobile computing on individuals' perspectives of places within their community. A technological intervention is designed and deployed to augment the user experience of visiting different locations around town, physically exploring them while also interacting with an online tool. The tool-supported activity serves…

  19. Community Colleges in the Information Age: Gains Associated with Students' Use of Computer Technology

    ERIC Educational Resources Information Center

    Anderson, Bodi; Horn, Robert

    2012-01-01

    Computer literacy is increasingly important in higher education, and many educational technology experts propose a more prominent integration of technology into pedagogy. Empirical evidence is needed to support these theories. This study examined community college students planning to transfer to 4-year universities and estimated the relationship…

  20. Cloud-based opportunities in scientific computing: insights from processing Suomi National Polar-Orbiting Partnership (S-NPP) Direct Broadcast data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S.

    2013-12-01

    The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing -- for example:
    - Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete.
    - 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing -- for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably-priced 'utility computing.'
    - Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms). This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of
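
    A hedged sketch of the data-driven, spot-priced provisioning described above is given below using boto3; the AMI ID, instance type, and asking price are placeholders, and the actual system's workflow management is not shown.

```python
# Sketch of data-driven instance provisioning with boto3: when a new pass of
# S-NPP data arrives, request a spot instance to process it, then let the
# workflow terminate the instance when done. AMI ID, instance type, and spot
# price are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_processing_node(max_price="0.10"):
    resp = ec2.request_spot_instances(
        SpotPrice=max_price,                       # asking price in USD/hour
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",    # placeholder processing AMI
            "InstanceType": "c5.2xlarge",
            # Per-pass instructions (e.g., which granules to fetch) could be
            # passed as base64-encoded UserData here.
        },
    )
    return resp["SpotInstanceRequests"][0]["SpotInstanceRequestId"]

# Triggered by new data availability (e.g., a message from the ground station):
print("spot request:", launch_processing_node())
```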