Science.gov

Sample records for scientific computing service

  1. Scientific computing infrastructure and services in Moldova

    NASA Astrophysics Data System (ADS)

    Bogatencov, P. P.; Secrieru, G. V.; Degteariov, N. V.; Iliuha, N. P.

    2016-09-01

    In recent years, distributed information processing and high-performance computing technologies (HPC, distributed Cloud and Grid computing infrastructures) for solving complex tasks with high demands on computing resources have been developing actively. In Moldova, work on creating high-performance and distributed computing infrastructures started relatively recently, driven by participation in a number of international projects. Research teams from Moldova participated in a series of regional and pan-European projects that allowed them to begin forming the national heterogeneous computing infrastructure, gain access to regional and European computing resources, and expand the range and areas of tasks they can solve.

  2. Availability measurement of grid services from the perspective of a scientific computing centre

    NASA Astrophysics Data System (ADS)

    Marten, H.; Koenig, T.

    2011-12-01

    The Karlsruhe Institute of Technology (KIT) is the merger of Forschungszentrum Karlsruhe and the Technical University Karlsruhe. The Steinbuch Centre for Computing (SCC) was one of the first new organizational units of KIT, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the University. IT service management according to the worldwide de facto standard "IT Infrastructure Library (ITIL)" [1] was chosen by SCC as a strategic element to support the merging of the two existing computing centres located at a distance of about 10 km. The availability and reliability of IT services directly influence customer satisfaction as well as the reputation of the service provider, and unscheduled loss of availability due to hardware or software failures may even result in severe consequences like data loss. Fault-tolerant and error-correcting design features reduce the risk of IT component failures and help to improve the delivered availability. The ITIL process controlling the respective design is called Availability Management [1]. This paper discusses Availability Management regarding grid services delivered to WLCG and provides a few elementary guidelines for availability measurements and calculations of services consisting of arbitrary numbers of components.
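
    A minimal illustration of the kind of composite-availability calculation the abstract refers to (not taken from the paper): under the standard assumption of independent component failures, the availability of a chain of required components is the product of their availabilities, while redundant replicas combine as one minus the product of their unavailabilities. The Python sketch below applies these textbook formulas to a hypothetical grid service.

    from math import prod

    def serial_availability(components):
        """All components must be up for the composite service to be up."""
        return prod(components)

    def parallel_availability(components):
        """The service is up if at least one redundant component is up."""
        return 1.0 - prod(1.0 - a for a in components)

    if __name__ == "__main__":
        # Hypothetical example: a compute element at 99.5% availability in
        # series with a storage element that has two replicas at 99.0% each.
        storage = parallel_availability([0.990, 0.990])
        service = serial_availability([0.995, storage])
        print(f"storage: {storage:.4%}  service: {service:.4%}")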

  3. High-End Scientific Computing

    EPA Pesticide Factsheets

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  4. Using Cloud-Computing Applications to Support Collaborative Scientific Inquiry: Examining Pre-Service Teachers' Perceived Barriers to Integration

    ERIC Educational Resources Information Center

    Donna, Joel D.; Miller, Brant G.

    2013-01-01

    Technology plays a crucial role in facilitating collaboration within the scientific community. Cloud-computing applications, such as Google Drive, can be used to model such collaboration and support inquiry within the secondary science classroom. Little is known about pre-service teachers' beliefs related to the envisioned use of collaborative,…

  5. Scientific Grid computing.

    PubMed

    Coveney, Peter V

    2005-08-15

    We introduce a definition of Grid computing which is adhered to throughout this Theme Issue. We compare the evolution of the World Wide Web with current aspirations for Grid computing and indicate areas that need further research and development before a generally usable Grid infrastructure becomes available. We discuss work that has been done in order to make scientific Grid computing a viable proposition, including the building of Grids, middleware developments, computational steering and visualization. We review science that has been enabled by contemporary computational Grids, and associated progress made through the widening availability of high performance computing.

  6. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment, and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use-case is indexed separately in ElasticSearch and we set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through a RESTful web service developed for this purpose. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication to the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII
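
    A hedged sketch of the kind of ingestion step the custom Logstash plugin performs (this is not the site's actual plugin): one accounting record, read from the relational back-end, is posted to an ElasticSearch index over its REST API. The host, index name, and record fields below are illustrative assumptions.

    import json
    import urllib.request

    # Hypothetical accounting record as it might come out of the MySQL back-end.
    record = {
        "tenant": "alice-tier2",
        "cpu_hours": 1234.5,
        "timestamp": "2015-06-01T12:00:00Z",
    }

    # Assumed ElasticSearch endpoint and index name.
    req = urllib.request.Request(
        "http://localhost:9200/iaas-accounting/_doc",
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())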

  7. Scientific Services on the Cloud

    NASA Astrophysics Data System (ADS)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications for parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  8. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations generally do not have the computing power to tackle complex scientific applications on their own, making them primarily useful for visualization, data reduction, and filtering. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  9. Computers in Scientific Instrumentation.

    DTIC Science & Technology

    1982-01-13

  10. Computers and Computation. Readings from Scientific American.

    ERIC Educational Resources Information Center

    Fenichel, Robert R.; Weizenbaum, Joseph

    A collection of articles from "Scientific American" magazine has been put together at this time because the current period in computer science is one of consolidation rather than innovation. A few years ago, computer science was moving so swiftly that even the professional journals were more archival than informative; but today it is…

  11. Research on Web-based Scientific Computing Legacy Application Sharing

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Cui, Bin-Ge

    With the development of Internet technology, a large number of scientific computing legacy programs embodying rich domain knowledge and expertise have been distributed across various disciplines. Because of differences in program implementations, interfaces, and so on, these scientific computing legacy programs cannot be shared through the Internet. This paper proposes a method of packaging scientific computing legacy programs into DLLs (Dynamic Link Libraries) and exposing them as Web services through C# reflection, so that the legacy programs can be shared successfully on the Internet.
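
    The paper's own method uses C# reflection and .NET Web services; as a rough, hedged analog of the same idea, the Python sketch below wraps a routine from a hypothetical legacy shared library (legacy.so, exposing a function solve(double)) behind a tiny HTTP endpoint. Both the library name and the function signature are stand-ins, not anything from the paper.

    import ctypes
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Load the hypothetical legacy library and declare its routine's signature.
    legacy = ctypes.CDLL("./legacy.so")
    legacy.solve.argtypes = [ctypes.c_double]
    legacy.solve.restype = ctypes.c_double

    class LegacyService(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read a JSON body like {"x": 3.0}, call the legacy routine,
            # and return the result as JSON.
            length = int(self.headers["Content-Length"])
            x = json.loads(self.rfile.read(length))["x"]
            result = legacy.solve(ctypes.c_double(x))
            body = json.dumps({"result": result}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), LegacyService).serve_forever()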

  12. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.

  13. Accelerating Scientific Computations using FPGAs

    NASA Astrophysics Data System (ADS)

    Pell, O.; Atasu, K.; Mencer, O.

    Field Programmable Gate Arrays (FPGAs) are semiconductor devices that contain a grid of programmable cells, which the user configures to implement any digital circuit of up to a few million gates. Modern FPGAs allow the user to reconfigure these circuits many times each second, making FPGAs fully programmable and general purpose. Recent FPGA technology provides sufficient resources to tackle scientific applications on large-scale parallel systems. As a case study, we implement the Fast Fourier Transform [1] with a flexible floating-point implementation. We utilize A Stream Compiler [2] (ASC), which combines C++ syntax with flexible floating-point support by providing a 'HWfloat' data-type. The resulting FFT can be targeted to a variety of FPGA platforms in FFTW-style, though not yet completely automatically. The resulting FFT circuit can be adapted to the particular resources available on the system. The optimal implementation of an FFT accelerator depends on the length and dimensionality of the FFT, the available FPGA area, the available hard DSP blocks, the FPGA board architecture, and the precision and range of the application [3]. Software-style object-oriented abstractions allow us to pursue an accelerated pace of development by maximizing re-use of design patterns. ASC allows a few core hardware descriptions to generate hundreds of different circuit variants to meet particular speed, area and precision goals. The key to achieving maximum acceleration of FFT computation is to match memory and compute bandwidths so that maximum use is made of computational resources. Modern FPGAs contain up to hundreds of independent SRAM banks to store intermediate results, providing ample scope for optimizing memory parallelism. At 175 MHz, one of Maxeler's Radix-4 FFT cores computes 4x as many 1024-point FFTs per second as a dual Pentium-IV Xeon machine running FFTW. Eight such parallel cores fit onto the largest FPGA in the Xilinx Virtex-4 family, providing a 32x speed-up over

  14. National Energy Research Scientific Computing Center 2007 Annual Report

    SciTech Connect

    Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

    2008-10-23

    This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

  15. Supporting the scientific lifecycle through cloud services

    NASA Astrophysics Data System (ADS)

    Gensch, S.; Klump, J. F.; Bertelmann, R.; Braune, C.

    2014-12-01

    Cloud computing has made resources and applications available for numerous use cases ranging from business processes in the private sector to scientific applications. Developers have created tools for data management, collaborative writing, social networking, data access and visualization, project management and many more, either for free or as paid premium services with additional or extended features. Scientists have begun to incorporate tools that fit their needs into their daily work. To satisfy specialized needs, some cloud applications specifically address the needs of scientists for sharing research data, literature search, laboratory documentation, or data visualization. Cloud services may vary in extent, user coverage, and inter-service integration, and are also at risk of being abandoned or changed by service providers altering their business model or leaving the field entirely. Within the project Academic Enterprise Cloud we examine cloud based services that support the research lifecycle, using feature models to describe key properties in the areas of infrastructure and service provision, compliance to legal regulations, and data curation. Emphasis is put on the term Enterprise so as to establish an academic cloud service provider infrastructure that satisfies demands of the research community through continuous provision across the whole cloud stack. This could enable the research community to be independent of service providers regarding changes to terms of service and to ensure full control of its extent and usage. This shift towards a self-empowered scientific cloud provider infrastructure and its community raises implications about feasibility of provision and overall costs. Legal aspects and licensing issues have to be considered when moving data into cloud services, especially when personal data is involved. Educating researchers about cloud based tools is important to help in the transition towards effective and safe use. Scientists

  16. Scientific Computation of Optimal Statistical Estimators

    DTIC Science & Technology

    2015-07-13

    AFRL-AFOSR-VA-TR-2015-0276, "Scientific Computation of Optimal Statistical Estimators," Houman Owhadi, California Institute of Technology, 1200 E. California Blvd, Pasadena, CA 91125. Final report for AFOSR contract FA9550-12-1-0389, covering the period 8/1/12 - 7/31/15. DISTRIBUTION A: Distribution approved for public release.

  17. Scientific computing environment for the 1980s

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.

    1986-01-01

    An emerging scientific computing environment in which computers are used not only to solve large-scale models, but are also integrated into the daily activities of scientists and engineers, is discussed. The requirements of the scientific user in this environment are reviewed, and the hardware environment is described, including supercomputers, work stations, mass storage, and communications. Significant increases in memory capacity to keep pace with performance increases, the introduction of powerful graphics displays into the work station, and networking to integrate many computers are stressed. The emerging system software environment is considered, including the operating systems, communications software, and languages. New scientific user tools and utilities that will become available are described.

  18. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee Report on Scientific and Technical Information

    SciTech Connect

    Hey, Tony; Agarwal, Deborah; Borgman, Christine; Cartaro, Concetta; Crivelli, Silvia; Van Dam, Kerstin Kleese; Luce, Richard; Arjun, Shankar; Trefethen, Anne; Wade, Alex; Williams, Dean

    2015-09-04

    The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy's Office of Scientific and Technical Information (OSTI) and to begin by assessing the quality and effectiveness of OSTI's recent and current products and services and to comment on its mission and future directions in the rapidly changing environment for scientific publication and data. The Committee met with OSTI staff and reviewed available products, services and other materials. This report summarizes their initial findings and recommendations.

  19. Scientific Computing on the Grid

    SciTech Connect

    Allen, Gabrielle; Seidel, Edward; Shalf, John

    2001-12-12

    Computer simulations are becoming increasingly important as the only means for studying and interpreting the complex processes of nature. Yet the scope and accuracy of these simulations are severely limited by available computational power, even using today's most powerful supercomputers. As we endeavor to simulate the true complexity of nature, we will require much larger scale calculations than are possible at present. Such dynamic and large scale applications will require computational grids, and grids in turn require the development of new latency-tolerant algorithms and sophisticated code frameworks like Cactus to carry out more complex and high-fidelity simulations with a massive degree of parallelism.

  20. Ontology-Driven Discovery of Scientific Computational Entities

    ERIC Educational Resources Information Center

    Brazier, Pearl W.

    2010-01-01

    Many geoscientists use modern computational resources, such as software applications, Web services, scientific workflows and datasets that are readily available on the Internet, to support their research and many common tasks. These resources are often shared via human contact and sometimes stored in data portals; however, they are not necessarily…

  1. Introduction to the LaRC central scientific computing complex

    NASA Technical Reports Server (NTRS)

    Shoosmith, John N.

    1993-01-01

    The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation) are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  2. IODP Scientific Earth Drilling Information Service

    NASA Astrophysics Data System (ADS)

    Wallrabe-Adams, H.-J.; Diepenbroek, M.; Grobe, H.; Huber, R.; Schindler, U.; Collier, J.

    2012-04-01

    The Integrated Ocean Drilling Program (IODP) has set up a web-based information service (Scientific Earth Drilling Information Service, SEDIS, http://sedis.iodp.org), which integrates the data of the three IODP implementing organizations from the United States (USIO), Japan (CDEX) and Europe with Canada (ESO). The SEDIS portal provides information on ODP, DSDP and IODP expeditions, publications and data. Moreover, post-cruise data has been collected and published via the portal. A thesaurus supports information and data searches. Data sets can be downloaded as tab-delimited text files. SEDIS is also being prepared to include other IODP relevant scientific drilling data from terrestrial or lake drilling programs. The portal is designed to integrate available scientific data via metadata by employing international standards for metadata, data exchange and transfer.

  3. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    SciTech Connect

    Schlicher, Bob G; Kulesz, James J; Abercrombie, Robert K; Kruse, Kara L

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable and rely on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer merely allude to where the data comes from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation of world-wide scientific literature, and recommends a system that is housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  4. Web Services Provide Access to SCEC Scientific Research Application Software

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.

    2003-12-01

    Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the
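
    As a hedged illustration of access method (B) described in the abstract, a command-line "initiator" utility could be as small as the Python sketch below: it submits a request to a remote web-service wrapper and writes the returned product to disk. The endpoint URL and parameter names are hypothetical and are not SCEC's actual interface.

    import json
    import sys
    import urllib.request

    def run_remote(endpoint, params, out_path):
        """POST a JSON request to a remote service and save the response."""
        req = urllib.request.Request(
            endpoint,
            data=json.dumps(params).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp, open(out_path, "wb") as out:
            out.write(resp.read())

    if __name__ == "__main__":
        # Example: request a product for an event id given on the command line
        # from a purely hypothetical service endpoint.
        run_remote("https://example.org/scec/seismogram",
                   {"event_id": sys.argv[1]}, "result.dat")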

  5. Intel Woodcrest: An Evaluation for Scientific Computing

    SciTech Connect

    Roth, Philip C; Vetter, Jeffrey S

    2007-01-01

    Intel recently began shipping its Xeon 5100 series processors, formerly known by their 'Woodcrest' code name. To evaluate the suitability of the Woodcrest processor for high-end scientific computing, we obtained access to a Woodcrest-based system at Intel and measured its performance first using computation and memory micro-benchmarks, followed by full applications from the areas of climate modeling and molecular dynamics. For computational benchmarks, the Woodcrest showed excellent performance compared to a test system that uses Opteron processors from Advanced Micro Devices (AMD), though its performance advantage for full applications was less definitive. Nevertheless, our evaluation suggests the Woodcrest to be a compelling foundation for future leadership class systems for scientific computing.

  6. Comparisons of some large scientific computers

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1981-01-01

    In 1975, the National Aeronautics and Space Administration (NASA) began studies to assess the technical and economic feasibility of developing a computer having sustained computational speed of one billion floating point operations per second and a working memory of at least 240 million words. Such a powerful computer would allow computational aerodynamics to play a major role in aeronautical design and advanced fluid dynamics research. Based on favorable results from these studies, NASA proceeded with developmental plans. The computer was named the Numerical Aerodynamic Simulator (NAS). To help ensure that the estimated cost, schedule, and technical scope were realistic, a brief study was made of past large scientific computers. Large discrepancies between inception and operation in scope, cost, or schedule were studied so that they could be minimized with NASA's proposed new computer. The main computers studied were the ILLIAC IV, STAR 100, Parallel Element Processor Ensemble (PEPE), and Shuttle Mission Simulator (SMS) computer. Comparison data on memory and speed were also obtained on the IBM 650, 704, 7090, 360-50, 360-67, 360-91, and 370-195; the CDC 6400, 6600, 7600, CYBER 203, and CYBER 205; CRAY 1; and the Advanced Scientific Computer (ASC). A few lessons learned conclude the report.

  7. SCE: Grid Environment for Scientific Computing

    NASA Astrophysics Data System (ADS)

    Xiao, Haili; Wu, Hong; Chi, Xuebin

    Over the last few years Grid computing has evolved into an innovative technology and gained increasing commercial adoption. However, existing Grids do not have enough users for sustainable development in the long term. This paper proposes several suggestions to address this problem on the basis of long-term experience and careful analysis. The Scientific Computing Environment (SCE) in the Chinese Academy of Sciences is introduced as a completely new model and a feasible solution to this problem.

  8. Exploring HPCS Languages in Scientific Computing

    SciTech Connect

    Barrett, Richard F; Alam, Sadaf R; de Almeida, Valmor F; Bernholdt, David E; Elwasif, Wael R; Kuehn, Jeffery A; Poole, Stephen W; Shet, Aniruddha G

    2008-01-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

  9. Accelerating Scientific Discovery Through Computation and Visualization

    PubMed Central

    Sims, James S.; Hagedorn, John G.; Ketcham, Peter M.; Satterfield, Steven G.; Griffin, Terence J.; George, William L.; Fowler, Howland A.; am Ende, Barbara A.; Hung, Howard K.; Bohn, Robert B.; Koontz, John E.; Martys, Nicos S.; Bouldin, Charles E.; Warren, James A.; Feder, David L.; Clark, Charles W.; Filla, B. James; Devaney, Judith E.

    2000-01-01

    The rate of scientific discovery can be accelerated through computation and visualization. This acceleration results from the synergy of expertise, computing tools, and hardware for enabling high-performance computation, information science, and visualization that is provided by a team of computation and visualization scientists collaborating in a peer-to-peer effort with the research scientists. In the context of this discussion, high performance refers to capabilities beyond the current state of the art in desktop computing. To be effective in this arena, a team comprising a critical mass of talent, parallel computing techniques, visualization algorithms, advanced visualization hardware, and a recurring investment is required to stay beyond the desktop capabilities. This article describes, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing and visualization to accelerate (1) condensate modeling, (2) fluid flow in porous materials and in other complex geometries, (3) flows in suspensions, (4) x-ray absorption, (5) dielectric breakdown modeling, and (6) dendritic growth in alloys. PMID:27551642

  10. Accelerating Scientific Discovery Through Computation and Visualization.

    PubMed

    Sims, J S; Hagedorn, J G; Ketcham, P M; Satterfield, S G; Griffin, T J; George, W L; Fowler, H A; Am Ende, B A; Hung, H K; Bohn, R B; Koontz, J E; Martys, N S; Bouldin, C E; Warren, J A; Feder, D L; Clark, C W; Filla, B J; Devaney, J E

    2000-01-01

    The rate of scientific discovery can be accelerated through computation and visualization. This acceleration results from the synergy of expertise, computing tools, and hardware for enabling high-performance computation, information science, and visualization that is provided by a team of computation and visualization scientists collaborating in a peer-to-peer effort with the research scientists. In the context of this discussion, high performance refers to capabilities beyond the current state of the art in desktop computing. To be effective in this arena, a team comprising a critical mass of talent, parallel computing techniques, visualization algorithms, advanced visualization hardware, and a recurring investment is required to stay beyond the desktop capabilities. This article describes, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing and visualization to accelerate (1) condensate modeling, (2) fluid flow in porous materials and in other complex geometries, (3) flows in suspensions, (4) x-ray absorption, (5) dielectric breakdown modeling, and (6) dendritic growth in alloys.

  11. Scientific Computing Kernels on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
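
    For reference, one of the kernels the study benchmarks, sparse matrix-vector multiply in compressed sparse row (CSR) form, can be written in a few lines. The Python sketch below is purely illustrative and is unrelated to the paper's Cell implementations.

    import numpy as np

    def spmv_csr(values, col_idx, row_ptr, x):
        """y = A @ x for A stored in compressed sparse row (CSR) form."""
        n_rows = len(row_ptr) - 1
        y = np.zeros(n_rows)
        for i in range(n_rows):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += values[k] * x[col_idx[k]]
        return y

    # 2x2 example: A = [[4, 0], [1, 3]], x = [1, 2] -> y = [4, 7]
    values, col_idx, row_ptr = [4.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3]
    print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 2.0])))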

  12. Numerical recipes, The art of scientific computing

    SciTech Connect

    Press, W.H.; Flannery, B.P.; Teukolsky, S.; Vetterling, W.T.

    1986-01-01

    Seventeen chapters divided into 130 sections provide a self-contained treatment that derives, critically discusses, and actually implements over 200 of the most important numerical algorithms for scientific work. Each algorithm is presented both in FORTRAN and Pascal, with the source programs printed in the book itself. The scope of Numerical Recipes ranges from standard areas of numerical analysis (linear algebra, differential equations, roots) through subjects useful to signal processing (Fourier methods, filtering) and data analysis (least squares, robust fitting, statistical functions) to simulation (random deviates and Monte Carlo). The routines themselves are available for a wide variety of different computers, from personal computers to mainframes, and are largely portable among different machines.

  13. Enabling NVM for Data-Intensive Scientific Services

    SciTech Connect

    Carns, Philip; Jenkins, John; Seo, Sangmin; Snyder, Shane; Ross, Rob; Cranor, Chuck; Atchley, Scott; Hoefler, Torsten

    2016-01-01

    Specialized, transient data services are playing an increasingly prominent role in data-intensive scientific computing. These services offer flexible, on-demand pairing of applications with storage hardware using semantics that are optimized for the problem domain. Concurrent with this trend, upcoming scientific computing and big data systems will be deployed with emerging NVM technology to achieve the highest possible price/productivity ratio. Clearly, therefore, we must develop techniques to facilitate the confluence of specialized data services and NVM technology. In this work we explore how to enable the composition of NVM resources within transient distributed services while still retaining their essential performance characteristics. Our approach involves eschewing the conventional distributed file system model and instead projecting NVM devices as remote microservices that leverage user-level threads, RPC services, RMA-enabled network transports, and persistent memory libraries in order to maximize performance. We describe a prototype system that incorporates these concepts, evaluate its performance for key workloads on an exemplar system, and discuss how the system can be leveraged as a component of future data-intensive architectures.

  14. Scientific Visualization and Computational Science: Natural Partners

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third alternative to theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that can not really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: Given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: Given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. Computational science is a fertile field for visualization

  15. Enabling Computational Technologies for Terascale Scientific Simulations

    SciTech Connect

    Ashby, S.F.

    2000-08-24

    We develop scalable algorithms and object-oriented code frameworks for terascale scientific simulations on massively parallel processors (MPPs). Our research in multigrid-based linear solvers and adaptive mesh refinement enables Laboratory programs to use MPPs to explore important physical phenomena. For example, our research aids stockpile stewardship by making practical detailed 3D simulations of radiation transport. The need to solve large linear systems arises in many applications, including radiation transport, structural dynamics, combustion, and flow in porous media. These systems result from discretizations of partial differential equations on computational meshes. Our first research objective is to develop multigrid preconditioned iterative methods for such problems and to demonstrate their scalability on MPPs. Scalability describes how total computational work grows with problem size; it measures how effectively additional resources can help solve increasingly larger problems. Many factors contribute to scalability: computer architecture, parallel implementation, and choice of algorithm. Scalable algorithms have been shown to decrease simulation times by several orders of magnitude.
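
    As a hedged sketch of the general idea (with a simple Jacobi preconditioner standing in for the multigrid preconditioners the project develops), the following Python/SciPy fragment solves a sparse system arising from a 1D Poisson discretization with preconditioned conjugate gradients.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    # Sparse system from a 1D Poisson discretization (tridiagonal -1, 2, -1).
    n = 1000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Simple diagonal (Jacobi) preconditioner as a stand-in for multigrid.
    inv_diag = 1.0 / A.diagonal()
    M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)

    x, info = cg(A, b, M=M)
    print("converged" if info == 0 else f"cg returned {info}",
          "residual:", np.linalg.norm(b - A @ x))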

  16. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    SciTech Connect

    Hules, J.

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  17. Parallel hypergraph partitioning for scientific computing.

    SciTech Connect

    Heaphy, Robert; Devine, Karen Dragon; Catalyurek, Umit; Bisseling, Robert; Hendrickson, Bruce Alan; Boman, Erik Gunnar

    2005-07-01

    Graph partitioning is often used for load balancing in parallel computing, but it is known that hypergraph partitioning has several advantages. First, hypergraphs more accurately model communication volume, and second, they are more expressive and can better represent nonsymmetric problems. Hypergraph partitioning is particularly suited to parallel sparse matrix-vector multiplication, a common kernel in scientific computing. We present a parallel software package for hypergraph (and sparse matrix) partitioning developed at Sandia National Labs. The algorithm is a variation on multilevel partitioning. Our parallel implementation is novel in that it uses a two-dimensional data distribution among processors. We present empirical results that show our parallel implementation achieves good speedup on several large problems (up to 33 million nonzeros) with up to 64 processors on a Linux cluster.
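
    The quantity that hypergraph partitioning minimizes for sparse matrix-vector multiplication is commonly expressed as the (lambda - 1) communication-volume metric: each matrix column is a net, and its cost is the number of distinct parts touching it, minus one. The short Python sketch below computes this metric for a toy matrix; it is illustrative only and is not part of the Sandia package.

    def communication_volume(rows, cols, row_part):
        """rows/cols: coordinates of nonzeros; row_part[i]: part owning row i."""
        nets = {}
        for i, j in zip(rows, cols):
            nets.setdefault(j, set()).add(row_part[i])
        # Each net (column) costs (number of distinct parts touching it) - 1.
        return sum(len(parts) - 1 for parts in nets.values())

    # 4x4 example with nonzeros on the diagonal and in the first column,
    # rows split between two parts: only column 0 spans both parts -> volume 1.
    rows = [0, 1, 2, 3, 1, 2, 3]
    cols = [0, 1, 2, 3, 0, 0, 0]
    print(communication_volume(rows, cols, row_part=[0, 0, 1, 1]))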

  18. Large scale scientific computing - future directions

    NASA Astrophysics Data System (ADS)

    Patterson, G. S.

    1982-06-01

    Every new generation of scientific computers has opened up new areas of science for exploration through the use of more realistic numerical models or the ability to process ever larger amounts of data. Concomitantly, scientists, because of the success of past models and the wide range of physical phenomena left unexplored, have pressed computer designers to strive for the maximum performance that current technology will permit. This encompasses not only increased processor speed, but also substantial improvements in processor memory, I/O bandwidth, secondary storage and facilities to augment the scientist's ability both to program and to understand the results of a computation. Over the past decade, performance improvements for scientific calculations have come from algorithm development and a major change in the underlying architecture of the hardware, not from significantly faster circuitry. It appears that this trend will continue for another decade. A future architectural change for improved performance will most likely be multiple processors coupled together in some fashion. Because the demand for a significantly more powerful computer system comes from users with single large applications, it is essential that an application be efficiently partitionable over a set of processors; otherwise, a multiprocessor system will not be effective. This paper explores some of the constraints on multiple processor architecture posed by these large applications. In particular, the trade-offs between large numbers of slow processors and small numbers of fast processors is examined. Strategies for partitioning range from partitioning at the language statement level (in-the-small) to partitioning at the program module level (in-the-large). Some examples of partitioning in-the-large are given and a strategy for efficiently executing a partitioned program is explored.

  19. 75 FR 64720 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-20

    .../Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department...

  20. 75 FR 9887 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  1. 78 FR 6087 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-29

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  2. 75 FR 43518 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, DOE. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing Advisory..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  3. 75 FR 57742 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... Scientific Computing Advisory Committee (ASCAC). Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770...: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building;...

  4. 76 FR 41234 - Advanced Scientific Computing Advisory Committee Charter Renewal

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ... Advanced Scientific Computing Advisory Committee Charter Renewal AGENCY: Department of Energy, Office of... Administration, notice is hereby given that the Advanced Scientific Computing Advisory Committee will be renewed... concerning the Advanced Scientific Computing program in response only to charges from the Director of...

  5. 76 FR 9765 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research, SC-21/Germantown Building, U.S. Department of...

  6. 77 FR 45345 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    .../Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  7. 78 FR 41046 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... hereby given that the Advanced Scientific Computing Advisory Committee will be renewed for a two-year... (DOE), on the Advanced Scientific Computing Research Program managed by the Office of...

  8. SDS: A Framework for Scientific Data Services

    SciTech Connect

    Dong, Bin; Byna, Surendra; Wu, Kesheng

    2013-10-31

    Large-scale scientific applications typically write their data to parallel file systems with organizations designed to achieve fast write speeds. Analysis tasks frequently read the data in a pattern that is different from the write pattern, and therefore experience poor I/O performance. In this paper, we introduce a prototype framework for bridging the performance gap between write and read stages of data access from parallel file systems. We call this framework Scientific Data Services, or SDS for short. This initial implementation of SDS focuses on reorganizing previously written files into data layouts that benefit read patterns, and transparently directs read calls to the reorganized data. SDS follows a client-server architecture. The SDS Server manages partial or full replicas of reorganized datasets and serves SDS Clients' requests for data. The current version of the SDS client library supports the HDF5 programming interface for reading data. The client library intercepts HDF5 calls and transparently redirects them to the reorganized data. The SDS client library also provides a querying interface for reading part of the data based on user-specified selective criteria. We describe the design and implementation of the SDS client-server architecture, and evaluate the response time of the SDS Server and the performance benefits of SDS.
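
    For context, the sketch below shows the style of HDF5 read call (via h5py, with a hypothetical file and dataset name) that the SDS client library intercepts; a selective read of a sub-region is exactly the kind of request a read-optimized replica could serve. This is illustrative only and is not the SDS client API.

    import h5py
    import numpy as np

    # Hypothetical simulation output file and dataset path.
    with h5py.File("simulation_output.h5", "r") as f:
        temps = f["/fields/temperature"]
        # A selective read: only the sub-region the analysis actually needs,
        # which SDS could redirect to a reorganized, read-friendly layout.
        hot_slab = temps[100:200, :, :]
        print(hot_slab.shape, np.mean(hot_slab))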

  9. PISCES: An environment for parallel scientific computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. The Pisces 1 user programs in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.

  10. InSAR Scientific Computing Environment

    NASA Astrophysics Data System (ADS)

    Gurrola, E. M.; Rosen, P. A.; Sacco, G.; Zebker, H. A.; Simons, M.; Sandwell, D. T.

    2010-12-01

    The InSAR Scientific Computing Environment (ISCE) is a software development effort in its second year within the NASA Advanced Information Systems and Technology program. The ISCE will provide a new computing environment for geodetic image processing for InSAR sensors that will enable scientists to reduce measurements directly from radar satellites and aircraft to new geophysical products without first requiring them to develop detailed expertise in radar processing methods. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. The NRC Decadal Survey-recommended DESDynI mission will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment is planned to become a key element in processing DESDynI data into higher level data products, and it is expected to enable a new class of analyses that take greater advantage of the long time spans and large spatial scales of these new data than current approaches do. At the core of ISCE is both legacy processing software from the JPL/Caltech ROI_PAC repeat-pass interferometry package as well as a new InSAR processing package containing more efficient and more accurate processing algorithms being developed at Stanford for this project that is based on experience gained in developing processors for missions such as SRTM and UAVSAR. Around the core InSAR processing programs we are building object-oriented wrappers to enable their incorporation into a more modern, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models, and a robust, intuitive user interface with

  11. OPENING REMARKS: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  12. 76 FR 45786 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, Department of Energy... Computing Advisory Committee (ASCAC). Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770) requires... INFORMATION CONTACT: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown...

  13. InSAR Scientific Computing Environment (Invited)

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Gurrola, E. M.; Sacco, G.; Zebker, H. A.; Simons, M.; Sandwell, D. T.

    2009-12-01

    The InSAR Scientific Computing Environment (ISCE) is a new development effort within the NASA Advanced Information Systems and Technology program, with the intent of recasting the JPL/Caltech ROI_PAC repeat-pass interferometry package into a modern, reconfigurable, open-source computing environment. The new capability initiates the next generation of geodetic image processing technology for InSAR sensors, providing flexibility and extensibility in reducing measurements from radar satellites and aircraft to new geophysical products. The NRC Decadal Survey-recommended DESDynI mission will deliver to the science community data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth’s ecosystem. DESDynI will provide time series and multi-image measurements that permit four-dimensional models of Earth surface processes so that, for example, climate-induced changes over time become apparent and quantifiable. In this paper, we describe the Environment and illustrate how it can facilitate space-based geodesy from InSAR. The ISCE invokes object-oriented scripts to control legacy and new codes, and abstracts and generalizes the data model for efficient manipulation of objects among modules. The module interfaces are suitable for command-line execution or GUI programming. ISCE exposes users gradually to its levels of capability, allowing novices to apply it readily for simple tasks and experienced users to mine the data with great facility. The intent of the effort is to encourage user contributions to the code, creating an open-source community that will extend its life and utility.

  14. The InSAR Scientific Computing Environment

    NASA Technical Reports Server (NTRS)

    Rosen, Paul A.; Gurrola, Eric; Sacco, Gian Franco; Zebker, Howard

    2012-01-01

    We have developed a flexible and extensible Interferometric SAR (InSAR) Scientific Computing Environment (ISCE) for geodetic image processing. ISCE was designed from the ground up as a geophysics community tool for generating stacks of interferograms that lend themselves to various forms of time-series analysis, with attention paid to accuracy, extensibility, and modularity. The framework is Python-based, with code elements rigorously componentized by separating input/output operations from the processing engines. This allows greater flexibility and extensibility in the data models, and creates algorithmic code that is less susceptible to unnecessary modification when new data types and sensors are available. In addition, the components support provenance and checkpointing to facilitate reprocessing and algorithm exploration. The algorithms, based on legacy processing codes, have been adapted to assume a common reference track approach for all images acquired from nearby orbits, simplifying and systematizing the geometry for time-series analysis. The framework is designed to easily allow user contributions, and is distributed for free use by researchers. ISCE can process data from the ALOS, ERS, EnviSAT, Cosmo-SkyMed, RadarSAT-1, RadarSAT-2, and TerraSAR-X platforms, starting from Level-0 or Level-1 data as provided from the data source and going as far as Level-3 geocoded deformation products. With its flexible design, it can be extended with raw/meta data parsers to enable it to work with radar data from other platforms.
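
    A minimal Python sketch of the componentization idea described above, in which readers and writers handle input/output while the processing engine only sees in-memory arrays, is given below. The class and method names are hypothetical illustrations, not the actual ISCE API, and real ISCE components also carry configuration, provenance, and checkpointing machinery that is omitted here.

        # Hypothetical sketch of I/O-separated components; not the ISCE API.
        import numpy as np

        class SLCReader:
            """Parses a (notional) single-look-complex product into an array."""
            def __init__(self, path):
                self.path = path

            def read(self):
                # A real reader would parse a sensor-specific format; here we
                # assume the data were previously saved as a NumPy .npy file.
                return np.load(self.path)

        class InterferogramEngine:
            """Pure processing step: arrays in, arrays out, no file I/O."""
            def run(self, reference, secondary):
                # Interferogram = reference * conj(secondary); the phase of the
                # product carries the geometric/deformation signal.
                return reference * np.conj(secondary)

        class ProductWriter:
            """Writes the result; swapping output formats never touches the engine."""
            def write(self, path, data):
                np.save(path, data)

        def form_interferogram(ref_path, sec_path, out_path):
            ref = SLCReader(ref_path).read()
            sec = SLCReader(sec_path).read()
            igram = InterferogramEngine().run(ref, sec)
            ProductWriter().write(out_path, igram)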

  15. Service-oriented infrastructure for scientific data mashups

    NASA Astrophysics Data System (ADS)

    Baru, C.; Krishnan, S.; Lin, K.; Moreland, J. L.; Nadeau, D. R.

    2009-12-01

    An important challenge in informatics is the development of concepts and corresponding architecture and tools to assist scientists with their data integration tasks. A typical Earth Science data integration request may be expressed, for example, as “For a given region (i.e. lat/long extent, plus depth), return a 3D structural model with accompanying physical parameters of density, seismic velocities, geochemistry, and geologic ages, using a cell size of 10km.” Such requests create “mashups” of scientific data. Currently, such integration is hand-crafted and depends heavily upon a scientist’s intimate knowledge of how to process, interpret, and integrate data from individual sources. In most cases, the ultimate “integration” is performed by overlaying output images from individual processing steps using image manipulation software such as, say, Adobe Photoshop—leading to “Photoshop science”, where it is neither easy to repeat the integration steps nor to share the data mashup. As a result, scientists share only the final images and not the mashup itself. A more capable information infrastructure is needed to support the authoring and sharing of scientific data mashups. The infrastructure must include services for data discovery, access, and transformation and should be able to create mashups that are interactive, allowing users to probe and manipulate the data and follow its provenance. We present an architectural framework based on a service-oriented architecture for scientific data mashups in a distributed environment. The framework includes services for Data Access, Data Modeling, and Data Interaction. The Data Access services leverage capabilities for discovery and access to distributed data resources provided by efforts such as GEON and the EarthScope Data Portal, and services for federated metadata catalogs under development by projects like the Geosciences Information Network (GIN). The Data Modeling services provide 2D, 3D, and 4D modeling

  16. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  17. On combining computational differentiation and toolkits for parallel scientific computing.

    SciTech Connect

    Bischof, C. H.; Buecker, H. M.; Hovland, P. D.

    2000-06-08

    Automatic differentiation is a powerful technique for evaluating derivatives of functions given in the form of a high-level programming language such as Fortran, C, or C++. The program is treated as a potentially very long sequence of elementary statements to which the chain rule of differential calculus is applied over and over again. Combining automatic differentiation and the organizational structure of toolkits for parallel scientific computing provides a mechanism for evaluating derivatives by exploiting mathematical insight on a higher level. In these toolkits, algorithmic structures such as BLAS-like operations, linear and nonlinear solvers, or integrators for ordinary differential equations can be identified by their standardized interfaces and recognized as high-level mathematical objects rather than as a sequence of elementary statements. In this note, the differentiation of a linear solver with respect to some parameter vector is taken as an example. Mathematical insight is used to reformulate this problem into the solution of multiple linear systems that share the same coefficient matrix but differ in their right-hand sides. The experiments reported here use ADIC, a tool for the automatic differentiation of C programs, and PETSC, an object-oriented toolkit for the parallel solution of scientific problems modeled by partial differential equations.
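
    The reformulation described above, in which differentiating a linear solve A(p) x = b(p) yields derivative systems that all share the coefficient matrix A, can be made concrete with a small NumPy/SciPy sketch. The matrices and right-hand sides below are random placeholders; the point is simply that A is factored once and reused for every parameter direction, which is the kind of structure the combination of a differentiation tool and a solver toolkit can exploit.

        # Differentiating A(p) x = b(p) gives A dx/dp_i = db/dp_i - (dA/dp_i) x,
        # so every derivative solve reuses the same coefficient matrix A.
        # Shapes and values below are illustrative placeholders.
        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        n, n_params = 5, 3
        rng = np.random.default_rng(0)
        A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned matrix
        b = rng.standard_normal(n)
        dA_dp = rng.standard_normal((n_params, n, n))     # dA/dp_i for each parameter
        db_dp = rng.standard_normal((n_params, n))        # db/dp_i for each parameter

        lu, piv = lu_factor(A)          # factor A once
        x = lu_solve((lu, piv), b)      # nominal solution

        # One extra solve per parameter, all sharing the factorization of A.
        dx_dp = np.array([lu_solve((lu, piv), db_dp[i] - dA_dp[i] @ x)
                          for i in range(n_params)])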

  18. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
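
    To make the idea of an independently repeatable component test concrete, a minimal pytest-style sketch is shown below. The model (a Sutherland viscosity correlation) and the tolerances are illustrative assumptions, not fixtures from the paper; the point is only that a new model can ship with small, self-checking tests of its reference values and qualitative behavior.

        # Illustrative component test for a hypothetical new model term.
        import math

        def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
            """Dynamic viscosity of air [Pa s] at temperature T [K]."""
            return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

        def test_reference_point():
            # The correlation must reproduce its own reference condition.
            assert math.isclose(sutherland_viscosity(273.15), 1.716e-5, rel_tol=1e-12)

        def test_monotonic_in_temperature():
            # For gases, viscosity increases with temperature over this range.
            samples = [sutherland_viscosity(T) for T in (200.0, 300.0, 600.0, 1200.0)]
            assert all(a < b for a, b in zip(samples, samples[1:]))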

  19. Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models

    ERIC Educational Resources Information Center

    Pallant, Amy; Lee, Hee-Sun

    2015-01-01

    Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation…

  20. Computing through Scientific Abstractions in SysBioPS

    SciTech Connect

    Chin, George; Stephan, Eric G.; Gracio, Deborah K.

    2004-10-13

    Today, biologists and bioinformaticists have a tremendous amount of computational power at their disposal. With the availability of supercomputers, burgeoning scientific databases and digital libraries such as GenBank and PubMed, and pervasive computational environments such as the Grid, biologists have access to a wealth of computational capabilities and scientific data at hand. Yet, the rapid development of computational technologies has far exceeded the typical biologist’s ability to effectively apply the technology in their research. Computational sciences research and development efforts such as the Biology Workbench, BioSPICE (Biological Simulation Program for Intra-Cellular Evaluation), and BioCoRE (Biological Collaborative Research Environment) are important in connecting biologists and their scientific problems to computational infrastructures. On the Computational Cell Environment and Heuristic Entity-Relationship Building Environment projects at the Pacific Northwest National Laboratory, we are jointly developing a new breed of scientific problem solving environment called SysBioPSE that will allow biologists to access and apply computational resources in the scientific research context. In contrast to other computational science environments, SysBioPSE operates as an abstraction layer above a computational infrastructure. The goal of SysBioPSE is to allow biologists to apply computational resources in the context of the scientific problems they are addressing and the scientific perspectives from which they conduct their research. More specifically, SysBioPSE allows biologists to capture and represent scientific concepts and theories and experimental processes, and to link these views to scientific applications, data repositories, and computer systems.

  1. Berkeley Lab Computing Sciences: Accelerating Scientific Discovery

    SciTech Connect

    Hules, John A

    2008-12-12

    Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics.

  2. Guidelines for Financing School District Computer Services.

    ERIC Educational Resources Information Center

    Splittgerber, Frederic L.; Stirzaker, Norbert A.

    1984-01-01

    School districts can obtain computer services with purchase, lease, or network options. The advantages and disadvantages of each are explained. Guidelines are offered for assessing needs and determining costs of computer services. (MLF)

  3. Using Interactive Computer to Communicate Scientific Information.

    ERIC Educational Resources Information Center

    Selnow, Gary W.

    1988-01-01

    Asks whether the computer is another channel of communication, if its interactive qualities make it an information source, or if it is an undefined hybrid. Concludes that computers are neither the medium nor the source but will in the future provide the possibility of a sophisticated interaction between human intelligence and artificial…

  4. Vocabulary services to support scientific data interoperability

    NASA Astrophysics Data System (ADS)

    Cox, Simon; Mills, Katie; Tan, Florence

    2013-04-01

    Shared vocabularies are a core element in interoperable systems. Vocabularies need to be available at run-time, and where the vocabularies are shared by a distributed community this implies the use of web technology to provide vocabulary services. Given the ubiquity of vocabularies or classifiers in systems, vocabulary services are effectively the base of the interoperability stack. In contemporary knowledge organization systems, a vocabulary item is considered a concept, with the "terms" denoting it appearing as labels. The Simple Knowledge Organization System (SKOS) formalizes this as an RDF Schema (RDFS) application, with a bridge to formal logic in Web Ontology Language (OWL). For maximum utility, a vocabulary should be made available through the following interfaces: * the vocabulary as a whole - at an ontology URI corresponding to a vocabulary document * each item in the vocabulary - at the item URI * summaries, subsets, and resources derived by transformation * through the standard RDF web API - i.e. a SPARQL endpoint * through a query form for human users. However, the vocabulary data model may be leveraged directly in a standard vocabulary API that uses the semantics provided by SKOS. SISSvoc3 [1] accomplishes this as a standard set of URI templates for a vocabulary. Any URI conforming to the template selects a vocabulary subset based on the SKOS properties, including labels (skos:prefLabel, skos:altLabel, rdfs:label) and a subset of the semantic relations (skos:broader, skos:narrower, etc.). SISSvoc3 thus provides a RESTful SKOS API to query a vocabulary, while hiding the complexity of SPARQL. It has been implemented using the Linked Data API (LDA) [2], which connects to a SPARQL endpoint. By using LDA, we also get content-negotiation, alternative views, paging, metadata and other functionality provided in a standard way. A number of vocabularies have been formalized in SKOS and deployed by CSIRO, the Australian Bureau of Meteorology (BOM) and their
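
    As an illustration of the kind of query such a vocabulary service ultimately answers, the sketch below sends a SKOS label search to a SPARQL endpoint using the SPARQLWrapper Python library. The endpoint URL and search term are placeholders; a SISSvoc-style service would expose the same kind of selection through its URI templates rather than requiring the client to write SPARQL.

        # Hedged sketch: query a SKOS vocabulary for concepts whose preferred
        # label matches a search string. The endpoint URL is a placeholder.
        from SPARQLWrapper import SPARQLWrapper, JSON

        sparql = SPARQLWrapper("http://vocab.example.org/sparql")
        sparql.setQuery("""
            PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
            SELECT ?concept ?label WHERE {
              ?concept a skos:Concept ;
                       skos:prefLabel ?label .
              FILTER regex(str(?label), "basalt", "i")
            }
            LIMIT 20
        """)
        sparql.setReturnFormat(JSON)
        results = sparql.query().convert()
        for row in results["results"]["bindings"]:
            print(row["concept"]["value"], "->", row["label"]["value"])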

  5. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.
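
    For orientation, the sketch below implements the plain single-spin-flip Metropolis algorithm for the 2D Ising model, the textbook baseline that cluster methods such as the Swendsen-Wang generalizations mentioned above are designed to improve on near the critical point. It is a generic illustration, not the project's algorithm.

        # Baseline single-spin-flip Metropolis sweep for the 2D Ising model
        # (ferromagnetic, J = 1, periodic boundaries). Illustrative only.
        import numpy as np

        def metropolis_sweep(spins, beta, rng):
            """One sweep of single-spin-flip updates at inverse temperature beta."""
            L = spins.shape[0]
            for _ in range(L * L):
                i, j = rng.integers(0, L, size=2)
                nn_sum = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                          spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2.0 * spins[i, j] * nn_sum        # energy change if flipped
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j] *= -1
            return spins

        rng = np.random.default_rng(1)
        spins = rng.choice([-1, 1], size=(16, 16))
        for _ in range(100):
            metropolis_sweep(spins, beta=0.44, rng=rng)   # near the critical coupling
        print("magnetization per spin:", spins.mean())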

  6. Basic mathematical function libraries for scientific computation

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.

  7. Developing Concept-Based User Interfaces for Scientific Computing

    SciTech Connect

    Chin, George; Stephan, Eric G.; Gracio, Deborah K.; Kuchar, Olga A.; Whitney, Paul D.; Schuchardt, Karen L.

    2006-09-01

    From our interactions with researchers from different scientific fields and disciplines, we have observed that scientists often describe and convey concepts, theories, processes, and results using basic graphs and diagrams. Semantic graphs such as these provide a universal language that all scientists may apply to document their scientific knowledge and to communicate this knowledge to others. Furthermore, studies have shown that the cognitive processing of complex subject matter is improved when the structure of ideas and concepts is made explicit [39] and that semantic graphs may serve as effective “scaffolds” for cognitive processing [29]. At Pacific Northwest National Laboratory, we are deploying semantic graphs within scientific computing systems as central user representations of scientific knowledge. These systems provide concept-based user interfaces that allow scientists to visually define and capture conceptual models of their scientific problems, hypotheses, theories, and processes. Once defined, the visual models then become the interaction framework for accessing and applying scientific and computational resources and capabilities. In this paper, through the examination of three visual research systems, we illustrate different ways concept-based user interfaces and semantic graph knowledge representations may make scientific knowledge concrete, usable, shareable, and computable in scientific computing systems.

  8. Cloud services for the Fermilab scientific stakeholders

    SciTech Connect

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; Boyd, J.; Bernabeu, G.; Sharma, N.; Peregonow, N.; Kim, H.; Noh, S.; Palur, S.; Raicu, I.

    2015-12-23

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.
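
    A hedged sketch of the kind of spot-instance request involved in such a setup is shown below, using the boto3 AWS SDK. The AMI ID, instance type, bid price, and CVMFS repository are placeholders rather than Fermilab's actual configuration, and the Shoal-based Squid discovery and workflow submission layers are omitted.

        # Illustrative spot request whose user-data installs and configures CVMFS.
        # All identifiers below are placeholders, not the production setup.
        import base64
        import boto3

        user_data = """#!/bin/bash
        yum -y install cvmfs
        echo "CVMFS_REPOSITORIES=nova.opensciencegrid.org" >> /etc/cvmfs/default.local
        cvmfs_config setup
        """

        ec2 = boto3.client("ec2", region_name="us-east-1")
        response = ec2.request_spot_instances(
            SpotPrice="0.05",                        # bid ceiling in USD/hour
            InstanceCount=10,
            LaunchSpecification={
                "ImageId": "ami-0123456789abcdef0",  # placeholder worker-node image
                "InstanceType": "m4.large",
                "UserData": base64.b64encode(user_data.encode()).decode(),
            },
        )
        print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])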

  9. Cloud services for the Fermilab scientific stakeholders

    DOE PAGES

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...

    2015-12-23

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.

  10. Computational Epigenetics: the new scientific paradigm

    PubMed Central

    Lim, Shen Jean; Tan, Tin Wee; Tong, Joo Chuan

    2010-01-01

    Epigenetics has recently emerged as a critical field for studying how non-gene factors can influence the traits and functions of an organism. At the core of this new wave of research is the use of computational tools that play critical roles not only in directing the selection of key experiments, but also in formulating new testable hypotheses through detailed analysis of complex genomic information that is not achievable using traditional approaches alone. Epigenomics, which combines traditional genomics with computer science, mathematics, chemistry, biochemistry and proteomics for the large-scale analysis of heritable changes in phenotype, gene function or gene expression that are not dependent on gene sequence, offers new opportunities to further our understanding of transcriptional regulation, nuclear organization, development and disease. This article examines existing computational strategies for the study of epigenetic factors. The most important databases and bioinformatic tools in this rapidly growing field have been reviewed. PMID:20978607

  11. [Organisation of scientific and research work of Navy medical service].

    PubMed

    Gavrilov, V V; Myznikov, I L; Kuz'minov, O V; Shmelev, S V; Oparin, M Iu

    2013-03-01

    The main issues in organizing the scientific and research work of the medical service of the North Fleet are considered in the present article. An analysis of selected paragraphs of the documents regulating this work at the army level is given. The authors describe a successful experience of such work in the North Fleet and offer suggestions for improving the administration of scientific and research work in the navy and at the district scale.

  12. Building Cognition: The Construction of Computational Representations for Scientific Discovery

    ERIC Educational Resources Information Center

    Chandrasekharan, Sanjay; Nersessian, Nancy J.

    2015-01-01

    Novel computational representations, such as simulation models of complex systems and video games for scientific discovery (Foldit, EteRNA etc.), are dramatically changing the way discoveries emerge in science and engineering. The cognitive roles played by such computational representations in discovery are not well understood. We present a…

  13. Network and computing infrastructure for scientific applications in Georgia

    NASA Astrophysics Data System (ADS)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia is presented. The Research and Educational Networking Association - GRENA provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA is participating.

  14. ASCR Cybersecurity for Scientific Computing Integrity

    SciTech Connect

    Piesert, Sean

    2015-02-27

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE’s enterprise involves distributed, collaborative teams; a significant fraction involves “open science,” which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  15. InSAR Scientific Computing Environment

    NASA Technical Reports Server (NTRS)

    Rosen, Paul A.; Sacco, Gian Franco; Gurrola, Eric M.; Zebker, Howard A.

    2011-01-01

    This computing environment is the next generation of geodetic image processing technology for repeat-pass Interferometric Synthetic Aperture Radar (InSAR) sensors, identified by the community as a needed capability to provide flexibility and extensibility in reducing measurements from radar satellites and aircraft to new geophysical products. This software allows users of interferometric radar data the flexibility to process from Level 0 to Level 4 products using a variety of algorithms and for a range of available sensors. There are many radar satellites in orbit today delivering to the science community data of unprecedented quantity and quality, making possible large-scale studies in climate research, natural hazards, and the Earth's ecosystem. The proposed DESDynI mission, now under consideration by NASA for launch later in this decade, would provide time series and multi-image measurements that permit 4D models of Earth surface processes so that, for example, climate-induced changes over time would become apparent and quantifiable. This advanced data processing technology, applied to a global data set such as from the proposed DESDynI mission, enables a new class of analyses at time and spatial scales unavailable using current approaches. This software implements an accurate, extensible, and modular processing system designed to realize the full potential of InSAR data from future missions such as the proposed DESDynI, existing radar satellite data, as well as data from the NASA UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar), and other airborne platforms. The processing approach has been re-thought in order to enable multi-scene analysis by adding new algorithms and data interfaces, to permit user-reconfigurable operation and extensibility, and to capitalize on codes already developed by NASA and the science community. The framework incorporates modern programming methods based on recent research, including object-oriented scripts controlling legacy and

  16. Lattice gauge theory on the Intel parallel scientific computer

    NASA Astrophysics Data System (ADS)

    Gottlieb, Steven

    1990-08-01

    Intel Scientific Computers (ISC) has just started producing its third generation of parallel computer, the iPSC/860. Based on the i860 chip that has a peak performance of 80 Mflops and with a current maximum of 128 nodes, this computer should achieve speeds in excess of those obtainable on conventional vector supercomputers. The hardware, software and computing techniques appropriate for lattice gauge theory calculations are described. The differences between a staggered fermion conjugate gradient program written under CANOPY and one written for the iPSC are detailed.

  17. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
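
    The provides/uses port pattern at the heart of the component model described above can be sketched in a few lines of Python. The real CCA defines ports in SIDL and wires components together through a framework such as Ccaffeine; the names and the trivial "framework" below are illustrative only.

        # Illustrative provides/uses ports; not the actual CCA interfaces.
        class IntegratorPort:
            """Abstract port interface: integrate f over [a, b]."""
            def integrate(self, f, a, b):
                raise NotImplementedError

        class MidpointIntegrator(IntegratorPort):
            """A component that *provides* the IntegratorPort."""
            def __init__(self, n=1000):
                self.n = n

            def integrate(self, f, a, b):
                h = (b - a) / self.n
                return sum(f(a + (i + 0.5) * h) for i in range(self.n)) * h

        class Driver:
            """A component that *uses* an IntegratorPort, implementation unknown."""
            def __init__(self):
                self._integrator = None

            def connect(self, port):
                self._integrator = port

            def run(self):
                return self._integrator.integrate(lambda x: x * x, 0.0, 1.0)

        # The "framework" wires ports together; swapping integrators needs no
        # change to the Driver component.
        driver = Driver()
        driver.connect(MidpointIntegrator(n=10_000))
        print(driver.run())    # approximately 1/3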

  18. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  19. Advances in Domain Mapping of Massively Parallel Scientific Computations

    SciTech Connect

    Leland, Robert W.; Hendrickson, Bruce A.

    2015-10-01

    One of the most important concerns in parallel computing is the proper distribution of workload across processors. For most scientific applications on massively parallel machines, the best approach to this distribution is to employ data parallelism; that is, to break the data structures supporting a computation into pieces and then to assign those pieces to different processors. Collectively, these partitioning and assignment tasks comprise the domain mapping problem.

  20. Understanding the Performance and Potential of Cloud Computing for Scientific Applications

    SciTech Connect

    Sadooghi, Iman; Hernandez Martin, Jesus; Li, Tonglin; Brandstatter, Kevin; Zhao, Yong; Maheshwari, Ketan; Pais Pitta de Lacerda Ruivo, Tiago; Timm, Steven; Garzoglio, Gabriele; Raicu, Ioan

    2015-01-01

    Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficiently high-end computing systems, many of which can be found in the Top500 list. Cloud Computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with reasonable performance per money spent. This work studies the performance of public clouds and places this performance in the context of price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications among public clouds, private clouds, or hybrid clouds.

  1. 76 FR 64330 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-18

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... Reliability, Diffusion on Complex Networks, and Reversible Software Execution Systems Report from Applied Math... at: (301) 903-7486 or by email at: Melea.Baker@science.doe.gov . You must make your request for...

  2. 78 FR 56871 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-16

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION... Exascale technical approaches subcommittee Facilities update Report from Applied Math Committee of Visitors...: ( Melea.Baker@science.doe.gov ). You must make your request for an oral statement at least five...

  3. Tpetra, and the use of generic programming in scientific computing

    SciTech Connect

    Baker, Christopher G; Heroux, Dr. Michael A

    2012-01-01

    We present Tpetra, a Trilinos package for parallel linear algebra primitives implementing the Petra object model. We describe Tpetra's design, based on generic programming via C++ templated types and template metaprogramming. We discuss some benefits of this approach in the context of scientific computing, with illustrations consisting of code and notable empirical results.

  4. Research initiatives for plug-and-play scientific computing.

    SciTech Connect

    McInnes, L. C.; Dahlgren, T.; Nieplocha, J.; Bernholdt, D.; Allan, B.; Armstrong, R.; Chavarria, D.; Elwasif, W.; Gorton, I.; Krishan, M.; Malony, A.; Norris, B.; Ray, J.; Shende, S.; Mathematics and Computer Science; LLNL; PNNL; ORNL; SNL; Univ. of Oregon

    2007-01-01

    This paper introduces three component technology initiatives within the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) that address ever-increasing productivity challenges in creating, managing, and applying simulation software to scientific discovery. By leveraging the Common Component Architecture (CCA), a new component standard for high-performance scientific computing, these initiatives tackle difficulties at different but related levels in the development of component-based scientific software: (1) deploying applications on massively parallel and heterogeneous architectures, (2) investigating new approaches to the runtime enforcement of behavioral semantics, and (3) developing tools to facilitate dynamic composition, substitution, and reconfiguration of component implementations and parameters, so that application scientists can explore tradeoffs among factors such as accuracy, reliability, and performance.

  5. Research initiatives for plug-and-play scientific computing

    NASA Astrophysics Data System (ADS)

    Curfman McInnes, Lois; Dahlgren, Tamara; Nieplocha, Jarek; Bernholdt, David; Allan, Ben; Armstrong, Rob; Chavarria, Daniel; Elwasif, Wael; Gorton, Ian; Kenny, Joe; Krishan, Manoj; Malony, Allen; Norris, Boyana; Ray, Jaideep; Shende, Sameer

    2007-07-01

    This paper introduces three component technology initiatives within the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) that address ever-increasing productivity challenges in creating, managing, and applying simulation software to scientific discovery. By leveraging the Common Component Architecture (CCA), a new component standard for high-performance scientific computing, these initiatives tackle difficulties at different but related levels in the development of component-based scientific software: (1) deploying applications on massively parallel and heterogeneous architectures, (2) investigating new approaches to the runtime enforcement of behavioral semantics, and (3) developing tools to facilitate dynamic composition, substitution, and reconfiguration of component implementations and parameters, so that application scientists can explore tradeoffs among factors such as accuracy, reliability, and performance.

  6. Institute for Scientific Computing Research Annual Report: Fiscal Year 2004

    SciTech Connect

    Keyes, D E

    2005-02-07

    Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that, ''high performance computing is the backbone of the nation's science and technology enterprise''. LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for short- and

  7. Accelerating Scientific Discovery Through Computation and Visualization II

    PubMed Central

    Sims, James S.; George, William L.; Satterfield, Steven G.; Hung, Howard K.; Hagedorn, John G.; Ketcham, Peter M.; Griffin, Terence J.; Hagstrom, Stanley A.; Franiatte, Julien C.; Bryant, Garnett W.; Jaskólski, W.; Martys, Nicos S.; Bouldin, Charles E.; Simmons, Vernon; Nicolas, Oliver P.; Warren, James A.; am Ende, Barbara A.; Koontz, John E.; Filla, B. James; Pourprix, Vital G.; Copley, Stefanie R.; Bohn, Robert B.; Peskin, Adele P.; Parker, Yolanda M.; Devaney, Judith E.

    2002-01-01

    This is the second in a series of articles describing a wide variety of projects at NIST that synergistically combine physical science and information science. It describes, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing, visualization, and machine learning to accelerate research. The examples include scientific collaborations in the following areas: (1) High Precision Energies for few electron atomic systems, (2) Flows of suspensions, (3) X-ray absorption, (4) Molecular dynamics of fluids, (5) Nanostructures, (6) Dendritic growth in alloys, (7) Screen saver science, (8) genetic programming. PMID:27446728

  8. Educational NASA Computational and Scientific Studies (enCOMPASS)

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2013-01-01

    Educational NASA Computational and Scientific Studies (enCOMPASS) is an educational project of NASA Goddard Space Flight Center aimed at bridging the gap between computational objectives and needs of NASA's scientific research, missions, and projects, and academia's latest advances in applied mathematics and computer science. enCOMPASS achieves this goal via bidirectional collaboration and communication between NASA and academia. Using developed NASA Computational Case Studies in university computer science/engineering and applied mathematics classes is a way of addressing NASA's goals of contributing to the Science, Technology, Engineering, and Math (STEM) National Objective. The enCOMPASS Web site at http://encompass.gsfc.nasa.gov provides additional information. There are currently nine enCOMPASS case studies developed in areas of earth sciences, planetary sciences, and astrophysics. Some of these case studies have been published in AIP and IEEE's Computing in Science and Engineering magazines. A few university professors have used enCOMPASS case studies in their computational classes and contributed their findings to NASA scientists. In these case studies, after introducing the science area, the specific problem, and related NASA missions, students are first asked to solve a known problem using NASA data and past approaches used and often published in a scientific/research paper. Then, after learning about the NASA application and related computational tools and approaches for solving the proposed problem, students are given a harder problem as a challenge for them to research and develop solutions for. This project provides a model for NASA scientists and engineers on one side, and university students, faculty, and researchers in computer science and applied mathematics on the other side, to learn from each other's areas of work, computational needs and solutions, and the latest advances in research and development. This innovation takes NASA science and

  9. The Potential of the Cell Processor for Scientific Computing

    SciTech Connect

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  10. Heterogeneous concurrent computing with exportable services

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy

    1995-01-01

    Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.

  11. Technologies for Large Data Management in Scientific Computing

    NASA Astrophysics Data System (ADS)

    Pace, Alberto

    2014-01-01

    In recent years, intense usage of computing has been the main strategy of investigations in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago. This paper focuses on the strategies in use: it reviews the various components that are necessary for an effective solution that ensures the storage, the long-term preservation, and the worldwide distribution of the large quantities of data that are necessary in a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.

  12. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
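
    TPIE itself is a C++ library, but the external-memory pattern underlying I/O-efficient computation can be illustrated with a short Python sketch: sort a file far larger than memory by sorting bounded-size runs and then streaming a k-way merge over the runs. File names and the run size are placeholders.

        # Illustrative external merge sort: bounded memory per run, then a
        # streaming k-way merge over the sorted runs (one line per record).
        import heapq
        import itertools
        import os
        import tempfile

        def external_sort(in_path, out_path, max_lines_in_memory=100_000):
            run_files = []
            with open(in_path) as src:
                while True:
                    chunk = list(itertools.islice(src, max_lines_in_memory))
                    if not chunk:
                        break
                    # Normalize line endings so merged output stays well formed.
                    chunk = [ln if ln.endswith("\n") else ln + "\n" for ln in chunk]
                    chunk.sort()
                    run = tempfile.NamedTemporaryFile("w+", delete=False)
                    run.writelines(chunk)
                    run.seek(0)
                    run_files.append(run)
            with open(out_path, "w") as dst:
                # heapq.merge streams the runs, touching each record once more.
                dst.writelines(heapq.merge(*run_files))
            for run in run_files:
                run.close()
                os.unlink(run.name)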

  13. Computer Systems and Services in Hospitals—1979

    PubMed Central

    Veazie, Stephen M.

    1979-01-01

    Starting at the end of 1978 and continuing through the first six months of 1979, the American Hospital Association (AHA) collected information on computer systems and services used in/by hospitals. The information has been compiled into the most comprehensive data base of hospital computer systems and services in existence today. Summaries of the findings of this project will be presented in this paper.

  14. Integrating Network Management for Cloud Computing Services

    DTIC Science & Technology

    2015-06-01

    Integrating Network Management for Cloud Computing Services. Peng Sun. A dissertation presented to the faculty of Princeton University in candidacy for... 2015. ... integrate the management of various network components. With commercial deployment, our operational experiences feed back into revision of the

  15. Evaluation of leading scalar and vector architectures for scientific computations

    SciTech Connect

    Simon, Horst D.; Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Ethier, Stephane; Shalf, John

    2004-04-20

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This project examines the performance of the cacheless vector Earth Simulator (ES) and compares it to the superscalar cache-based IBM Power3 system. Results demonstrate that the ES is significantly faster than the Power3 architecture, highlighting the tremendous potential advantage of the ES for numerical simulation. However, vectorization of a particle-in-cell application (GTC) greatly increased the memory footprint, preventing loop-level parallelism and limiting scalability potential.

  16. Globus-based Services for the Hydro-Meteorology Scientific Community

    NASA Astrophysics Data System (ADS)

    Muntean, Ioan-Lucian; Hofmann, Matthias; Heller, Helmut

    2013-04-01

    Scientific workflows in hydro-meteorology involve multiple applications with varying computational requirements. These are best met by different e-Infrastructures in Europe: sequential codes with modest requirements are well suited to resources offered in EGI (European Grid Infrastructure) while parallelized, computationally demanding codes have to run on PRACE (Partnership for Advanced Computing in Europe) resources. Access to major Distributed Computing Infrastructures (DCI) in Europe such as PRACE and EGI is provided by means of grid middleware like Globus, which is available in both e-Infrastructures and thus can bridge between them. The consortium "Initiative for Globus in Europe" (IGE - http://www.ige-project.eu) and its community body EGCF (http://www.egcf.eu) act as the European provider of Globus technology, offering the resource providers and scientific user communities professional services such as Globus software provisioning and certification, training and documentation, and community software adaptation to Globus technology. This presentation will cover the following two parts: an outline of the IGE/EGCF services for the DRIHM community and an introduction to data handling with Globus Online, with emphasis on the achievements to date. The set of Globus-centered services of potential interest to the hydro-meteorology community has been identified as: Globus support for: data access and handling: GridFTP, Globus Online, Globus Connect, Globus Storage; computing: GRAM for submission of parallel jobs to PRACE or of high-throughput jobs to EGI; accounting: tracking the usage records with GridSAFE. Infrastructure and workflow integration support such as: setup of virtual organizations for the DRIHM community; access to EGI and PRACE infrastructures via Globus-based tools; investigation of workflow interoperability technologies (such as SHIWA). Furthermore, IGE successfully provides access to test bed resources where developers of the DRIHM community can port
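
    As an illustration of the data-handling service mentioned above, the sketch below drives a GridFTP transfer through Globus' transfer service using the present-day Python SDK (globus-sdk); the 2013-era Globus Online portal exposed the same operations through its web interface. The endpoint UUIDs, paths, and access token are placeholders.

        # Hedged sketch of submitting a managed transfer via the Globus transfer
        # service; all identifiers below are placeholders.
        import globus_sdk

        TOKEN = "..."                                   # placeholder OAuth2 access token
        SRC = "ddb59aef-6d04-11e5-ba46-22000b92c6ec"    # placeholder source endpoint UUID
        DST = "ddb59af0-6d04-11e5-ba46-22000b92c6ec"    # placeholder destination endpoint UUID

        tc = globus_sdk.TransferClient(
            authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN))
        tdata = globus_sdk.TransferData(tc, SRC, DST,
                                        label="hydro-met model output",
                                        sync_level="checksum")
        tdata.add_item("/data/wrf/run-2013-04/", "/archive/wrf/run-2013-04/",
                       recursive=True)
        task = tc.submit_transfer(tdata)
        print("submitted transfer task:", task["task_id"])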

  17. Institute for Scientific Computing Research Fiscal Year 2002 Annual Report

    SciTech Connect

    Keyes, D E; McGraw, J R; Bodtker, L K

    2003-03-11

    The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory is jointly administered by the Computing Applications and Research Department (CAR) and the University Relations Program (URP), and this joint relationship expresses its mission. An extensively externally networked ISCR cost-effectively expands the level and scope of national computational science expertise available to the Laboratory through CAR. The URP, with its infrastructure for managing six institutes and numerous educational programs at LLNL, assumes much of the logistical burden that is unavoidable in bridging the Laboratory's internal computational research environment with that of the academic community. As large-scale simulations on the parallel platforms of DOE's Advanced Simulation and Computing (ASCI) become increasingly important to the overall mission of LLNL, the role of the ISCR expands in importance accordingly. Relying primarily on non-permanent staffing, the ISCR complements Laboratory research in areas of the computer and information sciences that are needed at the frontier of Laboratory missions. The ISCR strives to be the ''eyes and ears'' of the Laboratory in the computer and information sciences, in keeping the Laboratory aware of and connected to important external advances. It also attempts to be ''feet and hands,'' carrying those advances into the Laboratory and incorporating them into practice. In addition to conducting research, the ISCR provides continuing education opportunities to Laboratory personnel, in the form of on-site workshops taught by experts on novel software or hardware technologies. The ISCR also seeks to influence the research community external to the Laboratory to pursue Laboratory-related interests and to train the workforce that will be required by the Laboratory. Part of the performance of this function is interpreting to the external community appropriate (unclassified) aspects of the Laboratory's own contributions

  18. InSAR Scientific Computing Environment on the Cloud

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Shams, K. S.; Gurrola, E. M.; George, B. A.; Knight, D. S.

    2012-12-01

    In response to the needs of the international scientific and operational Earth observation communities, spaceborne Synthetic Aperture Radar (SAR) systems are being tasked to produce enormous volumes of raw data daily, with availability to scientists to increase substantially as more satellites come online and data becomes more accessible through more open data policies. The availability of these unprecedentedly dense and rich datasets has led to the development of sophisticated algorithms that can take advantage of them. In particular, interferometric time series analysis of SAR data provides insights into the changing earth and requires substantial computational power to process data across large regions and over large time periods. This poses challenges for existing infrastructure, software, and techniques required to process, store, and deliver the results to the global community of scientists. The current state-of-the-art solutions employ traditional data storage and processing applications that require download of data to the local repositories before processing. This approach is becoming untenable in light of the enormous volume of data that must be processed in an iterative and collaborative manner. We have analyzed and tested new cloud computing and virtualization approaches to address these challenges within the context of InSAR in the earth science community. Cloud computing is democratizing computational and storage capabilities for science users across the world. The NASA Jet Propulsion Laboratory has been an early adopter of this technology, successfully integrating cloud computing in a variety of production applications ranging from mission operations to downlink data processing. We have ported a new InSAR processing suite called ISCE (InSAR Scientific Computing Environment) to a scalable distributed system running in the Amazon GovCloud to demonstrate the efficacy of cloud computing for this application. We have integrated ISCE with Polyphony to

  19. Java Performance for Scientific Applications on LLNL Computer Systems

    SciTech Connect

    Kapfer, C; Wissink, A

    2002-05-10

    Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.

  20. An Introductory Course on Service-Oriented Computing for High Schools

    ERIC Educational Resources Information Center

    Tsai, W. T.; Chen, Yinong; Cheng, Calvin; Sun, Xin; Bitter, Gary; White, Mary

    2008-01-01

    Service-Oriented Computing (SOC) is a new computing paradigm that has been adopted by major computer companies as well as government agencies such as the Department of Defense for mission-critical applications. SOC is being used for developing Web and electronic business applications, as well as robotics, gaming, and scientific applications. Yet,…

  1. Review of An Introduction to Parallel and Vector Scientific Computing

    SciTech Connect

    Bailey, David H.; Lefton, Lew

    2006-06-30

    On one hand, the field of high-performance scientific computing is thriving beyond measure. Performance of leading-edge systems on scientific calculations, as measured say by the Top500 list, has increased by an astounding factor of 8000 during the 15-year period from 1993 to 2008, which is slightly faster even than Moore's Law. Even more importantly, remarkable advances in numerical algorithms, numerical libraries and parallel programming environments have led to improvements in the scope of what can be computed that are entirely on a par with the advances in computing hardware. And these successes have spread far beyond the confines of large government-operated laboratories: many universities, modest-sized research institutes and private firms now operate clusters that differ only in scale from the behemoth systems at the large-scale facilities. In the wake of these recent successes, researchers from fields that heretofore have not been part of the scientific computing world have been drawn into the arena. For example, at the recent SC07 conference, the exhibit hall, which long has hosted displays from leading computer systems vendors and government laboratories, featured some 70 exhibitors who had not previously participated. In spite of all these exciting developments, and in spite of the clear need to present these concepts to a much broader technical audience, there is a perplexing dearth of training material and textbooks in the field, particularly at the introductory level. Only a handful of universities offer coursework in the specific area of highly parallel scientific computing, and instructors of such courses typically rely on custom-assembled material. For example, the present reviewer and Robert F. Lucas relied on materials assembled in a somewhat ad-hoc fashion from colleagues and personal resources when presenting a course on parallel scientific computing at the University of California, Berkeley, a few years ago. Thus it is indeed refreshing to see

  2. AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis

    NASA Astrophysics Data System (ADS)

    Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.

    The AVES computing system, based on a "Cluster" architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of INTEGRAL data. AVES is a modular system that uses the software resource manager (SLURM) and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs able to reach a computing power of 300 gigaflops (300x10^9 floating-point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage memory in a UFS configuration, plus 6 TB for the user area. AVES was designed and built to address the growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB), which increases every year. The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the analysis workload over the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained with a parallel computing configuration. In support of this we have developed tools that allow flexible use of the scientific software and quality control of on-line data storage. The AVES software package consists of about 50 specific programs. Thus the overall computing time, compared to that of a personal computer with a single processor, has been improved by up to a factor of 70.
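
    The job-splitting approach described above (dividing an OSA analysis into N jobs dispatched to N cores) can be illustrated with the minimal sketch below. It is not the AVES software itself: the analysis script name, the science-window identifiers, and the SLURM options are placeholders chosen only to show the pattern.

        # Hypothetical sketch of the workload-splitting strategy described above:
        # divide a list of INTEGRAL science windows into N chunks and submit one
        # SLURM job per chunk. Script name and IDs are illustrative only.
        import subprocess

        def submit_chunks(science_windows, n_jobs, analysis_cmd="run_osa_analysis.sh"):
            """Split the input list into n_jobs chunks and submit each with sbatch."""
            chunks = [science_windows[i::n_jobs] for i in range(n_jobs)]
            for idx, chunk in enumerate(chunks):
                if not chunk:
                    continue
                wrapped = f"{analysis_cmd} {' '.join(chunk)}"
                subprocess.run(
                    ["sbatch", "--job-name", f"osa_chunk_{idx}", "--wrap", wrapped],
                    check=True,
                )

        if __name__ == "__main__":
            scws = [f"{i:012d}" for i in range(1, 121)]  # 120 example science-window IDs
            submit_chunks(scws, n_jobs=30)               # one chunk per AVES node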

  3. The Visualization Management System Approach To Visualization In Scientific Computing

    NASA Astrophysics Data System (ADS)

    Butler, David M.; Pendley, Michael H.

    1989-09-01

    We introduce the visualization management system (ViMS), a new approach to the development of software for visualization in scientific computing (ViSC). The conceptual foundation for a ViMS is an abstract visualization model which specifies a class of geometric objects, the graphic representations of the objects and the operations on both. A ViMS provides a modular implementation of its visualization model. We describe ViMS requirements and a model-independent ViMS architecture. We briefly describe the vector bundle visualization model and the visualization taxonomy it generates. We conclude by summarizing the benefits of the ViMS approach.

  4. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    SciTech Connect

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  5. OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2005-01-01

    Good morning. Welcome to SciDAC 2005 and San Francisco. SciDAC is all about computational science and scientific discovery. In a large sense, computational science characterizes SciDAC and its intent is change. It transforms both our approach and our understanding of science. It opens new doors and crosses traditional boundaries while seeking discovery. In terms of twentieth century methodologies, computational science may be said to be transformational. There are a number of examples to this point. First are the sciences that encompass climate modeling. The application of computational science has in essence created the field of climate modeling. This community is now international in scope and has provided precision results that are challenging our understanding of our environment. A second example is that of lattice quantum chromodynamics. Lattice QCD, while adding precision and insight to our fundamental understanding of strong interaction dynamics, has transformed our approach to particle and nuclear science. The individual investigator approach has evolved to teams of scientists from different disciplines working side-by-side towards a common goal. SciDAC is also undergoing a transformation. This meeting is a prime example. Last year it was a small programmatic meeting tracking progress in SciDAC. This year, we have a major computational science meeting with a variety of disciplines and enabling technologies represented. SciDAC 2005 should position itself as a new cornerstone for Computational Science and its impact on science. As we look to the immediate future, FY2006 will bring a new cycle to SciDAC. Most of the program elements of SciDAC will be re-competed in FY2006. The re-competition will involve new instruments for computational science, new approaches for collaboration, as well as new disciplines. There will be new opportunities for virtual experiments in carbon sequestration, fusion, and nuclear power and nuclear waste, as well as collaborations

  6. Charon Message-Passing Toolkit for Scientific Computations

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saini, Subhash (Technical Monitor)

    1998-01-01

    The Charon toolkit for piecemeal development of high-efficiency parallel programs for scientific computing is described. The portable toolkit, callable from C and Fortran, provides flexible domain decompositions and high-level distributed constructs for easy translation of serial legacy code or design to distributed environments. Gradual tuning can subsequently be applied to obtain high performance, possibly by using explicit message passing. Charon also features general structured communications that support stencil-based computations with complex recurrences. Through the separation of partitioning and distribution, the toolkit can also be used for blocking of uni-processor code, and for debugging of parallel algorithms on serial machines. An elaborate review of recent parallelization aids is presented to highlight the need for a toolkit like Charon. Some performance results of parallelizing the NAS Parallel Benchmark SP program using Charon are given, showing good scalability.

  7. Charon Message-Passing Toolkit for Scientific Computations

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saini, Subhash (Technical Monitor)

    1998-01-01

    The Charon toolkit for piecemeal development of high-efficiency parallel programs for scientific computing is described. The portable toolkit, callable from C and Fortran, provides flexible domain decompositions and high-level distributed constructs for easy translation of serial legacy code or design to distributed environments. Gradual tuning can subsequently be applied to obtain high performance, possibly by using explicit message passing. Charon also features general structured communications that support stencil-based computations with complex recurrences. Through the separation of partitioning and distribution, the toolkit can also be used for blocking of uni-processor code, and for debugging of parallel algorithms on serial machines. An elaborate review of recent parallelization aids is presented to highlight the need for a toolkit like Charon. Some performance results of parallelizing the NAS Parallel Benchmark SP program using Charon are given, showing good scalability.

  8. A Scientific Cloud Computing Platform for Condensed Matter Physics

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Johnson, W.; Vila, F. D.; Rehr, J. J.

    2013-03-01

    Scientific Cloud Computing (SCC) makes possible calculations with high performance computational tools, without the need to purchase or maintain sophisticated hardware and software. We have recently developed an interface dubbed SC2IT that controls on-demand virtual Linux clusters within the Amazon EC2 cloud platform. Using this interface we have developed a more advanced, user-friendly SCC Platform configured especially for condensed matter calculations. This platform contains a GUI, based on a new Java version of SC2IT, that permits calculations of various materials properties. The cloud platform includes Virtual Machines preconfigured for parallel calculations and several precompiled and optimized materials science codes for electronic structure and x-ray and electron spectroscopy. Consequently this SCC makes state-of-the-art condensed matter calculations easy to access for general users. Proof-of-principle performance benchmarks show excellent parallelization and communication performance. Supported by NSF grant OCI-1048052
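
    The abstract describes SC2IT as a Java interface for starting on-demand virtual clusters on Amazon EC2. The snippet below is not SC2IT; it is only a minimal Python/boto3 analogy for the underlying step of launching a preconfigured compute image on EC2, with the AMI ID, instance type, and node count as placeholders.

        # Illustrative sketch (not the SC2IT interface itself) of launching an
        # on-demand virtual cluster on Amazon EC2 with boto3. AMI ID, instance
        # type, and node count are placeholders for a preconfigured image.
        import boto3

        def launch_virtual_cluster(ami_id, n_nodes, instance_type="c5.xlarge"):
            """Start n_nodes EC2 instances from a preconfigured compute AMI."""
            ec2 = boto3.client("ec2")
            response = ec2.run_instances(
                ImageId=ami_id,
                InstanceType=instance_type,
                MinCount=n_nodes,
                MaxCount=n_nodes,
            )
            return [inst["InstanceId"] for inst in response["Instances"]]

        if __name__ == "__main__":
            ids = launch_virtual_cluster("ami-0123456789abcdef0", n_nodes=4)
            print("Launched cluster nodes:", ids)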

  9. Hubble Space Telescope servicing mission scientific instrument protective enclosure design requirements and contamination controls

    NASA Technical Reports Server (NTRS)

    Hansen, Patricia A.; Hughes, David W.; Hedgeland, Randy J.; Chivatero, Craig J.; Studer, Robert J.; Kostos, Peter J.

    1994-01-01

    The Scientific Instrument Protective Enclosures were designed for the Hubble Space Telescope Servicing Missions to provide a benign environment to a Scientific Instrument during ground and on-orbit activities. The Scientific Instruments required very stringent surface cleanliness and molecular outgassing levels to maintain ultraviolet performance. Data from the First Servicing Mission verified that both the Scientific Instruments and Scientific Instrument Protective Enclosures met surface cleanliness level requirements during ground and on-orbit activities.

  10. 76 FR 19189 - Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... AFFAIRS Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation... (Federal Advisory Committee Act) that a meeting of the Clinical Science Research and Development Service... Science Research and Development Service on the relevance and feasibility of proposed projects and...

  11. 75 FR 79446 - Clinical Science Research and Development Service; Cooperative Studies Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-20

    ... AFFAIRS Clinical Science Research and Development Service; Cooperative Studies Scientific Evaluation... (Federal Advisory Committee Act) that a meeting of the Clinical Science Research and Development Service... Clinical Science Research and Development Service on the relevance and feasibility of proposed projects...

  12. 76 FR 73781 - Clinical Science Research and Development Service; Cooperative Studies Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... AFFAIRS Clinical Science Research and Development Service; Cooperative Studies Scientific Evaluation... (Federal Advisory Committee Act) that a meeting of the Clinical Science Research and Development Service... Clinical Science Research and Development Service on the relevance and feasibility of proposed projects...

  13. Teaching scientific thinking skills: Students and computers coaching each other

    NASA Astrophysics Data System (ADS)

    Reif, Frederick; Scott, Lisa A.

    1999-09-01

    Our attempts to improve physics instruction have led us to analyze thought processes needed to apply scientific principles to problems—and to recognize that reliable performance requires the basic cognitive functions of deciding, implementing, and assessing. Using a reciprocal-teaching strategy to teach such thought processes explicitly, we have developed computer programs called PALs (Personal Assistants for Learning) in which computers and students alternately coach each other. These computer-implemented tutorials make it practically feasible to provide students with individual guidance and feedback ordinarily unavailable in most courses. We constructed PALs specifically designed to teach the application of Newton's laws. In a comparative experimental study these computer tutorials were found to be nearly as effective as individual tutoring by expert teachers—and considerably more effective than the instruction provided in a well-taught physics class. Furthermore, almost all of the students using the PALs perceived them as very helpful to their learning. These results suggest that the proposed instructional approach could fruitfully be extended to improve instruction in various practically realistic contexts.

  14. Hydra: a service oriented architecture for scientific simulation integration

    SciTech Connect

    Bent, Russell; Djidjev, Tatiana; Hayes, Birch P; Holland, Joe V; Khalsa, Hari S; Linger, Steve P; Mathis, Mark M; Mniszewski, Sue M; Bush, Brian

    2008-01-01

    One of the current major challenges in scientific modeling and simulation, in particular in the infrastructure-analysis community, is the development of techniques for efficiently and automatically coupling disparate tools that exist in separate locations on different platforms, are implemented in a variety of languages, and are designed to be standalone. Recent advances in web-based approaches for integrating systems, such as service-oriented architecture (SOA), provide an opportunity to address these challenges in a systematic fashion. This paper describes Hydra, an integrating architecture for infrastructure modeling and simulation that defines geography-based schemas that, when used to wrap existing tools as web services, allow for seamless plug-and-play composability. Existing users of these tools can enhance the value of their analysis by assessing how the simulations of one tool impact the behavior of another tool, and can automate existing ad hoc processes and workflows for integrating tools together.
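
    Hydra's schemas and tooling are not reproduced here, but the core idea of wrapping a standalone simulation tool as a web service can be sketched with only the Python standard library, as below. The executable name, its arguments, and the request fields are hypothetical placeholders.

        # Minimal, standard-library-only sketch of exposing a standalone
        # simulation tool as a web service so other tools can invoke it.
        # The wrapped executable and its arguments are hypothetical.
        import json
        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class SimulationHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                params = json.loads(self.rfile.read(length) or b"{}")
                # Run the wrapped standalone tool with a region parameter.
                result = subprocess.run(
                    ["infrastructure_sim", "--region", params.get("region", "default")],
                    capture_output=True, text=True,
                )
                body = json.dumps({"stdout": result.stdout, "returncode": result.returncode})
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body.encode())

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), SimulationHandler).serve_forever()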

  15. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    SciTech Connect

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-01-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS), with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  16. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-06-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS), with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
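
    The cloud-bursting idea described in these two records can be summarized schematically: when the demand in the job queue exceeds what the grid sites can supply, additional pilot slots are requested from a cloud provider. The sketch below is not GlideinWMS code; the queue-inspection and provisioning functions are hypothetical stubs standing in for the HTCondor and cloud-provider interfaces.

        # Schematic sketch of the cloud-bursting decision, not actual
        # GlideinWMS code. All functions below are hypothetical stubs.
        def idle_jobs_in_queue() -> int:
            """Stub: number of jobs waiting for a slot (e.g. from condor_q)."""
            return 250

        def available_grid_slots() -> int:
            """Stub: pilot slots the grid sites can currently provide."""
            return 100

        def provision_cloud_pilots(n: int) -> None:
            """Stub: ask a cloud provider (EC2, FermiCloud, ...) for n pilot VMs."""
            print(f"Requesting {n} cloud pilot VMs")

        def burst_if_needed(max_cloud_pilots: int = 500) -> None:
            shortfall = idle_jobs_in_queue() - available_grid_slots()
            if shortfall > 0:
                provision_cloud_pilots(min(shortfall, max_cloud_pilots))

        if __name__ == "__main__":
            burst_if_needed()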

  17. Space and Earth Sciences, Computer Systems, and Scientific Data Analysis Support, Volume 1

    NASA Technical Reports Server (NTRS)

    Estes, Ronald H. (Editor)

    1993-01-01

    This Final Progress Report covers the specific technical activities of Hughes STX Corporation for the last contract triannual period of 1 June through 30 Sep. 1993, in support of assigned task activities at Goddard Space Flight Center (GSFC). It also provides a brief summary of work throughout the contract period of performance on each active task. Technical activity is presented in Volume 1, while financial and level-of-effort data is presented in Volume 2. Technical support was provided to all Division and Laboratories of Goddard's Space Sciences and Earth Sciences Directorates. Types of support include: scientific programming, systems programming, computer management, mission planning, scientific investigation, data analysis, data processing, data base creation and maintenance, instrumentation development, and management services. Mission and instruments supported include: ROSAT, Astro-D, BBXRT, XTE, AXAF, GRO, COBE, WIND, UIT, SMM, STIS, HEIDI, DE, URAP, CRRES, Voyagers, ISEE, San Marco, LAGEOS, TOPEX/Poseidon, Pioneer-Venus, Galileo, Cassini, Nimbus-7/TOMS, Meteor-3/TOMS, FIFE, BOREAS, TRMM, AVHRR, and Landsat. Accomplishments include: development of computing programs for mission science and data analysis, supercomputer applications support, computer network support, computational upgrades for data archival and analysis centers, end-to-end management for mission data flow, scientific modeling and results in the fields of space and Earth physics, planning and design of GSFC VO DAAC and VO IMS, fabrication, assembly, and testing of mission instrumentation, and design of mission operations center.

  18. The Personnel Office and Computer Services: Tomorrow.

    ERIC Educational Resources Information Center

    Nicely, H. Phillip, Jr.

    1980-01-01

    It is suggested that the director of personnel should be making maximum use of available computer services. Four concerns of personnel directors are cited: number of government reports required, privacy and security, cost of space for personnel records and files, and the additional decision-making tools required for collective bargaining…

  19. An Execution Service for Grid Computing

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Hu, Chaumin

    2004-01-01

    This paper describes the design and implementation of the IPG Execution Service that reliably executes complex jobs on a computational grid. Our Execution Service is part of the IPG service architecture whose goal is to support location-independent computing. In such an environment, once a user ports an application to one or more hardware/software platforms, the user can describe this environment to the grid; the grid can locate instances of this platform, configure the platform as required for the application, and then execute the application. Our Execution Service runs jobs that set up such environments for applications and executes them. These jobs consist of a set of tasks for executing applications and managing data. The tasks have user-defined starting conditions that allow users to specify complex dependencies, including tasks to execute when other tasks fail, a frequent occurrence in a large distributed system, or are cancelled. The execution task provided by our service also configures the application environment exactly as specified by the user and captures the exit code of the application, features that many grid execution services do not support due to difficulties interfacing with local scheduling systems.
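
    The user-defined starting conditions described above can be illustrated with the toy sketch below. It is not the IPG Execution Service API: each task simply names the predecessor outcomes that allow it to start, including a failure path, and the tiny runner evaluates them in order.

        # Toy sketch of user-defined starting conditions, in the spirit of the
        # Execution Service described above (not its actual API): each task
        # names the predecessor outcomes that allow it to start.
        def run_setup():    return True
        def run_app():      return False          # simulate an application failure
        def run_cleanup():  return True
        def run_recovery(): return True

        TASKS = [
            # (name, callable, {predecessor: required_outcome})
            ("setup",    run_setup,    {}),
            ("app",      run_app,      {"setup": "ok"}),
            ("cleanup",  run_cleanup,  {"app": "ok"}),
            ("recovery", run_recovery, {"app": "failed"}),   # runs only if "app" fails
        ]

        def execute(tasks):
            outcomes = {}
            for name, func, conditions in tasks:
                if all(outcomes.get(dep) == want for dep, want in conditions.items()):
                    outcomes[name] = "ok" if func() else "failed"
                else:
                    outcomes[name] = "skipped"
            return outcomes

        if __name__ == "__main__":
            # {'setup': 'ok', 'app': 'failed', 'cleanup': 'skipped', 'recovery': 'ok'}
            print(execute(TASKS))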

  20. Pre-Service Science Teachers in Xinjiang "Scientific Inquiry" - Pedagogical Content Knowledge Research

    ERIC Educational Resources Information Center

    Li, Yufeng; Xiong, Jianwen

    2012-01-01

    Scientific inquiry is one of the science curriculum content, "Scientific inquiry" - Pedagogical Content Knowledge is the face of scientific inquiry and teachers - of course pedagogical content knowledge and scientific inquiry a teaching practice with more direct expertise. Pre-service teacher training phase of acquisition of knowledge is…

  1. Domain analysis of computational science - Fifty years of a scientific computing group

    SciTech Connect

    Tanaka, M.

    2010-02-23

    I employed bibliometric and historical methods to study the domain of the Scientific Computing group at Brookhaven National Laboratory (BNL) for an extended period of fifty years, from 1958 to 2007. I noted and confirmed the growing emergence of interdisciplinarity within the group. I also identified a strong, consistent mathematics and physics orientation within it.

  2. An Adaptive Middleware Framework for Scientific Computing at Extreme Scales

    SciTech Connect

    Gosney, Arzu; Oehmen, Christopher S.; Wynne, Adam S.; Almquist, Justin P.

    2010-08-04

    Large computing systems, including clusters, clouds, and grids, provide high-performance capabilities that can be utilized for many applications. But as the ubiquity of these systems increases and the scope of analysis being done on them grows, there is a growing need for applications that 1) do not require users to learn the details of high performance systems, and 2) are flexible and adaptive in their usage of these systems to accommodate the best time-to-solution for end users. We introduce a new adaptive interface design and a prototype implementation within an established middleware framework, MeDICi, for high performance computing systems, and describe the applicability of this adaptive design to a real-life scientific workflow. This adaptive framework provides an access model for implementing a processing pipeline using high performance systems that are not local to the data source, making it possible for the compute capabilities at one site to be applied to analysis of data being generated at another site in an automated process. This adaptive design improves overall time-to-solution by moving the data analysis task to the most appropriate resource dynamically, reacting to failures and load fluctuations.
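
    The adaptive behaviour described above can be sketched schematically: dispatch the next analysis task to the least-loaded resource and fall back to another candidate when submission fails. This is not MeDICi code; the resource names and the load and submission functions are hypothetical stubs.

        # Schematic sketch of adaptive dispatch (not MeDICi code): send the
        # next task to the least-loaded resource, fall back on failure.
        import random

        RESOURCES = ["local_cluster", "remote_cloud", "campus_grid"]

        def current_load(resource: str) -> float:
            """Stub: fraction of the resource currently busy."""
            return random.random()

        def submit(resource: str, task: str) -> bool:
            """Stub: try to run the task; may fail (e.g. resource outage)."""
            return random.random() > 0.2

        def dispatch(task: str) -> str:
            for resource in sorted(RESOURCES, key=current_load):
                if submit(resource, task):
                    return resource
            raise RuntimeError(f"No resource accepted task {task!r}")

        if __name__ == "__main__":
            print("Task ran on:", dispatch("analyze_chunk_0042"))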

  3. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  4. InSAR Scientific Computing Environment - The Home Stretch

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Gurrola, E. M.; Sacco, G.; Zebker, H. A.

    2011-12-01

    The Interferometric Synthetic Aperture Radar (InSAR) Scientific Computing Environment (ISCE) is a software development effort in its third and final year within the NASA Advanced Information Systems and Technology program. The ISCE is a new computing environment for geodetic image processing for InSAR sensors enabling scientists to reduce measurements directly from radar satellites to new geophysical products with relative ease. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. Upcoming international SAR missions will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment has the functionality to become a key element in processing data from NASA's proposed DESDynI mission into higher level data products, supporting a new class of analyses that take advantage of the long time and large spatial scales of these new data. At the core of ISCE is a new set of efficient and accurate InSAR algorithms. These algorithms are placed into an object-oriented, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models. The environment is designed to easily allow user contributions, enabling an open source community to extend the framework into the indefinite future. ISCE supports data from nearly all of the available satellite platforms, including ERS, EnviSAT, Radarsat-1, Radarsat-2, ALOS, TerraSAR-X, and Cosmo-SkyMed. The code applies a number of parallelization techniques and sensible approximations for speed. It is configured to work on modern linux-based computers with gcc compilers and python

  5. Institute for scientific computing research;fiscal year 1999 annual report

    SciTech Connect

    Keyes, D

    2000-03-28

    Large-scale scientific computation, and all of the disciplines that support it and help to validate it, have been placed at the focus of Lawrence Livermore National Laboratory by the Accelerated Strategic Computing Initiative (ASCI). The Laboratory operates the computer with the highest peak performance in the world and has undertaken some of the largest and most compute-intensive simulations ever performed. Computers at the architectural extremes, however, are notoriously difficult to use efficiently. Even such successes as the Laboratory's two Bell Prizes awarded in November 1999 only emphasize the need for much better ways of interacting with the results of large-scale simulations. Advances in scientific computing research have, therefore, never been more vital to the core missions of the Laboratory than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, the Laboratory must engage researchers at many academic centers of excellence. In FY 1999, the Institute for Scientific Computing Research (ISCR) has expanded the Laboratory's bridge to the academic community in the form of collaborative subcontracts, visiting faculty, student internships, a workshop, and a very active seminar series. ISCR research participants are integrated almost seamlessly with the Laboratory's Center for Applied Scientific Computing (CASC), which, in turn, addresses computational challenges arising throughout the Laboratory. Administratively, the ISCR flourishes under the Laboratory's University Relations Program (URP). Together with the other four Institutes of the URP, it must navigate a course that allows the Laboratory to benefit from academic exchanges while preserving national security. Although FY 1999 brought more than its share of challenges to the operation of an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and well

  6. BOINC service for volunteer cloud computing

    NASA Astrophysics Data System (ADS)

    Høimyr, N.; Blomer, J.; Buncic, P.; Giovannozzi, M.; Gonzalez, A.; Harutyunyan, A.; Jones, P. L.; Karneyeu, A.; Marquina, M. A.; Mcintosh, E.; Segal, B.; Skands, P.; Grey, F.; Lombraña González, D.; Zacharov, I.

    2012-12-01

    For the past couple of years, a team at CERN and partners from the Citizen Cyberscience Centre (CCC) have been working on a project that enables general physics simulation programs to run in a virtual machine on volunteer PCs around the world. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework. Based on CERNVM and the job management framework Co-Pilot, this project was made available for public beta-testing in August 2011 with Monte Carlo simulations of LHC physics under the name “LHC@home 2.0” and the BOINC project “Test4Theory”. At the same time, CERN's efforts on volunteer computing for LHC machine studies have been intensified; this project has previously been known as LHC@home, and has been running the “Sixtrack” beam dynamics application for the LHC accelerator, using a classic BOINC framework without virtual machines. CERN-IT has set up a BOINC server cluster, and has provided and supported the BOINC infrastructure for both projects. CERN intends to evolve the setup into a generic BOINC application service that will allow scientists and engineers at CERN to profit from volunteer computing. This paper describes the experience with the two different approaches to volunteer computing as well as the status and outlook of a general BOINC service.

  7. 75 FR 28686 - Clinical Science Research and Development Service; Cooperative Studies Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-21

    ... AFFAIRS Clinical Science Research and Development Service; Cooperative Studies Scientific Evaluation... (Federal Advisory Committee Act) that a meeting of the Clinical Science Research and Development Service... Committee advises the Chief Research and Development Officer through the Director of the Clinical...

  8. 76 FR 65781 - Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-24

    ... AFFAIRS Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation... (Federal Advisory Committee Act) that a meeting of the Clinical Science Research and Development Service... Research and Development Officer through the Director of the Clinical Science Research and...

  9. 77 FR 31072 - Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-24

    ... No: 2012-12522] DEPARTMENT OF VETERANS AFFAIRS Clinical Science Research and Development Service... Science Research and Development Service Cooperative Studies Scientific Evaluation Committee will be held... Research and Development Officer through the Director of the Clinical Science Research and...

  10. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    SciTech Connect

    Reed, Daniel; Berzins, Martin; Pennington, Robert; Sarkar, Vivek; Taylor, Valerie

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  11. PS3 CELL Development for Scientific Computation and Research

    NASA Astrophysics Data System (ADS)

    Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.

    2007-12-01

    The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) with vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This fact aligns well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing the required run time to one sixth. Further vectorization of the code can allow for four simultaneous floating-point operations by using the SIMD (single instruction multiple data) capabilities of the SPU, increasing efficiency 24 times.
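
    The PS3 work itself is written against the IBM Cell SDK in C; the Python sketch below only illustrates the stated principle of splitting an independent computation into one chunk per worker, mirroring the six SPUs. The toy per-chunk transform and the input field are placeholders.

        # Illustration of chunked data-parallel processing in the spirit of the
        # 6-SPU decomposition described above (not Cell SDK code).
        from multiprocessing import Pool

        def process_chunk(rows):
            """Stand-in for per-SPU work: scale each sample in the chunk."""
            return [[value * 0.5 for value in row] for row in rows]

        def run_parallel(height_field, n_workers=6):
            # Divide the rows into one chunk per worker, mirroring the 6 SPUs.
            chunks = [height_field[i::n_workers] for i in range(n_workers)]
            with Pool(n_workers) as pool:
                results = pool.map(process_chunk, chunks)
            return results

        if __name__ == "__main__":
            field = [[float(r + c) for c in range(8)] for r in range(24)]
            print(sum(len(chunk) for chunk in run_parallel(field)))  # 24 rows processed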

  12. Data Publishing Services in a Scientific Project Platform

    NASA Astrophysics Data System (ADS)

    Schroeder, Matthias; Stender, Vivien; Wächter, Joachim

    2014-05-01

    Data-intensive science lives from data. More and more interdisciplinary projects are aligned to mutually gain access to their data, models and results. In order to achieve this, the umbrella project GLUES was established in the context of the "Sustainable Land Management" (LAMA) initiative funded by the German Federal Ministry of Education and Research (BMBF). The GLUES (Global Assessment of Land Use Dynamics, Greenhouse Gas Emissions and Ecosystem Services) project supports several different regional projects of the LAMA initiative: within the framework of GLUES, a Spatial Data Infrastructure (SDI) is implemented to facilitate publishing, sharing and maintenance of distributed global and regional scientific data sets as well as model results. The GLUES SDI supports several OGC web services, like the Catalogue Service for the Web (CSW), which enables it to harvest data from the various regional projects. One of these regional projects is SuMaRiO (Sustainable Management of River Oases along the Tarim River), which aims to support oasis management along the Tarim River (PR China) under conditions of climatic and societal changes. SuMaRiO itself is an interdisciplinary and spatially distributed project. Working groups from twelve German institutes and universities are collecting data and driving their research in disciplines like Hydrology, Remote Sensing, and Agricultural Sciences, among others. Each working group is dependent on the results of another working group. Due to the spatial distribution of participating institutes, the data distribution is solved by using the eSciDoc infrastructure at the German Research Centre for Geosciences (GFZ). Further, the metadata-based data exchange platform PanMetaDocs will be used collaboratively by participants. PanMetaDocs supports an OAI-PMH interface, which enables an Open Source metadata portal like GeoNetwork to harvest the information. The data added in PanMetaDocs can be labeled with a DOI (Digital Object Identifier) to publish the data and to
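
    The OAI-PMH harvesting step mentioned above follows a simple HTTP protocol (verb=ListRecords with a metadata prefix). The sketch below uses only the Python standard library; the endpoint URL is a placeholder, and a production harvester would also follow resumptionToken paging.

        # Minimal sketch of an OAI-PMH ListRecords request, standard library only.
        # The endpoint URL is a placeholder, not a real PanMetaDocs portal.
        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        def list_record_identifiers(base_url: str, metadata_prefix: str = "oai_dc"):
            query = urllib.parse.urlencode({"verb": "ListRecords",
                                            "metadataPrefix": metadata_prefix})
            with urllib.request.urlopen(f"{base_url}?{query}") as response:
                tree = ET.parse(response)
            # Collect the <identifier> elements in the OAI-PMH 2.0 namespace.
            tag = "{http://www.openarchives.org/OAI/2.0/}identifier"
            return [el.text for el in tree.iter(tag)]

        if __name__ == "__main__":
            ids = list_record_identifiers("https://example-panmetadocs-portal.org/oai")
            print(f"Harvested {len(ids)} record identifiers")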

  13. Constructing Arguments: Investigating Pre-Service Science Teachers' Argumentation Skills in a Socio-Scientific Context

    ERIC Educational Resources Information Center

    Robertshaw, Brooke; Campbell, Todd

    2013-01-01

    As western society becomes increasingly reliant on scientific information to make decisions, citizens must be equipped to understand how scientific arguments are constructed. In order to do this, pre-service teachers must be prepared to foster students' abilities and understandings of scientific argumentation in the classroom. This study…

  14. 75 FR 40036 - Rehabilitation Research and Development Service Scientific Merit Review Board; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... research proposals and research underway which could lead to the loss of these projects to third parties... AFFAIRS Rehabilitation Research and Development Service Scientific Merit Review Board; Notice of Meeting... Act) that the Rehabilitation Research and Development Service Scientific Merit Review Board will...

  15. 75 FR 3542 - Rehabilitation Research and Development Service Scientific Merit Review Board; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-21

    ... personal privacy. Disclosure would also reveal research proposals and research underway which could lead to... AFFAIRS Rehabilitation Research and Development Service Scientific Merit Review Board; Notice of Meeting... Act) that the Rehabilitation Research and Development Service Scientific Merit Review Board will...

  16. 75 FR 65404 - Rehabilitation Research and Development Service Scientific Merit Review Board; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-22

    ... review of the research proposals and critiques. The purpose of the Board is to review rehabilitation... AFFAIRS Rehabilitation Research and Development Service Scientific Merit Review Board; Notice of Meeting... of the Rehabilitation Research and Development Service Scientific Merit Review Board will be...

  17. Scientific Application Requirements for Leadership Computing at the Exascale

    SciTech Connect

    Ahern, Sean; Alam, Sadaf R; Fahey, Mark R; Hartman-Baker, Rebecca J; Barrett, Richard F; Kendall, Ricky A; Kothe, Douglas B; Mills, Richard T; Sankaran, Ramanan; Tharrington, Arnold N; White III, James B

    2007-12-01

    The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software. We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy

  18. Evolving the Land Information System into a Cloud Computing Service

    SciTech Connect

    Houser, Paul R.

    2015-02-17

    The Land Information System (LIS) was developed to use advanced flexible land surface modeling and data assimilation frameworks to integrate extremely large satellite- and ground-based observations with advanced land surface models to produce continuous high-resolution fields of land surface states and fluxes. The resulting fields are extremely useful for drought and flood assessment, agricultural planning, disaster management, weather and climate forecasting, water resources assessment, and the like. We envisioned transforming the LIS modeling system into a scientific cloud computing-aware web and data service that clients could easily set up and configure for use in addressing large water management issues. The focus of this Phase 1 project was to determine the scientific, technical, and commercial merit and feasibility of the proposed LIS-cloud innovations that are currently barriers to broad LIS applicability. We (a) quantified the barriers to broad LIS utility and commercialization (high performance computing, big data, user interface, and licensing issues); (b) designed the proposed LIS-cloud web service, model-data interface, database services, and user interfaces; (c) constructed a prototype LIS user interface including abstractions for simulation control, visualization, and data interaction; (d) used the prototype to conduct a market analysis and survey to determine potential market size and competition; (e) identified LIS software licensing and copyright limitations and developed solutions; and (f) developed a business plan for development and marketing of the LIS-cloud innovation. While some significant feasibility issues were found in the LIS licensing, overall a high degree of LIS-cloud technical feasibility was found.

  19. 5 CFR 838.441 - Computing lengths of service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computing lengths of service. 838.441... Affecting Refunds of Employee Contributions Procedures for Computing the Amount Payable § 838.441 Computing lengths of service. (a) The smallest unit of time that OPM will calculate in computing a formula in...

  20. 5 CFR 838.242 - Computing lengths of service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computing lengths of service. 838.242... Affecting Employee Annuities Procedures for Computing the Amount Payable § 838.242 Computing lengths of service. (a)(1) The smallest unit of time that OPM will calculate in computing a formula in a court...

  1. Open Science in the Cloud: Towards a Universal Platform for Scientific and Statistical Computing

    NASA Astrophysics Data System (ADS)

    Chine, Karim

    The UK, through the e-Science program, the US, through the NSF-funded cyberinfrastructure, and the European Union, through the ICT Calls, aimed to provide "the technological solution to the problem of efficiently connecting data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge".1 The Grid (Foster, 2002; Foster; Kesselman, Nick, & Tuecke, 2002), foreseen as a major accelerator of discovery, didn't meet the expectations it had excited at its beginnings and was not adopted by the broad population of research professionals. The Grid is a good tool for particle physicists and it has allowed them to tackle the tremendous computational challenges inherent to their field. However, as a technology and paradigm for delivering computing on demand, it doesn't work and it can't be fixed. On one hand, "the abstractions that Grids expose - to the end-user, to the deployers and to application developers - are inappropriate and they need to be higher level" (Jha, Merzky, & Fox), and on the other hand, academic Grids are inherently economically unsustainable. They can't compete with a service outsourced to the Industry whose quality and price would be driven by market forces. The virtualization technologies and their corollary, the Infrastructure-as-a-Service (IaaS) style cloud, hold the promise to enable what the Grid failed to deliver: a sustainable environment for computational sciences that would lower the barriers for accessing federated computational resources, software tools and data; enable collaboration and resources sharing and provide the building blocks of a ubiquitous platform for traceable and reproducible computational research.

  2. Bio-compute objects - a step towards evaluation and validation of bio-medical scientific computations.

    PubMed

    Simonyan, Vahan; Goecks, Jeremy; Mazumder, Raja

    2016-12-14

    The unpredictability of actual physical, chemical, and biological experiments due to the multitude of environmental and procedural factors is well-documented. What is systematically overlooked, however, is that computational biology algorithms are also affected by a multiplicity of parameters and have no lesser volatility. The complexities of computation protocols and interpretation of outcomes are only part of the challenge: there are also virtually no standardized and industry-accepted metadata schemas for reporting the computational objects that record the parameters used for computations together with the results of computations. Thus, it is often impossible to reproduce the results of a previously performed computation due to missing information on parameters, versions, arguments, conditions, and procedures of application launch. In this publication we describe the concept of biocompute objects developed specifically to satisfy regulatory research needs for evaluation, validation, and verification of bioinformatics pipelines. We envision generalized versions of biocompute objects called biocompute templates that support a single class of analyses but can be adapted to meet unique needs. To make these templates widely usable, we outline a simple but powerful cross-platform implementation. We also discuss the reasoning behind, and the potential usability of, such a concept within the larger scientific community through the creation of a biocompute object database consisting of records relevant to the US Food and Drug Administration (FDA). A biocompute object database record will be similar to a GenBank record in form; the difference being that, instead of describing a sequence, the biocompute record will include information on parameters, dependencies, usage, and other details related to specific computations. This mechanism will extend similar efforts and also serve as a collaborative ground to ensure interoperability between different platforms, industries
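
    The kind of record described above bundles the computation's parameters, tool versions, and input checksums alongside its results. The sketch below illustrates that idea only; the field names are illustrative and do not follow the published BioCompute Object schema.

        # Minimal sketch of a reproducibility record: parameters, tool version,
        # and an input checksum bundled with results. Field names are
        # illustrative, not the official BCO schema.
        import hashlib
        import json
        from datetime import datetime, timezone

        def build_compute_record(tool, version, parameters, input_bytes, results):
            digest = hashlib.sha256(input_bytes).hexdigest()
            return {
                "tool": tool,
                "tool_version": version,
                "parameters": parameters,
                "input_sha256": digest,
                "results": results,
                "created": datetime.now(timezone.utc).isoformat(),
            }

        if __name__ == "__main__":
            record = build_compute_record(
                tool="variant_caller", version="1.4.2",
                parameters={"min_quality": 30, "threads": 8},
                input_bytes=b"ACGTACGTACGT",           # stand-in for the input file
                results={"variants_called": 1532},
            )
            print(json.dumps(record, indent=2))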

  3. Accelerating Scientific Discovery Through Computation and Visualization III. Tight-Binding Wave Functions for Quantum Dots.

    PubMed

    Sims, James S; George, William L; Griffin, Terence J; Hagedorn, John G; Hung, Howard K; Kelso, John T; Olano, Marc; Peskin, Adele P; Satterfield, Steven G; Terrill, Judith Devaney; Bryant, Garnett W; Diaz, Jose G

    2008-01-01

    This is the third in a series of articles that describe, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing, visualization, and machine learning to accelerate scientific discovery. In this article we focus on the use of high performance computing and visualization for simulations of nanotechnology.

  4. 77 FR 72438 - Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... AFFAIRS Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation... Committee Act, 5 U.S.C. App. 2, that the Clinical Science Research and Development Service Cooperative... Clinical Science Research and Development Service on the relevance and feasibility of proposed projects...

  5. 78 FR 70102 - Clinical Science Research and Development Service Cooperative Studies; Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-22

    ... AFFAIRS Clinical Science Research and Development Service Cooperative Studies; Scientific Evaluation... Committee Act, 5 U.S.C. App. 2, that the Clinical Science Research and Development Service Cooperative... the Director of the Clinical Science Research and Development Service on the relevance and...

  6. Scientific and Technological Information Services in Australia: II. Discipline Formation in Information Management

    ERIC Educational Resources Information Center

    Middleton, Michael

    2006-01-01

    This second part of an analysis of scientific and technical information (STI) services in Australia considers their development in the context of discipline formation in information management. The case studies used are the STI services from Part I. A case study protocol is used to consider the extent to which the development of the services may…

  7. Pre-Service Science and Primary School Teachers' Identification of Scientific Process Skills

    ERIC Educational Resources Information Center

    Birinci Konur, Kader; Yildirim, Nagihan

    2016-01-01

    The purpose of this study was to conduct a comparative analysis of pre-service primary school and science teachers' identification of scientific process skills. The study employed the survey method, and the sample included 95 pre-service science teachers and 95 pre-service primary school teachers from the Faculty of Education at Recep Tayyip…

  8. Biomedical Scientific and Professional Social Networks in the Service of the Development of Modern Scientific Publishing.

    PubMed

    Masic, Izet; Begic, Edin

    2016-12-01

    Information technologies have found their application in virtually every branch of health care. In recent years they have demonstrated their potential in the development of online libraries, where scientists and researchers can share their latest findings. Academia.edu, ResearchGate, Mendeley and Kudos, with the support of the GoogleScholar platform, have indeed increased the visibility of an author's scientific work and enabled much greater availability of that work to a broader audience. Online libraries have allowed free access to scientific content for countries that could not afford the cost of access to certain scientific databases; the benefit has been especially great in countries in transition and developing countries. Online libraries have great potential for expanding knowledge, but they also present a major problem for many publishers, because the rights that authors sign over when publishing a paper can be violated. In the future this may lead to a major conflict among authors, editorial boards and online databases over the rights to scientific content. This question certainly represents one of the most pressing issues in publishing, whose future in printed form is already in the past, while the future of online editions will be a large-scale problem.

  9. Biomedical Scientific and Professional Social Networks in the Service of the Development of Modern Scientific Publishing

    PubMed Central

    Masic, Izet; Begic, Edin

    2016-01-01

    Information technologies have found their application in virtually every branch of health care. In recent years they have demonstrated their potential in the development of online libraries, where scientists and researchers can share their latest findings. Academia.edu, ResearchGate, Mendeley and Kudos, with the support of the GoogleScholar platform, have indeed increased the visibility of an author's scientific work and enabled much greater availability of that work to a broader audience. Online libraries have allowed free access to scientific content for countries that could not afford the cost of access to certain scientific databases; the benefit has been especially great in countries in transition and developing countries. Online libraries have great potential for expanding knowledge, but they also present a major problem for many publishers, because the rights that authors sign over when publishing a paper can be violated. In the future this may lead to a major conflict among authors, editorial boards and online databases over the rights to scientific content. This question certainly represents one of the most pressing issues in publishing, whose future in printed form is already in the past, while the future of online editions will be a large-scale problem. PMID:28077905

  10. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    NASA Astrophysics Data System (ADS)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne and Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support, also to the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  11. 77 FR 12823 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... final report; Advanced Networking update; Status from Computer Science COV; Early Career technical talks; Summary of Applied Math and Computer Science Workshops; ASCR's new SBIR awards; Data-intensive...

  12. Building an infrastructure for scientific Grid computing: status and goals of the EGEE project.

    PubMed

    Gagliardi, Fabrizio; Jones, Bob; Grey, François; Bégin, Marc-Elian; Heikkurinen, Matti

    2005-08-15

    The state of computer and networking technology today makes the seamless sharing of computing resources on an international or even global scale conceivable. Scientific computing Grids that integrate large, geographically distributed computer clusters and data storage facilities are being developed in several major projects around the world. This article reviews the status of one of these projects, Enabling Grids for E-SciencE, describing the scientific opportunities that such a Grid can provide, while illustrating the scale and complexity of the challenge involved in establishing a scientific infrastructure of this kind.

  13. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    SciTech Connect

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts the interplay of RELIABILITY, AVAILABILITY, and SERVICEABILITY (RAS) aspects for solving resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC systems; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  14. 5 CFR 838.623 - Computing lengths of service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computing lengths of service. 838.623 Section 838.623 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE... Employee Annuities or Refunds of Employee Contributions Computation of Benefits § 838.623 Computing...

  15. Turkish Pre-Service Elementary Science Teachers' Scientific Literacy Level and Attitudes toward Science

    ERIC Educational Resources Information Center

    Cavas, Pinar Huyuguzel; Ozdem, Yasemin; Cavas, Bulent; Cakiroglu, Jale; Ertepinar, Hamide

    2013-01-01

    In order to educate elementary students scientifically literate as expected in the science curricula in many countries around the world, science teachers need to be equipped with the diverse aspects of scientific literacy. This study investigates whether pre-service elementary science teachers at universities in Turkey have a satisfactory level of…

  16. Pre-Service Science Teachers' Perception of the Principles of Scientific Research

    ERIC Educational Resources Information Center

    Can, Sendil; Kaymakci, Güliz

    2016-01-01

    The purpose of the current study employing the survey method is to determine the pre-service science teachers' perceptions of the principles of scientific research and to investigate the effects of gender, grade level and the state of following scientific publications on their perceptions. The sampling of the current research is comprised of 125…

  17. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    SciTech Connect

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
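
    To make the separation of logical view and physical layout concrete, the sketch below keeps subarray writes in a simple log and reassembles them into a contiguous array afterwards. It is a minimal illustration of the idea under stated assumptions; the class and method names are hypothetical and do not correspond to the actual Scientific Data Services implementation.

```python
# Minimal sketch of separating a logical array view from its physical layout:
# subarray writes are appended to a log and reassembled later, as resources
# permit. Names are illustrative, not the Scientific Data Services API.
import numpy as np


class LoggedArray:
    def __init__(self, shape):
        self.shape = shape
        self.log = []                     # list of (slices, block) write records

    def write(self, slices, block):
        """Record a subarray write without touching a global physical layout."""
        self.log.append((slices, np.array(block)))

    def reassemble(self):
        """Later pass: materialize the logical array in a contiguous layout."""
        out = np.zeros(self.shape)
        for slices, block in self.log:
            out[slices] = block
        return out


a = LoggedArray((4, 4))
a.write((slice(0, 2), slice(0, 2)), np.ones((2, 2)))      # one writer's subarray
a.write((slice(2, 4), slice(2, 4)), 2 * np.ones((2, 2)))  # another writer's subarray
print(a.reassemble())
```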

  18. Kepler + MeDICi - Service-Oriented Scientific Workflow Applications

    SciTech Connect

    Chase, Jared M.; Gorton, Ian; Sivaramakrishnan, Chandrika; Almquist, Justin P.; Wynne, Adam S.; Chin, George; Critchlow, Terence J.

    2009-07-30

    Scientific applications are often structured as workflows that execute a series of interdependent, distributed software modules to analyze large data sets. The order of execution of the tasks in a workflow is commonly controlled by complex scripts, which over time become difficult to maintain and evolve. In this paper, we describe how we have integrated the Kepler scientific workflow platform with the MeDICi Integration Framework, which has been specifically designed to provide a standards-based, lightweight and flexible integration platform. The MeDICi technology provides a scalable, component-based architecture that efficiently handles integration with heterogeneous, distributed software systems. This paper describes the MeDICi Integration Framework and the mechanisms we used to integrate MeDICi components with Kepler workflow actors. We illustrate this solution with a workflow application for an atmospheric sciences application. The resulting solution promotes a strong separation of concerns, simplifying the Kepler workflow description and promoting the creation of a reusable collection of components available for other workflow applications in this domain.
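
    The sketch below illustrates the general component-pipeline idea described above: each analysis step is wrapped as a component and the components are chained into a workflow. All names are hypothetical; this is not the Kepler or MeDICi API.

```python
# Minimal sketch of a component pipeline: each analysis step is wrapped as a
# component and chained into a workflow. Names are hypothetical and do not
# reflect the actual Kepler or MeDICi APIs.
from typing import Callable, List


class Component:
    def __init__(self, name: str, handler: Callable):
        self.name = name
        self.handler = handler          # the wrapped analysis module

    def process(self, message):
        return self.handler(message)


class Pipeline:
    def __init__(self, components: List[Component]):
        self.components = components

    def run(self, message):
        # Pass the message through each component in order, like workflow actors.
        for component in self.components:
            message = component.process(message)
        return message


pipeline = Pipeline([
    Component("ingest", lambda text: text.split()),
    Component("filter", lambda words: [w for w in words if len(w) > 3]),
    Component("count",  lambda words: len(words)),
])
print(pipeline.run("observed aerosol counts from the atmospheric sensor"))
```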

  19. Multicore Challenges and Benefits for High Performance Scientific Computing

    DOE PAGES

    Nielsen, Ida M. B.; Janssen, Curtis L.

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
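
    A minimal sketch of the hybrid message-passing/multi-threading model applied to a distributed matrix multiply is given below, assuming the mpi4py package and an MPI launcher are available; message passing distributes row blocks across ranks, while each rank's block product runs on NumPy's (typically multi-threaded) BLAS. It is an illustration of the programming model, not the authors' implementation.

```python
# Hybrid message-passing/multi-threading sketch of a distributed matrix multiply:
# MPI ranks each own a block of rows of A, and the per-rank block product uses
# NumPy's (typically multi-threaded) BLAS. Assumes mpi4py and an MPI launcher,
# e.g. `mpiexec -n 4 python this_file.py`; illustration only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 512                                   # matrix dimension, assumed divisible by size
rows = n // size

if rank == 0:
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
else:
    A, B = None, None

# Message passing: scatter row blocks of A, broadcast all of B.
A_block = np.empty((rows, n))
comm.Scatter(A, A_block, root=0)
B = comm.bcast(B, root=0)

# Multi-threading: the block product runs on this rank's threaded BLAS.
C_block = A_block @ B

# Gather the row blocks of C back on rank 0 and verify.
C = np.empty((n, n)) if rank == 0 else None
comm.Gather(C_block, C, root=0)
if rank == 0:
    print("max error:", np.max(np.abs(C - A.dot(B))))
```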

  20. Scientific Uses and Directions of SPDF Data Services

    NASA Technical Reports Server (NTRS)

    Fung, Shing

    2007-01-01

    From a science user's perspective, the multi-mission data and orbit services of NASA's Space Physics Data Facility (SPDF) project perform as a working and highly functional heliophysics virtual observatory. CDAWeb enables plots, listings and file downloads for current data across the boundaries of missions and instrument types (now including data from THEMIS and STEREO), while VSPO provides access to a wide range of distributed data sources. SSCWeb, Helioweb and our 3D Animated Orbit Viewer (TIPSOD) provide position data and query logic for most missions currently important to heliophysics science. OMNIWeb, with its new extension to 1- and 5-minute resolution, provides interplanetary parameters at the Earth's bow shock as a unique value-added data product. To enable easier integrated use of our capabilities by developers and by the emerging heliophysics VxOs, our data and services are available through web-services-based APIs as well as through our direct user interfaces. SPDF has also now developed draft descriptions of its holdings in SPASE-compliant XML. In addition to showcasing recent enhancements to SPDF capabilities, we will use these systems and our experience in developing them to demonstrate a few typical science use cases; to discuss key scope and design issues among users, service providers and end data providers; and to identify key areas where existing capabilities and effective interface design are still inadequate to meet community needs.

  1. 78 FR 41198 - Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... AFFAIRS Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation... Committee Act, 5 U.S.C. App. 2, that the Clinical Science Research and Development Service Cooperative... Research and Development Officer through the Director of the Clinical Science Research and...

  2. 78 FR 53015 - Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-27

    ... AFFAIRS Clinical Science Research and Development Service Cooperative Studies Scientific Evaluation... Committee Act, 5 U.S.C. App. 2, that the Clinical Science Research and Development Service Cooperative... Chief Research and Development Officer through the Director of the Clinical Science Research...

  3. Comparison of Pre-Service Teachers' Metaphors Regarding the Concept of "Scientific Knowledge"

    ERIC Educational Resources Information Center

    Akinoglu, Orhan; Eren, Canan Dilek

    2016-01-01

    The aim of this research was to analyze pre-service teachers' perceptions of the concept "scientific knowledge" through metaphors. Phenomenology, one of the qualitative research designs, was used in the study. A total of 189 pre-service teachers, including 158 females and 31 males, studying at different departments in the education faculty…

  4. Scientific and Technological Information Services in Australia: I. History and Development

    ERIC Educational Resources Information Center

    Middleton, Michael

    2006-01-01

    An investigation of the development of Australian scientific and technological information (STI) services has been undertaken. It comprises a consideration of the characteristics and development of the services, which is the focus of this part of the paper, along with a broader examination of discipline formation in information management covered…

  5. Better Information Management Policies Needed: A Study of Scientific and Technical Bibliographic Services.

    ERIC Educational Resources Information Center

    Comptroller General of the U.S., Washington, DC.

    This report discusses the management of scientific and technical bibliographic data bases by the Federal Government, the existence of overlapping and duplicative bibliographic information services, the application of cost recovery principles to bibliographic information services, and the need to manage information as a resource. Questionnaires…

  6. Position Paper: Applying Machine Learning to Software Analysis to Achieve Trusted, Repeatable Scientific Computing

    SciTech Connect

    Prowell, Stacy J; Symons, Christopher T

    2015-01-01

    Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.

  7. From Mars to Minerva: The origins of scientific computing in the AEC labs

    SciTech Connect

    Seidel, R.W.

    1996-10-01

    Although the AEC laboratories are renowned for the development of nuclear weapons, their largess in promoting scientific computing also had a profound effect on scientific and technological development in the second half of the 20th century. © 1996 American Institute of Physics.

  8. Computer-Supported Aids to Making Sense of Scientific Articles: Cognitive, Motivational, and Attitudinal Effects

    ERIC Educational Resources Information Center

    Gegner, Julie A.; Mackay, Donald H. J.; Mayer, Richard E.

    2009-01-01

    High school students can access original scientific research articles on the Internet, but may have trouble understanding them. To address this problem of online literacy, the authors developed a computer-based prototype for guiding students' comprehension of scientific articles. High school students were asked to read an original scientific…

  9. High throughput computing: a solution for scientific analysis

    USGS Publications Warehouse

    O'Donnell, M.

    2011-01-01

    handle job failures due to hardware, software, or network interruptions (obviating the need to manually resubmit the job after each stoppage); be affordable; and most importantly, allow us to complete very large, complex analyses that otherwise would not even be possible. In short, we envisioned a job-management system that would take advantage of unused FORT CPUs within a local area network (LAN) to effectively distribute and run highly complex analytical processes. What we found was a solution that uses High Throughput Computing (HTC) and High Performance Computing (HPC) systems to do exactly that (Figure 1).
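
    The job-management behaviour described above (farm out many independent jobs and automatically resubmit the ones that fail) can be sketched locally with Python's standard library, as below. A real HTC system such as HTCondor does the same across idle machines on a LAN; this sketch only illustrates the control flow on a single machine.

```python
# Local sketch of an HTC-style job farm: many independent jobs are distributed
# to worker processes and any failed job is automatically resubmitted, so no
# manual restarts are needed. Illustration of the control flow only.
import random
from concurrent.futures import ProcessPoolExecutor, as_completed


def analysis_job(job_id: int) -> float:
    """Stand-in for one unit of a large analysis; fails occasionally."""
    if random.random() < 0.2:
        raise RuntimeError(f"job {job_id} interrupted")
    return job_id * 0.5                      # pretend result


def run_with_retries(job_ids, max_retries=3):
    results = {}
    attempts = {j: 0 for j in job_ids}
    pending = set(job_ids)
    with ProcessPoolExecutor() as pool:
        while pending:
            futures = {pool.submit(analysis_job, j): j for j in pending}
            pending = set()
            for fut in as_completed(futures):
                j = futures[fut]
                try:
                    results[j] = fut.result()
                except Exception:
                    attempts[j] += 1
                    if attempts[j] < max_retries:
                        pending.add(j)       # automatic resubmission
    return results


if __name__ == "__main__":
    print(len(run_with_retries(range(20))), "jobs completed")
```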

  10. A language comparison for scientific computing on MIMD architectures

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.

    1989-01-01

    Choleski's method for solving banded, symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN-based parallel programming languages: the Force, PISCES and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer, Flex/32.
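
    For reference, a minimal serial sketch of the Choleski (Cholesky) factorization itself is shown below; the parallel implementations compared in the article distribute these column updates across processors, and a banded solver restricts the loops to entries within the bandwidth. This is an illustration of the underlying algorithm, not of the Force, PISCES or Concurrent FORTRAN versions.

```python
# Minimal serial sketch of Choleski (Cholesky) factorization A = L L^T for a
# symmetric positive definite matrix. Parallel versions distribute these
# column updates; banded solvers restrict the loops to the bandwidth.
import numpy as np


def cholesky(A):
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal entry: remove contributions of previously computed columns.
        L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, :j], L[j, :j]))
        # Entries below the diagonal in column j.
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L


M = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
L = cholesky(M)
print(np.allclose(L @ L.T, M))    # True
```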

  11. Argonne National Lab - Theory and Computing Sciences, Accelerating Scientific Discovery

    SciTech Connect

    Beckman, Pete

    2009-01-01

    Argonne's new TCS building houses all of Argonne's computing divisions and is designed to foster collaboration on the Manhattan Project model: "Getting the best people together and having them work on a problem with singular determination." More at http://www.anl.gov/Media_Center/News/2009/tcs0910.html

  12. Computer-Based Inquiry into Scientific Problem Solving.

    ERIC Educational Resources Information Center

    Berkowitz, Melissa S.; Szabo, Michael

    1979-01-01

    Problem solving performance of individuals was compared with that of dyads at three levels of mental ability using a computer-based inquiry into the riddle of the frozen Wooly Mammoth. Results indicated significant interactions between grouping and mental ability for certain problem solving internal measures. (RAO)

  13. PNNL pushing scientific discovery through data intensive computing breakthroughs

    ScienceCinema

    Deborah Gracio; David Koppenaal; Ruby Leung

    2016-07-12

    The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  14. Tools for 3D scientific visualization in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as descriptions of other hardware for digital video and film recording.
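
    As a small illustration of the post-processing technique described above, the sketch below runs a toy "simulation", writes the field to disk, and then loads and renders it after the run has finished. It only mirrors the workflow; it is not the IRIS/Cray toolchain discussed in the article.

```python
# Minimal post-processing sketch: run a toy "simulation", save the field,
# then load and render it for viewing after the run is complete.
import numpy as np
import matplotlib.pyplot as plt

# "Simulation": a synthetic 2-D velocity-magnitude field written to disk.
x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
speed = np.exp(-(x**2 + y**2)) + 0.2 * np.sin(3 * x) * np.cos(3 * y)
np.save("flow_field.npy", speed)

# Post-processing: load the completed results and produce an image.
field = np.load("flow_field.npy")
plt.contourf(x, y, field, levels=30)
plt.colorbar(label="velocity magnitude (arbitrary units)")
plt.title("Post-processed flow field")
plt.savefig("flow_field.png", dpi=150)
```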

  15. PNNL pushing scientific discovery through data intensive computing breakthroughs

    SciTech Connect

    Deborah Gracio; David Koppenaal; Ruby Leung

    2009-11-01

    The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  16. Expanding Career Services via the Campus-Wide Computer Network.

    ERIC Educational Resources Information Center

    Roth, Marvin J.; Jones, Deborah A.

    1991-01-01

    Describes campuswide computer network at Lafayette College and how it has allowed career services to help students search for, find, and secure careers. Describes use of computer network for interview sign-ups and on-line resumes, and plans to develop on-line services and alumni directories. (NB)

  17. AVES: A high performance computer cluster array for the INTEGRAL satellite scientific data analysis

    NASA Astrophysics Data System (ADS)

    Federici, Memmo; Martino, Bruno Luigi; Ubertini, Pietro

    2012-07-01

    In this paper we describe a new computing system array, designed, built and now used at the Space Astrophysics and Planetary Institute (IAPS) in Rome, Italy, for the INTEGRAL Space Observatory scientific data analysis. This new system has become necessary in order to reduce the processing time of the INTEGRAL data accumulated during more than 9 years of in-orbit operation. In order to fulfill the scientific data analysis requirements with a moderately limited investment, the starting approach has been to use a 'cluster' array of commercial quad-CPU computers, featuring the extremely large scientific and calibration data archive online.

  18. Continual Service Improvement at CERN Computing Centre

    NASA Astrophysics Data System (ADS)

    Barroso Lopez, M.; Everaerts, L.; Meinhard, H.; Baehler, P.; Haimyr, N.; Guijarro, J. M.

    2014-06-01

    Using the framework of ITIL best practices, the service managers within CERN-IT have engaged in a continuous improvement process, mainly focusing on service operation. This implies an explicit effort to understand and improve all service management aspects in order to increase efficiency and effectiveness. We will present the requirements, how they were addressed, and share our experiences. We will describe how we measure, report and use the data to continually improve both the processes and the services being provided. The focus is not the tool or the process, but the results of the continuous improvement effort from a large team of IT experts providing services to thousands of users, supported by the tool and its local team. This is not an initiative to address user concerns about the way the services are managed, but rather an ongoing working habit of continually reviewing, analysing and improving the service management processes and the services themselves, keeping in mind the currently agreed service levels; its results also improve the users' experience of the current services.

  19. Carbon Nanotube Computer: Transforming Scientific Discoveries into Working Systems

    NASA Astrophysics Data System (ADS)

    Mitra, Subhasish

    2014-03-01

    The miniaturization of electronic devices has been the principal driving force behind the semiconductor industry, and has brought about major improvements in computational power and energy efficiency. Although advances with silicon-based electronics continue to be made, alternative technologies are being explored. Digital circuits based on transistors fabricated from carbon nanotubes (CNTs) have the potential to outperform silicon by improving the energy-delay product, a metric of energy efficiency, by more than an order of magnitude. Hence, CNTs are an exciting complement to existing semiconductor technologies. However, CNTs are subject to substantial inherent imperfections that pose major obstacles to the design of robust and very large-scale CNT field-effect transistor (CNFET) digital systems: (i) It is nearly impossible to guarantee perfect alignment and positioning of all CNTs. This limitation introduces stray conducting paths, resulting in incorrect circuit functionality. (ii) CNTs can be metallic or semiconducting depending on chirality. Metallic CNTs cause shorts resulting in excessive leakage and incorrect circuit functionality. A combination of design and processing techniques overcomes these challenges by creating robust CNFET digital circuits that are immune to these inherent imperfections. This imperfection-immune design paradigm enables the first experimental demonstration of the carbon nanotube computer, and, more generally, arbitrary digital systems that can be built using CNFETs. The CNT computer is capable of performing multitasking: as a demonstration, we perform counting and integer-sorting simultaneously. In addition, we emulate 20 different instructions from the commercial MIPS instruction set to demonstrate the generality of our CNT computer. This is the most complex carbon-based electronic system yet demonstrated. It is a considerable advance because CNTs are prominent among a variety of emerging technologies that are being considered for the next

  20. Using the high-level based program interface to facilitate the large scale scientific computing.

    PubMed

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper presents further research on facilitating large-scale scientific computing on grid and desktop-grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and data anticipation migration. The block-based Gauss-Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level program interface makes it easier to run complex scientific applications on large-scale scientific platforms, though a little overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of a platform that needs to process big-data-based scientific applications.
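
    For reference, a minimal serial sketch of Gauss-Jordan elimination used for matrix inversion is shown below; the block-based variant evaluated in the paper applies the same elimination steps to sub-blocks distributed over grid and desktop-grid workers. This is an illustration of the underlying algorithm, not the paper's distributed implementation.

```python
# Minimal serial sketch of Gauss-Jordan elimination used to invert a matrix;
# the block-based variant applies the same elimination steps to sub-blocks
# distributed over grid / desktop-grid workers.
import numpy as np


def gauss_jordan_inverse(A):
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])    # augmented [A | I]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry onto the diagonal.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                     # normalize the pivot row
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]  # eliminate the column
    return aug[:, n:]                                 # right half is A^{-1}


A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(2)))   # True
```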

  1. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper presents further research on facilitating large-scale scientific computing on grid and desktop-grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and data anticipation migration. The block-based Gauss-Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level program interface makes it easier to run complex scientific applications on large-scale scientific platforms, though a little overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of a platform that needs to process big-data-based scientific applications. PMID:24574931

  2. Scientific and Service Data Acquisition System for the GAMMA-400 Apparatus

    NASA Astrophysics Data System (ADS)

    Gorbunov, Maxim

    The data acquisition system for scientific information (ASSI) is the key part of the GAMMA-400 scientific apparatus. The functions of the ASSI are the acquisition of data and service information and space flight control. It consists of 16 SpaceWire data channels for obtaining data from the detectors, a command driving channel (CDC) for transmitting commands, service information and on-board time to the detectors, and a mainframe processing unit (CPU) for primary data collection. The ASSI is based on the 1907VM038, 1907VM014 and 1907VM028 microprocessors and the 1907KX018 switch, which are designed by the Scientific Research Institute of System Analysis, Russian Academy of Sciences (SRISA). These chips are fabricated in a 0.25 μm SOI CMOS technology and provide a high level of radiation hardness and fault tolerance. The high-speed data channels are based on the SpaceWire and RapidIO standards.

  3. Charon Message-Passing Toolkit for Scientific Computations

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Yan, Jerry (Technical Monitor)

    2000-01-01

    Charon is a library, callable from C and Fortran, that aids the conversion of structured-grid legacy codes, such as those used in the numerical computation of fluid flows, into parallel, high-performance codes. Key are functions that define distributed arrays, that map between distributed and non-distributed arrays, and that allow easy specification of common communications on structured grids. The library is based on the widely accepted MPI message passing standard. We present an overview of the functionality of Charon, and some representative results.
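
    The kind of structured-grid communication such a library encapsulates can be sketched with a one-dimensional halo exchange between neighbouring MPI ranks, as below (assuming mpi4py and an MPI launcher). The code is illustrative only and does not use Charon's actual C/Fortran API.

```python
# Sketch of the kind of structured-grid communication a library like Charon
# encapsulates: a 1-D block-distributed array with a one-cell halo exchange
# between neighbouring ranks. Assumes mpi4py and an MPI launcher, e.g.
# `mpiexec -n 4 python this_file.py`; names are illustrative, not Charon's API.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_n = 8                              # interior cells owned by this rank
u = np.zeros(local_n + 2)                # one ghost cell at each end
u[1:-1] = rank                           # fill interior with the rank id

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Ghost buffers stay at -1 on physical boundaries (no neighbour there).
recv_left = np.full(1, -1.0)
recv_right = np.full(1, -1.0)

# Exchange halos: last interior cell goes right, first interior cell goes left.
comm.Sendrecv(u[-2:-1].copy(), dest=right, recvbuf=recv_left, source=left)
comm.Sendrecv(u[1:2].copy(),   dest=left,  recvbuf=recv_right, source=right)
u[0], u[-1] = recv_left[0], recv_right[0]

print(f"rank {rank}: ghost cells = ({u[0]}, {u[-1]})")
```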

  4. Computer Communications and New Services. CCITT Achievements.

    ERIC Educational Resources Information Center

    Hummel, Eckart

    New non-voice services (sometimes also called information services) and the possibilities of telecommunication networks to support them are described in this state-of-the-art review. It begins with a summary of the various data transmission techniques, which include several types of data transmission over the telephone network: general, telegraph…

  5. Performance Evaluation of Three Distributed Computing Environments for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Weeratunga, Sisira; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    We present performance results for three distributed computing environments using the three simulated CFD applications in the NAS Parallel Benchmark suite. These environments are the DCF cluster, the LACE cluster, and an Intel iPSC/860 machine. The DCF is a prototypic cluster of loosely coupled SGI R3000 machines connected by Ethernet. The LACE cluster is a tightly coupled cluster of 32 IBM RS6000/560 machines connected by Ethernet as well as by either FDDI or an IBM Allnode switch. Results of several parallel algorithms for the three simulated applications are presented and analyzed based on the interplay between the communication requirements of an algorithm and the characteristics of the communication network of a distributed system.

  6. Creating science-driven computer architecture: A new path to scientific leadership

    SciTech Connect

    Simon, Horst D.; McCurdy, C. William; Kramer, T.C.; Stevens, Rick; McCoy,Mike; Seager, Mark; Zacharia, Thomas; Bair, Ray; Studham, Scott; Camp, William; Leland, Robert; Morrison, John; Feiereisen, William

    2003-05-16

    We believe that it is critical for the future of high end computing in the United States to bring into existence a new class of computational capability that is optimal for science. In recent years scientific computing has increasingly become dependent on hardware that is designed and optimized for commercial applications. Science in this country has greatly benefited from the improvements in computers that derive from advances in microprocessors following Moore's Law, and a strategy of relying on machines optimized primarily for business applications. However within the last several years, in part because of the challenge presented by the appearance of the Japanese Earth Simulator, the sense has been growing in the scientific community that a new strategy is needed. A more aggressive strategy than reliance only on market forces driven by business applications is necessary in order to achieve a better alignment between the needs of scientific computing and the platforms available. The United States should undertake a program that will result in scientific computing capability that durably returns the advantage to American science, because doing so is crucial to the country's future. Such a strategy must also be sustainable. New classes of computer designs will not only revolutionize the power of supercomputing for science, but will also affect scientific computing at all scales. What is called for is the opening of a new frontier of scientific capability that will ensure that American science is greatly enabled in its pursuit of research in critical areas such as nanoscience, climate prediction, combustion, modeling in the life sciences, and fusion energy, as well as in meeting essential needs for national security. In this white paper we propose a strategy for accomplishing this mission, pursuing different directions of hardware development and deployment, and establishing a highly capable networking and grid infrastructure connecting these platforms to the broad

  7. A study on strategic provisioning of cloud computing services.

    PubMed

    Whaiduzzaman, Md; Haque, Mohammad Nazmul; Rejaul Karim Chowdhury, Md; Gani, Abdullah

    2014-01-01

    Cloud computing is currently emerging as an ever-changing, growing paradigm that models "everything-as-a-service." Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for the customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics can be guaranteed by service provisioning. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provision techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified.

  8. A Study on Strategic Provisioning of Cloud Computing Services

    PubMed Central

    Rejaul Karim Chowdhury, Md

    2014-01-01

    Cloud computing is currently emerging as an ever-changing, growing paradigm that models “everything-as-a-service.” Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for the customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics can be guaranteed by service provisioning. Hence, continuous service provisioning that satisfies the user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provision techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified. PMID:25032243

  9. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    SciTech Connect

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.
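
    As a small illustration of the kind of verification activity such a program would mandate, the sketch below checks a numerical kernel against a problem with a known analytic answer and against its expected convergence rate. The kernel and the tolerances are illustrative assumptions, not part of the report.

```python
# Minimal sketch of one verification activity a V&V program might mandate:
# check a numerical kernel against a problem with a known analytic answer and
# against its expected order of convergence. Illustrative kernel and tolerances.
import math
import numpy as np


def trapezoid(f, a, b, n):
    """Composite trapezoid rule, the 'scientific code' under test."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])


def test_trapezoid_against_analytic_solution():
    # The integral of sin(x) on [0, pi] is exactly 2.
    approx = trapezoid(np.sin, 0.0, math.pi, 1000)
    assert abs(approx - 2.0) < 1e-5


def test_trapezoid_converges_at_second_order():
    errs = [abs(trapezoid(np.sin, 0.0, math.pi, n) - 2.0) for n in (100, 200)]
    # Halving h should cut the error by roughly 4x for a second-order method.
    assert 3.5 < errs[0] / errs[1] < 4.5


if __name__ == "__main__":
    test_trapezoid_against_analytic_solution()
    test_trapezoid_converges_at_second_order()
    print("verification tests passed")
```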

  10. SIAM Conference on Parallel Processing for Scientific Computing - March 12-14, 2008

    SciTech Connect

    Kolata, William G.

    2008-09-08

    The themes of the 2008 conference included, but were not limited to: Programming languages, models, and compilation techniques; The transition to ubiquitous multicore/manycore processors; Scientific computing on special-purpose processors (Cell, GPUs, etc.); Architecture-aware algorithms; From scalable algorithms to scalable software; Tools for software development and performance evaluation; Global perspectives on HPC; Parallel computing in industry; Distributed/grid computing; Fault tolerance; Parallel visualization and large scale data management; and The future of parallel architectures.

  11. Security Risks of Cloud Computing and Its Emergence as 5th Utility Service

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    Cloud computing is being projected by the major cloud service provider IT companies, such as IBM, Google, Yahoo and Amazon, as the fifth utility, where clients will have access for processing those applications and/or software projects that need very high processing speed for compute-intensive work and huge data capacity for scientific and engineering research problems, as well as for e-business and data content network applications. These services for different types of clients are provided under DASM (Direct Access Service Management), based on virtualization of hardware and software and very high bandwidth Internet (Web 2.0) communication. The paper reviews these developments in cloud computing and the hardware/software configuration of the cloud paradigm. The paper also examines the vital aspects of security risks projected by IT industry experts and cloud clients, and highlights cloud providers' responses to cloud security risks.

  12. National facility for advanced computational science: A sustainable path to scientific discovery

    SciTech Connect

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  13. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for the Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards—the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkits are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. This service makes it easy to share and

  14. High-Precision Floating-Point Arithmetic in Scientific Computation

    SciTech Connect

    Bailey, David H.

    2004-12-31

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required: some of these applications require roughly twice this level; others require four times; while still others require hundreds or more digits to obtain numerically meaningful results. Such calculations have been facilitated by new high-precision software packages that include high-level language translation modules to minimize the conversion effort. These activities have yielded a number of interesting new scientific results in fields as diverse as quantum theory, climate modeling and experimental mathematics, a few of which are described in this article. Such developments suggest that in the future, the numeric precision used for a scientific computation may be as important to the program design as are the algorithms and data structures.
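
    A small illustration of the kind of case described above is given below, using the mpmath package (assumed installed): an expression where IEEE 64-bit arithmetic loses all significance to cancellation, while higher working precision recovers the answer.

```python
# Small illustration of why some computations need more than IEEE 64-bit
# precision, using the mpmath package (assumed installed: `pip install mpmath`).
# The expression (1 - cos(x)) / x**2 tends to 0.5 as x -> 0, but in double
# precision the subtraction cancels catastrophically for tiny x.
import math
from mpmath import mp, mpf, cos

x = 1e-8

# 64-bit floating point: cos(x) typically rounds to exactly 1.0, so we get 0.0.
double_result = (1.0 - math.cos(x)) / x**2
print("double precision  :", double_result)

# 50 significant digits: the cancellation no longer destroys the answer.
mp.dps = 50
xm = mpf("1e-8")
print("50-digit precision:", (1 - cos(xm)) / xm**2)   # ~0.4999999999999999996
```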

  15. 76 FR 14323 - Small Business Size Standards: Professional, Scientific and Technical Services

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-16

    ... From the Federal Register Online via the Government Publishing Office SMALL BUSINESS ADMINISTRATION 13 CFR Part 121 RIN 3245-AG07 Small Business Size Standards: Professional, Scientific and Technical Services AGENCY: U.S. Small Business Administration. ACTION: Proposed rule. SUMMARY: The...

  16. Pre-Service Elementary Mathematics Teachers' Metaphors on Scientific Research and Foundations of Their Perceptions

    ERIC Educational Resources Information Center

    Bas, Fatih

    2016-01-01

    In this study, it is aimed to investigate pre-service elementary mathematics teachers' perceptions about scientific research with metaphor analysis and determine the foundations of these perceptions. This phenomenological study was conducted with 182 participants. The data were collected with two open-ended survey forms formed for investigating…

  17. CACTUS: Calculator and Computer Technology User Service.

    ERIC Educational Resources Information Center

    Hyde, Hartley

    1998-01-01

    Presents an activity in which students use computer-based spreadsheets to find out how much grain should be added to a chess board when a grain of rice is put on the first square, the amount is doubled for the next square, and the chess board is covered. (ASK)

  18. Computers in the Service of Science,

    DTIC Science & Technology

    1979-05-02

    ... Joint Institute of Nuclear Research in Dubna ... Regional Computer Center ... the University, the Academy of Mining and Metallurgy, the Cracow Polytechnic, the Economic Academy, the Agricultural Academy, the Medical Academy, the

  19. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    SciTech Connect

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to +20 year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three

  20. Thomson Scientific's expanding Web of Knowledge: beyond citation databases and current awareness services.

    PubMed

    London, Sue; Brahmi, Frances A

    2005-01-01

    As end-user demand for easy access to electronic full text continues to climb, an increasing number of information providers are combining that access with their other products and services, making navigating their Web sites by librarians seeking information on a given product or service more daunting than ever. One such provider of a complex array of products and services is Thomson Scientific. This paper looks at some of the many products and tools available from two of Thomson Scientific's businesses, Thomson ISI and Thomson ResearchSoft. Among the items of most interest to health sciences and veterinary librarians and their users are the variety of databases available via the ISI Web of Knowledge platform and the information management products available from ResearchSoft.

  1. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  2. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    SciTech Connect

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  3. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  4. Smart learning services based on smart cloud computing.

    PubMed

    Kim, Svetlana; Song, Su-Mi; Yoon, Yong-Ik

    2011-01-01

    Context-aware technologies can make e-learning services smarter and more efficient since context-aware services are based on the user's behavior. To add those technologies into existing e-learning services, a service architecture model is needed to transform the existing e-learning environment, which is situation-aware, into the environment that understands context as well. The context-awareness in e-learning may include the awareness of user profile and terminal context. In this paper, we propose a new notion of service that provides context-awareness to smart learning content in a cloud computing environment. We suggest the elastic four smarts (E4S)--smart pull, smart prospect, smart content, and smart push--concept to the cloud services so smart learning services are possible. The E4S focuses on meeting the users' needs by collecting and analyzing users' behavior, prospecting future services, building corresponding contents, and delivering the contents through a cloud computing environment. Users' behavior can be collected through mobile devices such as smart phones that have built-in sensors. As a result, the proposed smart e-learning model in a cloud computing environment provides personalized and customized learning services to its users.

  5. Smart Learning Services Based on Smart Cloud Computing

    PubMed Central

    Kim, Svetlana; Song, Su-Mi; Yoon, Yong-Ik

    2011-01-01

    Context-aware technologies can make e-learning services smarter and more efficient since context-aware services are based on the user’s behavior. To add those technologies into existing e-learning services, a service architecture model is needed to transform the existing e-learning environment, which is situation-aware, into the environment that understands context as well. The context-awareness in e-learning may include the awareness of user profile and terminal context. In this paper, we propose a new notion of service that provides context-awareness to smart learning content in a cloud computing environment. We suggest the elastic four smarts (E4S)—smart pull, smart prospect, smart content, and smart push—concept to the cloud services so smart learning services are possible. The E4S focuses on meeting the users’ needs by collecting and analyzing users’ behavior, prospecting future services, building corresponding contents, and delivering the contents through a cloud computing environment. Users’ behavior can be collected through mobile devices such as smart phones that have built-in sensors. As a result, the proposed smart e-learning model in a cloud computing environment provides personalized and customized learning services to its users. PMID:22164048

  6. RAPPORT: running scientific high-performance computing applications on the cloud.

    PubMed

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  7. Biomedical cloud computing with Amazon Web Services.

    PubMed

    Fusaro, Vincent A; Patil, Prasad; Gafni, Erik; Wall, Dennis P; Tonellato, Peter J

    2011-08-01

    In this overview to biomedical computing in the cloud, we discussed two primary ways to use the cloud (a single instance or cluster), provided a detailed example using NGS mapping, and highlighted the associated costs. While many users new to the cloud may assume that entry is as straightforward as uploading an application and selecting an instance type and storage options, we illustrated that there is substantial up-front effort required before an application can make full use of the cloud's vast resources. Our intention was to provide a set of best practices and to illustrate how those apply to a typical application pipeline for biomedical informatics, but also general enough for extrapolation to other types of computational problems. Our mapping example was intended to illustrate how to develop a scalable project and not to compare and contrast alignment algorithms for read mapping and genome assembly. Indeed, with a newer aligner such as Bowtie, it is possible to map the entire African genome using one m2.2xlarge instance in 48 hours for a total cost of approximately $48 in computation time. In our example, we were not concerned with data transfer rates, which are heavily influenced by the amount of available bandwidth, connection latency, and network availability. When transferring large amounts of data to the cloud, bandwidth limitations can be a major bottleneck, and in some cases it is more efficient to simply mail a storage device containing the data to AWS (http://aws.amazon.com/importexport/). More information about cloud computing, detailed cost analysis, and security can be found in references.
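
    The closing cost estimate above is straightforward instance-hour arithmetic. The sketch below reproduces that calculation; the $1.00/hour rate is inferred from the abstract's "$48 in 48 hours" figure and is an assumption, not a current AWS price.

        # Cost arithmetic behind the mapping example above. The hourly rate is
        # inferred from the abstract's "$48 in 48 hours" figure and is an
        # assumption, not a current AWS price.
        ASSUMED_HOURLY_RATE_USD = 1.00   # m2.2xlarge on-demand, as implied above
        RUN_HOURS = 48

        def estimated_cost(hours, hourly_rate, instances=1):
            """Estimate the on-demand cost of a cloud computation."""
            return hours * hourly_rate * instances

        print(f"Estimated cost: ${estimated_cost(RUN_HOURS, ASSUMED_HOURLY_RATE_USD):.2f}")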

  8. VRESCo - Vienna Runtime Environment for Service-oriented Computing

    NASA Astrophysics Data System (ADS)

    Hummer, Waldemar; Leitner, Philipp; Michlmayr, Anton; Rosenberg, Florian; Dustdar, Schahram

    Throughout the last years, the Service-Oriented Architecture (SOA) paradigm has been promoted as a means to create loosely coupled distributed applications. In theory, SOAs make use of a service registry, which can be used by providers to publish their services and by clients to discover these services in order to execute them. However, service registries such as UDDI did not succeed and are rarely used today. In practice, the binding often takes place at design time (for instance by generating client-side stubs), which leads to a tighter coupling between service endpoints. Alternative solutions using dynamic invocations often lack a data abstraction and require developers to construct messages on XML or SOAP level. In this paper we present VRESCo, the Vienna Runtime Environment for Service-oriented Computing, which addresses several distinct issues that are currently prevalent in Service-Oriented Architecture (SOA) research and practice. VRESCo reemphasizes the importance of registries to support dynamic selection, binding and invocation of services. Service providers publish their services and clients retrieve the data stored in the registry using a specialized query language. The data model distinguishes between abstract features and concrete service implementations, which enables grouping of services according to their functionality. An abstracted message format allows VRESCo to mediate between services which provide the same feature but use a different message syntax. Furthermore, VRESCo allows for explicit versioning of services. In addition to functional entities, the VRESCo service metadata model contains QoS (Quality of Service) attributes. Clients can be configured to dynamically rebind to different service instances based on the QoS data. The paper presents an illustrative scenario taken from the telecommunications domain, which serves as the basis for the discussion of the features of VRESCo.
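
    As a rough illustration of the QoS-driven rebinding idea described above, the sketch below selects among candidate implementations of one abstract feature by a QoS attribute. The data model and attribute names are invented for illustration and do not reflect the actual VRESCo API.

        # Illustrative sketch of QoS-based service selection (not the VRESCo API).
        # Each candidate implements the same abstract feature; the client rebinds
        # to whichever available endpoint currently reports the best response time.
        candidates = [
            {"endpoint": "http://provider-a/sms", "avg_response_ms": 120, "available": True},
            {"endpoint": "http://provider-b/sms", "avg_response_ms": 80,  "available": True},
            {"endpoint": "http://provider-c/sms", "avg_response_ms": 60,  "available": False},
        ]

        def select_endpoint(services):
            """Pick the available implementation with the lowest response time."""
            usable = [s for s in services if s["available"]]
            return min(usable, key=lambda s: s["avg_response_ms"])["endpoint"]

        print(select_endpoint(candidates))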

  9. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    SciTech Connect

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies shows that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
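
    The abstract describes an SQL-like constraint language for task placement. The sketch below mimics that idea in plain Python by filtering candidate task-to-processor assignments against user constraints; it is an illustration only, not the constraint language described in the paper, and all task and host attributes are invented.

        # Illustrative sketch of constraint-based task placement (not the paper's
        # SQL-based constraint language). Constraints are predicates over a
        # candidate (task, processor) assignment; attributes are invented.
        tasks = [{"name": "fft", "mem_mb": 512}, {"name": "render", "mem_mb": 2048}]
        processors = [{"host": "ws01", "free_mb": 1024, "load": 0.2},
                      {"host": "ws02", "free_mb": 4096, "load": 0.7}]

        constraints = [
            lambda t, p: t["mem_mb"] <= p["free_mb"],   # task fits in memory
            lambda t, p: p["load"] < 0.9,               # processor utilization ceiling
        ]

        def feasible_assignments(tasks, processors, constraints):
            """Yield (task, processor) pairs satisfying every constraint."""
            for t in tasks:
                for p in processors:
                    if all(c(t, p) for c in constraints):
                        yield t["name"], p["host"]

        print(list(feasible_assignments(tasks, processors, constraints)))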

  10. Institute for Scientific Computing Research Annual Report for Fiscal Year 2003

    SciTech Connect

    Keyes, D; McGraw, J

    2004-02-12

    The University Relations Program (URP) encourages collaborative research between Lawrence Livermore National Laboratory (LLNL) and the University of California campuses. The Institute for Scientific Computing Research (ISCR) actively participates in such collaborative research, and this report details the Fiscal Year 2003 projects jointly served by URP and ISCR.

  11. Modeling input space for testing scientific computational software: a case study

    SciTech Connect

    Vilkomir, Sergiy; Swain, W. Thomas; Poore, Jr., Jesse; Clarno, Kevin T

    2008-01-01

    An application of a method of test case generation for scientific computational software is presented. NEWTRNX, neutron transport software being developed at Oak Ridge National Laboratory, is treated as a case study. A model of dependencies between input parameters of NEWTRNX is created. Results of NEWTRNX model analysis and test case generation are evaluated.

  12. The Wooly Mammoth as a Computer-Simulated Scientific Problem-Solving Tool.

    ERIC Educational Resources Information Center

    Szabo, Michael

    Mammo I and Mammo II are two versions of a computer simulation based upon scientific problems surrounding the finds of carcasses of the Wooly Mammoth in Siberia. The simulation program consists of two parts: the data base and program logic. The purpose of the data pieces is to provide data of an informative nature and to enable problem solvers to…

  13. The application of cloud computing to scientific workflows: a study of cost and performance.

    PubMed

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.

  14. Scientific Evidence as Content Knowledge: A Replication Study with English and Turkish Pre-Service Primary Teachers

    ERIC Educational Resources Information Center

    Roberts, Ros; Sahin-Pekmez, Esin

    2012-01-01

    Pre-service teachers around the world need to develop their content knowledge of scientific evidence to meet the requirements of recent school curriculum developments which prepare pupils to be scientifically literate. This research reports a replication study in Turkey of an intervention originally carried out with pre-service primary teachers in…

  15. Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob

    2003-01-01

    The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.

  16. Evaluation of cache-based superscalar and cacheless vector architectures for scientific computations

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; Van der Wijngaart, Rob

    2003-05-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high end computing. The recent development of parallel vector systems offers the potential to bridge this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of scientific computing areas. First, we present the performance of a microbenchmark suite that examines low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Results demonstrate that the SX-6 achieves high performance on a large fraction of our applications and often significantly outperforms the cache-based architectures. However, certain applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.

  17. Acts -- A collection of high performing software tools for scientific computing

    SciTech Connect

    Drummond, L.A.; Marques, O.A.

    2002-11-01

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Further, many new discoveries depend on high performance computer simulations to satisfy their demands for large computational resources and short response time. The Advanced CompuTational Software (ACTS) Collection brings together a number of general-purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS collection promotes code portability, reusability, reduction of duplicate efforts, and tool maturity. This paper presents a brief introduction to the functionality available in ACTS. It also highlights the tools that are in demand by Climate and Weather modelers.

  18. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    SciTech Connect

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which-if any-of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  19. A fault detection service for wide area distributed computations.

    SciTech Connect

    Stelling, P.

    1998-06-09

    The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and as part of the NetSolve network-enabled numerical solver.
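
    The timeliness-versus-false-positive tradeoff described above is typically realized with heartbeat timeouts. The sketch below shows that mechanism in miniature; it is an illustration of the general unreliable-failure-detector idea, not the service's actual interface, and the component name is a placeholder.

        # Minimal sketch of an unreliable (timeout-based) failure detector.
        # A longer timeout reports failures later but produces fewer false
        # positives when heartbeats are merely delayed.
        import time

        class HeartbeatMonitor:
            def __init__(self, timeout_seconds):
                self.timeout = timeout_seconds
                self.last_seen = {}

            def heartbeat(self, component):
                """Record that a component was heard from just now."""
                self.last_seen[component] = time.monotonic()

            def suspected_failures(self):
                """Return components not heard from within the timeout window."""
                now = time.monotonic()
                return [c for c, t in self.last_seen.items() if now - t > self.timeout]

        monitor = HeartbeatMonitor(timeout_seconds=5.0)
        monitor.heartbeat("grid-node-01")          # placeholder component name
        print(monitor.suspected_failures())        # empty until the timeout elapses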

  20. The UK Human Genome Mapping Project online computing service.

    PubMed

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.

  1. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Service oriented architectures allow modelling engines to be hosted over the Internet abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on user's personal computers (PCs). Migration ...

  2. Greek Pre-Service Teachers' Intentions to Use Computers as In-Service Teachers

    ERIC Educational Resources Information Center

    Fokides, Emmanuel

    2017-01-01

    The study examines the factors affecting Greek pre-service teachers' intention to use computers when they become practicing teachers. Four variables (perceived usefulness, perceived ease of use, self-efficacy, and attitude toward use) as well as behavioral intention to use computers were used so as to build a research model that extended the…

  3. MiniGhost : a miniapp for exploring boundary exchange strategies using stencil computations in scientific parallel computing.

    SciTech Connect

    Barrett, Richard Frederick; Heroux, Michael Allen; Vaughan, Courtenay Thomas

    2012-04-01

    A broad range of scientific computation involves the use of difference stencils. In a parallel computing environment, this computation is typically implemented by decomposing the spatial domain, inducing a 'halo exchange' of process-owned boundary data. This approach adheres to the Bulk Synchronous Parallel (BSP) model. Because commonly available architectures provide strong inter-node bandwidth relative to latency costs, many codes 'bulk up' these messages by aggregating data into a message as a means of reducing the number of messages. A renewed focus on non-traditional architectures and architecture features provides new opportunities for exploring alternatives to this programming approach. In this report we describe miniGhost, a 'miniapp' designed for exploration of the capabilities of current as well as emerging and future architectures within the context of these sorts of applications. MiniGhost joins the suite of miniapps developed as part of the Mantevo project.
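
    As a rough, single-process illustration of the halo-exchange pattern described above, the numpy sketch below copies the boundary cells of two neighbouring subdomains into each other's ghost cells before a stencil sweep. It stands in for the MPI message exchange a real BSP code would perform and is not code from miniGhost.

        # Single-process sketch of a 1-D halo (ghost-cell) exchange between two
        # neighbouring subdomains, standing in for MPI messages. Not from miniGhost.
        import numpy as np

        n = 8
        left = np.zeros(n + 2)    # interior cells 1..n, ghost cells 0 and n+1
        right = np.ones(n + 2)

        def exchange_halos(a, b):
            """Copy each subdomain's boundary cell into the neighbour's ghost cell."""
            a[-1] = b[1]    # right ghost of `a` gets left boundary of `b`
            b[0] = a[-2]    # left ghost of `b` gets right boundary of `a`

        exchange_halos(left, right)
        # A stencil sweep over interior cells can now read the ghost values.
        left[1:-1] = 0.5 * (left[:-2] + left[2:])
        print(left)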

  4. Relative performances of several scientific computers for a liquid molecular dynamics simulation. [Computers tested are: VAX 11/70, CDC 7600, CRAY-1, CRAY-1*, VAX-FPSAP]

    SciTech Connect

    Ceperley, D.M.

    1980-08-01

    Some of the computational characteristics of simulations and the author's experience in using his standard simulation program called CLAMPS on several scientific computers are discussed. CLAMPS is capable of performing Metropolis Monte Carlo and Molecular Dynamics simulations of arbitrary mixtures of single atoms. The computational characteristics of simulations and what makes a good simulation computer are also summarized.

  5. Scientific computation of big data in real-world clinical research.

    PubMed

    Li, Guozheng; Zuo, Xuewen; Liu, Baoyan

    2014-09-01

    The advent of the big data era creates both opportunities and challenges for traditional Chinese medicine (TCM). This study describes the origin, concept, connotation, and value of studies regarding the scientific computation of TCM. It also discusses the integration of science, technology, and medicine under the guidance of the paradigm of real-world, clinical scientific research. TCM clinical diagnosis, treatment, and knowledge were traditionally limited to literature and sensation levels; however, primary methods are used to convert them into statistics, such as the methods of feature subset optimizing, multi-label learning, and complex networks based on complexity, intelligence, data, and computing sciences. Furthermore, these methods are applied in the modeling and analysis of the various complex relationships in individualized clinical diagnosis and treatment, as well as in decision-making related to such diagnosis and treatment. Thus, these methods strongly support the real-world clinical research paradigm of TCM.

  6. 5 CFR 630.301 - Annual leave accrual and accumulation-Senior Executive Service, Senior-Level, and Scientific and...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    5 CFR 630.301 (Annual Leave): Annual leave accrual and accumulation—Senior Executive Service, Senior-Level, and Scientific and Professional Employees. The excerpted text applies to an employee who serves the full pay period and who (1) holds a position in the Senior Executive Service (SES) which...

  7. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
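
    The report describes identifying records by a relation name plus up to five integer key values. The sketch below is a behavioural Python analogue of that access pattern only; DB90 itself is a Fortran/C routine, and the relation and field names here are invented.

        # Behavioural sketch of the DB90 access pattern described above:
        # records addressed by a relation name plus up to five integer keys.
        # This is a Python illustration, not the Fortran interface itself.
        class RelationalStore:
            def __init__(self):
                self._data = {}

            def put(self, relation, keys, record):
                assert 1 <= len(keys) <= 5, "up to 5 integer key values"
                self._data[(relation, tuple(keys))] = record

            def get(self, relation, keys):
                return self._data[(relation, tuple(keys))]

        store = RelationalStore()
        store.put("loads", (3, 1), {"node": 3, "case": 1, "force": 12.5})
        print(store.get("loads", (3, 1)))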

  8. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    SciTech Connect

    Lucas, Robert; Ang, James; Bergman, Keren; Borkar, Shekhar; Carlson, William; Carrington, Laura; Chiu, George; Colwell, Robert; Dally, William; Dongarra, Jack; Geist, Al; Haring, Rud; Hittinger, Jeffrey; Hoisie, Adolfy; Klein, Dean Micron; Kogge, Peter; Lethin, Richard; Sarkar, Vivek; Schreiber, Robert; Shalf, John; Sterling, Thomas; Stevens, Rick; Bashor, Jon; Brightwell, Ron; Coteus, Paul; Debenedictus, Erik; Hiller, Jon; Kim, K. H.; Langston, Harper; Murphy, Richard Micron; Webster, Clayton; Wild, Stefan; Grider, Gary; Ross, Rob; Leyffer, Sven; Laros III, James

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  9. The Expanding Use of Computers in Reference Service.

    ERIC Educational Resources Information Center

    Ensor, Pat

    1982-01-01

    Briefly looks at the present use of computers in library reference service for bibliographic database searching, then surveys the future as indicated by recent developments and ongoing research. Cooperative online reference, local databases, and increasing ease of online information retrieval are discussed. Included are 40 references. (Author/JL)

  10. Incorporating Web services into Earth Science Computational Environments

    NASA Astrophysics Data System (ADS)

    Fox, G.

    2002-12-01

    Grid technology promises to greatly enhance the analysis of data and their integration into all Earth Science fields. To prepare for this, one should "package" applications as Web Services using standards developed by the computer industry and the W3C consortium. We report on some early experience with several earthquake simulation programs.

  11. A "Service-Learning Approach" to Teaching Computer Graphics

    ERIC Educational Resources Information Center

    Hutzel, Karen

    2007-01-01

    The author taught a computer graphics course through a service-learning framework to undergraduate and graduate students in the spring of 2003 at Florida State University (FSU). The students in this course participated in learning a software program along with youths from a neighboring, low-income, primarily African-American community. Together,…

  12. Scientific Grand Challenges: Crosscutting Technologies for Computing at the Exascale - February 2-4, 2010, Washington, D.C.

    SciTech Connect

    Khaleel, Mohammad A.

    2011-02-06

    The goal of the "Scientific Grand Challenges - Crosscutting Technologies for Computing at the Exascale" workshop in February 2010, jointly sponsored by the U.S. Department of Energy’s Office of Advanced Scientific Computing Research and the National Nuclear Security Administration, was to identify the elements of a research and development agenda that will address these challenges and create a comprehensive exascale computing environment. This exascale computing environment will enable the science applications identified in the eight previously held Scientific Grand Challenges Workshop Series.

  13. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    PubMed

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.

  14. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    PubMed Central

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
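
    For orientation, the sketch below implements the MinMin baseline the authors compare against and reports its makespan on toy data. GBLCA itself is not reproduced here, and the task runtimes are invented.

        # Sketch of the MinMin baseline scheduler used for comparison above.
        # runtimes[i][j] = estimated execution time of task i on VM j (invented).
        runtimes = [[4.0, 6.0], [3.0, 2.0], [5.0, 7.0]]

        def minmin_makespan(runtimes):
            """Greedily map each remaining task to its earliest completion time."""
            ready = [0.0] * len(runtimes[0])          # per-VM ready times
            unscheduled = set(range(len(runtimes)))
            while unscheduled:
                # pick the task/VM pair with the minimum completion time
                task, vm = min(((t, v) for t in unscheduled for v in range(len(ready))),
                               key=lambda tv: ready[tv[1]] + runtimes[tv[0]][tv[1]])
                ready[vm] += runtimes[task][vm]
                unscheduled.remove(task)
            return max(ready)

        print(minmin_makespan(runtimes))   # makespan of the toy schedule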

  15. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The lifecycle of information involves various processes such as capturing, preserving, manipulating and delivering information. E-Governance is meant to transform governance so that it serves citizens better, in a way that is transparent, reliable, participatory, and accountable. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government, supports innovation, identifies ways of delivering services optimally to citizens, and can be implemented in a transparent and accountable manner. The paper also focuses on the E-government Service Manager as a key element of the service-oriented computing model, providing a dynamically extensible architecture in which every department or branch can introduce innovative services. At its heart, the paper examines a conceptual model that enables e-government communication among business, citizens, government and autonomous bodies.

  16. Development of data infrastructure to support scientific analysis for the International GNSS Service

    NASA Astrophysics Data System (ADS)

    Noll, C.; Bock, Y.; Habrich, H.; Moore, A.

    2009-03-01

    The International GNSS Service provides data and products to support a wide range of global, multidisciplinary scientific research. The service has established a hierarchy of components to facilitate its mission: a globally distributed network of Tracking Stations, Data Centers, Analysis Centers, a Central Bureau, and a Governing Board. The Data Centers, in conjunction with the Central Bureau, serve as the primary means of distributing GNSS data, products, and general information to the user community through ftp and Web servers and email services. The requirements of analysis centers and the scientific community have evolved over the lifetime of the IGS, requiring enhancement and extension of the supporting data center infrastructure. The diversity of IGS data and products extends today from the realm of the real-time and near real-time to the long-term archive and thus forms a basis for multidisciplinary research spanning decades. Reliability of all components is a key requirement within the IGS and is accomplished through the geographic distribution of data centers and the creation of independent, redundant, parallel channels for the transmission of data and products. We discuss the development of the IGS data infrastructure, current status, and plans for future enhancements. Descriptions of IGS data and products and associated metadata are also included.

  17. A multi-service data management platform for scientific oceanographic products

    NASA Astrophysics Data System (ADS)

    D'Anca, Alessandro; Conte, Laura; Nassisi, Paola; Palazzo, Cosimo; Lecci, Rita; Cretì, Sergio; Mancini, Marco; Nuzzo, Alessandra; Mirto, Maria; Mannarini, Gianandrea; Coppini, Giovanni; Fiore, Sandro; Aloisio, Giovanni

    2017-02-01

    An efficient, secure and interoperable data platform solution has been developed in the TESSA project to provide fast navigation and access to the data stored in the data archive, as well as a standard-based metadata management support. The platform mainly targets scientific users and the situational sea awareness high-level services such as the decision support systems (DSS). These datasets are accessible through the following three main components: the Data Access Service (DAS), the Metadata Service and the Complex Data Analysis Module (CDAM). The DAS allows access to data stored in the archive by providing interfaces for different protocols and services for downloading, variables selection, data subsetting or map generation. Metadata Service is the heart of the information system of the TESSA products and completes the overall infrastructure for data and metadata management. This component enables data search and discovery and addresses interoperability by exploiting widely adopted standards for geospatial data. Finally, the CDAM represents the back-end of the TESSA DSS by performing on-demand complex data analysis tasks.

  18. Models and Simulations as a Service: Exploring the Use of Galaxy for Delivering Computational Models

    PubMed Central

    Walker, Mark A.; Madduri, Ravi; Rodriguez, Alex; Greenstein, Joseph L.; Winslow, Raimond L.

    2016-01-01

    We describe the ways in which Galaxy, a web-based reproducible research platform, can be used for web-based sharing of complex computational models. Galaxy allows users to seamlessly customize and run simulations on cloud computing resources, a concept we refer to as Models and Simulations as a Service (MaSS). To illustrate this application of Galaxy, we have developed a tool suite for simulating a high spatial-resolution model of the cardiac Ca2+ spark that requires supercomputing resources for execution. We also present tools for simulating models encoded in the SBML and CellML model description languages, thus demonstrating how Galaxy’s reproducible research features can be leveraged by existing technologies. Finally, we demonstrate how the Galaxy workflow editor can be used to compose integrative models from constituent submodules. This work represents an important novel approach, to our knowledge, to making computational simulations more accessible to the broader scientific community. PMID:26958881

  19. EDP Sciences and A&A: partnering to providing services to support the scientific community

    NASA Astrophysics Data System (ADS)

    Henri, Agnes

    2015-08-01

    Scholarly publishing is no longer about simply producing and packaging articles and sending out to subscribers. To be successful, as well as being global and digital, Publishers and their journals need to be fully engaged with their stakeholders (authors, readers, funders, libraries etc), and constantly developing new products and services to support their needs in the ever-changing environment that we work in.Astronomy & Astrophysics (A&A) is a high quality, major international Journal that belongs to the astronomical communities of a consortium of European and South American countries supported by ESO who sponsor the journal. EDP Sciences is a non-profit publisher belonging to several learned societies and is appointed by ESO to publish the journal.Over the last decade, as well as publishing the results of worldwide astronomical and astrophysical research, A&A and EDP Sciences have worked in partnership to develop a wide range of services for the authors and readers of A&A:- A specialist language editing service: to provide a clear and excellent level of English ensuring full understanding of the high-quality science.- A flexible and progressive Open Access Policy including Gold and Green options and strong links with arXiv.- Enriched articles: authors are able to enhance their articles using a wide range of rich media such as 3D models, videos and animations.Multiple publishing formats: allowing readers to browse articles on multiple devices including eReaders and Kindles.- “Scientific Writing for Young Astronomers”: In 2008 EDP Sciences and A&A set up the Scientific Writing for Young Astronomers (SWYA) School with the objective to teach early PhD Students how write correct and efficient scientific papers for different mediums (journals, proceedings, thesis manuscripts, etc.).

  20. A directory service for configuring high-performance distributed computations

    SciTech Connect

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
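
    Since the directory adopts the LDAP data representation and application programming interface, a resource lookup would resemble the hedged ldap3 sketch below. The server address, search base, object class, and attribute names are placeholders for illustration and do not reflect the actual MDS schema; a live LDAP server is required for the query to return entries.

        # Hedged sketch of an LDAP-style resource query against a metacomputing
        # directory. Host, base DN, filter, and attribute names are placeholders.
        from ldap3 import Server, Connection, ALL

        server = Server("ldap://mds.example.org", get_info=ALL)   # placeholder host
        conn = Connection(server, auto_bind=True)                 # anonymous bind

        # Ask for compute nodes and a few dynamic state attributes.
        conn.search(
            search_base="o=Grid",                       # placeholder base DN
            search_filter="(objectClass=computeNode)",  # placeholder object class
            attributes=["hostName", "freeCpus", "loadAverage"],
        )
        for entry in conn.entries:
            print(entry)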

  1. Balancing the Pros and Cons of GMOs: Socio-Scientific Argumentation in Pre-Service Teacher Education

    ERIC Educational Resources Information Center

    Cinici, Ayhan

    2016-01-01

    This study investigates the role of the discursive process in the act of scientific knowledge building. Specifically, it links scientific knowledge building to risk perception of Genetically Modified Organisms (GMOs). To this end, this study designed and implemented a three-stage argumentation programme giving pre-service teachers (PSTs) the…

  2. Software, component, and service deployment in computational Grids.

    SciTech Connect

    von Laszewski, G.; Blau, E.; Bletzinger, M.; Gawor, J.; Lane, P.; Martin, S.; Russell, M.

    2002-04-18

    Grids comprise an infrastructure that enables scientists to use a diverse set of distributed remote services and resources as part of complex scientific problem-solving processes. We analyze some of the challenges involved in deploying software and components transparently in Grids. We report on three practical solutions used by the Globus Project. Lessons learned from this experience lead us to believe that it is necessary to support a variety of software and component deployment strategies. These strategies are based on the hosting environment.

  3. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    SciTech Connect

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank; Ma, Kwan-Liu; Geveci, Berk; Meredith, Jeremy

    2015-12-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  4. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    SciTech Connect

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank; Ma, Kwan-Liu; Geveci, Berk; Meredith, Jeremy

    2016-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  5. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    SciTech Connect

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  6. Primary pre-service teachers' skills in planning a guided scientific inquiry

    NASA Astrophysics Data System (ADS)

    García-Carmona, Antonio; Criado, Ana M.; Cruz-Guzmán, Marta

    2016-08-01

    A study is presented of the skills that primary pre-service teachers (PPTs) have in completing the planning of a scientific inquiry on the basis of a guiding script. The sample comprised 66 PPTs who constituted a group-class of the subject Science Teaching, taught in the second year of an undergraduate degree in primary education at a Spanish university. The data was acquired from the responses of the PPTs (working in teams) to open-ended questions posed to them in the script concerning the various tasks involved in a scientific inquiry (formulation of hypotheses, design of the experiment, data collection, interpretation of results, drawing conclusions). Data were analyzed within the framework of a descriptive-interpretive qualitative research study with a combination of inter- and intra-rater methods, and the use of low-inference descriptors. The results showed that the PPTs have major shortcomings in planning the complete development of a guided scientific inquiry. The discussion of the results includes a number of implications for rethinking the Science Teaching course so that PPTs can attain a basic level of training in inquiry-based science education.

  7. A computer-supported information system for forensic services.

    PubMed

    Petrila, J P; Hedlund, J L

    1983-05-01

    Recently many state departments of mental health have decentralized their forensic services programs. This trend has increased administrative needs for accurate, easily accessible information on the forensic services' caseload. The Missouri Department of Mental Health and the Missouri Institute of Psychiatry have developed and implemented a computer-supported system that provides data on the number of cases referred by criminal courts, the questions asked by the courts, the clinical answers to those questions, and demographic information about the evaluated population. The system is a part of the department's other computer systems so that forensic clients may be tracked through various mental health facilities. Mental health administrators may use the system to monitor department policies, ensure appropriate allocation of resources, and improve the quality of forensic reports.

  8. Smart Libraries: Best SQE Practices for Libraries with an Emphasis on Scientific Computing

    SciTech Connect

    Miller, M C; Reus, J F; Matzke, R P; Koziol, Q A; Cheng, A P

    2004-12-15

    As scientific computing applications grow in complexity, more and more functionality is being packaged in independently developed libraries. Worse, as the computing environments in which these applications run grow in complexity, it gets easier to make mistakes in building, installing and using libraries as well as the applications that depend on them. Unfortunately, SQA standards so far developed focus primarily on applications, not libraries. We show that SQA standards for libraries differ from applications in many respects. We introduce and describe a variety of practices aimed at minimizing the likelihood of making mistakes in using libraries and at maximizing users' ability to diagnose and correct them when they occur. We introduce the term Smart Library to refer to a library that is developed with these basic principles in mind. We draw upon specific examples from existing products we believe incorporate smart features: MPI, a parallel message passing library, and HDF5 and SAF, both of which are parallel I/O libraries supporting scientific computing applications. We conclude with a narrative of some real-world experiences in using smart libraries with Ale3d, VisIt and SAF.

  9. Fortran Transformational Tools in Support of Scientific Application Development for Petascale Computer Architectures

    SciTech Connect

    Sottille, Matthew

    2013-09-12

    This document is the final report for a multi-year effort building infrastructure to support tool development for Fortran programs. We also investigated static analysis and code transformation methods relevant to scientific programmers who are writing Fortran programs for petascale-class high performance computing systems. This report details our accomplishments, technical approaches, and provides information on where the research results and code may be obtained from an open source software repository. The report for the first year of the project that was performed at the University of Oregon prior to the PI moving to Galois, Inc. is included as an appendix.

  10. Using Social Media to Promote Pre-Service Science Teachers' Practices of Socio-Scientific Issue (SSI) - Based Teaching

    ERIC Educational Resources Information Center

    Pitiporntapin, Sasithep; Lankford, Deanna Marie

    2015-01-01

    This paper addresses using social media to promote pre-service science teachers' practices of Socio-Scientific Issue (SSI) based teaching in a science classroom setting. We designed our research in two phases. The first phase examined pre-service science teachers' perceptions about using social media to promote their SSI-based teaching. The…

  11. An Investigation into Specifying Service Level Agreements for Provisioning Cloud Computing Services

    DTIC Science & Technology

    2012-12-01

    ...of the computing environment. An SLA for a traditional network system covers network support services, application performance, client-side... (remaining text in the source record is table-of-contents residue: Lessons Learned from Amazon EC2 Blackouts; Conclusions; Issues and Lessons Learned)

  12. Computational Scientific Inquiry with Virtual Worlds and Agent-Based Models: New Ways of Doing Science to Learn Science

    ERIC Educational Resources Information Center

    Jacobson, Michael J.; Taylor, Charlotte E.; Richards, Deborah

    2016-01-01

    In this paper, we propose computational scientific inquiry (CSI) as an innovative model for learning important scientific knowledge and new practices for "doing" science. This approach involves the use of a "game-like" virtual world for students to experience virtual biological fieldwork in conjunction with using an agent-based…

  13. Challenges and Opportunities in Using Automatic Differentiation with Object-Oriented Toolkits for Scientific Computing

    SciTech Connect

    Hovland, P; Lee, S; McInnes, L; Norris, B; Smith, B

    2001-04-17

    The increased use of object-oriented toolkits in large-scale scientific simulation presents new opportunities and challenges for the use of automatic (or algorithmic) differentiation (AD) techniques, especially in the context of optimization. Because object-oriented toolkits use well-defined interfaces and data structures, there is potential for simplifying the AD process. Furthermore, derivative computation can be improved by exploiting high-level information about numerical and computational abstractions. However, challenges to the successful use of AD with these toolkits also exist. Among the greatest challenges is balancing the desire to limit the scope of the AD process with the desire to minimize the work required of a user. They discuss their experiences in integrating AD with the PETSc, PVODE, and TAO toolkits and the plans for future research and development in this area.

  14. JavaTech, an Introduction to Scientific and Technical Computing with Java

    NASA Astrophysics Data System (ADS)

    Lindsey, Clark S.; Tolliver, Johnny S.; Lindblad, Thomas

    2010-06-01

    Preface; Acknowledgements; Part I. Introduction to Java: 1. Introduction; 2. Language basics; 3. Classes and objects in Java; 4. More about objects in Java; 5. Organizing Java files and other practicalities; 6. Java graphics; 7. Graphical user interfaces; 8. Threads; 9. Java input/output; 10. Java utilities; 11. Image handling and processing; 12. More techniques and tips; Part II. Java and the Network: 13. Java networking basics; 14. A Java web server; 15. Client/server with sockets; 16. Distributed computing; 17. Distributed computing - the client; 18. Java remote method invocation (RMI); 19. CORBA; 20. Distributed computing - putting it all together; 21. Introduction to web services and XML; Part III. Out of the Sandbox: 22. The Java native interface (JNI); 23. Accessing the platform; 24. Embedded Java; Appendices; Index.

  15. JavaTech, an Introduction to Scientific and Technical Computing with Java

    NASA Astrophysics Data System (ADS)

    Lindsey, Clark S.; Tolliver, Johnny S.; Lindblad, Thomas

    2005-10-01

    Preface; Acknowledgements; Part I. Introduction to Java: 1. Introduction; 2. Language basics; 3. Classes and objects in Java; 4. More about objects in Java; 5. Organizing Java files and other practicalities; 6. Java graphics; 7. Graphical user interfaces; 8. Threads; 9. Java input/output; 10. Java utilities; 11. Image handling and processing; 12. More techniques and tips; Part II. Java and the Network: 13. Java networking basics; 14. A Java web server; 15. Client/server with sockets; 16. Distributed computing; 17. Distributed computing - the client; 18. Java remote method invocation (RMI); 19. CORBA; 20. Distributed computing - putting it all together; 21. Introduction to web services and XML; Part III. Out of the Sandbox: 22. The Java native interface (JNI); 23. Accessing the platform; 24. Embedded Java; Appendices; Index.

  16. NATO Scientific and Technical Information Service (NSTIS): functional description. Final report

    SciTech Connect

    Molholm, K.N.; Blados, W.N.; Bulca, C.; Cotter, G.A.; Cuffez, A.

    1987-08-01

    This report provides a functional description of the requirements for a NATO Scientific and Technical Information Service (NSTIS). The user requirements and much of the background information in this report were derived primarily from interviews with more than 60 NATO Headquarters staff members between 2 March and 25 March 1987. In addition, representatives of the Supreme Headquarters Allied Powers Europe (SHAPE) Technical Centre (STC), the Supreme Allied Commander Atlantic (Anti-Submarine Warfare Research) Centre (SACLANTCEN), the NATO Communications and Information Systems Agency (NACISA), the Advisory Group for Aerospace Research and Development (AGARD), the U.S. Defense Technical Information Center (DTIC), and the Technical Documentation Center for the Armed Forces in the Netherlands (TDCK) were interviewed, either in person or by telephone.

  17. Above the cloud computing: applying cloud computing principles to create an orbital services model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Mohammad, Atif; Berk, Josh; Nervold, Anders K.

    2013-05-01

    Large satellites and exquisite planetary missions are generally self-contained. They have, onboard, all of the computational, communications and other capabilities required to perform their designated functions. Because of this, the satellite or spacecraft carries hardware that may be utilized only a fraction of the time; however, the full cost of development and launch is still borne by the program. Small satellites do not have this luxury. Due to mass and volume constraints, they cannot afford to carry numerous pieces of barely utilized equipment or large antennas. This paper proposes a cloud-computing model for exposing satellite services in an orbital environment. Under this approach, each satellite with available capabilities broadcasts a service description for each service that it can provide (e.g., general computing capacity, DSP capabilities, specialized sensing capabilities, transmission capabilities, etc.) and its orbital elements. Consumer spacecraft retain a cache of service providers and select one utilizing decision making heuristics (e.g., suitability of performance, opportunity to transmit instructions and receive results - based on the orbits of the two craft). The two craft negotiate service provisioning (e.g., when the service can be available and for how long) based on the operating rules prioritizing use of (and allowing access to) the service on the service provider craft, based on the credentials of the consumer. Service description, negotiation and sample service performance protocols are presented. The required components of each consumer or provider spacecraft are reviewed. These include fully autonomous control capabilities (for provider craft), a lightweight orbit determination routine (to determine when consumer and provider craft can see each other and, possibly, pointing requirements for craft with directional antennas) and an authentication and resource utilization priority-based access decision making subsystem (for provider craft).
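
    A hedged sketch, in Python, of what a broadcast service description under such a model might contain; the field names and the Keplerian-element encoding are illustrative assumptions rather than the paper's actual protocol.

      # Illustrative service-description message for a provider craft (hypothetical schema).
      from dataclasses import dataclass

      @dataclass
      class OrbitalElements:
          semi_major_axis_km: float
          eccentricity: float
          inclination_deg: float
          raan_deg: float
          arg_perigee_deg: float
          true_anomaly_deg: float

      @dataclass
      class ServiceDescription:
          provider_id: str
          service_type: str      # e.g. "general-compute", "dsp", "downlink"
          capacity: float        # free capacity in service-specific units
          available_from: float  # mission elapsed time, seconds
          available_until: float
          orbit: OrbitalElements

      # A provider advertising spare DSP capacity for roughly one orbit.
      advert = ServiceDescription(
          provider_id="SAT-42", service_type="dsp", capacity=0.6,
          available_from=0.0, available_until=5400.0,
          orbit=OrbitalElements(6978.0, 0.001, 51.6, 120.0, 87.0, 10.0))
      print(advert)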

  18. Applied Use Value of Scientific Information for Management of Ecosystem Services

    NASA Astrophysics Data System (ADS)

    Raunikar, R. P.; Forney, W.; Bernknopf, R.; Mishra, S.

    2012-12-01

    The U.S. Geological Survey has developed and applied methods for quantifying the value of scientific information (VOI) that are based on the applied use value of the information. In particular, the applied use value of U.S. Geological Survey information often includes efficient management of ecosystem services. The economic nature of U.S. Geological Survey scientific information is largely equivalent to that of any information, but we focus application of our VOI quantification methods on the information products provided freely to the public by the U.S. Geological Survey. We describe VOI economics in general and illustrate by referring to previous studies that use the evolving applied use value methods, which include examples of the siting of landfills in Loudoun County, the mineral exploration efficiencies of finer resolution geologic maps in Canada, and improved agricultural production and groundwater protection in Eastern Iowa made possible with Landsat moderate resolution satellite imagery. Finally, we describe the adaptation of the applied use value method to the case of streamgage information used to improve the efficiency of water markets in New Mexico.

  19. An u-Service Model Based on a Smart Phone for Urban Computing Environments

    NASA Astrophysics Data System (ADS)

    Cho, Yongyun; Yoe, Hyun

    In urban computing environments, all services should be based on the interaction between humans and the environments around them, which occurs frequently and ordinarily at home and in the office. This paper proposes a u-service model based on a smart phone for urban computing environments. The suggested service model includes a context-aware and personalized service scenario development environment that can instantly describe a user's u-service demand or situation information with smart devices. To this end, the architecture of the suggested service model consists of a graphical service editing environment for smart devices, a u-service platform, and an infrastructure with sensors and WSN/USN. The graphical editor expresses contexts as execution conditions of a new service through an ontology-based context model. The service platform executes the service scenario according to the contexts. With the suggested service model, a user in urban computing environments can quickly and easily create a u-service or a new service using smart devices.

  20. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to...

  1. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to...

  2. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 3 2012-01-01 2012-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to...

  3. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Computer services for customers of subsidiary... (REGULATION Y) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for.... (b) The Board understood from the facts presented that the service company owns a computer which...

  4. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Computer services for customers of subsidiary... (REGULATION Y) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for.... (b) The Board understood from the facts presented that the service company owns a computer which...

  5. The Effects of Inquiry-Based Computer Simulation with Cooperative Learning on Scientific Thinking and Conceptual Understanding of Gas Laws

    ERIC Educational Resources Information Center

    Abdullah, Sopiah; Shariff, Adilah

    2008-01-01

    The purpose of the study was to investigate the effects of inquiry-based computer simulation with heterogeneous-ability cooperative learning (HACL) and inquiry-based computer simulation with friendship cooperative learning (FCL) on (a) scientific reasoning (SR) and (b) conceptual understanding (CU) among Form Four students in Malaysian Smart…

  6. NSF Antarctic and Arctic Data Consortium; Scientific Research Support & Data Services for the Polar Community

    NASA Astrophysics Data System (ADS)

    Morin, P. J.; Pundsack, J. W.; Carbotte, S. M.; Tweedie, C. E.; Grunow, A.; Lazzara, M. A.; Carpenter, P.; Sjunneskog, C. M.; Yarmey, L.; Bauer, R.; Adrian, B. M.; Pettit, J.

    2014-12-01

    The U.S. National Science Foundation Antarctic & Arctic Data Consortium (a2dc) is a collaboration of research centers and support organizations that provide polar scientists with data and tools to complete their research objectives. From searching historical weather observations to submitting geologic samples, polar researchers utilize the a2dc to search and contribute to the wealth of polar scientific and geospatial data. The goals of the Antarctic & Arctic Data Consortium are to increase visibility in the research community of the services provided by resource and support facilities. Closer integration of individual facilities into a "one stop shop" will make it easier for researchers to take advantage of services and products provided by consortium members. The a2dc provides a common web portal where investigators can go to access data and samples needed to build research projects, develop student projects, or to do virtual field reconnaissance without having to utilize expensive logistics to go into the field. Participation by the international community is crucial for the success of a2dc. There are 48 nations that are signatories of the Antarctic Treaty, and 8 sovereign nations in the Arctic. Many of these organizations have unique capabilities and data that would benefit US-funded polar science and vice versa. We'll present an overview of the Antarctic & Arctic Data Consortium, current participating organizations, challenges & opportunities, and plans to better coordinate data through a geospatial strategy and infrastructure.

  7. Computing Spatial Distance Histograms for Large Scientific Datasets On-the-Fly

    PubMed Central

    Kumar, Anand; Grupcev, Vladimir; Yuan, Yongke; Huang, Jin; Shen, Gang

    2014-01-01

    This paper focuses on an important query in scientific simulation data analysis: the Spatial Distance Histogram (SDH). The computation time of an SDH query using the brute-force method is quadratic. Often, such queries are executed continuously over certain time periods, increasing the computation time. We propose a highly efficient approximate algorithm to compute SDH over consecutive time periods with provable error bounds. The key idea of our algorithm is to derive the statistical distribution of distances from the spatial and temporal characteristics of particles. Upon organizing the data into a Quad-tree based structure, the spatiotemporal characteristics of particles in each node of the tree are acquired to determine the particles' spatial distribution as well as their temporal locality in consecutive time periods. We report our efforts in implementing and optimizing the above algorithm on Graphics Processing Units (GPUs) as a means to further improve the efficiency. The accuracy and efficiency of the proposed algorithm are backed by mathematical analysis and results of extensive experiments using data generated from real simulation studies. PMID:25264418
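
    For context, the quadratic brute-force SDH baseline that the paper improves upon can be sketched in a few lines of Python; this illustrates the query itself, not the authors' approximate algorithm.

      # Brute-force Spatial Distance Histogram: bin all pairwise particle distances.
      import numpy as np

      def spatial_distance_histogram(points, bucket_width, num_buckets):
          n = len(points)
          hist = np.zeros(num_buckets, dtype=np.int64)
          for i in range(n):
              # distances from point i to all later points (each pair counted once)
              d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
              idx = np.minimum((d // bucket_width).astype(int), num_buckets - 1)
              np.add.at(hist, idx, 1)
          return hist

      # Usage: 1,000 random particles in a unit cube.
      pts = np.random.rand(1000, 3)
      h = spatial_distance_histogram(pts, bucket_width=0.05, num_buckets=40)
      print(h.sum())  # equals n*(n-1)/2 pairs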

  8. Model-Driven Development for scientific computing. Computations of RHEED intensities for a disordered surface. Part I

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2010-03-01

    Scientific computing is the field of study concerned with constructing mathematical models and numerical solution techniques and with using computers to analyse and solve scientific and engineering problems. Model-Driven Development (MDD) has been proposed as a means to support the software development process through the use of a model-centric approach. This paper surveys the core MDD technology that was used to develop an application that allows computation of the RHEED intensities dynamically for a disordered surface. New version program summary: Program title: RHEED1DProcess; Catalogue identifier: ADUY_v4_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v4_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 31 971; No. of bytes in distributed program, including test data, etc.: 3 039 820; Distribution format: tar.gz; Programming language: Embarcadero C++ Builder; Computer: Intel Core Duo-based PC; Operating system: Windows XP, Vista, 7; RAM: more than 1 GB; Classification: 4.3, 7.2, 6.2, 8, 14; Catalogue identifier of previous version: ADUY_v3_0; Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2394; Does the new version supersede the previous version?: No. Nature of problem: An application that implements numerical simulations should be constructed according to the CSFAR rules: clear and well-documented, simple, fast, accurate, and robust. A clearly written, externally and internally documented program is much easier to understand and modify. A simple program is much less prone to error and is more easily modified than one that is complicated. Simplicity and clarity also help make the program flexible. Making the program fast has economic benefits. It also allows flexibility because some of the features that make a program efficient can be traded off for

  9. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  10. Impact of Quad-core Cray XT4 System and Software Stack on Scientific Computation

    SciTech Connect

    Alam, Sadaf R; Barrett, Richard F; Jagode, Heike; Kuehn, Jeffery A; Poole, Stephen W; Sankaran, Ramanan

    2009-01-01

    An upgrade from dual-core to quad-core AMD processors on the Cray XT system at the Oak Ridge National Laboratory (ORNL) Leadership Computing Facility (LCF) has resulted in significant changes in the hardware and software stack, including a deeper memory hierarchy, SIMD instructions and a multi-core aware MPI library. In this paper, we evaluate the impact of a subset of these key changes on large-scale scientific applications. We provide insights into the application tuning and optimization process and report on how different strategies yield varying rates of success and failure across different application domains. For instance, we demonstrate that the vectorization instructions (SSE) provide a performance boost of as much as 50% on fusion and combustion applications. Moreover, we reveal how resource contention could limit the achievable performance and provide insights into how applications could exploit the Petascale XT5 system's hierarchical parallelism.

  11. Nonlinear analysis, scientific computation, and continuum mechanics applied to the science of materials

    NASA Astrophysics Data System (ADS)

    Gurtin, Morton E.; Williams, William O.

    1993-03-01

    This grant enabled the department to form the Research Group in Mathematical Materials Science in 1990, a group that formed the nucleus of the Center for Nonlinear Analysis, established in 1991 by the ARO. The Center has created a vigorous environment for collaboration among mathematicians and allied scientists. Within the international mathematics community the Center has assumed a leadership role, especially for questions related to materials science. The major research effort has focused on developing, analyzing, and unifying mathematical models that characterize material behavior at a phenomenological level. The main thrust is applied nonlinear analysis, nonlinear continuum physics, and scientific computation. The educational goals have been to train young scientists and to train and involve female and minority students in the sciences.

  12. Data mining techniques for scientific computing: Application to asymptotic paraxial approximations to model ultrarelativistic particles

    NASA Astrophysics Data System (ADS)

    Assous, Franck; Chaskalovic, Joël

    2011-06-01

    We propose a new approach that consists in using data mining techniques for scientific computing. Indeed, data mining has proved to be efficient in other contexts that deal with huge amounts of data, as in biology, medicine, marketing, advertising and communications. Our aim, here, is to deal with the important problem of the exploitation of the results produced by any numerical method. Indeed, more and more data are created today by numerical simulations. Thus, it seems necessary to look at efficient tools to analyze them. In this work, we focus our presentation on a test case dedicated to an asymptotic paraxial approximation to model ultrarelativistic particles. Our method deals directly with the numerical results of simulations and tries to understand what each order of the asymptotic expansion brings to the simulation results over what could be obtained by other lower-order or less accurate means. This new heuristic approach offers new potential applications to treat numerical solutions to mathematical models.

  13. Exploring prospective secondary science teachers' understandings of scientific inquiry and Mendelian genetics concepts using computer simulation

    NASA Astrophysics Data System (ADS)

    Cakir, Mustafa

    The primary objective of this case study was to examine prospective secondary science teachers' developing understanding of scientific inquiry and Mendelian genetics. A computer simulation of basic Mendelian inheritance processes (Catlab) was used in combination with small-group discussions and other instructional scaffolds to enhance prospective science teachers' understandings. The theoretical background for this research is derived from a social constructivist perspective. Structuring scientific inquiry as investigation to develop explanations presents meaningful context for the enhancement of inquiry abilities and understanding of the science content. The context of the study was a teaching and learning course focused on inquiry and technology. Twelve prospective science teachers participated in this study. Multiple data sources included pre- and post-module questionnaires of participants' view of scientific inquiry, pre-posttests of understandings of Mendelian concepts, inquiry project reports, class presentations, process videotapes of participants interacting with the simulation, and semi-structured interviews. Seven selected prospective science teachers participated in in-depth interviews. Findings suggest that while studying important concepts in science, carefully designed inquiry experiences can help prospective science teachers to develop an understanding about the types of questions scientists in that field ask, the methodological and epistemological issues that constrain their pursuit of answers to those questions, and the ways in which they construct and share their explanations. Key findings included prospective teachers' initial limited abilities to create evidence-based arguments, their hesitancy to include inquiry in their future teaching, and the impact of collaboration on thinking. Prior to this experience the prospective teachers held uninformed views of scientific inquiry. After the module, participants demonstrated extended expertise in

  14. Testing framework for GRASS GIS: ensuring reproducibility of scientific geospatial computing

    NASA Astrophysics Data System (ADS)

    Petras, V.; Gebbert, S.

    2014-12-01

    GRASS GIS, a free and open source GIS, is used by many scientists directly or through other projects such as R or QGIS to perform geoprocessing tasks. Thus, a large number of scientific geospatial computations depend on the quality and correct functionality of GRASS GIS. Automatic functionality testing is therefore necessary to ensure software reliability. Here we present a testing framework for GRASS GIS which addresses different needs of GRASS GIS and geospatial software in general. It allows testing of GRASS tools (referred to as GRASS modules) and examination of outputs including large raster and vector maps as well as temporal datasets. Furthermore, it enables testing of all levels of the GRASS GIS architecture, including the C and Python application programming interfaces and GRASS modules invoked as subprocesses. Since GRASS GIS is used as a platform for the development of geospatial algorithms and models, the testing framework makes it possible to test not only GRASS GIS core functionality but also tools developed by scientists as part of their research. Using the testing framework, we can test GRASS GIS and related tools automatically and repetitively and thus detect errors caused by code changes and new developments. Tools and code are then easier to maintain, which helps preserve the reproducibility of scientific results over time. Similarly to open source code, the test results are publicly accessible, so that all current and potential users can see them. The usage of the testing framework will be presented on an example of a test suite for the r.slope.aspect module, a tool for computation of terrain slope, aspect, curvatures and other terrain characteristics.
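
    A minimal sketch of what an automated check on r.slope.aspect could look like, written with plain Python unittest and the grass.script interface rather than the framework's own test classes; it assumes a running GRASS session with a raster named 'elevation', and the output raster name is hypothetical.

      # Simple functional test of the r.slope.aspect module (illustrative sketch).
      import unittest
      import grass.script as gs

      class TestSlopeAspect(unittest.TestCase):
          def test_slope_range(self):
              # run the module under test
              gs.run_command("r.slope.aspect", elevation="elevation",
                             slope="test_slope", overwrite=True)
              # slope in degrees must stay within [0, 90]
              info = gs.raster_info("test_slope")
              self.assertGreaterEqual(info["min"], 0.0)
              self.assertLessEqual(info["max"], 90.0)

      if __name__ == "__main__":
          unittest.main()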

  15. Instruction-Level Characterization of Scientific Computing Applications Using Hardware Performance Counters

    SciTech Connect

    Luo, Y.; Cameron, K.W.

    1998-11-24

    Workload characterization has proven to be an essential tool for architecture design and performance evaluation in both scientific and commercial computing areas. Traditional workload characterization techniques include FLOPS rate, cache miss ratios, CPI (cycles per instruction) or IPC (instructions per cycle), etc. With the complexity of sophisticated modern superscalar microprocessors, these traditional characterization techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints, especially on large-scale scientific computing applications. This paper presents a new technique of characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs with virtually no overhead or slowdown. A variety of instruction counts can be utilized to calculate average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide some insight into the problem that only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
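
    As a simple illustration of turning raw counter values into abstract workload parameters, the sketch below derives a few generic metrics (CPI, IPC, instruction-mix fractions); the counter names and sample values are hypothetical, not the specific counters used in the paper.

      # Derive abstract workload parameters from raw hardware counter values (illustrative).
      def derived_metrics(counters):
          cycles = counters["cycles"]
          instructions = counters["instructions"]
          return {
              "CPI": cycles / instructions,
              "IPC": instructions / cycles,
              "load_fraction": counters["loads"] / instructions,
              "flop_fraction": counters["fp_ops"] / instructions,
          }

      sample = {"cycles": 1.2e9, "instructions": 8.0e8,
                "loads": 2.4e8, "fp_ops": 1.6e8}
      print(derived_metrics(sample))  # e.g. CPI = 1.5, IPC ~ 0.67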

  16. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    PubMed

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology) which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding a biological system in its entirety, by allowing them to retrieve biological models in their own tools, combine queries in workflows and efficiently analyse models.

  17. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on

  18. Investigating the Relationship between Pre-Service Teachers' Scientific Literacy, Environmental Literacy and Life-Long Learning Tendency

    ERIC Educational Resources Information Center

    Saribas, D.

    2015-01-01

    The study investigates the relationship between pre-service teachers' scientific literacy (SL) and their environmental literacy (EL). It also seeks significant differences in SL at different levels of a tendency towards life-long learning (LLT). With the world facing critical environmental problems, an interdisciplinary approach to teaching…

  19. [Hospital food: proposals for qualification of the Food and Nutrition Service, evaluated by the scientific community].

    PubMed

    Diez-Garcia, Rosa Wanda; Padilha, Marina; Sanches, Maísa

    2012-02-01

    The scope of this paper is to validate proposals used to qualify hospital food by the Brazilian scientific community. An electronic questionnaire was applied to clinical nutrition professionals registered on the Lattes Platform (Brazilian database of institutions and researchers' curricula in the areas of Science and Technology). The questionnaire incorporated a Likert scale and had spaces for comments. The themes dealt with patient participation, the nutritional and sensory quality of hospital diets, and planning and goals of the Hospital Food and Nutrition Service (HFNS). The questionnaire also asked for the top five priorities for a HFNS. Proposals with total or partial adherence equal to or greater than 70% were considered to be approved. All proposals had total adherence equal to or greater than 70%. The proposal that had minimal adherence (70%) was the one that proposed that nutritional intervention must be arranged by mutual agreement with the patient. The proposal that had maximal adherence (93%) was the one advocating that there must be statistical control on diets prescribed by the HFNS. The most cited priorities referred to infrastructure and training of human resources (40%), the quality of hospital food (27%) and the nutritional status of the patient.

  20. Services + Components = Data Intensive Scientific Workflow Applications with MeDICi

    SciTech Connect

    Gorton, Ian; Chase, Jared M.; Wynne, Adam S.; Almquist, Justin P.; Chappell, Alan R.

    2009-06-01

    Scientific applications are often structured as workflows that execute a series of distributed software modules to analyze large data sets. Such workflows are typically constructed using general-purpose scripting languages to coordinate the execution of the various modules and to exchange data sets between them. While such scripts provide a cost-effective approach for simple workflows, as the workflow structure becomes complex and evolves, the scripts quickly become complex and difficult to modify. This makes them a major barrier to easily and quickly deploying new algorithms and exploiting new, scalable hardware platforms. In this paper, we describe the MeDICi Workflow technology that is specifically designed to reduce the complexity of workflow application development, and to efficiently handle data intensive workflow applications. MeDICi integrates standard component-based and service-based technologies, and employs an efficient integration mechanism to ensure large data sets can be efficiently processed. We illustrate the use of MeDICi with a climate data processing example that we have built, and describe some of the new features
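
    A generic Python stand-in for the component-composition idea described above, chaining small processing stages into a workflow; this is not the MeDICi API, and the climate-record stages are hypothetical.

      # Compose small processing modules into a workflow (illustrative sketch).
      from typing import Callable

      def pipeline(*stages: Callable) -> Callable:
          """Chain stages so each consumes the previous stage's output."""
          def run(data):
              for stage in stages:
                  data = stage(data)
              return data
          return run

      # Hypothetical climate-record stages: parse CSV rows, convert to Celsius.
      def parse(lines):
          return ([c.strip() for c in ln.split(",")] for ln in lines)

      def to_celsius(rows):
          return ((stn, (float(t) - 32.0) * 5.0 / 9.0) for stn, t in rows)

      records = ["stationA, 71.6", "stationB, 68.0"]
      workflow = pipeline(parse, to_celsius, list)
      print(workflow(records))  # [('stationA', 22.0), ('stationB', 20.0)]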

  1. Advanced information processing system: Inter-computer communication services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Masotto, Tom; Sims, J. Terry; Whittredge, Roy; Alger, Linda S.

    1991-01-01

    The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section is provided to outline the overall architecture and functional requirements of the AIPS and to present an overview of the ICCS. An overview of the AIPS architecture as well as a brief description of the AIPS software is given. The guarantees of the ICCS are provided, and the ICCS is described as a seven-layered International Standards Organization (ISO) Model. The ICCS functional requirements, functional design, and detailed specifications as well as each layer of the ICCS are also described. A summary of results and suggestions for future work are presented.

  2. Multithreaded transactions in scientific computing. The Growth06_v2 program

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2009-07-01

    efficient than the previous ones [3]. Summary of revisions: The design pattern (see Fig. 2 of Ref. [3]) has been modified according to the scheme shown in Fig. 1. A graphical user interface (GUI) for the program has been reconstructed. Fig. 2 presents a hybrid diagram of a GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: the figures mentioned above are contained in the program distribution file. Unusual features: The program is distributed in the form of the source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers version 6 or later (including Borland Developer Studio 2006 and CodeGear compilers for Delphi). Additional comments: Two figures are included in the program distribution file. These are captioned "Static classes model for Transaction design pattern" and "A model of a window that shows how onscreen objects connect to use cases". Running time: The typical running time is machine and user-parameter dependent. References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.

  3. Emergency healthcare process automation using mobile computing and cloud services.

    PubMed

    Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G

    2012-10-01

    Emergency care is basically concerned with the provision of pre-hospital and in-hospital medical and/or paramedical services, and it typically involves a wide variety of interdependent and distributed activities that can be interconnected to form emergency care processes within and between Emergency Medical Service (EMS) agencies and hospitals. Hence, in developing an information system for emergency care processes, it is essential to support individual process activities and to satisfy collaboration and coordination needs by providing ready access to patient and operational information regardless of location and time. Filling this information gap by enabling the provision of the right information, to the right people, at the right time raises new challenges, including the specification of a common information format, interoperability among heterogeneous institutional information systems and the development of new, ubiquitous trans-institutional systems. This paper is concerned with the development of integrated computer support for emergency care processes by evolving and cross-linking institutional healthcare systems. To this end, an integrated EMS cloud-based architecture has been developed that allows authorized users to access emergency case information in standardized document form, as proposed by the Integrating the Healthcare Enterprise (IHE) profile, uses the Organization for the Advancement of Structured Information Standards (OASIS) standard Emergency Data Exchange Language (EDXL) Hospital Availability Exchange (HAVE) for exchanging operational data with hospitals, and incorporates an intelligent module that supports triaging and selecting the most appropriate ambulances and hospitals for each case.

  4. Cloud Computing for Geosciences--GeoCloud for standardized geospatial service platforms (Invited)

    NASA Astrophysics Data System (ADS)

    Nebert, D. D.; Huang, Q.; Yang, C.

    2013-12-01

    The 21st century geoscience faces challenges of Big Data, spike computing requirements (e.g., when natural disaster happens), and sharing resources through cyberinfrastructure across different organizations (Yang et al., 2011). With flexibility and cost-efficiency of computing resources a primary concern, cloud computing emerges as a promising solution to provide core capabilities to address these challenges. Many governmental and federal agencies are adopting cloud technologies to cut costs and to make federal IT operations more efficient (Huang et al., 2010). However, it is still difficult for geoscientists to take advantage of the benefits of cloud computing to facilitate the scientific research and discoveries. This presentation reports using GeoCloud to illustrate the process and strategies used in building a common platform for geoscience communities to enable the sharing, integration of geospatial data, information and knowledge across different domains. GeoCloud is an annual incubator project coordinated by the Federal Geographic Data Committee (FGDC) in collaboration with the U.S. General Services Administration (GSA) and the Department of Health and Human Services. It is designed as a staging environment to test and document the deployment of a common GeoCloud community platform that can be implemented by multiple agencies. With these standardized virtual geospatial servers, a variety of government geospatial applications can be quickly migrated to the cloud. In order to achieve this objective, multiple projects are nominated each year by federal agencies as existing public-facing geospatial data services. From the initial candidate projects, a set of common operating system and software requirements was identified as the baseline for platform as a service (PaaS) packages. Based on these developed common platform packages, each project deploys and monitors its web application, develops best practices, and documents cost and performance information. This

  5. Operations analysis (study 2.6). Volume 4: Computer specification; logistics of orbiting vehicle servicing (LOVES)

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The computer specification for the logistics of orbiting vehicle servicing (LOVES) was developed, and a number of alternatives to improve utilization of the space shuttle and the tug were investigated. Preliminary results indicate that space servicing offers a potential for reducing future operational and program costs compared with ground refurbishment of satellites. A computer code that could be developed to simulate space servicing is presented.

  6. The effects of integrating service learning into computer science: an inter-institutional longitudinal study

    NASA Astrophysics Data System (ADS)

    Payton, Jamie; Barnes, Tiffany; Buch, Kim; Rorrer, Audrey; Zuo, Huifang

    2015-07-01

    This study is a follow-up to one published in computer science education in 2010 that reported preliminary results showing a positive impact of service learning on student attitudes associated with success and retention in computer science. That paper described how service learning was incorporated into a computer science course in the context of the Students & Technology in Academia, Research, and Service (STARS) Alliance, an NSF-supported broadening participation in computing initiative that aims to diversify the computer science pipeline through innovative pedagogy and inter-institutional partnerships. The current paper describes how the STARS Alliance has expanded to diverse institutions, all using service learning as a vehicle for broadening participation in computing and enhancing attitudes and behaviors associated with student success. Results supported the STARS model of service learning for enhancing computing efficacy and computing commitment and for providing diverse students with many personal and professional development benefits.

  7. The Effects of Integrating Service Learning into Computer Science: An Inter-Institutional Longitudinal Study

    ERIC Educational Resources Information Center

    Payton, Jamie; Barnes, Tiffany; Buch, Kim; Rorrer, Audrey; Zuo, Huifang

    2015-01-01

    This study is a follow-up to one published in computer science education in 2010 that reported preliminary results showing a positive impact of service learning on student attitudes associated with success and retention in computer science. That paper described how service learning was incorporated into a computer science course in the context of…

  8. Pedagogical Strategies to Increase Pre-Service Teachers' Confidence in Computer Learning

    ERIC Educational Resources Information Center

    Chen, Li-Ling

    2004-01-01

    Pre-service teachers' attitudes towards computers significantly influence their future adoption of integrating computer technology into their teaching. What are the pedagogical strategies that a teacher education instructor or an instructional designer can incorporate to enhance a pre-service teacher's comfort level in using computers? In this…

  9. Assessing Pre-Service Teachers' Computer Phobia Levels in Terms of Gender and Experience, Turkish Sample

    ERIC Educational Resources Information Center

    Ursavas, Omer Faruk; Karal, Hasan

    2009-01-01

    In this study it is aimed to determine the level of pre-service teachers' computer phobia. Whether or not computer phobia meaningfully varies statistically according to gender and computer experience has been tested in the study. The study was performed on 430 pre-service teachers at the Education Faculty in Rize/Turkey. Data in the study were…

  10. Analysis, scientific computing and fundamental studies in fluid mechanics. Summary report number 19, May 1, 1995--April 30, 1996

    SciTech Connect

    1996-07-01

    Summaries are given of the progress on each of the following research projects: (1) a multi-resolution finite element method for computing multiscale solutions; (2) numerical study of free interface problems; (3) numerical simulation of two-dimensional particle coarsening; (4) numerical simulation of three-dimensional water waves; (5) vortex dynamics; (6) vortex models and turbulence; (7) flow in a non-uniform Hele-Shaw cell; (8) numerical analysis/scientific computing.

  11. The Operation of a Specialized Scientific Information and Data Analysis Center With Computer Base and Associated Communications Network.

    ERIC Educational Resources Information Center

    Cottrell, William B.; And Others

    The Nuclear Safety Information Center (NSIC) is a highly sophisticated scientific information center operated at Oak Ridge National Laboratory (ORNL) for the U.S. Atomic Energy Commission. Its information file, which consists of both data and bibliographic information, is computer stored and numerous programs have been developed to facilitate the…

  12. The Goal Specificity Effect on Strategy Use and Instructional Efficiency during Computer-Based Scientific Discovery Learning

    ERIC Educational Resources Information Center

    Kunsting, Josef; Wirth, Joachim; Paas, Fred

    2011-01-01

    Using a computer-based scientific discovery learning environment on buoyancy in fluids we investigated the "effects of goal specificity" (nonspecific goals vs. specific goals) for two goal types (problem solving goals vs. learning goals) on "strategy use" and "instructional efficiency". Our empirical findings close an important research gap,…

  13. [Text mining, a method for computer-assisted analysis of scientific texts, demonstrated by an analysis of author networks].

    PubMed

    Hahn, P; Dullweber, F; Unglaub, F; Spies, C K

    2014-06-01

    Searching for relevant publications is becoming more difficult with the increasing number of scientific articles. Text mining, as a specific form of computer-based data analysis, may be helpful in this context. Using two worked examples, we illustrate graphically how text analysis programs can highlight relations between authors and find relevant publications on a specific subject.

  14. Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing

    SciTech Connect

    Karbach, Carsten; Frings, Wolfgang

    2013-02-22

    This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) so that it can be applied to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to work under high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer: e.g., applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status. A set of statistics, a list of running and queued jobs as well as a node display mapping running jobs to their compute resources form the

  15. 47 CFR 54.709 - Computations of required contributions to universal service support mechanisms.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... universal service support mechanisms. 54.709 Section 54.709 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) UNIVERSAL SERVICE Administration § 54.709 Computations of required contributions to universal service support mechanisms. (a) Prior to April 1,...

  16. 47 CFR 54.709 - Computations of required contributions to universal service support mechanisms.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... universal service support mechanisms. 54.709 Section 54.709 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) UNIVERSAL SERVICE Administration § 54.709 Computations of required contributions to universal service support mechanisms. (a) Prior to April 1,...

  17. 47 CFR 54.709 - Computations of required contributions to universal service support mechanisms.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... universal service support mechanisms. 54.709 Section 54.709 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) UNIVERSAL SERVICE Administration § 54.709 Computations of required contributions to universal service support mechanisms. (a) Prior to April 1,...

  18. First-Year Pre-Service Teachers in Taiwan--Do They Enter the Teacher Program with Satisfactory Scientific Literacy and Attitudes Toward Science?

    ERIC Educational Resources Information Center

    Chin, Chi-Chin

    2005-01-01

    Scientific literacy and attitudes toward science play an important role in human daily lives. The purpose of this study was to investigate whether first-year pre-service teachers in colleges in Taiwan have a satisfactory level of scientific literacy. The domains of scientific literacy selected in this study include: (1) science content; (2) the…

  19. Lowering the Barrier to Cross-Disciplinary Scientific Data Access via a Brokering Service Built Around a Unified Data Model

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Wilson, A.

    2012-12-01

    The steps many scientific data users go through to use data (after discovering it) can be rather tedious, even when dealing with datasets within their own discipline. Accessing data across domains often seems intractable. We present here, LaTiS, an Open Source brokering solution that bridges the gap between the source data and the user's code by defining a unified data model plus a plugin framework for "adapters" to read data from their native source, "filters" to perform server side data processing, and "writers" to output any number of desired formats or streaming protocols. A great deal of work is being done in the informatics community to promote multi-disciplinary science with a focus on search and discovery based on metadata - information about the data. The goal of LaTiS is to go that last step to provide a uniform interface to read the dataset into computer programs and other applications once it has been identified. The LaTiS solution for integrating a wide variety of data models is to return to mathematical fundamentals. The LaTiS data model emphasizes functional relationships between variables. For example, a time series of temperature measurements can be thought of as a function that maps a time to a temperature. With just three constructs: "Scalar" for a single variable, "Tuple" for a collection of variables, and "Function" to represent a set of independent and dependent variables, the LaTiS data model can represent most scientific datasets at a low level that enables uniform data access. Higher level abstractions can be built on top of the basic model to add more meaningful semantics for specific user communities. LaTiS defines its data model in terms of the Unified Modeling Language (UML). It also defines a very thin Java Interface that can be implemented by numerous existing data interfaces (e.g. NetCDF-Java) such that client code can access any dataset via the Java API, independent of the underlying data access mechanism. LaTiS also provides a
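
    A small Python illustration of the three-construct model applied to the abstract's time-series example (time mapped to temperature); the class names mirror Scalar, Tuple, and Function, but this is a hypothetical sketch, not the project's actual Java interface.

      # Scalar / Tuple / Function constructs modeling a temperature time series (illustrative).
      from dataclasses import dataclass
      from typing import Dict

      @dataclass
      class Scalar:
          name: str
          value: float

      @dataclass
      class Tuple:
          variables: tuple  # a collection of Scalars (or nested Tuples)

      @dataclass
      class Function:
          domain: str                  # independent variable, e.g. "time"
          codomain: str                # dependent variable, e.g. "temperature"
          samples: Dict[float, float]  # sampled mapping

          def __call__(self, t: float) -> Scalar:
              return Scalar(self.codomain, self.samples[t])

      # Time series of temperature measurements: time -> temperature.
      series = Function("time", "temperature", {0.0: 281.2, 1.0: 281.9, 2.0: 282.4})
      sample = Tuple((Scalar("time", 1.0), series(1.0)))
      print(sample)  # Tuple of the time value and the temperature Scalar at that time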

  20. Computer Classifieds: Electronic Career Services Link Alumni with Employers.

    ERIC Educational Resources Information Center

    Dessoff, Alan L.

    1992-01-01

    Electronic service companies are marketing electronic career services to college and university alumni associations. These electronic alternatives to traditional placement services offer schools a way to provide alumni with a desired service while increasing alumni association revenue. Typically, both applicants and companies pay a fee for a…

  1. Identity Management and Trust Services: Foundations for Cloud Computing

    ERIC Educational Resources Information Center

    Suess, Jack; Morooney, Kevin

    2009-01-01

    Increasingly, IT organizations will move from providing IT services locally to becoming an integrator of IT services--some provided locally and others provided outside the institution. As a result, institutions must immediately begin to plan for shared services and must understand the essential role that identity management and trust services play…

  2. Above the cloud computing orbital services distributed data model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.
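
    A hedged sketch of the per-item metadata such a distributed data model would need, combining identification with the business-model (ownership, retention, confidentiality) tags mentioned above; the field names and sample values are illustrative, not the paper's schema.

      # Illustrative per-item record for a distributed orbital data model (hypothetical schema).
      from dataclasses import dataclass

      @dataclass
      class DataItemRecord:
          item_id: str          # globally unique across the constellation
          owner_id: str         # craft/operator that owns the data
          holder_id: str        # craft currently storing the item
          size_bytes: int
          checksum: str         # integrity check for relayed copies
          confidentiality: str  # e.g. "public", "owner-only"
          retain_until: float   # mission elapsed time after which the holder may discard
          may_resell: bool
          may_retransmit: bool

      record = DataItemRecord("img-000731", owner_id="SAT-07", holder_id="SAT-12",
                              size_bytes=4_194_304, checksum="sha256:ab12",
                              confidentiality="owner-only", retain_until=86400.0,
                              may_resell=False, may_retransmit=True)
      print(record)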

  3. The InSAR Scientific Computing Environment (ISCE): A Python Framework for Earth Science

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Gurrola, E. M.; Agram, P. S.; Sacco, G. F.; Lavalle, M.

    2015-12-01

    The InSAR Scientific Computing Environment (ISCE, funded by NASA ESTO) provides a modern computing framework for geodetic image processing of InSAR data from a diverse array of radar satellites and aircraft. ISCE is both a modular, flexible, and extensible framework for building software components and applications and a toolbox of applications for processing raw or focused InSAR and Polarimetric InSAR data. The ISCE framework contains object-oriented Python components layered to construct Python InSAR components that manage legacy Fortran/C InSAR programs. Components are independently configurable in a layered manner to provide maximum control. Polymorphism is used to define a workflow in terms of abstract facilities for each processing step that are realized by specific components at run-time. This enables a single workflow to work on either raw or focused data from all sensors. ISCE can serve as the core of a production center to process Level-0 radar data to Level-3 products, but is amenable to interactive processing approaches that allow scientists to experiment with data to explore new ways of doing science with InSAR data. The NASA-ISRO SAR (NISAR) Mission will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystems. ISCE is planned as the foundational element in processing NISAR data, enabling a new class of analyses that take greater advantage of the long time and large spatial scales of these new data. NISAR will be but one mission in a constellation of radar satellites in the future delivering such data. ISCE currently supports all publicly available strip map mode space-borne SAR data since ERS and is expected to include support for upcoming missions. ISCE has been incorporated into two prototype cloud-based systems that have demonstrated its elasticity in addressing larger data processing problems in a "production" context and its ability to be
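
    A hedged Python sketch of the polymorphism idea described above, in which an abstract facility for a processing step is realized by data-level-specific components at run time; the class and function names are illustrative, not the ISCE API.

      # Abstract processing facility realized by concrete components at run time (illustrative).
      from abc import ABC, abstractmethod

      class Focuser(ABC):
          """Abstract facility for the 'focus raw data' step of a workflow."""
          @abstractmethod
          def focus(self, scene): ...

      class StripmapFocuser(Focuser):
          def focus(self, scene):
              return f"focused({scene})"  # stand-in for a legacy Fortran/C focusing routine

      class AlreadyFocused(Focuser):
          def focus(self, scene):
              return scene                # already-focused (Level-1) input passes through

      def insar_workflow(scene_pair, focuser: Focuser):
          # The same workflow runs on raw or focused data; the concrete
          # component bound at run time decides what 'focus' means.
          return [focuser.focus(scene) for scene in scene_pair]

      print(insar_workflow(["scene_a", "scene_b"], StripmapFocuser()))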

  4. Persistence and availability of Web services in computational biology.

    PubMed

    Schultheiss, Sebastian J; Münch, Marc-Christian; Andreeva, Gergana D; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, and only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because no example data were available or because of a related problem; 13% were truly no longer working as expected; we could positively confirm functionality only for 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicated their services have been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services, there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal, or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.

  5. Can cloud computing benefit health services? - a SWOT analysis.

    PubMed

    Kuo, Mu-Hsing; Kushniruk, Andre; Borycki, Elizabeth

    2011-01-01

    In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare but there are a number of issues that will need to be addressed before its widespread use in healthcare.

  6. 78 FR 9455 - Rehabilitation Research and Development Service Scientific Merit Review Board, Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-08

    ... and February 20, 2013 Courtyard DC/U.S. Prosthetics/Orthotics. Capitol. Brain Injury: TBI & Stroke... on the scientific and technical merit, the mission relevance, and the protection of human and...

  7. 76 FR 5650 - Rehabilitation Research and Development Service Scientific Merit Review Board; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-01

    ... Health and Social Reintegration. February 23-25--Brain Injury Musculoskeletal/Orthopedic Rehabilitation... Development Officer on the scientific and technical merit, the mission relevance, and the protection of...

  8. Balancing the pros and cons of GMOs: socio-scientific argumentation in pre-service teacher education

    NASA Astrophysics Data System (ADS)

    Cinici, Ayhan

    2016-07-01

    This study investigates the role of the discursive process in the act of scientific knowledge building. Specifically, it links scientific knowledge building to risk perception of Genetically Modified Organisms (GMOs). To this end, this study designed and implemented a three-stage argumentation programme giving pre-service teachers (PSTs) the opportunity to consider, discuss and construct shared decisions about GMOs. The study involved 101 third-year PSTs from two different classes, randomly divided into control and experimental groups. The study utilised both quantitative and qualitative methods. During the quantitative phase, researchers administered a pre- and post-intervention scale to measure both groups' risk perception of GMOs. During the qualitative phase, data were collected from the experimental group alone through individual and group reports and an open-ended questionnaire. T-test results showed a statistically significant difference between the experimental and control groups' risk perception of GMOs. Qualitative analysis also revealed differences, for example, in PSTs' weighing of the pros and cons of scientific research demonstrating positive results of GMOs. In addition, PSTs' acceptance of GMOs increased. Consequently, this study suggests that developing familiarity with scientific enterprise may play an effective role in adopting a scientific perspective as well as a more balanced risk perception of GMOs.

  9. A Cloud-Computing Service for Environmental Geophysics and Seismic Data Processing

    NASA Astrophysics Data System (ADS)

    Heilmann, B. Z.; Maggi, P.; Piras, A.; Satta, G.; Deidda, G. P.; Bonomi, E.

    2012-04-01

    Cloud computing is establishing itself worldwide as a new high-performance computing paradigm that offers formidable possibilities to industry and science. The presented cloud-computing portal, part of the Grida3 project, provides an innovative approach to seismic data processing by combining open-source state-of-the-art processing software and cloud-computing technology, making possible the effective use of distributed computation and data management with administratively distant resources. We replaced the demanding user-side hardware and software requirements with remote access to high-performance grid-computing facilities. As a result, data processing can be done in quasi real time, controlled ubiquitously over the Internet through a user-friendly web-browser interface. Besides the obvious advantages over locally installed seismic-processing packages, the presented cloud-computing solution creates completely new possibilities for scientific education, collaboration, and presentation of reproducible results. The web-browser interface of our portal is based on the commercially supported grid portal EnginFrame, an open framework based on Java, XML, and Web Services. We selected the hosted applications with the objective of allowing the construction of typical 2D time-domain seismic-imaging workflows as used for environmental studies and, originally, for hydrocarbon exploration. For data visualization and pre-processing, we chose the free software package Seismic Un*x. We ported tools for trace balancing, amplitude gaining, muting, frequency filtering, dip filtering, deconvolution and rendering, with a customized choice of options as services onto the cloud-computing portal. For structural imaging and velocity-model building, we developed a grid version of the Common-Reflection-Surface stack, a data-driven imaging method that requires no user interaction at run time such as manual picking in prestack volumes or velocity spectra. Due to its high level of automation, CRS stacking
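
    As a rough illustration of the kind of pre-processing steps hosted on such a portal, the sketch below applies a band-pass filter and automatic gain control to a synthetic trace with NumPy/SciPy; it is a generic stand-in under assumed parameters, not the portal's Seismic Un*x services.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def bandpass(trace: np.ndarray, fs: float, low: float, high: float) -> np.ndarray:
            """Zero-phase Butterworth band-pass, comparable to a frequency-filter step."""
            b, a = butter(4, [low, high], btype="band", fs=fs)
            return filtfilt(b, a, trace)

        def agc(trace: np.ndarray, window: int) -> np.ndarray:
            """Automatic gain control: normalise by a sliding RMS amplitude."""
            power = np.convolve(trace**2, np.ones(window) / window, mode="same")
            rms = np.sqrt(power) + 1e-12
            return trace / rms

        fs = 1000.0                                  # 1 ms sampling, invented for the example
        t = np.arange(0, 1.0, 1.0 / fs)
        trace = np.sin(2 * np.pi * 60 * t) + 0.2 * np.random.randn(t.size)

        processed = agc(bandpass(trace, fs, low=10.0, high=120.0), window=101)
        print(processed.shape)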

  10. 77 FR 27263 - Computer Matching Between the Selective Service System and the Department of Education

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-09

    ... From the Federal Register Online via the Government Publishing Office SELECTIVE SERVICE SYSTEM Computer Matching Between the Selective Service System and the Department of Education AGENCY: Selective... the Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), and the Office...

  11. Toward a common component architecture for high-performance scientific computing

    SciTech Connect

    Armstrong, R; Gannon, D; Geist, A; Katarzyna, K; Kohn, S; McInnes, L; Parker, S; Smolinski, B

    1999-06-09

    This paper describes work in progress to develop a standard for interoperability among high-performance scientific components. This research stems from growing recognition that the scientific community must better manage the complexity of multidisciplinary simulations and better address scalable performance issues on parallel and distributed architectures. Driving forces are the need for fast connections among components that perform numerically intensive work and parallel collective interactions among components that use multiple processes or threads. This paper focuses on the areas we believe are most crucial for such interactions, namely an interface definition language that supports scientific abstractions for specifying component interfaces and a ports connection model for specifying component interactions.
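
    The ports connection model can be illustrated with a small Python sketch in which one component "uses" an abstract port that another component "provides"; the names and wiring below are assumptions for illustration, not the CCA specification itself.

        from abc import ABC, abstractmethod
        from typing import Optional

        class LinearSolverPort(ABC):
            """A scientific abstraction exposed as a 'provides' port."""
            @abstractmethod
            def solve(self, matrix, rhs): ...

        class JacobiSolver(LinearSolverPort):
            def solve(self, matrix, rhs):
                # Trivial diagonal solve, standing in for a numerically intensive component.
                return [r / matrix[i][i] for i, r in enumerate(rhs)]

        class TimeStepper:
            """Component with a 'uses' port; it never sees the provider's implementation."""
            def __init__(self):
                self._solver: Optional[LinearSolverPort] = None

            def connect(self, port_name: str, provider: LinearSolverPort) -> None:
                if port_name == "solver":
                    self._solver = provider

            def advance(self, matrix, rhs):
                return self._solver.solve(matrix, rhs)

        # A tiny 'framework' wires the provides port to the uses port at run time.
        stepper = TimeStepper()
        stepper.connect("solver", JacobiSolver())
        print(stepper.advance([[4.0, 0.0], [0.0, 2.0]], [8.0, 2.0]))  # -> [2.0, 1.0]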

  12. Assessing availability of scientific journals, databases, and health library services in Canadian health ministries: a cross-sectional study

    PubMed Central

    2013-01-01

    Background Evidence-informed health policymaking logically depends on timely access to research evidence. To our knowledge, despite the substantial political and societal pressure to enhance the use of the best available research evidence in public health policy and program decision making, there is no study addressing availability of peer-reviewed research in Canadian health ministries. Objectives To assess availability of (1) a purposive sample of high-ranking scientific journals, (2) bibliographic databases, and (3) health library services in the fourteen Canadian health ministries. Methods From May to October 2011, we conducted a cross-sectional survey among librarians employed by Canadian health ministries to collect information relative to availability of scientific journals, bibliographic databases, and health library services. Availability of scientific journals in each ministry was determined using a sample of 48 journals selected from the 2009 Journal Citation Reports (Sciences and Social Sciences Editions). Selection criteria were: relevance for health policy based on scope note information about subject categories and journal popularity based on impact factors. Results We found that the majority of Canadian health ministries did not have subscription access to key journals and relied heavily on interlibrary loans. Overall, based on a sample of high-ranking scientific journals, availability of journals through interlibrary loans, online and print-only subscriptions was estimated at 63%, 28% and 3%, respectively. Health Canada had a 2.3-fold higher number of journal subscriptions than that of the provincial ministries’ average. Most of the organisations provided access to numerous discipline-specific and multidisciplinary databases. Many organisations provided access to the library resources described through library partnerships or consortia. No professionally led health library environment was found in four out of fourteen Canadian health ministries

  13. Cloud-based opportunities in scientific computing: insights from processing Suomi National Polar-Orbiting Partnership (S-NPP) Direct Broadcast data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S.

    2013-12-01

    The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing -- for example: - Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete. - 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing -- for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably-priced 'utility computing.' - Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms) This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of
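
    A hedged sketch of the data-driven, spot-priced launch pattern described above is shown below using boto3; the AMI, instance type, region, prices, and the processing script path are placeholders, not the system described in the abstract.

        import boto3

        def launch_processing_node(ami_id: str, granule_url: str, max_price: str = "0.10"):
            """Request a one-time spot instance whose user data tells it which granule
            to fetch and process; the node shuts itself down when the workflow ends.
            The AMI, instance type, region, price, and script path are hypothetical."""
            user_data = ("#!/bin/bash\n"
                         f"/opt/snpp/process_granule.sh {granule_url} && shutdown -h now\n")
            ec2 = boto3.client("ec2", region_name="us-east-1")
            response = ec2.run_instances(
                ImageId=ami_id,
                InstanceType="c5.2xlarge",
                MinCount=1,
                MaxCount=1,
                UserData=user_data,
                InstanceMarketOptions={
                    "MarketType": "spot",
                    "SpotOptions": {"MaxPrice": max_price, "SpotInstanceType": "one-time"},
                },
            )
            return response["Instances"][0]["InstanceId"]

        # New data availability (e.g., a completed satellite pass) would trigger the launch:
        # launch_processing_node("ami-0123456789abcdef0", "s3://my-bucket/passes/20240101T1200.h5")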

  14. 47 CFR 54.709 - Computations of required contributions to universal service support mechanisms.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... universal service support mechanisms. 54.709 Section 54.709 Telecommunication FEDERAL COMMUNICATIONS... Computations of required contributions to universal service support mechanisms. (a) Prior to April 1, 2003, contributions to the universal service support mechanisms shall be based on contributors'...

  15. 47 CFR 54.709 - Computations of required contributions to universal service support mechanisms.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... universal service support mechanisms. 54.709 Section 54.709 Telecommunication FEDERAL COMMUNICATIONS... Computations of required contributions to universal service support mechanisms. (a) Prior to April 1, 2003, contributions to the universal service support mechanisms shall be based on contributors'...

  16. Comparison of Scientific Calipers and Computer-Enabled CT Review for the Measurement of Skull Base and Craniomaxillofacial Dimensions

    PubMed Central

    Citardi, Martin J.; Herrmann, Brian; Hollenbeak, Chris S.; Stack, Brendan C.; Cooper, Margaret; Bucholz, Richard D.

    2001-01-01

    Traditionally, cadaveric studies and plain-film cephalometrics provided information about craniomaxillofacial proportions and measurements; however, advances in computer technology now permit software-based review of computed tomography (CT)-based models. Distances between standardized anatomic points were measured on five dried human skulls with standard scientific calipers (Geneva Gauge, Albany, NY) and through computer workstation (StealthStation 2.6.4, Medtronic Surgical Navigation Technology, Louisville, CO) review of corresponding CT scans. Differences in measurements between the caliper and CT model were not statistically significant for each parameter. Measurements obtained by computer workstation CT review of the cranial skull base are an accurate representation of actual bony anatomy. Such information has important implications for surgical planning and clinical research. PMID:17167599

  17. Scientific Grand Challenges: Challenges in Climate Change Science and the Role of Computing at the Extreme Scale

    SciTech Connect

    Khaleel, Mohammad A.; Johnson, Gary M.; Washington, Warren M.

    2009-07-02

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) in partnership with the Office of Advanced Scientific Computing Research (ASCR) held a workshop on the challenges in climate change science and the role of computing at the extreme scale, November 6-7, 2008, in Bethesda, Maryland. At the workshop, participants identified the scientific challenges facing the field of climate science and outlined the research directions of highest priority that should be pursued to meet these challenges. Representatives from the national and international climate change research community as well as representatives from the high-performance computing community attended the workshop. This group represented a broad mix of expertise. Of the 99 participants, 6 were from international institutions. Before the workshop, each of the four panels prepared a white paper, which provided the starting place for the workshop discussions. The four panels of workshop attendees devoted their efforts to the following themes: Model Development and Integrated Assessment; Algorithms and Computational Environment; Decadal Predictability and Prediction; Data, Visualization, and Computing Productivity. The recommendations of the panels are summarized in the body of this report.

  18. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    DTIC Science & Technology

    1987-10-01

    This was an instrumentation grant to purchase equipment in support of research in neural networks, information science, artificial intelligence, and applied mathematics. Computer lab equipment, motor control and robotics lab equipment, speech analysis equipment and computational vision equipment were purchased.

  19. Engaging Pre-Service Teachers in Multinational, Multi-Campus Scientific and Mathematical Inquiry

    ERIC Educational Resources Information Center

    Wilhelm, Jennifer Anne; Smith, Walter S.; Walters, Kendra L.; Sherrod, Sonya E.; Mulholland, Judith

    2008-01-01

    Pre-service teachers from Texas and Indiana in the United States and from Queensland, Australia, observed the Moon for a semester and compared and contrasted their findings in asynchronous Internet discussion groups. The 188 pre-service teachers were required to conduct inquiry investigations for their methods coursework which included an initial…

  20. Measuring the Economic Value of the Electronic Scientific Information Services in Portuguese Academic Libraries

    ERIC Educational Resources Information Center

    Melo, Luiza Baptista; Pires, Cesaltina Pacheco

    2011-01-01

    This article has three main objectives: i) to describe the use patterns of electronic and traditional resources in Portuguese academic libraries; ii) to estimate the value of the Portuguese electronic scientific information consortium b-on by using two alternative valuation methodologies; iii) to relate the use patterns with the valuation of b-on.…

  1. The Tentativeness of Scientific Theories: Conceptions of Pre-Service Science Teachers

    ERIC Educational Resources Information Center

    Jain, Jasmine; Abdullah, Nabilah; Lim, Beh Kian

    2014-01-01

    The recognition of sound understanding of Nature of Science (NOS) in promoting scientific literacy among individuals has heightened the need to probe NOS conceptions among various groups. However, the nature of quantitative studies in gauging NOS understanding has left the understanding on few NOS aspects insufficiently informed. This paper aimed…

  2. Examining Design Assumptions for an Information Retrieval Service: SDI Use for Scientific and Technical Databases.

    ERIC Educational Resources Information Center

    Cole, Elliot

    1981-01-01

    Examines the assumptions which underlie the design of SDI systems with respect to context of use, activities supported by SDI, and characteristics which make documents useful. Relates these assumptions to the failure of SDI to gain widespread acceptance among scientific information users. A 43-item reference list is included. (JL)

  3. Applying Service Learning to Computer Science: Attracting and Engaging Under-Represented Students

    ERIC Educational Resources Information Center

    Dahlberg, Teresa; Barnes, Tiffany; Buch, Kim; Bean, Karen

    2010-01-01

    This article describes a computer science course that uses service learning as a vehicle to accomplish a range of pedagogical and BPC (broadening participation in computing) goals: (1) to attract a diverse group of students and engage them in outreach to younger students to help build a diverse computer science pipeline, (2) to develop leadership…

  4. Scientific Subsurface data for EPOS - integration of 3D and 4D data services

    NASA Astrophysics Data System (ADS)

    Kerschke, Dorit; Hammitzsch, Martin; Wächter, Joachim

    2016-04-01

    The provision of efficient and easy access to scientific subsurface data sets obtained from field studies and scientific observatories or by geological 3D/4D-modeling is an important contribution to modern research infrastructures as they can facilitate the integrated analysis and evaluation as well as the exchange of scientific data. Within the project EPOS - European Plate Observing System, access to 3D and 4D data sets will be provided by 'WP15 - Geological information and modeling' and will include structural geology models as well as numerical models, e.g., temperature, aquifers, and velocity. This also includes validated raw data, e.g., seismic profiles, from which the models were derived. All these datasets are of high quality and of unique scientific value as the process of modeling is time and cost intensive. However, these models are currently not easily accessible for the wider scientific community, much less to the public. For the provision of these data sets a data management platform based on common and standardized data models, protocols, and encodings as well as on a predominant use of Free and Open Source Software (FOSS) has been devised. The interoperability for disciplinary and domain applications thus highly depends on the adoption of generally agreed technologies and standards (OGC, ISO…) originating from Spatial Data Infrastructure related efforts (e.g., INSPIRE). However, since not many standards for 3D and 4D geological data exist, this work also includes new approaches for project data management, interfaces for tools used by the researchers, and interfaces for the sharing and reusing of data.
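
    Under the standards-based approach described above, a client could retrieve features from an OGC Web Feature Service with plain HTTP requests; the endpoint URL, feature type name, and GeoJSON output format in this sketch are assumptions, since the actual EPOS service interfaces are not specified here.

        import requests

        # Hypothetical OGC WFS endpoint and feature type; the parameters below are standard WFS 2.0.
        WFS_URL = "https://example.org/epos/geology/wfs"

        def get_features(type_name: str, bbox: str, max_features: int = 100):
            params = {
                "service": "WFS",
                "version": "2.0.0",
                "request": "GetFeature",
                "typeNames": type_name,
                "bbox": bbox,                        # minx,miny,maxx,maxy[,CRS]
                "count": max_features,
                "outputFormat": "application/json",  # GeoJSON, if the server supports it
            }
            resp = requests.get(WFS_URL, params=params, timeout=30)
            resp.raise_for_status()
            return resp.json()

        # Example call with an invented feature type and bounding box:
        # faults = get_features("geology:fault_surfaces", "5.0,45.0,15.0,50.0,EPSG:4326")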

  5. Journal of the Royal Naval Scientific Service. Volume 29, Number 5

    DTIC Science & Technology

    1974-09-01

    International Conference on CAD, Southampton University, Apr. 1969. Talbot, G. P. and Acland-Hood, E. P. Computer Drawing of Gun Rules and Fuze...Indicators. RARDE Report 2/71. Talbot, G. P. and Acland-Hood, E. P. Computer Drawing of GFT Ballistic Slide Rules. RARDE Memo. 23/71. Gawlik, H. J. MIRA

  6. Ethics issues in scientific data and service provision: evidence and challenges for the European Plate Observing System (EPOS)

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Freda, Carmela; Haslinger, Florian; Consortium, Epos

    2016-04-01

    Addressing Ethics issues is nowadays a relevant challenge for any initiative, program or project dealing with scientific data and products provision, access to services for scientific purposes and communication with different stakeholders, including society. This is corroborated by the evidence that Ethics has very high priority in EU funded research. Indeed, all the activities carried out under Horizon 2020 must comply with ethical principles and national, Union and international legislation. This implies that "For all activities funded by the European Union, Ethics is an integral part of research from beginning to end, and ethical compliance is seen as pivotal to achieve real research excellence." Here, we present the experience of EPOS, a public pan-European research infrastructure. EPOS aims at integrating data, data products, services and software (DDSS) for solid Earth science generated and provided by monitoring networks, observing systems and facilities belonging to European countries. EPOS fosters the integrated use of multidisciplinary solid Earth data to improve the understanding of physical and chemical processes controlling earthquakes, volcanic eruptions, tsunamis as well as those driving tectonics and surface dynamics. The EPOS integration plan will make significant contributions to understanding and mitigating geo-hazards, yielding data for hazard assessment, data products for engaging different stakeholders, and services for training, education and communication to society. Numerous national research infrastructures engaged in EPOS are deployed for the monitoring of areas prone to geo-hazards and for the surveillance of the national territory including areas used for exploiting geo-resources. The EPOS community is therefore already trained to provide services to public (civil defence agencies, local and national authorities) and private (petroleum industry, mining industry, geothermal companies, aviation security) stakeholders. Our ability to

  7. TESOL In-Service Teachers' Attitudes towards Computer Use

    ERIC Educational Resources Information Center

    Rezaee, Abbas Ali; Abidin, Mohd Jafre bin Zainol; Issa, Jinan Hatem; Mustafa, Paiman Omer

    2012-01-01

    The way education is being delivered has been altered via the rapid development of computer technology. This is especially the case in the delivery of English language teaching where the combination of various variables is pertinent to computer attitudes to enhance instructional outcomes. This paper reports the study undertaken to elucidate…

  8. Academic Computers in Service. Effective Uses for Higher Education.

    ERIC Educational Resources Information Center

    Mosmann, Charles

    This book is designed for noncomputing, specialist administrators who want straight answers to questions about the value and use of computers. The book: (1) summarizes the ways computers are being effectively used in higher education today, in science, social science, and humanities research-instruction, and administration; (2) describes the…

  9. Scientific and technical services directed toward the development of planetary quarantine measures for automated spacecraft

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The work performed under the specific tasks of the Planetary Quarantine research program for developing parameter specifications for unmanned scientific missions to the planets is reported. The effort was directed principally toward the advancement of quarantine technology applicable to all future missions to planets of biological interest. The emphasis of the research was on coordinated evaluation, analysis, documentation, and presentation of PQ requirements for flight projects such as Viking and Pioneer.

  10. 20 CFR 655.430 - Service and computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... EMPLOYMENT OF FOREIGN WORKERS IN THE UNITED STATES Enforcement of H-1A Attestations § 655.430 Service and... Fair Labor Standards, Office of the Solicitor, U.S. Department of Labor, 200 Constitution Avenue...

  11. 20 CFR 655.430 - Service and computation of time.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... EMPLOYMENT OF FOREIGN WORKERS IN THE UNITED STATES Enforcement of H-1A Attestations § 655.430 Service and... Fair Labor Standards, Office of the Solicitor, U.S. Department of Labor, 200 Constitution Avenue...

  12. Automatic differentiation of C++ codes for large-scale scientific computing.

    SciTech Connect

    Gay, David M.; Bartlett, Roscoe A; Phipps, Eric Todd

    2006-02-01

    We discuss computing first derivatives for models based on elements, such as large-scale finite-element PDE discretizations, implemented in the C++ programming language. We use a hybrid technique of automatic differentiation (AD) and manual assembly, with local element-level derivatives computed via AD and manually summed into the global derivative. C++ templating and operator overloading work well for both forward- and reverse-mode derivative computations. We found that AD derivative computations compared favorably in time to finite differencing for a scalable finite-element discretization of a convection-diffusion problem in two dimensions.
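
    The forward-mode, operator-overloading idea is easy to sketch outside C++; the small dual-number class below illustrates it in Python and is not the authors' library.

        import math

        class Dual:
            """Forward-mode AD value: carries f(x) and f'(x) through overloaded operators."""
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv

            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)
            __radd__ = __add__

            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value * other.value,
                            self.deriv * other.value + self.value * other.deriv)
            __rmul__ = __mul__

        def exp(x):
            return Dual(math.exp(x.value), math.exp(x.value) * x.deriv)

        # Element-level residual r(u) = u*u + 3*u + exp(u); seed du/du = 1 to get dr/du.
        u = Dual(0.5, 1.0)
        r = u * u + 3 * u + exp(u)
        print(r.value, r.deriv)   # value and derivative at u = 0.5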

  13. Scientific Inquiry, Digital Literacy, and Mobile Computing in Informal Learning Environments

    ERIC Educational Resources Information Center

    Marty, Paul F.; Alemanne, Nicole D.; Mendenhall, Anne; Maurya, Manisha; Southerland, Sherry A.; Sampson, Victor; Douglas, Ian; Kazmer, Michelle M.; Clark, Amanda; Schellinger, Jennifer

    2013-01-01

    Understanding the connections between scientific inquiry and digital literacy in informal learning environments is essential to furthering students' critical thinking and technology skills. The Habitat Tracker project combines a standards-based curriculum focused on the nature of science with an integrated system of online and mobile computing…

  14. Computer Series, 52: Scientific Exploration with a Microcomputer: Simulations for Nonscientists.

    ERIC Educational Resources Information Center

    Whisnant, David M.

    1984-01-01

    Describes two simulations, written for Apple II microcomputers, focusing on scientific methodology. The first is based on the tendency of colloidal iron in high concentrations to stick to fish gills and cause breathing difficulties. The second, modeled after the dioxin controversy, examines a hypothetical chemical thought to cause cancer. (JN)

  15. 75 FR 65639 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-26

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Center for Scientific Review; Notice of Closed Meetings... Committee: Center for Scientific Review Special Emphasis Panel; Member Conflict: Computational...

  16. The Representation of Anatomical Structures through Computer Animation for Scientific, Educational and Artistic Applications.

    ERIC Educational Resources Information Center

    Stredney, Donald Larry

    An overview of computer animation and the techniques involved in its creation is provided in the introduction to this master's thesis, which focuses on the problems encountered by students in learning the forms and functions of complex anatomical structures and ways in which computer animation can address these problems. The objectives for,…

  17. Creating science-driven computer architecture: A new path to scientific leadership

    SciTech Connect

    McCurdy, C. William; Stevens, Rick; Simon, Horst; Kramer, William; Bailey, David; Johnston, William; Catlett, Charlie; Lusk, Rusty; Morgan, Thomas; Meza, Juan; Banda, Michael; Leighton, James; Hules, John

    2002-10-14

    This document proposes a multi-site strategy for creating a new class of computing capability for the U.S. by undertaking the research and development necessary to build supercomputers optimized for science in partnership with the American computer industry.

  18. High-performance, distributed computing software libraries and services

    SciTech Connect

    Foster, Ian; Kesselman, Carl; Tuecke, Steven

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e. middleware), to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  19. Morgan Receives 2013 Paul G. Silver Award for Outstanding Scientific Service: Response

    NASA Astrophysics Data System (ADS)

    Morgan, Julia K.

    2014-09-01

    Thank you, Kelin, for your kind words and nomination, and thanks to the Tectonophysics, Seismology, and Geodesy sections for extending this honor. I also want to recognize the efforts of so many others who really drove the GeoPRISMS program; my job was primarily as a facilitator, channeling the great ideas of the community into distinctive scientific opportunities benefiting a large number of researchers, and what a creative, energetic, and generous community it is. It has been particularly satisfying to watch GeoPRISMS grow during my term as chair, especially with the enthusiastic involvement of the students and early-career researchers who are the future of the program.

  20. 77 FR 7489 - Small Business Size Standards: Professional, Technical, and Scientific Services

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-10

    ... services of architects, engineers, surveyors, etc. However, under the current $4.5 million size standard... consultants) account for 35 percent of the gross revenues of architects, engineers and surveyors and suggested... designers and landscape architects and a lower $4.5 million size standard for architects and engineers....

  1. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    SciTech Connect

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J..; Easter, Richard C; Elliott, Scott M.; Ghan, Steven J.; Liu, Xiaohong; Lowrie, Robert B.; Lucas, Donald D.; Ma, Po-lun; Sacks, William J.; Shrivastava, Manish; Singh, Balwinder; Tautges, Timothy J.; Taylor, Mark A.; Vertenstein, Mariana; Worley, Patrick H.

    2014-01-15

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) Project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  2. Computational Everyday Life Human Behavior Model as Servicable Knowledge

    NASA Astrophysics Data System (ADS)

    Motomura, Yoichi; Nishida, Yoshifumi

    A project called `Open life matrix' is not only a research activity but also real problem solving in the form of action research. This concept is realized through large-scale data collection, construction of probabilistic causal structure models, and the provision of information services using those models. One concrete outcome of this project is a childhood injury prevention activity carried out by a new team consisting of a hospital, government, and researchers from many fields. The main result from the project is a general methodology for applying probabilistic causal structure models as serviceable knowledge for action research. In this paper, a summary of this project and future directions emphasizing action research driven by artificial intelligence technology are discussed.

  3. Sudden Cardiac Risk Stratification with Electrocardiographic Indices - A Review on Computational Processing, Technology Transfer, and Scientific Evidence.

    PubMed

    Gimeno-Blanes, Francisco J; Blanco-Velasco, Manuel; Barquero-Pérez, Óscar; García-Alberola, Arcadi; Rojo-Álvarez, José L

    2016-01-01

    Great effort has been devoted in recent years to the development of sudden cardiac risk predictors as a function of electric cardiac signals, mainly obtained from electrocardiogram (ECG) analysis. But these prediction techniques are still seldom used in clinical practice, partly due to their limited diagnostic accuracy and to the lack of consensus about the appropriate computational signal processing implementation. This paper takes a three-fold approach, based on ECG indices, to structure this review on sudden cardiac risk stratification: first, through the computational techniques that have been widely proposed in the technical literature for obtaining these indices; second, through the scientific evidence, which, although supported by observational clinical studies, is not always representative enough; and third, through the limited technology transfer of academy-accepted algorithms, which requires further consideration for future systems. We focus on three families of ECG-derived indices that are tackled from the aforementioned viewpoints, namely heart rate turbulence (HRT), heart rate variability (HRV), and T-wave alternans. In terms of computational algorithms, we still need clearer scientific evidence, standardization, and benchmarking, resting on advanced algorithms applied over large and representative datasets. New scenarios like electronic health records, big data, long-term monitoring, and cloud databases will eventually open new frameworks that foresee suitable new paradigms in the near future.

  4. Sudden Cardiac Risk Stratification with Electrocardiographic Indices - A Review on Computational Processing, Technology Transfer, and Scientific Evidence

    PubMed Central

    Gimeno-Blanes, Francisco J.; Blanco-Velasco, Manuel; Barquero-Pérez, Óscar; García-Alberola, Arcadi; Rojo-Álvarez, José L.

    2016-01-01

    Great effort has been devoted in recent years to the development of sudden cardiac risk predictors as a function of electric cardiac signals, mainly obtained from electrocardiogram (ECG) analysis. But these prediction techniques are still seldom used in clinical practice, partly due to their limited diagnostic accuracy and to the lack of consensus about the appropriate computational signal processing implementation. This paper takes a three-fold approach, based on ECG indices, to structure this review on sudden cardiac risk stratification: first, through the computational techniques that have been widely proposed in the technical literature for obtaining these indices; second, through the scientific evidence, which, although supported by observational clinical studies, is not always representative enough; and third, through the limited technology transfer of academy-accepted algorithms, which requires further consideration for future systems. We focus on three families of ECG-derived indices that are tackled from the aforementioned viewpoints, namely heart rate turbulence (HRT), heart rate variability (HRV), and T-wave alternans. In terms of computational algorithms, we still need clearer scientific evidence, standardization, and benchmarking, resting on advanced algorithms applied over large and representative datasets. New scenarios like electronic health records, big data, long-term monitoring, and cloud databases will eventually open new frameworks that foresee suitable new paradigms in the near future. PMID:27014083
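
    As a small illustration of the HRV family of indices mentioned above, the sketch below computes two standard time-domain measures (SDNN and RMSSD) from a synthetic RR-interval series; the data are invented and the snippet is not one of the reviewed algorithms.

        import numpy as np

        def hrv_time_domain(rr_ms: np.ndarray) -> dict:
            """Two standard time-domain HRV indices from successive RR intervals (ms):
            SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
            diffs = np.diff(rr_ms)
            return {
                "SDNN": float(np.std(rr_ms, ddof=1)),
                "RMSSD": float(np.sqrt(np.mean(diffs ** 2))),
            }

        # Synthetic RR series around 800 ms (75 bpm), for illustration only.
        rng = np.random.default_rng(0)
        rr = 800 + 25 * rng.standard_normal(300)
        print(hrv_time_domain(rr))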

  5. Multithreaded transactions in scientific computing: New versions of a computer program for kinematical calculations of RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Brzuszek, Marcin; Daniluk, Andrzej

    2006-11-01

    Writing a concurrent program can be more difficult than writing a sequential one. The programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads: a transaction is an abstraction that allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which allow the layer coverages during the growth of thin epitaxial films and the corresponding RHEED intensities to be calculated according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable program data to be displayed at run-time. New version program summary. Titles of programs: GROWTHGr, GROWTH06. Catalogue identifier: ADVL_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Catalogue identifier of previous version: ADVL. Does the new version supersede the original program: No. Computer for which the new version is designed and others on which it has been tested: Pentium-based PC. Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT. Programming language used: Object Pascal. Memory required to execute with typical data: more than 1 MB. Number of bits in a word: 64. Number of processors used: 1. No. of lines in distributed program, including test data, etc.: 20 931. Number of bytes in distributed program, including test data, etc.: 1 311 268. Distribution format: tar.gz. Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222. [1

  6. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.

  7. Scientific Computers at the Helsinki University of Technology during the Post Pioneering Stage

    NASA Astrophysics Data System (ADS)

    Nykänen, Panu; Andersin, Hans

    The paper describes the process leading from the pioneering phase, when the university was free to develop and build its own computers, through the period when the university was dependent on cooperation with the local computer companies, to the stage when a bureaucratic state organization took over the power to decide on acquiring computing equipment for the universities. This stage ended in the late 1970s when computing power gradually became a commodity that the individual laboratories and research projects could acquire just like any resource. This development paralleled the situation in many other countries and universities as well. We have chosen the Helsinki University of Technology (TKK) as a case to illustrate this development process, which researchers found very annoying and frustrating at the time.

  8. 34 CFR 365.11 - How is the allotment of Federal funds for State independent living (IL) services computed?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... independent living (IL) services computed? 365.11 Section 365.11 Education Regulations of the Offices of the... the allotment of Federal funds for State independent living (IL) services computed? (a) The allotment of Federal funds for State IL services for each State is computed in accordance with the...

  9. Computer Aided Reference Services in the Academic Library: Experiences in Organizing and Operating an Online Reference Service.

    ERIC Educational Resources Information Center

    Hoover, Ryan E.

    1979-01-01

    Summarizes the development of the Computer-Aided Reference Services (CARS) division of the University of Utah Libraries' reference department. Development, organizational structure, site selection, equipment, management, staffing and training considerations, promotion and marketing, budget and pricing, record keeping, statistics, and evaluation…

  10. Wearable Notification via Dissemination Service in a Pervasive Computing Environment

    DTIC Science & Technology

    2015-09-01

    This report describes an architecture of wearable sensors ...in the context of an Army tactical environment. The architecture is implemented in functional software that integrates the sensor data, performs...environment (PCE), networking, computers, and sensors abound all around humans’ daily activities. The environment that Weiser envisioned and NIST

  11. Merging Libraries and Computer Centers: Manifest Destiny or Manifestly Deranged? An Academic Services Perspective.

    ERIC Educational Resources Information Center

    Neff, Raymond K.

    1985-01-01

    Details trends in information access, services, packaging, dissemination, and networking, service fees, archival storage devices, and electronic information packaging that could lead to complete mergers of academic libraries and computing centers with shared responsibilities. University of California at Berkeley's comprehensive strategy for…

  12. To the Pacific: An Exploration of Computer-Based Reference Services.

    ERIC Educational Resources Information Center

    Cound, William T.

    In contrast to the usual approach in computer-based reference services, which is to go into a specific data base to retrieve citations to material on a specific, narrowly-defined topic, this report demonstrates how such services could be useful in a broad approach to a complex subject, using an investigation of trends in the world aluminum…

  13. Relationship between Pre-Service Music Teachers' Personality and Motivation for Computer-Assisted Instruction

    ERIC Educational Resources Information Center

    Perkmen, Serkan; Cevik, Beste

    2010-01-01

    The main purpose of this study was to examine the relationship between pre-service music teachers' personalities and their motivation for computer-assisted music instruction (CAI). The "Big Five" Model of Personality served as the framework. Participants were 83 pre-service music teachers in Turkey. Correlation analysis revealed that three…

  14. Factors Influencing Pre-Service Science Teachers' Perception of Computer Self-Efficacy

    ERIC Educational Resources Information Center

    Hakverdi, Meral; Gucum, Berna; Korkmaz, Hunkar

    2007-01-01

    This study examined the factors influencing pre-service teachers' perceptions of computer self-efficacy. Participants in the study were 305 pre-service science teachers at a four-year public university in Turkey. Two instruments were used for this study: the Turkish version of the Microcomputer Utilization in Teaching Efficacy Beliefs…

  15. Measuring and Supporting Pre-Service Teachers' Self-Efficacy towards Computers, Teaching, and Technology Integration

    ERIC Educational Resources Information Center

    Killi, Carita; Kauppinen, Merja; Coiro, Julie; Utriainen, Jukka

    2016-01-01

    This paper reports on two studies designed to examine pre-service teachers' self-efficacy beliefs. Study I investigated the measurement properties of a self-efficacy beliefs questionnaire comprising scales for computer self-efficacy, teacher self-efficacy, and self-efficacy towards technology integration. In Study I, 200 pre-service teachers…

  16. Pre-Service Science Teachers' Written Argumentation Qualities: From the Perspectives of Socio- Scientific Issues, Epistemic Belief Levels and Online Discussion Environment

    ERIC Educational Resources Information Center

    Isbilir, Erdinc; Cakiroglu, Jale; Ertepinar, Hamide

    2014-01-01

    This study investigated the relationship between pre-service science teachers' written argumentation levels about socio-scientific issues and epistemic belief levels in an online discussion environment. A mixed-methods approach was used: 30 Turkish pre-service science teachers contributed with their written argumentations to four socio-scientific…

  17. Online data storage service strategy for the CERN computer Centre

    NASA Astrophysics Data System (ADS)

    Cancio, G.; Duellmann, D.; Lamanna, M.; Pace, A.

    2011-12-01

    The Data and Storage Services group at CERN is conducting several service and software development projects to address possible scalability issues, to prepare the integration of upcoming technologies and to anticipate changing access patterns. Particular emphasis is put on: very high performance disk pools for analysis based on XROOTD [1]; lower-latency archive storage using large, cost- and power-effective disk pools; more efficient use of tape resources by aggregation of user data collections on the tape media; and a consolidated system for monitoring and usage trend analysis. This contribution will outline the underlying storage architecture and focus on the key functional and operational advantages, which drive the development. The discussion will include a review of proof-of-concept and prototype studies and propose a plan for the integration of these components in the existing storage infrastructure at CERN.
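
    One of the listed goals, aggregating user data collections before they reach tape, can be sketched very simply; the paths, size threshold, and container format below are assumptions for illustration, not CERN's actual implementation.

        import tarfile
        from pathlib import Path

        def aggregate_collection(files, container: Path, min_size=1 << 30) -> bool:
            """Pack many small user files into one container so the tape drive sees a
            single large sequential object; returns False if the collection is still
            below the target aggregate size (the 1 GiB threshold here is invented)."""
            if sum(f.stat().st_size for f in files) < min_size:
                return False
            with tarfile.open(container, "w") as archive:
                for f in files:
                    archive.add(f, arcname=f.name)
            return True

        # Hypothetical usage with invented spool and archive paths:
        # pending = sorted(Path("/spool/user42").glob("*.root"))
        # aggregate_collection(pending, Path("/archive/user42/collection-0001.tar"))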

  18. Using computers for planning and evaluating nursing in the health care services.

    PubMed

    Emuziene, Vilma

    2009-01-01

    This paper describes how nurses' attitudes towards, use of, and motivation for computer usage are significantly influenced by the area of nursing/health care service. Today most nurses still document patient information in a medical record using pen and paper. Most nursing administrators not currently involved with computer applications in their settings are interested in exploring whether technology could help them with the day-to-day and long-range tasks of planning and evaluating nursing services. The results of this investigation showed that the respondents (nurses), as specialists in nursing informatics, performed their activities well: they had a "positive" attitude towards computers and "good" or "average" computer skills. The nurses' overall computer attitude was influenced by age, sex, and professional qualification. Younger nurses acquire informatics skills while in nursing school and are more accepting of computer advancements. Knowledge about computers differs significantly between nurses who have no computer training and those who have training and use a computer once a week or every day. In the health care services, computers and automated data systems are often used for statistical information (visit information, patient information) and billing information. In the nursing field, automated data systems are often used for statistical information, billing information, vaccination information, patient assessment and patient classification.

  19. Towards Distributed Service Discovery in Pervasive Computing Environments

    DTIC Science & Technology

    2005-01-01

    applications to discover remote services residing on stable networked machines in the wired network. Some of these protocols (e.g. UPnP) can also be... network. The mobility of the nodes was assumed to follow the random-waypoint [26] pattern. We used an application layer packet generation function to generate...Traditional P2P networks derive basic boot-strap support from some trusted hosts that are robust and available. We cannot assume such support in an ad-hoc

  20. The Impact of Misspelled Words on Automated Computer Scoring: A Case Study of Scientific Explanations

    NASA Astrophysics Data System (ADS)

    Ha, Minsu; Nehm, Ross H.

    2016-06-01

    Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a pervasive feature of human-generated text and that despite improvements, spell-check and auto-replace programs continue to be characterized by significant errors. Our study explored four research questions relating to MSW and text-based computer assessments: (1) Do English language learners (ELLs) produce equivalent magnitudes and types of spelling errors as non-ELLs? (2) To what degree do MSW impact concept-specific computer scoring rules? (3) What impact do MSW have on computer scoring accuracy? and (4) Are MSW more likely to impact false-positive or false-negative feedback to students? We found that although ELLs produced twice as many MSW as non-ELLs, MSW were relatively uncommon in our corpora. The MSW in the corpora were found to be important features of the computer scoring models. Although MSW did not significantly or meaningfully impact computer scoring efficacy across nine different computer scoring models, MSW had a greater impact on the scoring algorithms for naïve ideas than key concepts. Linguistic and concept redundancy in student responses explains the weak connection between MSW and scoring accuracy. Lastly, we found that MSW tend to have a greater impact on false-positive feedback. We discuss the implications of these findings for the development of next-generation science assessments.
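
    A minimal sketch of how a scoring pipeline might normalise misspelled jargon before a concept rule fires is shown below; the vocabulary, similarity cutoff, and rule are invented for illustration and are not the ACSS studied in the paper.

        from difflib import get_close_matches

        # Hypothetical concept vocabulary used by a scoring rule for "natural selection".
        VOCAB = {"mutation", "variation", "heritable", "selection", "fitness", "population"}

        def normalize(token: str, cutoff: float = 0.8) -> str:
            """Map a (possibly misspelled) token onto the closest vocabulary term."""
            match = get_close_matches(token.lower(), VOCAB, n=1, cutoff=cutoff)
            return match[0] if match else token.lower()

        def concept_present(response: str) -> bool:
            tokens = [normalize(t.strip(".,;")) for t in response.split()]
            return {"heritable", "variation"}.issubset(tokens)

        print(concept_present("Herritable varation in the population leads to selection"))  # True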

  1. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    NASA Technical Reports Server (NTRS)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters
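
    The constrained optimisation at the heart of the approach can be illustrated with a toy brute-force search over two architecture parameters; the resource costs and performance model below are invented stand-ins for the real scheduling/mapping problem.

        from itertools import product

        # Invented resource costs (logic slices) and throughput model, for illustration only.
        SLICES_PER_LANE = 1200
        SLICES_PER_STAGE = 400
        AVAILABLE_SLICES = 20000

        def throughput(lanes: int, pipeline_stages: int) -> float:
            # More vector lanes give near-linear speedup; deeper pipelines give
            # diminishing returns (a crude stand-in for a real performance model).
            return lanes * (1.0 - 0.5 ** pipeline_stages)

        def resources(lanes: int, pipeline_stages: int) -> int:
            return lanes * SLICES_PER_LANE + pipeline_stages * SLICES_PER_STAGE

        best = max(
            ((l, s) for l, s in product(range(1, 17), range(1, 9))
             if resources(l, s) <= AVAILABLE_SLICES),
            key=lambda cfg: throughput(*cfg),
        )
        print("lanes, stages:", best, "slices used:", resources(*best))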

  2. A Queue Simulation Tool for a High Performance Scientific Computing Center

    NASA Technical Reports Server (NTRS)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high-performance, highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
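
    A minimal discrete-event model of a batch queue with a fixed CPU pool is sketched below; it is an illustration under simplified first-come-first-served assumptions, not the NCCS simulation tool itself.

        import heapq

        def simulate(jobs, total_cpus):
            """Minimal discrete-event batch-queue model.
            jobs: list of (submit_time, cpus, runtime); strict FCFS scheduling.
            Returns each job's wait time in submit order."""
            events = []                        # (completion_time, freed_cpus) heap
            free = total_cpus
            now = 0.0
            waits = []
            for submit, cpus, runtime in sorted(jobs):
                now = max(now, submit)
                # Advance time through completions until enough CPUs are free.
                while free < cpus:
                    t, freed = heapq.heappop(events)
                    now = max(now, t)
                    free += freed
                waits.append(now - submit)
                free -= cpus
                heapq.heappush(events, (now + runtime, cpus))
            return waits

        # Three highly parallel jobs competing for a 512-CPU pool (times in hours).
        print(simulate([(0.0, 256, 24.0), (1.0, 384, 12.0), (2.0, 128, 6.0)], total_cpus=512))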

  3. Distributed management of scientific projects - An analysis of two computer-conferencing experiments at NASA

    NASA Technical Reports Server (NTRS)

    Vallee, J.; Gibbs, B.

    1976-01-01

    Between August 1975 and March 1976, two NASA projects with geographically separated participants used a computer-conferencing system developed by the Institute for the Future for portions of their work. Monthly usage statistics for the system were collected in order to examine the group and individual participation figures for all conferences. The conference transcripts were analysed to derive observations about the use of the medium. In addition to the results of these analyses, the attitudes of users and the major components of the costs of computer conferencing are discussed.

  4. Modelling the Influences of Beliefs on Pre-Service Teachers' Attitudes towards Computer Use

    ERIC Educational Resources Information Center

    Teo, Timothy

    2012-01-01

    The purpose of this study is to examine pre-service teachers' attitudes toward computer use. The impact of five variables (perceived usefulness, perceived ease of use, subjective norm, facilitating conditions, and technological complexity) on attitude towards computer use was assessed. Data were collected from 230 pre-service teachers through…

  5. Computing Services Planning, Downsizing, and Organization at the University of Alberta.

    ERIC Educational Resources Information Center

    Beltrametti, Monica

    1993-01-01

    In a six-month period, the University of Alberta (Canada) campus computing services department formulated a strategic plan, and downsized and reorganized to meet financial constraints and respond to changing technology, especially distributed computing. The new department is organized to react more effectively to trends in technology and user…

  6. Investigating Pre-Service Early Childhood Teachers' Attitudes towards the Computer Based Education in Science Activities

    ERIC Educational Resources Information Center

    Yilmaz, Nursel; Alici, Sule

    2011-01-01

    The purpose of this study was to investigate pre-service early childhood teachers' attitudes towards using Computer Based Education (CBE) while implementing science activities. More specifically, the present study examined the effect of different variables such as gender, year in program, experience in preschool, owning a computer, and the…

  7. High Performance Computing Innovation Service Portal Study (HPC-ISP)

    DTIC Science & Technology

    2009-04-01

    develop technical and business concepts that could help enable these “desktop-only” users to employ more advanced computing solutions in their... a case study of Woodward, a supplier of P&W, “Woodward FST: Software Costs and Finding Experts Are Stalling HPC Adoption”, and a business and... Task 2: Business and Technical Concept Study

  8. Community Coordinated Modeling Center (CCMC): Using innovative tools and services to support worldwide space weather scientific communities and networks

    NASA Astrophysics Data System (ADS)

    Mendoza, A. M.; Bakshi, S.; Berrios, D.; Chulaki, A.; Evans, R. M.; Kuznetsova, M. M.; Lee, H.; MacNeice, P. J.; Maddox, M. M.; Mays, M. L.; Mullinix, R. E.; Ngwira, C. M.; Patel, K.; Pulkkinen, A.; Rastaetter, L.; Shim, J.; Taktakishvili, A.; Zheng, Y.

    2012-12-01

    the general public about the importance and impacts of space weather effects. Although the CCMC is organizationally composed of United States federal agencies, CCMC services are open to members of the international science community, and the CCMC encourages interagency and international collaboration. In this poster, we provide an overview of using Community Coordinated Modeling Center (CCMC) tools and services to support worldwide space weather scientific communities and networks.

  9. SciCADE 95: International conference on scientific computation and differential equations

    SciTech Connect

    1995-12-31

    This report consists of abstracts from the conference. Topics include algorithms, computer codes, and numerical solutions for differential equations. Linear and nonlinear as well as boundary-value and initial-value problems are covered. Various applications of these problems are also included.

  10. The Difficult Process of Scientific Modelling: An Analysis Of Novices' Reasoning During Computer-Based Modelling

    ERIC Educational Resources Information Center

    Sins, Patrick H. M.; Savelsbergh, Elwin R.; van Joolingen, Wouter R.

    2005-01-01

    Although computer modelling is widely advocated as a way to offer students a deeper understanding of complex phenomena, the process of modelling is rather complex itself and needs scaffolding. In order to offer adequate support, a thorough understanding of the reasoning processes students employ and of difficulties they encounter during a…

  11. Conducting Scientific Research on Learning and Health Behavior Change with Computer-Based Health Games

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Lieberman, Debra A.

    2011-01-01

    This article is a guide for researchers interested in assessing the effectiveness of serious computer-based games (or video games, digital games, or electronic games) intended to improve health and health care. It presents a definition of health games, a rationale for their use, an overview of the current state of research, and recommendations for…

  12. Modeling and Computer Simulation of Dynamic Societal, Scientific, and Engineering Systems.

    ERIC Educational Resources Information Center

    D'Angelo, Henry

    A course in modeling and computer simulation of dynamic systems uses three methods to introduce students to these topics. Core studies, the consideration of the theoretical fundamentals of modeling and simulation, and the execution by students of a project are employed. Taught in the Electrical Engineering Department at Michigan Technological…

  13. Steps to Opening Scientific Inquiry: Pre-Service Teachers' Practicum Experiences with a New Support Framework

    NASA Astrophysics Data System (ADS)

    Rees, Carol; Pardo, Richard; Parker, Jennifer

    2013-04-01

    This qualitative multiple-comparative case study investigates (1) The reported experiences and impressions of four pre-service teachers (PTs) on practicum placement in four different classrooms (grades 1-9) where a new Steps to Inquiry (SI) framework was being utilized to support students conducting open inquiry; (2) The relative dispositions of the PTs toward conducting open inquiry, as indicated by their core conceptions regarding science, the purpose of education, effective teaching, and the capacity of students. Findings indicate that (1) although there were differences in the experiences of the four PTs, all four had an opportunity to observe and/or facilitate students conducting open inquiry with the SI framework, and after the practicum, all of them reported that they would like to include open inquiry in their own classrooms in the future; (2) one PT already possessed core conceptions indicative of a favorable disposition toward open inquiry before the placement; another altered her core conceptions substantially toward a favorable disposition during the placement; a third altered her conceptions regarding the capacity of students; and one PT maintained core conceptions indicative of a disposition that was not favorable to open inquiry despite the pronouncements that she would like to conduct open inquiry with students in her own future classroom. Possible reasons for the differences in the responses of the four pre-service teachers to the practicum placement are discussed.

  14. Opportunities and Challenges of Cloud Computing to Improve Health Care Services

    PubMed Central

    2011-01-01

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed. PMID:21937354

  15. Opportunities and challenges of cloud computing to improve health care services.

    PubMed

    Kuo, Alex Mu-Hsing

    2011-09-21

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed.

  16. Breadth of Scientific Activities and Network Station Specifications in the International GPS Service (IGS)

    NASA Technical Reports Server (NTRS)

    Moore, A. W.; Neilan, R. E.; Springer, T. A.; Reigber, Ch.

    2000-01-01

    A strong multipurpose aspect of the International GPS Service (IGS) is revealed by a glance at the titles of current projects and working groups within the IGS: IGS/BIPM Time Transfer Project; Ionosphere Working Group; Troposphere Working Group; International GLONASS Experiment; Working Group on Low-Earth Orbiter Missions; and Tide Gauges, CGPS, and the IGS. The IGS network infrastructure, in large part originally commissioned for geodynamical investigations, has proved to be a valuable asset in developing application-oriented subnetworks whose requirements overlap the characteristics of existing IGS stations and future station upgrades. Issues encountered thus far in the development of multipurpose or multitechnique IGS projects as well as future possibilities will be reviewed.

  17. Scientific Research on a Barrier Island: A Rice University In-service K-12 Summer Course

    NASA Astrophysics Data System (ADS)

    Wallace, D. J.; Sawyer, D. S.

    2011-12-01

    In July 2011, a Rice University Summer course consisting of 21 in-service K-12 science teachers conducted fieldwork on Galveston Island, Texas. The goal of this class was to better prepare local teachers to teach the Texas state standards (TEKS) in addition to gaining valuable research, fieldwork, and technology experience. Participant groups developed independent research projects aimed at investigating hurricane impacts, barrier island erosion, nearshore oceanographic conditions, coastal environments, and natural versus anthropogenic dunes. These projects were carried out through the collection and use of ArcGIS, stratigraphical, sedimentological, and geophysical data comprised of satellite/aerial photography, elevation data, sediment push cores, and Ground Penetrating Radar (GPR). Participants analyzed Light Detection and Ranging (LIDAR) elevation data and several years of Galveston Island imagery, in order to locate appropriate areas to collect cores and GPR lines. Sediment push cores up to ~2 meters in length were collected, and they provided a valuable stratigraphic framework. Participants analyzed core sections for grain size and color variations, mollusc shells, and organic material to make environmental interpretations. These cores revealed hurricane washover beds, dune stratigraphy, and shoreline facies. GPR profiles were collected along the central part of Galveston Island, and the teachers interpreted and characterized the subsurface to ~2 meters depth. Several coastal geologic features, such as beach ridges, swales, and barrier island seaward dipping reflectors, were imaged clearly. Furthermore, sediment cores were collected along many GPR lines in order to verify subsurface interpretations. To seamlessly record and share data in the field, 3G enabled iPads were incorporated into the course. Participants used these devices to take field notes, pictures, and videos, in addition to having access to LIDAR and satellite imagery processed on campus. Merging

  18. Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts

    SciTech Connect

    1997-12-31

    This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.

  19. New modalities for scientific engagement in Africa - the case for computational physics

    NASA Astrophysics Data System (ADS)

    Chetty, N.

    2011-09-01

    Computational physics as a mode of studying the mathematical and physical sciences has grown worldwide over the past two decades, but this trend has yet to fully develop in Africa. The essential ingredients are there for this to happen: increasing internet connectivity, cheaper computing resources, and the widespread availability of open source and freeware. The missing ingredients centre on intellectual isolation and low levels of quality international collaboration. Low levels of research funding from local governments remain a critical issue. This paper motivates the importance of developing computational physics at the undergraduate, graduate, and research levels and suggests how this may be achieved within the African context. It is argued that students develop a more intuitive feel for the mathematical and physical sciences, that they learn useful, transferable skills that make graduates sought after in industrial and commercial environments, and that such graduates are better prepared to tackle research problems at the masters and doctoral levels. At the research level, the case of the African School Series on Electronic Structure Methods and Applications (ASESMA) is presented as a new multi-national modality for engaging with African scientists. There are many novel aspects to this School series, which are discussed.

  20. Comparison of scientific computing platforms for MCNP4A Monte Carlo calculations

    SciTech Connect

    Hendricks, J.S.; Brockhoff, R.C. (Applied Theoretical Physics Division)

    1994-04-01

    The performance of seven computer platforms is evaluated with the widely used and internationally available MCNP4A Monte Carlo radiation transport code. All results are reproducible and are presented in such a way as to enable comparison with computer platforms not in the study. The authors observed that the HP/9000-735 workstation runs MCNP 50% faster than the Cray YMP 8/64. Compared with the Cray YMP 8/64, the IBM RS/6000-560 is 68% as fast, the Sun Sparc10 is 66% as fast, the Silicon Graphics ONYX is 90% as fast, the Gateway 2000 model 4DX2-66V personal computer is 27% as fast, and the Sun Sparc2 is 24% as fast. In addition to comparing the timing performance of the seven platforms, the authors observe that changes in compilers and software over the past 2 yr have resulted in only modest performance improvements; that hardware improvements have enhanced performance by less than a factor of approximately 3; that timing studies are very problem dependent; and that MCNP4A runs about as fast as MCNP4.
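
    The "percent as fast" figures above are simple runtime ratios against the reference machine; the snippet below shows the arithmetic with invented runtimes (they are not the paper's measurements).

    ```python
    # Illustrative arithmetic only: runtimes are invented, not the measured MCNP4A data.
    reference_runtime = 100.0                   # assumed reference-machine runtime (minutes)
    runtimes = {"Workstation A": 66.7,          # a machine ~50% faster than the reference
                "Workstation B": 147.0,
                "Workstation C": 417.0}

    for machine, t in runtimes.items():
        print(f"{machine}: {reference_runtime / t:.0%} as fast as the reference")
    # prints roughly 150%, 68%, and 24% respectively
    ```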

  1. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.
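
    As a hedged sketch of what the described Python client interaction might look like, the snippet below submits a job over HTTP and polls its status; the portal URL, endpoint paths, field names, and token handling are hypothetical placeholders, not NESSSI's actual API.

    ```python
    # Hypothetical client sketch; URL, endpoints, and field names are placeholders.
    import time
    import requests

    PORTAL = "https://example-nvo-portal.org/nesssi"    # placeholder base URL

    def submit_and_wait(service, params, token, poll_s=15):
        """Submit a job to a service endpoint and poll until it finishes (illustrative)."""
        headers = {"Authorization": f"Bearer {token}"}   # assumed auth scheme
        r = requests.post(f"{PORTAL}/{service}/submit", json=params,
                          headers=headers, timeout=30)
        r.raise_for_status()
        job_id = r.json()["job_id"]

        while True:
            status = requests.get(f"{PORTAL}/{service}/status/{job_id}",
                                  headers=headers, timeout=30).json()
            if status["state"] in ("DONE", "ERROR"):
                return status
            time.sleep(poll_s)

    # Example call (placeholder parameters):
    # result = submit_and_wait("mosaic", {"ra": 210.8, "dec": 54.3, "size_deg": 0.5},
    #                          token="<user-token>")
    ```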

  2. Investigation of the computer experiences and attitudes of pre-service mathematics teachers: new evidence from Turkey.

    PubMed

    Birgin, Osman; Catlioğlu, Hakan; Gürbüz, Ramazan; Aydin, Serhat

    2010-10-01

    This study aimed to investigate the experiences of pre-service mathematics (PSM) teachers with computers and their attitudes toward them. The Computer Attitude Scale, Computer Competency Survey, and Computer Use Information Form were administered to 180 Turkish PSM teachers. Results revealed that most PSM teachers used computers at home and at Internet cafes, and that their competency was generally intermediate and upper level. The study concludes that PSM teachers' attitudes about computers differ according to their years of study, computer ownership, level of computer competency, frequency of computer use, computer experience, and whether they had attended a computer-aided instruction course. However, computer attitudes were not affected by gender.

  3. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    SciTech Connect

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  4. An Analysis on the Effect of Computer Self-Efficacy over Scientific Research Self-Efficacy and Information Literacy Self-Efficacy

    ERIC Educational Resources Information Center

    Tuncer, Murat

    2013-01-01

    The present research investigates reciprocal relations among computer self-efficacy, scientific research self-efficacy, and information literacy self-efficacy. Research findings demonstrate that, according to standardized regression coefficients, computer self-efficacy has a positive effect on information literacy self-efficacy. Likewise, it has been detected…

  5. The Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    NASA Technical Reports Server (NTRS)

    Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. Historically, the lack of a programming standard for using directives and rather limited scalability have slowed the take-up of this programming model. Significant progress has been made in hardware and software technologies; as a result, the performance of parallel programs with compiler directives has also improved. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich) to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis carried out by the toolkit. We also discuss the application of the toolkit to the NAS Parallel Benchmarks and a number of real-world application codes. This work demonstrates not only the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message-passing and directive-based parallelizations.

  6. Development of high performance scientific components for interoperability of computing packages

    SciTech Connect

    Gulabani, Teena Pratap

    2008-01-01

    Three major high-performance quantum chemistry computational packages, NWChem, GAMESS, and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each package. Developing a chemistry algorithm is hard and time consuming; integrating large quantum chemistry packages allows resource sharing and thus avoids reinventing the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  7. Industrial information database service by personal computer network 'Saitamaken Industrial Information System'

    NASA Astrophysics Data System (ADS)

    Sugahara, Keiji

    The Saitamaken Industrial Information System provides online database services that do not rely on computers for the whole operation, but instead use computers, optical disk files, or facsimiles for certain operations as appropriate. It provides information through various outputs: image information is sent from optical disk files to facsimiles, while other information is provided from computers to terminals as well as facsimiles. With computers as its core, the system enables integrated operations. The terminal-side system was developed separately, with functions such as turnkey-style operation, downloading of statistical information, and access to the latest menu.

  8. Consumer Satisfaction with Telerehabilitation Service Provision of Alternative Computer Access and Augmentative and Alternative Communication.

    PubMed

    Lopresti, Edmund F; Jinks, Andrew; Simpson, Richard C

    2015-01-01

    Telerehabilitation (TR) services for assistive technology evaluation and training have the potential to reduce travel demands for consumers and assistive technology professionals while allowing evaluation in more familiar, salient environments for the consumer. Sixty-five consumers received TR services for augmentative and alternative communication or alternative computer access, and consumer satisfaction was compared with twenty-eight consumers who received exclusively in-person services. TR recipients rated their TR services at a median of 6 on a 6-point Likert scale TR satisfaction questionnaire, although individual responses did indicate room for improvement in the technology. Overall satisfaction with AT services was rated highly by both in-person (100% satisfaction) and TR (99% satisfaction) service recipients.

  9. Consumer Satisfaction with Telerehabilitation Service Provision of Alternative Computer Access and Augmentative and Alternative Communication

    PubMed Central

    LOPRESTI, EDMUND F.; JINKS, ANDREW; SIMPSON, RICHARD C.

    2015-01-01

    Telerehabilitation (TR) services for assistive technology evaluation and training have the potential to reduce travel demands for consumers and assistive technology professionals while allowing evaluation in more familiar, salient environments for the consumer. Sixty-five consumers received TR services for augmentative and alternative communication or alternative computer access, and consumer satisfaction was compared with twenty-eight consumers who received exclusively in-person services. TR recipients rated their TR services at a median of 6 on a 6-point Likert scale TR satisfaction questionnaire, although individual responses did indicate room for improvement in the technology. Overall satisfaction with AT services was rated highly by both in-person (100% satisfaction) and TR (99% satisfaction) service recipients. PMID:27563382

  10. UIMX: A User Interface Management System For Scientific Computing With X Windows

    NASA Astrophysics Data System (ADS)

    Foody, Michael

    1989-09-01

    Applications with iconic user interfaces (for example, interfaces with pulldown menus, radio buttons, and scroll bars), such as those found on Apple's Macintosh computer and the IBM PC under Microsoft's Presentation Manager, have become very popular, and for good reason. They are much easier to use than applications with traditional keyboard-oriented interfaces, so training costs are much lower and just about anyone can use them. They are standardized between applications, so once you learn one application you are well along the way to learning another. Using one application reinforces the interface elements common to all of them, and, as a result, you remember how to use them longer. Finally, for the developer, support costs can be much lower because of this ease of use.

  11. Leveraging Data Intensive Computing to Support Automated Event Services

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Freeman, Shawn M.; Kuo, Kwo-Sen

    2012-01-01

    A large portion of Earth Science investigations is phenomenon- or event-based, such as the studies of Rossby waves, mesoscale convective systems, and tropical cyclones. However, except for a few high-impact phenomena, e.g. tropical cyclones, comprehensive records are absent for the occurrences or events of these phenomena. Phenomenon-based studies therefore often focus on a few prominent cases while the lesser ones are overlooked. Without an automated means to gather the events, comprehensive investigation of a phenomenon is at least time-consuming if not impossible. An Earth Science event (ES event) is defined here as an episode of an Earth Science phenomenon. A cumulus cloud, a thunderstorm shower, a rogue wave, a tornado, an earthquake, a tsunami, a hurricane, or an El Niño is each an episode of a named ES phenomenon, and, from the small and insignificant to the large and potent, all are examples of ES events. An ES event has a finite duration and an associated geolocation as a function of time; it is therefore an entity in four-dimensional (4D) spatiotemporal space. The interests of Earth scientists typically rivet on Earth Science phenomena with potential to cause massive economic disruption or loss of life, but broader scientific curiosity also drives the study of phenomena that pose no immediate danger. We generally gain understanding of a given phenomenon by observing and studying individual events - usually beginning by identifying the occurrences of these events. Once representative events are identified or found, we must locate associated observed or simulated data prior to commencing analysis and concerted studies of the phenomenon. Knowledge concerning the phenomenon can accumulate only after analysis has started. However, except for a few high-impact phenomena, such as tropical cyclones and tornadoes, finding events and locating associated data currently may take a prohibitive amount of time and effort on the part of an individual investigator. And
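
    As a minimal, hypothetical illustration (not the authors' data model) of treating an ES event as an entity in 4D spatiotemporal space, the sketch below pairs a finite duration with a time-indexed geolocation track; the example values are invented.

    ```python
    # Minimal illustration only: an ES event as a finite duration plus a geolocation track.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Tuple

    @dataclass
    class ESEvent:
        phenomenon: str                                   # e.g. "tropical cyclone"
        track: List[Tuple[datetime, float, float]] = field(default_factory=list)
        # each entry: (time, latitude, longitude)

        @property
        def duration(self):
            return self.track[-1][0] - self.track[0][0]

    storm = ESEvent("tropical cyclone",
                    [(datetime(2005, 8, 23, 18), 23.1, -75.1),
                     (datetime(2005, 8, 29, 12), 29.5, -89.6)])
    print(storm.duration)   # 5 days, 18:00:00
    ```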

  12. Three-dimensional dynamics of scientific balloon systems in response to sudden gust loadings. [including a computer program user manual

    NASA Technical Reports Server (NTRS)

    Dorsey, D. R., Jr.

    1975-01-01

    A mathematical model was developed of the three-dimensional dynamics of a high-altitude scientific research balloon system perturbed from its equilibrium configuration by an arbitrary gust loading. The platform is modelled as a system of four coupled pendula, and the equations of motion were developed in the Lagrangian formalism assuming a small-angle approximation. Three-dimensional pendulation, torsion, and precessional motion due to Coriolis forces are considered. Aerodynamic and viscous damping effects on the pendulatory and torsional motions are included. A general model of the gust field incident upon the balloon system was developed. The digital computer simulation program is described, and a guide to its use is given.
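
    For orientation only, the linearized relation below shows the form a single damped pendulum takes in the small-angle limit with a generalized gust forcing term; the report's actual model couples four such pendula in three dimensions, and the symbols here are generic, not the report's notation.

    ```latex
    % Illustration only: one planar pendulum in the small-angle limit,
    % not the report's coupled four-pendulum system; symbols are generic.
    L = \tfrac{1}{2} m \ell^{2} \dot{\theta}^{2} - \tfrac{1}{2} m g \ell\, \theta^{2}
    \quad (\sin\theta \approx \theta)
    \qquad\Longrightarrow\qquad
    m \ell^{2} \ddot{\theta} + c\, \dot{\theta} + m g \ell\, \theta = Q_{\mathrm{gust}}(t)
    ```

    Here c is a viscous damping coefficient and Q_gust(t) the generalized force produced by the gust field.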

  13. Computer simulation and performance assessment of the packet-data service of the Aeronautical Mobile Satellite Service (AMSS)

    NASA Technical Reports Server (NTRS)

    Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory

    1995-01-01

    The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARP's) and the associated Guidance Material and submitted these documents to ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort, encompassed an extensive, multi-national SARP's validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARP's via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.

  14. A Computational Unification of Scientific Law: Spelling out a Universal Semantics for Physical Reality

    NASA Astrophysics Data System (ADS)

    Marcer, Peter J.; Rowlands, Peter

    2013-09-01

    The principal criteria Cn (n = 1 to 23) and grammatical production rules are set out for a universal computational rewrite language spelling out a semantic description of an emergent, self-organizing architecture for the cosmos. These language productions already predicate: (1) Einstein's conservation law of energy, momentum and mass and, subsequently, (2) with respect to gauge invariant relativistic space time (both Lorentz special & Einstein general); (3) Standard Model elementary particle physics; (4) the periodic table of the elements & chemical valence; and (5) the molecular biological basis of the DNA / RNA genetic code; so enabling the Cybernetic Machine Specialist Group's Mission Statement premise: (6) that natural semantic language thinking at the higher level of the self-organized emergent chemical molecular complexity of the human brain (only surpassed by that of the cosmos itself!) would be realized (7) by this same universal semantic language via (8) an architecture of a conscious human brain/mind and self which, it predicates, consists of its neural / glia and microtubule substrates respectively, so as to endow it with (9) the intelligent semantic capability to specify, symbolize, spell out and understand the cosmos that conceived it, and (10) provide a quantum physical explanation of consciousness and of how (11) the dichotomy between first person subjectivity and third person objectivity, or 'hard problem', is resolved.

  15. An Architecture and Supporting Environment of Service-Oriented Computing Based-On Context Awareness

    NASA Astrophysics Data System (ADS)

    Ma, Tianxiao; Wu, Gang; Huang, Jun

    Service-oriented computing (SOC) is emerging as an important computing paradigm for the near future. Based on context awareness, this paper proposes an architecture for SOC. An ontology-based definition of context in open environments such as the Internet is given. The paper also proposes a supporting environment for context-aware SOC, which focuses on on-demand service composition and evolving context awareness. Finally, a reference implementation of the supporting environment based on OSGi [11] is given.

  16. Secure encapsulation and publication of biological services in the cloud computing environment.

    PubMed

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication of bioinformatics software products based on web services are presented, and the basic functions of biological information processing are realized in the cloud computing environment. In the encapsulation phase, the workflow and functions of the bioinformatics software are examined, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. Functions such as remote user job submission and job status query are implemented using the GRAM components, and the bioinformatics software services are published to remote users. Finally, a basic prototype system of the biological cloud is achieved.

  17. Enabling Water Quality Management Decision Support and Public Outreach Using Cloud-Computing Services

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.; Scanlon, B. R.; Uhlman, K.

    2013-12-01

    Watershed management is a participatory process that requires collaboration among multiple groups of people. Environmental decision support systems (EDSS) have long been used to support such co-management and co-learning processes in watershed management. However, implementing and maintaining EDSS in-house can be a significant burden to many water agencies because of budget, technical, and policy constraints. Drawing on experiences from several web-GIS environmental management projects in Texas, we showcase how cloud-computing services can help shift the design and hosting of EDSS from traditional client-server platforms to simple clients of cloud-computing services.

  18. Model-Driven Development for scientific computing. An upgrade of the RHEEDGr program

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2009-11-01

    Model-Driven Engineering (MDE) is the software engineering discipline that considers models the most important element for software development and for the maintenance and evolution of software through model transformation. Model-Driven Architecture (MDA) is the approach for software development under the Model-Driven Engineering framework. This paper surveys the core MDA technology that was used to upgrade the RHEEDGR program to the C++0x language standard. New version program summary - Program title: RHEEDGR-09; Catalogue identifier: ADUY_v3_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v3_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 21 263; No. of bytes in distributed program, including test data, etc.: 1 266 982; Distribution format: tar.gz; Programming language: Code Gear C++ Builder; Computer: Intel Core Duo-based PC; Operating system: Windows XP, Vista, 7; RAM: more than 1 MB; Classification: 4.3, 7.2, 6.2, 8, 14; Does the new version supersede the previous version?: Yes. Nature of problem: Reflection High-Energy Electron Diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by Molecular Beam Epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film. Solution method: The calculations are based on the use of a dynamical diffraction theory in

  19. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    PubMed

    Liao, Wen-Hwa; Qiu, Wan-Li

    2016-01-01

    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture.
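
    As a hedged illustration of the analytic hierarchy process step described above, the sketch below derives priority weights from a pairwise-comparison matrix via the principal eigenvector and checks consistency; the criteria names echo the factors mentioned in the abstract, but the judgment values are invented, not the paper's survey data.

    ```python
    # Minimal AHP sketch (illustrative judgments, not the paper's data): derive priority
    # weights from a pairwise-comparison matrix via the principal eigenvector.
    import numpy as np

    criteria = ["cost effectiveness", "software design", "system architecture"]
    # A[i, j] = how much more important criterion i is than criterion j (Saaty's 1-9 scale)
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()

    # Consistency ratio: CI = (lambda_max - n) / (n - 1), RI = 0.58 for a 3x3 matrix
    ci = (eigvals[k].real - len(A)) / (len(A) - 1)
    print(dict(zip(criteria, weights.round(3))), "CR =", round(ci / 0.58, 3))
    ```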

  20. A Framework for Safe Composition of Heterogeneous SOA Services in a Pervasive Computing Environment with Resource Constraints

    ERIC Educational Resources Information Center

    Reyes Alamo, Jose M.

    2010-01-01

    The Service Oriented Computing (SOC) paradigm, defines services as software artifacts whose implementations are separated from their specifications. Application developers rely on services to simplify the design, reduce the development time and cost. Within the SOC paradigm, different Service Oriented Architectures (SOAs) have been developed.…

  1. ODI - Portal, Pipeline, and Archive (ODI-PPA): a web-based astronomical compute archive, visualization, and analysis service

    NASA Astrophysics Data System (ADS)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Harbeck, Daniel R.; Boroson, Todd; Liu, Wilson; Kotulla, Ralf; Shaw, Richard; Henschel, Robert; Rajagopal, Jayadev; Stobie, Elizabeth; Knezek, Patricia; Martin, R. Pierre; Archbold, Kevin

    2014-07-01

    The One Degree Imager-Portal, Pipeline, and Archive (ODI-PPA) is a web science gateway that provides astronomers a modern web interface that acts as a single point of access to their data, and rich computational and visualization capabilities. Its goal is to support scientists in handling complex data sets, and to enhance WIYN Observatory's scientific productivity beyond data acquisition on its 3.5m telescope. ODI-PPA is designed, with periodic user feedback, to be a compute archive that has built-in frameworks including: (1) Collections that allow an astronomer to create logical collations of data products intended for publication, further research, instructional purposes, or to execute data processing tasks (2) Image Explorer and Source Explorer, which together enable real-time interactive visual analysis of massive astronomical data products within an HTML5 capable web browser, and overlaid standard catalog and Source Extractor-generated source markers (3) Workflow framework which enables rapid integration of data processing pipelines on an associated compute cluster and users to request such pipelines to be executed on their data via custom user interfaces. ODI-PPA is made up of several light-weight services connected by a message bus; the web portal built using Twitter/Bootstrap, AngularJS and jQuery JavaScript libraries, and backend services written in PHP (using the Zend framework) and Python; it leverages supercomputing and storage resources at Indiana University. ODI-PPA is designed to be reconfigurable for use in other science domains with large and complex datasets, including an ongoing offshoot project for electron microscopy data.

  2. Investigation of Pre-Service Physical Education Teachers' Attitudes Towards Computer Technologies (Case of Turkey)

    ERIC Educational Resources Information Center

    Can, Suleyman

    2015-01-01

    Elicitation of pre-service physical education teachers' attitudes towards computer technologies seems to be of great importance to satisfy the conditions to be met for the conscious and effective use of the technologies required by the age to be used in educational settings. In this respect, the purpose of the present study is to investigate…

  3. Deploying the Win TR-20 computational engine as a web service

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Despite its simplicity and limitations, the runoff curve number method remains a widely-used hydrologic modeling tool, and its use through the USDA Natural Resources Conservation Service (NRCS) computer application WinTR-20 is expected to continue for the foreseeable future. To facilitate timely up...
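
    For context, the runoff curve number relation that the method rests on can be written in a few lines; the sketch below uses the standard textbook form in English units and is not taken from the WinTR-20 source code or web service.

    ```python
    # Hedged sketch of the standard NRCS curve number relation (English units).
    def runoff_depth(precip_in, curve_number, ia_ratio=0.2):
        """Direct runoff Q (inches) from storm rainfall P (inches) and a curve number."""
        s = 1000.0 / curve_number - 10.0        # potential maximum retention
        ia = ia_ratio * s                       # initial abstraction
        if precip_in <= ia:
            return 0.0
        return (precip_in - ia) ** 2 / (precip_in - ia + s)

    print(runoff_depth(4.0, 80))   # roughly 2.0 inches of runoff for this example storm
    ```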

  4. Computer Technology in Clinical Psychology Services for People with Mental Retardation: A Review.

    ERIC Educational Resources Information Center

    Davies, Sara; Hastings, Richard P.

    2003-01-01

    Review of the literature on computer technology in clinical psychology services for people with mental retardation is organized around stages of a scientist-practitioner working model: assessment, formulation, and intervention. Examples of technologies to facilitate work at each stage are given. Practical difficulties with implementation of…

  5. Changes in Pre-Service Teachers' Algebraic Misconceptions by Using Computer-Assisted Instruction

    ERIC Educational Resources Information Center

    Lin, ByCheng-Yao; Ko, Yi-Yin; Kuo, Yu-Chun

    2014-01-01

    In order to carry out current reforms regarding algebra and technology in elementary school mathematics successfully, pre-service elementary mathematics teachers must be equipped with adequate understandings of algebraic concepts and self-confidence in using computers for their future teaching. This paper examines the differences in preservice…

  6. Customer Service: What I Learned When I Bought My New Computer

    ERIC Educational Resources Information Center

    Neugebauer, Roger

    2009-01-01

    In this article, the author relates that similar to the time he bought his new computer, he had the opportunity to experience poor customer service when he and his wife signed their child up for a preschool program. They learned that the staff at the preschool didn't want parents looking over their shoulders and questioning their techniques. He…

  7. The ALL-OUT Library; A Design for Computer-Powered, Multidimensional Services.

    ERIC Educational Resources Information Center

    Sleeth, Jim; LaRue, James

    1983-01-01

    Preliminary description of design of electronic library and home information delivery system highlights potentials of personal computer interface program (applying for service, assuring that users are valid, checking for measures, searching, locating titles) and incorporation of concepts used in other information systems (security checks,…

  8. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    14 CFR 13.85, Aeronautics and Space, Volume 1 (2014 edition): Filing, service and computation of time. FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION; PROCEDURAL RULES; INVESTIGATIVE AND ENFORCEMENT PROCEDURES; Orders of Compliance Under the Hazardous...

  9. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    14 CFR 13.85, Aeronautics and Space, Volume 1 (2010 edition): Filing, service and computation of time. FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION; PROCEDURAL RULES; INVESTIGATIVE AND ENFORCEMENT PROCEDURES; Orders of Compliance Under the Hazardous...

  10. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    14 CFR 13.85, Aeronautics and Space, Volume 1 (2013 edition): Filing, service and computation of time. FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION; PROCEDURAL RULES; INVESTIGATIVE AND ENFORCEMENT PROCEDURES; Orders of Compliance Under the Hazardous...

  11. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    14 CFR 13.85, Aeronautics and Space, Volume 1 (2012 edition): Filing, service and computation of time. FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION; PROCEDURAL RULES; INVESTIGATIVE AND ENFORCEMENT PROCEDURES; Orders of Compliance Under the Hazardous...

  12. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    14 CFR 13.85, Aeronautics and Space, Volume 1 (2011 edition): Filing, service and computation of time. FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION; PROCEDURAL RULES; INVESTIGATIVE AND ENFORCEMENT PROCEDURES; Orders of Compliance Under the Hazardous...

  13. Pre-Service Teachers' Opinions about the Course on Scientific Research Methods and the Levels of Knowledge and Skills They Gained in This Course

    ERIC Educational Resources Information Center

    Tosun, Cemal

    2014-01-01

    The purpose of this study was to ascertain whether the pre-service teachers taking the Scientific Research Methods course attained basic research knowledge and skills. In addition, the impact of the process, which is followed while implementing the course, on the students' anxiety and attitude during the course is examined. Moreover, the study…

  14. Turkish Pre-Service Science and Mathematics Teachers' Computer Related Self-Efficacies, Attitudes, and the Relationship between These Variables

    ERIC Educational Resources Information Center

    Pamuk, Savas; Peker, Deniz

    2009-01-01

    The purpose of this study was to investigate Turkish pre-service science and mathematics teachers' computer self-efficacies (CSEs) and computer attitude (CA) considering gender, year in program, and computer ownership as independent variables. Additionally the study aimed to examine the relationship between CSE and CA. Computer Self-efficacy Scale…

  15. A Computer Services Program for Residents of a Continuing Care Retirement Community: Needs Assessment and Program Design

    ERIC Educational Resources Information Center

    Grad, Jane; Berdes, Celia

    2005-01-01

    Preparatory to establishing a computer services program for residents, Presbyterian Homes, a multi-campus continuing care retirement community, conducted an assessment of residents' computer needs, ownership, and usage. Based on the results of the needs assessment, computer resource rooms were established at each facility, with computer hardware…

  16. On the use of brain-computer interfaces outside scientific laboratories toward an application in domotic environments.

    PubMed

    Babiloni, F; Cincotti, F; Marciani, M; Salinari, S; Astolfi, L; Aloise, F; De Vico Fallani, F; Mattia, D

    2009-01-01

    Brain-computer interface (BCI) applications were initially designed to provide final users with special capabilities, like writing letters on a screen, to communicate with others without muscular effort. In these last few years, the BCI scientific community has been interested in bringing BCI applications outside the scientific laboratories, initially to provide useful applications in everyday life and in future in more complex environments, such as space. Recently, we implemented a control of a domestic environment realized with BCI applications. In the present chapter, we analyze the methodological approach employed to allow the interaction between subjects and domestic devices by use of noninvasive EEG recordings. In particular, we analyze whether the cortical activity estimated from noninvasive EEG recordings could be useful in detecting mental states related to imagined limb movements. We estimate cortical activity from high-resolution EEG recordings in a group of healthy subjects by using realistic head models. Such cortical activity was estimated in a region of interest associated with the subjects' Brodmann areas by use of depth-weighted minimum norm solutions. Results show that the use of the estimated cortical activity instead of unprocessed EEG improves the recognition of the mental states associated with limb-movement imagination in a group of healthy subjects. The BCI methodology here presented has been used in a group of disabled patients to give them suitable control of several electronic devices disposed in a three-room environment devoted to neurorehabilitation. Four of six patients were able to control several electronic devices in the domotic context with the BCI system, with a percentage of correct responses averaging over 63%.
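
    As a generic, illustrative sketch of a depth-weighted minimum-norm estimate of source activity (dimensions, weighting, and regularization below are invented, and this is not the chapter's implementation), the inverse operator can be formed as follows:

    ```python
    # Illustrative-only numpy sketch of a depth-weighted minimum-norm estimate.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_sources = 32, 200
    L = rng.standard_normal((n_sensors, n_sources))      # lead-field matrix (invented)
    y = rng.standard_normal(n_sensors)                    # one EEG sample (invented)
    lam = 0.1                                             # regularization parameter

    # Depth weighting: scale each source by the inverse norm of its lead-field column,
    # so deep (weak) sources are not systematically suppressed.
    w = 1.0 / np.linalg.norm(L, axis=0)
    R = np.diag(w ** 2)                                   # source covariance prior

    # Weighted minimum-norm inverse operator: x = R L^T (L R L^T + lam * I)^-1 y
    K = R @ L.T @ np.linalg.inv(L @ R @ L.T + lam * np.eye(n_sensors))
    x_hat = K @ y
    print(x_hat.shape)      # (200,) estimated source amplitudes
    ```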

  17. A computer services program for residents of a continuing care retirement community: needs assessment and program design.

    PubMed

    Grad, Jane; Berdes, Celia

    2005-01-01

    Preparatory to establishing a computer services program for residents, Presbyterian Homes, a multi-campus continuing care retirement community, conducted an assessment of residents' computer needs, ownership, and usage. Based on the results of the needs assessment, computer resource rooms were established at each facility, with computer hardware and software adapted for the use of seniors. We also deliver adapted computer education for residents, including small-group training in basic software skills; classes on software for computer accessibility; and workshops on themes motivating computer use. Ongoing evaluation shows that half of residents make use of computer resources, and that their computer skills have opened the door to other educational opportunities.

  18. Intelligence in Scientific Computing

    DTIC Science & Technology

    1988-11-01

    motion of the planet Pluto, and by implication the dynamics of the Solar System, is chaotic [24]. The stability question was settled using the... divergence of nearby Pluto trajectories over 400 million years. This data is taken from an 845-million-year integration performed with the Orrery...

  19. Handling the Diversity in the Coming Flood of InSAR Data with the InSAR Scientific Computing Environment

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Gurrola, E. M.; Sacco, G. F.; Agram, P. S.; Lavalle, M.; Zebker, H. A.

    2014-12-01

    The NASA ESTO-developed InSAR Scientific Computing Environment (ISCE) provides a computing framework for geodetic image processing for InSAR sensors that is modular, flexible, and extensible, enabling scientists to reduce measurements directly from a diverse array of radar satellites and aircraft to new geophysical products. ISCE can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. This is accomplished through rigorous componentization of processing codes, abstraction and generalization of data models, and an xml-based input interface with multi-level prioritized control of the component configurations depending on the science processing context. The proposed NASA-ISRO SAR (NISAR) Mission would deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystems. ISCE is planned to become a key element in processing projected NISAR data into higher-level data products, enabling a new class of analyses that take greater advantage of the long time and large spatial scales of these new data than current approaches. NISAR would be but one mission in a constellation of radar satellites in the future delivering such data. ISCE has been incorporated into two prototype cloud-based systems that have demonstrated its elasticity in addressing larger data processing problems in a "production" context and its ability to be controlled by individual science users on the cloud for large data problems.
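
    ISCE's actual configuration mechanism is xml-based with multi-level priorities; as a loose illustration of the prioritized-override idea only (the keys and values below are invented, not ISCE parameters), later sources simply win over earlier ones:

    ```python
    # Toy illustration of multi-level prioritized configuration (not ISCE's actual API):
    # sources are merged in increasing priority order, so user settings override defaults.
    def merge_config(*sources):
        """Merge dictionaries in increasing priority order."""
        merged = {}
        for src in sources:            # lowest priority first
            merged.update(src)
        return merged

    defaults      = {"posting": 30, "filter_strength": 0.5, "unwrapper": "snaphu"}
    mission_level = {"posting": 20}
    user_level    = {"filter_strength": 0.7}

    print(merge_config(defaults, mission_level, user_level))
    # {'posting': 20, 'filter_strength': 0.7, 'unwrapper': 'snaphu'}
    ```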

  20. TimeSet: A computer program that accesses five atomic time services on two continents

    NASA Technical Reports Server (NTRS)

    Petrakis, P. L.

    1993-01-01

    TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings .01 second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.
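
    The drift-learning idea behind such a clock correction can be illustrated with simple arithmetic; the sketch below estimates a linear drift rate from two reference checks and applies it going forward, with all numbers invented (RighTime's actual algorithm is not reproduced here).

    ```python
    # Minimal sketch of learning a clock's linear drift and correcting it; illustrative only.
    def drift_rate(t1, offset1, t2, offset2):
        """Seconds of drift per second of elapsed time, from two reference checks."""
        return (offset2 - offset1) / (t2 - t1)

    def corrected(raw_clock, last_check, last_offset, rate):
        """Apply the last measured offset plus accumulated drift since that check."""
        return raw_clock + last_offset + rate * (raw_clock - last_check)

    # Two calls to a time service 86,400 s apart found the clock 0.9 s then 1.5 s slow:
    rate = drift_rate(0.0, 0.9, 86_400.0, 1.5)        # ~6.9e-6 s/s, i.e. ~0.6 s/day
    print(corrected(172_800.0, 86_400.0, 1.5, rate))  # raw + 1.5 + ~0.6 ≈ 172802.1
    ```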

  1. The Effects of Mentored Problem-Based STEM Teaching on Pre-Service Elementary Teachers: Scientific Reasoning and Attitudes Toward STEM Subjects

    NASA Astrophysics Data System (ADS)

    Caliendo, Julia C.

    Problem-based learning in clinical practice has become an integral part of many professional preparation programs. This quasi-experimental study compared the effect of a specialized 90-hour field placement on elementary pre-service teachers' scientific reasoning and attitudes towards teaching STEM (science, technology, engineering, and math) subjects. A cohort of 53 undergraduate elementary education majors, concurrently with their enrollment in science and math methods classes, was placed into one of two clinical practice experiences: (a) a university-based, problem-based learning (PBL), STEM classroom, or (b) a traditional public school classroom. Group gain scores on the Classroom Test of Scientific Reasoning (CTSR) and the Teacher Efficacy and Attitudes Toward STEM Survey-Elementary Teachers (T-STEM) survey were calculated. A MANCOVA revealed a significant difference in gain scores between the treatment and comparison groups in scientific reasoning (p = .011) and attitudes towards teaching STEM subjects (p = .004). The results support the hypothesis that pre-service elementary teachers who experience STEM mentoring in a PBL setting will show gains in scientific reasoning and develop positive attitudes towards teaching STEM subjects. In addition, the results add to the existing research suggesting that elementary pre-service teachers require significant academic preparation and mentored support in STEM content.

  2. A Comparative Study of Scientific Publications in Health Care Sciences and Services from Mainland China, Taiwan, Japan, and India (2007-2014).

    PubMed

    Lv, Yipeng; Tang, Bihan; Liu, Xu; Xue, Chen; Liu, Yuan; Kang, Peng; Zhang, Lulu

    2015-12-24

    In this study, we aimed to compare the quantity and quality of publications in health care sciences and services journals from the Chinese mainland, Taiwan, Japan, and India. Journals in this category of the Science Citation Index Expanded were included in the study. Scientific papers were retrieved from the Web of Science online database. Quality was measured according to impact factor, article citations, the number of articles published in the top 10 journals, and the 10 most popular journals by country (area). In the field of health care sciences and services, the annual growth rate of articles published from 2007 to 2014 was higher than that of published articles across all fields. Researchers from the Chinese mainland published the most original articles and reviews and had the highest accumulated impact factors, highest total article citations, and highest average citations per article. Publications from India had the highest average impact factor. In the field of health care sciences and services, China has made remarkable progress during the past eight years in the annual number and percentage of scientific publications. Yet, there is room for improvement in both the quantity and quality of such articles.

  3. Pre-Service English Language Teachers' Perceptions of Computer Self-Efficacy and General Self-Efficacy

    ERIC Educational Resources Information Center

    Topkaya, Ece Zehir

    2010-01-01

    The primary aim of this study is to investigate pre-service English language teachers' perceptions of computer self-efficacy in relation to different variables. Secondarily, the study also explores the relationship between pre-service English language teachers' perceptions of computer self-efficacy and their perceptions of general self-efficacy.…

  4. Measuring the Effect of Gender on Computer Attitudes among Pre-Service Teachers: A Multiple Indicators, Multiple Causes (MIMIC) Modeling

    ERIC Educational Resources Information Center

    Teo, Timothy

    2010-01-01

    Purpose: The purpose of this paper is to examine the effect of gender on pre-service teachers' computer attitudes. Design/methodology/approach: A total of 157 pre-service teachers completed a survey questionnaire measuring their responses to four constructs which explain computer attitude. These were administered during the teaching term where…

  5. 22 CFR 19.4 - Special rules for computing creditable service for purposes of payments to former spouses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Special rules for computing creditable service for purposes of payments to former spouses. 19.4 Section 19.4 Foreign Relations DEPARTMENT OF STATE... DISABILITY SYSTEM § 19.4 Special rules for computing creditable service for purposes of payments to...

  6. Models of Dynamic Relations Among Service Activities, System State and Service Quality on Computer and Network Systems

    DTIC Science & Technology

    2010-01-01

    service process in throughput, delay, and so on. Chen et al. [4] describe service quality requirements of various network applications for online services, e.g., voice over IP, etc. Service quality requirements for those online services are

  7. Analysis of Scientific Attitude, Computer Anxiety, Educational Internet Use, Problematic Internet Use, and Academic Achievement of Middle School Students According to Demographic Variables

    ERIC Educational Resources Information Center

    Bekmezci, Mehmet; Celik, Ismail; Sahin, Ismail; Kiray, Ahmet; Akturk, Ahmet Oguz

    2015-01-01

    In this research, students' scientific attitude, computer anxiety, educational use of the Internet, academic achievement, and problematic use of the Internet are analyzed based on different variables (gender, parents' educational level and daily access to the Internet). The research group involves 361 students from two middle schools which are…

  8. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

    SciTech Connect

    Saffer, Shelley I.

    2014-12-01

    This is a final report of the DOE award DE-SC0001132, Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievements of the goals, and resulting research made possible by this award.

  9. An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center

    NASA Astrophysics Data System (ADS)

    Gleason, J. L.; Little, M. M.

    2013-12-01

    NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead, it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore, NASA science computing is a candidate use case for cloud computing, where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS aims to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been extensively used by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support the needs of the Langley Science Directorate still has to be evaluated by integrating it with real-world operational needs across NASA and the maturity that such integration would bring. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications have been demonstrated during the previous testing. The Langley Office of the Chief Information Officer, in partnership with the Atmospheric Sciences Data Center (ASDC), has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level, pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project-level perspective, specifically using a processing scenario involving the Clouds and Earth's Radiant Energy System (CERES) project.

  10. 5 CFR 847.905 - How is the present value of an immediate annuity with credit for NAFI service computed?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false How is the present value of an immediate....905 How is the present value of an immediate annuity with credit for NAFI service computed? (a) OPM will determine the present value of the immediate annuity including service credit for NAFI service...

  11. Implementation of Service Learning and Civic Engagement for Computer Information Systems Students through a Course Project at the Hashemite University

    ERIC Educational Resources Information Center

    Al-Khasawneh, Ahmad; Hammad, Bashar K.

    2013-01-01

    Service learning methodologies provide information systems students with the opportunity to create and implement systems in real-world, public service-oriented social contexts. This paper presents a case study of integrating a service learning project into an undergraduate Computer Information Systems course titled "Information Systems"…

  12. Explicitly Targeting Pre-Service Teacher Scientific Reasoning Abilities and Understanding of Nature of Science through an Introductory Science Course

    ERIC Educational Resources Information Center

    Koenig, Kathleen; Schen, Melissa; Bao, Lei

    2012-01-01

    Development of a scientifically literate citizenry has become a national focus and highlights the need for K-12 students to develop a solid foundation of scientific reasoning abilities and an understanding of nature of science, along with appropriate content knowledge. This implies that teachers must also be competent in these areas; but…

  13. A computational infrastructure for evaluating Care-Coordination and Telehealth services in Europe.

    PubMed

    Natsiavas, Pantelis; Filos, Dimitiris; Maramis, Christos; Chouvarda, Ioanna; Schonenberg, Helen; Pauws, Steffen; Bescos, Cristina; Westerteicher, Christoph; Maglaveras, Nicos

    2014-01-01

    This paper presents the computational framework that is employed for the analysis of health related key drivers and indicators within ACT, a project aiming to improve the deployment of Care Coordination and Telehealth services/programmes across Europe, through an iterative evidence collection-evaluation-refinement process. An open-source solution is proposed, combining a series of established software technologies. The paper focuses on technical aspects of the framework and presents a worked example of a usage scenario.

  14. Mobile cloud-computing-based healthcare service by noncontact ECG monitoring.

    PubMed

    Fong, Ee-May; Chung, Wan-Young

    2013-12-02

    Noncontact electrocardiogram (ECG) measurement technique has gained popularity these days owing to its noninvasive features and convenience in daily life use. This paper presents mobile cloud computing for a healthcare system where a noncontact ECG measurement method is employed to capture biomedical signals from users. Healthcare service is provided to continuously collect biomedical signals from multiple locations. To observe and analyze the ECG signals in real time, a mobile device is used as a mobile monitoring terminal. In addition, a personalized healthcare assistant is installed on the mobile device; several healthcare features such as health status summaries, medication QR code scanning, and reminders are integrated into the mobile application. Health data are being synchronized into the healthcare cloud computing service (Web server system and Web server dataset) to ensure a seamless healthcare monitoring system and anytime and anywhere coverage of network connection is available. Together with a Web page application, medical data are easily accessed by medical professionals or family members. Web page performance evaluation was conducted to ensure minimal Web server latency. The system demonstrates better availability of off-site and up-to-the-minute patient data, which can help detect health problems early and keep elderly patients out of the emergency room, thus providing a better and more comprehensive healthcare cloud computing service.

  15. Mobile Cloud-Computing-Based Healthcare Service by Noncontact ECG Monitoring

    PubMed Central

    Fong, Ee-May; Chung, Wan-Young

    2013-01-01

    Noncontact electrocardiogram (ECG) measurement technique has gained popularity these days owing to its noninvasive features and convenience in daily life use. This paper presents mobile cloud computing for a healthcare system where a noncontact ECG measurement method is employed to capture biomedical signals from users. Healthcare service is provided to continuously collect biomedical signals from multiple locations. To observe and analyze the ECG signals in real time, a mobile device is used as a mobile monitoring terminal. In addition, a personalized healthcare assistant is installed on the mobile device; several healthcare features such as health status summaries, medication QR code scanning, and reminders are integrated into the mobile application. Health data are being synchronized into the healthcare cloud computing service (Web server system and Web server dataset) to ensure a seamless healthcare monitoring system and anytime and anywhere coverage of network connection is available. Together with a Web page application, medical data are easily accessed by medical professionals or family members. Web page performance evaluation was conducted to ensure minimal Web server latency. The system demonstrates better availability of off-site and up-to-the-minute patient data, which can help detect health problems early and keep elderly patients out of the emergency room, thus providing a better and more comprehensive healthcare cloud computing service. PMID:24316562

  16. 75 FR 18251 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Internal Revenue Service (IRS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-09

    ... ADMINISTRATION Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Internal Revenue Service (IRS... computer matching program that is scheduled to expire on March 31, 2010. SUMMARY: In accordance with the provisions of the Privacy Act, as amended, this notice announces a renewal of an existing computer...

  17. 75 FR 62623 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Internal Revenue Service (IRS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-12

    ... ADMINISTRATION Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Internal Revenue Service (IRS.... SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended the Privacy Act (5 U.S.C. 552a) by describing the conditions under which computer...

  18. 78 FR 37875 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Bureau of the Fiscal Service...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-24

    ... ADMINISTRATION Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Bureau of the Fiscal Service... regarding protections for such persons. The Privacy Act, as amended, regulates the use of computer matching..., or denying a person's benefits or payments. B. SSA Computer Matches Subject to the Privacy Act...

  19. 78 FR 69925 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Bureau of the Fiscal Service...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... ADMINISTRATION Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Bureau of the Fiscal Service... INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended the Privacy Act (5 U.S.C. 552a) by describing the conditions under which computer matching...

  20. The Role of Computer-Aided Instruction in Science Courses and the Relevant Misconceptions of Pre-Service Teachers

    ERIC Educational Resources Information Center

    Aksakalli, Ayhan; Turgut, Umit; Salar, Riza

    2016-01-01

    This research aims to investigate the ways in which pre-service physics teachers interact with computers, which, as an indispensable means of today's technology, are of major value in education and training, and to identify any misconceptions said teachers may have about computer-aided instruction. As part of the study, computer-based physics…

  1. Looking beneath the Edges and Nodes: Ranking and Mining Scientific Workflows

    ERIC Educational Resources Information Center

    Dong, Xiao

    2010-01-01

    Workflow technology has emerged as an eminent way to support scientific computing nowadays. Supported by mature technological infrastructures such as web services and high performance computing infrastructure, workflow technology has been well adopted by scientific community as it offers an effective framework to prototype, modify and manage…

  2. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist.

    PubMed

    Drawert, Brian; Hellander, Andreas; Bales, Ben; Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Wu, Sheng; Lötstedt, Per; Krintz, Chandra; Petzold, Linda R

    2016-12-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
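
    The discrete stochastic simulation that StochSS automates can be illustrated with a minimal Gillespie (SSA) sketch for a birth-death process. This is a generic example of the method, not the StochSS interface; the rate constants and initial state are arbitrary.

```python
# Minimal Gillespie stochastic simulation algorithm (SSA) for a birth-death
# process: illustrative of discrete stochastic kinetics, not StochSS itself.
import random

def birth_death_ssa(x0=10, k_birth=1.0, k_death=0.1, t_end=50.0, seed=42):
    random.seed(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a_birth = k_birth        # zeroth-order production propensity
        a_death = k_death * x    # first-order degradation propensity
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        t += random.expovariate(a_total)            # exponential waiting time
        x += 1 if random.random() * a_total < a_birth else -1
        trajectory.append((t, x))
    return trajectory

if __name__ == "__main__":
    print(birth_death_ssa()[-1])   # final (time, copy number)
```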

  3. A novel resource management method of providing operating system as a service for mobile transparent computing.

    PubMed

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  4. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    PubMed Central

    Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable. PMID:24883353

  5. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    PubMed Central

    Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.

    2016-01-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676

  6. A Rich Metadata Filesystem for Scientific Data

    ERIC Educational Resources Information Center

    Bui, Hoang

    2012-01-01

    As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…

  7. Scientific Grand Challenges: Discovery In Basic Energy Sciences: The Role of Computing at the Extreme Scale - August 13-15, 2009, Washington, D.C.

    SciTech Connect

    Galli, Giulia; Dunning, Thom

    2009-08-13

    The U.S. Department of Energy’s (DOE) Office of Basic Energy Sciences (BES) and Office of Advanced Scientific Computing Research (ASCR) workshop in August 2009 on extreme-scale computing provided a forum for more than 130 researchers to explore the needs and opportunities that will arise due to expected dramatic advances in computing power over the next decade. This scientific community firmly believes that the development of advanced theoretical tools within chemistry, physics, and materials science—combined with the development of efficient computational techniques and algorithms—has the potential to revolutionize the discovery process for materials and molecules with desirable properties. Doing so is necessary to meet the energy and environmental challenges of the 21st century as described in various DOE BES Basic Research Needs reports. Furthermore, computational modeling and simulation are a crucial complement to experimental studies, particularly when quantum mechanical processes controlling energy production, transformations, and storage are not directly observable and/or controllable. Many processes related to the Earth’s climate and subsurface need better modeling capabilities at the molecular level, which will be enabled by extreme-scale computing.

  8. Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application

    NASA Astrophysics Data System (ADS)

    Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.

    2013-12-01

    The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni Time-Series analysis of aerosol absorption optical depth (388 nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and local system to avoid data transfer delays. Data spans of 3, 6, 12, and 24 months were analyzed on both the Cloud and the local system, and the processing times for the analyses were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of computing, storage, data requests, and data transfer in/out. The Cloud computing cost is calculated based on the hourly rate, and the storage cost is calculated based on the rate of Gigabytes per month. Incoming data transfer is free, and for data transfer out, the cost is based on a per-Gigabyte rate. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating costs. The results showed that the Cloud platform delivered 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
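
    The cost components enumerated above (hourly compute, per-Gigabyte-month storage, free ingress, per-Gigabyte egress, versus amortized local hardware and operations) can be captured in a short sketch. The rates and figures below are placeholders, not the actual AWS or GES DISC numbers from this study.

```python
# Sketch of the cloud-versus-local cost comparison described above.
# All rates and inputs are illustrative placeholders.
def cloud_cost(compute_hours, storage_gb, months, egress_gb,
               hourly_rate=0.10, storage_rate_gb_month=0.03, egress_rate_gb=0.09):
    # Compute hours + storage GB-months + data transfer out (ingress is free).
    return (compute_hours * hourly_rate
            + storage_gb * months * storage_rate_gb_month
            + egress_gb * egress_rate_gb)

def local_cost(hardware_cost, lifetime_months, months, monthly_ops_cost):
    # Amortize the hardware/software purchase over its lifetime, add operations.
    return hardware_cost * (months / lifetime_months) + monthly_ops_cost * months

if __name__ == "__main__":
    c = cloud_cost(compute_hours=500, storage_gb=200, months=12, egress_gb=100)
    l = local_cost(hardware_cost=12000, lifetime_months=48, months=12,
                   monthly_ops_cost=150)
    print(f"cloud: ${c:.2f}   local: ${l:.2f}")
```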

  9. Formative questioning in computer learning environments: a course for pre-service mathematics teachers

    NASA Astrophysics Data System (ADS)

    Akkoç, Hatice

    2015-11-01

    This paper focuses on a specific aspect of formative assessment, namely questioning. Given that computers have gained widespread use in learning and teaching, specific attention should be made when organizing formative assessment in computer learning environments (CLEs). A course including various workshops was designed to develop knowledge and skills of questioning in CLEs. This study investigates how pre-service mathematics teachers used formative questioning with technological tools such as Geogebra and Graphic Calculus software. Participants are 35 pre-service mathematics teachers. To analyse formative questioning, two types of questions are investigated: mathematical questions and technical questions. Data were collected through lesson plans, teaching notes, interviews and observations. Descriptive statistics of the number of questions in the lesson plans before and after the workshops are presented. Examples of two types of questions are discussed using the theoretical framework. One pre-service teacher was selected and a deeper analysis of the way he used questioning during his three lessons was also investigated. The findings indicated an improvement in using technical questions for formative purposes and that the course provided a guideline in planning and using mathematical and technical questions in CLEs.

  10. Mad City Mystery: Developing Scientific Argumentation Skills with a Place-based Augmented Reality Game on Handheld Computers

    NASA Astrophysics Data System (ADS)

    Squire, Kurt D.; Jan, Mingfong

    2007-02-01

    While the knowledge economy has reshaped the world, schools lag behind in producing appropriate learning for this social change. Science education needs to prepare students for a future world in which multiple representations are the norm and adults are required to "think like scientists." Location-based augmented reality games offer an opportunity to create a "post-progressive" pedagogy in which students are not only immersed in authentic scientific inquiry, but also required to perform in adult scientific discourses. This cross-case comparison as a component of a design-based research study investigates three cases (roughly 28 students total) where an Augmented Reality curriculum, Mad City Mystery, was used to support learning in environmental science. We investigate whether augmented reality games on handhelds can be used to engage students in scientific thinking (particularly argumentation), how game structures affect students' thinking, the impact of role playing on learning, and the role of the physical environment in shaping learning. We argue that such games hold potential for engaging students in meaningful scientific argumentation. Through game play, players are required to develop narrative accounts of scientific phenomena, a process that requires them to develop and argue scientific explanations. We argue that specific game features scaffold this thinking process, creating supports for student thinking non-existent in most inquiry-based learning environments.

  11. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of received data, but also the time that data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.
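
    The notion of a quality-of-service contract sketched above can be made concrete with a small example: a flow states its requirements as a set of parameters, and the network admits it only if every bound can be met. The parameter names and thresholds here are illustrative, not taken from the study.

```python
# Sketch: a QoS requirement expressed as a parameter set, with a simple
# admission check against current link capacity. Values are illustrative.
from dataclasses import dataclass

@dataclass
class QoSSpec:
    min_bandwidth_mbps: float   # throughput the flow needs
    max_delay_ms: float         # end-to-end delay bound
    max_jitter_ms: float        # delay-variation bound
    max_loss_rate: float        # tolerable packet-loss fraction

@dataclass
class LinkState:
    free_bandwidth_mbps: float
    current_delay_ms: float
    current_jitter_ms: float
    current_loss_rate: float

def admit(flow: QoSSpec, link: LinkState) -> bool:
    """Accept the flow only if the link can meet every requested bound."""
    return (link.free_bandwidth_mbps >= flow.min_bandwidth_mbps
            and link.current_delay_ms <= flow.max_delay_ms
            and link.current_jitter_ms <= flow.max_jitter_ms
            and link.current_loss_rate <= flow.max_loss_rate)

if __name__ == "__main__":
    voice = QoSSpec(min_bandwidth_mbps=0.1, max_delay_ms=150.0,
                    max_jitter_ms=30.0, max_loss_rate=0.01)
    link = LinkState(free_bandwidth_mbps=50.0, current_delay_ms=40.0,
                     current_jitter_ms=5.0, current_loss_rate=0.001)
    print(admit(voice, link))   # True
```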

  12. Man/terminal interaction evaluation of computer operating system command and control service concepts. [in Spacelab

    NASA Technical Reports Server (NTRS)

    Dodson, D. W.; Shields, N. L., Jr.

    1978-01-01

    The Experiment Computer Operating System (ECOS) of the Spacelab will allow the onboard Payload Specialist to command experiment devices and display information relative to the performance of experiments. Three candidate ECOS command and control service concepts were reviewed and laboratory data on operator performance was taken for each concept. The command and control service concepts evaluated included a dedicated operator's menu display from which all command inputs were issued, a dedicated command key concept with which command inputs could be issued from any display, and a multi-display concept in which command inputs were issued from several dedicated function displays. Advantages and disadvantages are discussed in terms of training, operational errors, task performance time, and subjective comments of system operators.

  13. Defense Information Systems Agency Controls Over the Center for Computing Services

    DTIC Science & Technology

    2007-04-09

    [Report documentation page fields only; no abstract text recovered. Performing organization: ODIG-AUD, Department of Defense Inspector General, 400 Army Navy Drive, Suite 801, Arlington, VA 22202-4704.]

  14. Servicing a globally broadcast interrupt signal in a multi-threaded computer

    SciTech Connect

    Attinella, John E.; Davis, Kristan D.; Musselman, Roy G.; Satterfield, David L.

    2015-12-29

    Methods, apparatuses, and computer program products for servicing a globally broadcast interrupt signal in a multi-threaded computer comprising a plurality of processor threads. Embodiments include an interrupt controller indicating in a plurality of local interrupt status locations that a globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include a thread determining that a local interrupt status location corresponding to the thread indicates that the globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include the thread processing one or more entries in a global interrupt status bit queue based on whether global interrupt status bits associated with the globally broadcast interrupt signal are locked. Each entry in the global interrupt status bit queue corresponds to a queued global interrupt.
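
    A rough software analogue of the mechanism summarized above is sketched below: a controller enqueues a broadcast interrupt and sets a per-thread local status flag; each thread checks its own flag, and only a thread that acquires the lock on the shared status drains the queue. This is an illustration in Python threading terms, not the hardware or patented implementation.

```python
# Illustrative software analogue of servicing a globally broadcast interrupt:
# per-thread local status flags plus a lock-guarded global interrupt queue.
# Not the actual mechanism described in the record above.
import threading
import queue

NUM_THREADS = 4
local_status = [threading.Event() for _ in range(NUM_THREADS)]  # per-thread flags
global_queue = queue.Queue()            # queued global interrupts
global_bits_lock = threading.Lock()     # guards the global interrupt status

def broadcast_interrupt(payload):
    """Controller side: enqueue the interrupt, then notify every thread locally."""
    global_queue.put(payload)
    for flag in local_status:
        flag.set()

def worker(tid):
    local_status[tid].wait()    # local status says an interrupt was broadcast
    local_status[tid].clear()
    # Only a thread that can lock the global status processes queued entries.
    if global_bits_lock.acquire(blocking=False):
        try:
            while not global_queue.empty():
                print(f"thread {tid} servicing interrupt: {global_queue.get()}")
        finally:
            global_bits_lock.release()

if __name__ == "__main__":
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_THREADS)]
    for t in threads:
        t.start()
    broadcast_interrupt("global wake-up")
    for t in threads:
        t.join()
```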

  15. Enabling the Use of Authentic Scientific Data in the Classroom--Lessons Learned from the AccessData and Data Services Workshops

    NASA Astrophysics Data System (ADS)

    Lynds, S. E.; Buhr, S. M.; Ledley, T. S.

    2007-12-01

    Since 2004, the annual AccessData and DLESE Data Services workshops have gathered scientists, data managers, technology specialists, teachers, and curriculum developers to work together creating classroom-ready scientific data modules. Teams of five (one participant from each of the five professions) develop topic-specific online educational units of the Earth Exploration Toolbook (serc.carleton.edu/eet/). Educators from middle schools through undergraduate colleges have been represented, as have scientific data professionals from many organizations across the United States. Extensive evaluation has been included in the design of each workshop. The evaluation results have been used each year to improve subsequent workshops. In addition to refining the format and process of the workshop itself, evaluation data collected reveal attendees' experiences using scientific data for educational purposes. Workshop attendees greatly value the opportunity to network with those of other professional roles in developing a real-world education project using scientific data. Educators appreciate the opportunity to work directly with scientists and technology specialists, while researchers and those in technical fields value the classroom expertise of the educators. Attendees' data use experiences are explored every year. Although bandwidth and connectivity were problems for data use in 2004, that has become much less common over time. The most common barriers to data use cited now are discoverability, data format problems, incomplete data sets, and poor documentation. Most attendees agree that the most useful types of online documentation and user support for scientific data are step-by-step instructions, examples, tutorials, and reference manuals. Satellite imagery and weather data were the most commonly used types of data, and these were often

  16. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services

    PubMed Central

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-01-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335
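
    The "theoretical cost/benefit formulae" mentioned above are not reproduced in this record, but the general shape of such a local-versus-cloud decision can be sketched: compare serial wall-clock time on a local machine against parallel cloud time plus its dollar cost. The formulae, overheads, and prices below are illustrative assumptions, not the authors' published model.

```python
# Sketch of a local-serial versus cloud-parallel trade-off. The model and
# the numbers are illustrative placeholders, not the published formulae.
import math

def local_time_hours(n_jobs, hours_per_job):
    return n_jobs * hours_per_job                    # serial execution

def cloud_time_hours(n_jobs, hours_per_job, n_nodes, overhead_hours=0.5):
    # Jobs spread across nodes, plus fixed startup/data-transfer overhead.
    return math.ceil(n_jobs / n_nodes) * hours_per_job + overhead_hours

def cloud_dollars(n_jobs, hours_per_job, n_nodes, price_per_node_hour=0.20):
    return cloud_time_hours(n_jobs, hours_per_job, n_nodes) * n_nodes * price_per_node_hour

if __name__ == "__main__":
    n_jobs, hours_per_job, n_nodes = 200, 0.5, 25
    print("local:", local_time_hours(n_jobs, hours_per_job), "hours")
    print("cloud:", cloud_time_hours(n_jobs, hours_per_job, n_nodes), "hours,",
          f"${cloud_dollars(n_jobs, hours_per_job, n_nodes):.2f}")
```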

  17. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    PubMed

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  18. Performance management of high performance computing for medical image processing in Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  19. Scientific & Technological Literacy through TechnoScience2000+: An Approach for In-Service and Preservice Training

    ERIC Educational Resources Information Center

    Parkinson, Eric

    2003-01-01

    Scientific and technological literacy (STL) is becoming one of the central planks for development through education on a global scale. Within this global thrust, design and technology in particular are gaining strength as curriculum components either as an individual subject or as contributors to a more broad and inclusive approach to learning.…

  20. Scientific Research Activity of Students Pre-Service Teachers of Sciences at University: The Aspects of Understanding, Situation and Improvement

    ERIC Educational Resources Information Center

    Lamanauskas, Vincentas; Augiene, Dalia

    2017-01-01

    Developing students' scientific research activity (SRA) abilities during their studies is a highly important area. In the course of their studies, students not only increase their general competencies and acquire professional abilities and skills but also learn to conduct research. This does not mean that all students will build their…

  1. Data Services in Support of High Performance Computing-Based Distributed Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Gichamo, T.; Yildirim, A. A.; Jones, N.

    2014-12-01

    We have developed web-based data services to support the application of hydrologic models on High Performance Computing (HPC) systems. The purpose of these services is to provide hydrologic researchers, modelers, water managers, and other users with access to HPC resources without requiring them to become HPC experts or to understand the intrinsic complexities of the underlying data services, so as to reduce the amount of time and effort spent finding and organizing the data required to execute hydrologic models and data preprocessing tools on HPC systems. These services address some of the data challenges faced by hydrologic models that strive to take advantage of HPC. Needed data are often not in the form required by such models, forcing researchers to spend time and effort on data preparation and preprocessing that inhibits or limits the application of these models. Another limitation is the difficult-to-use batch job control and queuing systems used by HPC systems. We have developed a REST-based gateway application programming interface (API) for authenticated access to HPC systems that abstracts away many of the details that are barriers to HPC use and enhances accessibility from desktop programming and scripting languages such as Python and R. We have used this gateway API to establish software services that support the delineation of watersheds to define a modeling domain, then extract terrain and land use information to automatically configure the inputs required for hydrologic models. These services support the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation and generation of hydrology-based terrain information such as wetness index and stream networks. These services also support the derivation of inputs for the Utah Energy Balance snowmelt model, which is used to address questions such as how climate, land cover, and land use change may affect snowmelt inputs to runoff generation. To enhance access to the time-varying climate data used to
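
    The kind of REST gateway access described above can be sketched from the client side. The base URL, endpoint paths, payload fields, and token scheme below are hypothetical and stand in for whatever the actual gateway exposes; the sketch only illustrates submitting and polling a job from a desktop Python script.

```python
# Hypothetical client-side sketch of a REST gateway for HPC job submission
# (e.g., a TauDEM watershed delineation). Endpoints and fields are assumed.
import requests

BASE_URL = "https://hpc-gateway.example.org/api"   # placeholder gateway address

def submit_delineation(token, dem_resource, outlet_lat, outlet_lon):
    headers = {"Authorization": f"Bearer {token}"}
    payload = {
        "tool": "taudem-delineate",                 # hypothetical tool name
        "dem": dem_resource,
        "outlet": {"lat": outlet_lat, "lon": outlet_lon},
    }
    resp = requests.post(f"{BASE_URL}/jobs", json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()["job_id"]

def job_status(token, job_id):
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(f"{BASE_URL}/jobs/{job_id}", headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()["status"]
```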

  2. Scaffolding Computer-Mediated Discussion to Enhance Moral Reasoning and Argumentation Quality in Pre-Service Teachers

    ERIC Educational Resources Information Center

    Özçinar, Hüseyin

    2015-01-01

    This study investigated the effect of scaffolding computer-mediated discussions to improve moral reasoning and argumentation quality in pre-service teachers. Participants of this study were 76 teaching education students at a Turkish university. They were divided into three groups: (1) a computer-supported argumentation group; (2) a…

  3. 26 CFR 31.3121(i)-1 - Computation to nearest dollar of cash remuneration for domestic service.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 15 2010-04-01 2010-04-01 false Computation to nearest dollar of cash... Revenue Code of 1954) General Provisions § 31.3121(i)-1 Computation to nearest dollar of cash remuneration... dollar any payment of cash remuneration for domestic service described in section 3121(a)(7)(B) (see §...

  4. Exploring the scientific underpinnings of ecosystem services in the Willamette Valley, Oregon, USA - a place-based study

    EPA Science Inventory

    The US Environmental Protection Agency has undertaken a national research effort (Ecological Research Program) involving approximately 200 scientists nationwide to develop the breadth and depth of science required to incorporate ecosystem services into environmental policy decisio...

  5. The Roles of Embedded Monitoring Requests and Questions in Improving Mental Models of Computer-Based Scientific Text

    ERIC Educational Resources Information Center

    Hathorn, Lesley G.; Rawson, Katherine A.

    2012-01-01

    Prior research has shown that people are likely to skim information presented digitally with the resultant deleterious effect on accurate mental models of the text. Teaching monitoring strategies and presenting text with adjunct questions are effective strategies for improving the mental models of readers of scientific text, but the two strategies…

  6. Computing and information services at the Jet Propulsion Laboratory - A management approach to a diversity of needs

    NASA Technical Reports Server (NTRS)

    Felberg, F. H.

    1984-01-01

    The Jet Propulsion Laboratory, a research and development organization with about 5,000 employees, presents a complicated set of requirements for an institutional system of computing and informational services. The approach taken by JPL in meeting this challenge is one of controlled flexibility. A central communications network is provided, together with selected computing facilities for common use. At the same time, staff members are given considerable discretion in choosing the mini- and microcomputers that they believe will best serve their needs. Consultation services, computer education, and other support functions are also provided.

  7. Products and Processes of Agri-Scientific Service-Learning: Adding Harmony to Dopico and Garcia-Vazquez

    ERIC Educational Resources Information Center

    Boggs, George L.

    2011-01-01

    This forum response adds a conceptualization of harmony to Dopico and Vazquez' investigation of pedagogy that combines citizen science, environmental and cross-cultural research, and service-learning. Placing many appropriate and significant aspects of culturally situated science education in an authentically relational context beyond the…

  8. A Situational Study for the Identification of Pre-Service Science Teachers' Creative Thinking and Creative Scientific Thinking Skills

    ERIC Educational Resources Information Center

    Demir Kaçan, Sibel

    2015-01-01

    This study was conducted with the participation of 33 pre-service teachers attending the science teaching department of a Turkish university. Participants self-reported using the "Self-assessment of creativity scale" and were asked to choose the most appropriate answer to the five-choice self-assessment question "Which category best…

  9. How Novel Algorithms and Access to High Performance Computing Platforms are Enabling Scientific Progress in Atomic and Molecular Physics

    NASA Astrophysics Data System (ADS)

    Schneider, Barry I.

    2016-10-01

    Over the past 40 years there has been remarkable progress in the quantitative treatment of complex many-body problems in atomic and molecular physics (AMP). This has happened as a consequence of the development of new and powerful numerical methods, the translation of these algorithms into practical software, and the associated evolution of powerful computing platforms ranging from desktops to high performance computational instruments capable of massively parallel computation. We are taking the opportunity afforded by CCP2015 to review computational progress in scattering theory and the interaction of strong electromagnetic fields with atomic and molecular systems from the early 1960s until the present time, to show how these advances have revealed a remarkable array of interesting and in many cases unexpected features. The article is by no means complete and certainly reflects the views and experiences of the author.

  10. SOAs for Scientific Applications: Experiences and Challenges

    PubMed Central

    Krishnan, Sriram; Bhatia, Karan

    2011-01-01

    Over the past several years, with the advent of the Open Grid Services Architecture (OGSA) (19) and the Web Services Resource Framework (WSRF) (25), Service-oriented Architectures (SOA) and Web service technologies have been embraced in the field of scientific and Grid computing. These new principles promise to help make scientific infrastructures simpler to use, more cost effective to implement, and easier to maintain. However, understanding how to leverage these developments to actually design and build a system remains more of an art than a science. In this paper, we present some positions learned through experience that provide guidance in leveraging SOA technologies to build scientific infrastructures. In addition, we present the technical challenges that need to be addressed in building an SOA, and as a case study, we present the SOA that we have designed for the National Biomedical Computation Resource (NBCR) (9) community. We discuss how we have addressed these technical challenges, and present the overall architecture, the individual software toolkits developed, the client interfaces, and the usage scenarios. We hope that our experiences prove to be useful in building similar infrastructures for other scientific applications. PMID:21308003

  11. Bio-signal analysis system design with support vector machines based on cloud computing service architecture.

    PubMed

    Shen, Chia-Ping; Chen, Wei-Hsin; Chen, Jia-Ming; Hsu, Kai-Ping; Lin, Jeng-Wei; Chiu, Ming-Jang; Chen, Chi-Huang; Lai, Feipei

    2010-01-01

    Today, many bio-signals such as Electroencephalography (EEG) are recorded in digital format. Analyzing these digital bio-signals to extract useful health information is an emerging research area in biomedical engineering. In this paper, a bio-signal analyzing cloud computing architecture, called BACCA, is proposed. The system has been designed with the purpose of seamless integration into the National Taiwan University Health Information System. Based on the concept of .NET Service-Oriented Architecture, the system integrates heterogeneous platforms, protocols, as well as applications. In this system, we add modern analytic functions such as approximate entropy and adaptive support vector machine (SVM). It is shown that the overall accuracy of EEG bio-signal analysis has increased to nearly 98% for different data sets, including open-source and clinical data sets.
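
    The two analytic functions named above, approximate entropy and an SVM classifier, can be sketched together in a few lines. The windowing, feature set, labels, and parameters below are placeholder assumptions for illustration, not the BACCA pipeline or its clinical data.

```python
# Sketch: approximate entropy as a single EEG feature feeding an SVM.
# Signals, labels, and parameters are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

def approximate_entropy(signal, m=2, r_factor=0.2):
    """ApEn(m, r) of a 1-D signal, with tolerance r = r_factor * std."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    r = r_factor * x.std()

    def phi(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Fraction of templates within Chebyshev distance r of each template
        # (self-matches included, as in the standard definition).
        counts = [np.mean(np.max(np.abs(templates - t), axis=1) <= r)
                  for t in templates]
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    regular = [np.sin(np.linspace(0, 8 * np.pi, 256)) for _ in range(10)]
    noisy = [rng.standard_normal(256) for _ in range(10)]
    X = [[approximate_entropy(s)] for s in regular + noisy]   # one feature per segment
    y = [0] * 10 + [1] * 10                                   # dummy class labels
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict([[approximate_entropy(rng.standard_normal(256))]]))
```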

  12. Public-Private Partnerships in Cloud-Computing Services in the Context of Genomic Research.

    PubMed

    Granados Moreno, Palmira; Joly, Yann; Knoppers, Bartha Maria

    2017-01-01

    Public-private partnerships (PPPs) have been increasingly used to spur and facilitate innovation in a number of fields. In healthcare, the purpose of using a PPP is commonly to develop and/or provide vaccines and drugs against communicable diseases, mainly in developing or underdeveloped countries. With the advancement of technology and of the area of genomics, these partnerships also focus on large-scale genomic research projects that aim to advance the understanding of diseases that have a genetic component and to develop personalized treatments. This new focus has created new forms of PPPs that involve information technology companies, which provide computing infrastructure and services to store, analyze, and share the massive amounts of data genomic-related projects produce. In this article, we explore models of PPPs proposed to handle, protect, and share the genomic data collected and to further develop genomic-based medical products. We also identify the reasons that make these models suitable and the challenges they have yet to overcome. To achieve this, we describe the details and complexities of MSSNG, International Cancer Genome Consortium, and 100,000 Genomes Project, the three PPPs that focus on large-scale genomic research to better understand the genetic components of autism, cancer, rare diseases, and infectious diseases with the intention to find appropriate treatments. Organized as PPP and employing cloud-computing services, the three projects have advanced quickly and are likely to be important sources of research and development for future personalized medicine. However, there still are unresolved matters relating to conflicts of interest, commercialization, and data control. Learning from the challenges encountered by past PPPs allowed us to establish that developing guidelines to adequately manage personal health information stored in clouds and ensuring the protection of data integrity and privacy would be critical steps in the development of

  13. Public–Private Partnerships in Cloud-Computing Services in the Context of Genomic Research

    PubMed Central

    Granados Moreno, Palmira; Joly, Yann; Knoppers, Bartha Maria

    2017-01-01

    Public–private partnerships (PPPs) have been increasingly used to spur and facilitate innovation in a number of fields. In healthcare, the purpose of using a PPP is commonly to develop and/or provide vaccines and drugs against communicable diseases, mainly in developing or underdeveloped countries. With the advancement of technology and of the area of genomics, these partnerships also focus on large-scale genomic research projects that aim to advance the understanding of diseases that have a genetic component and to develop personalized treatments. This new focus has created new forms of PPPs that involve information technology companies, which provide computing infrastructure and services to store, analyze, and share the massive amounts of data genomic-related projects produce. In this article, we explore models of PPPs proposed to handle, protect, and share the genomic data collected and to further develop genomic-based medical products. We also identify the reasons that make these models suitable and the challenges they have yet to overcome. To achieve this, we describe the details and complexities of MSSNG, International Cancer Genome Consortium, and 100,000 Genomes Project, the three PPPs that focus on large-scale genomic research to better understand the genetic components of autism, cancer, rare diseases, and infectious diseases with the intention to find appropriate treatments. Organized as PPP and employing cloud-computing services, the three projects have advanced quickly and are likely to be important sources of research and development for future personalized medicine. However, there still are unresolved matters relating to conflicts of interest, commercialization, and data control. Learning from the challenges encountered by past PPPs allowed us to establish that developing guidelines to adequately manage personal health information stored in clouds and ensuring the protection of data integrity and privacy would be critical steps in the development

  14. Products and processes of agri-scientific service-learning: adding harmony to Dopico and Garcia-Vázquez

    NASA Astrophysics Data System (ADS)

    Boggs, George L.

    2011-06-01

    This forum response adds a conceptualization of harmony to Dopico and Garcia-Vázquez's investigation of pedagogy that combines citizen science, environmental and cross-cultural research, and service-learning. Placing many appropriate and significant aspects of culturally situated science education in an authentically relational context beyond the classroom, this paper calls attention to insightful contributions and new directions for research, such as the process of inducing or eluding nihilism regarding ecological issues. How can such a question be researched effectively in order to learn about the family of pedagogies emerging in response to the need for more ecologically conscious and relationally authentic teaching across many disciplines? In this paper, I use a Vygotskian framework and an abbreviated case study of agricultural service-learning from my research, drawing attention to the importance of students' culturally mediated construction of setting as they interact in older and newer ways.

  15. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management (and its associated processes) is complex to understand and perform, which makes it hard to reach efficient, effective, and informed decisions. It is a multi-faceted operation that requires robust data fusion, visualization, and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry and local and federal government agencies. IRSV is being designed to accommodate the essential needs from the following aspects: 1) better understanding and enforcement of the complex inspection process, bridging the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) aggregation, representation, and fusion of complex multi-layered heterogeneous data (e.g., infrared imaging, aerial photos, and ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation system; 3) robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) integration of these needs through a flexible Service-Oriented Architecture (SOA) framework to compose and provide services on demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and for infrastructure monitoring both periodically (annually, monthly, or even daily if needed) and after extreme events.
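    The on-demand composition idea in item 4 can be pictured with a small sketch: independently registered services are chained into an analysis pipeline at call time. This is a minimal Python illustration, not IRSV code; the registry, the service names, and the toy fusion/recommendation logic are all assumptions made for the example.

```python
from typing import Callable, Dict, List

# Hypothetical service registry; service names below are illustrative placeholders.
REGISTRY: Dict[str, Callable[[dict], dict]] = {}

def service(name: str):
    """Register a callable as a named, composable service."""
    def wrap(fn: Callable[[dict], dict]):
        REGISTRY[name] = fn
        return fn
    return wrap

@service("fuse_sensor_data")
def fuse_sensor_data(ctx: dict) -> dict:
    # Toy "fusion": collect the available sensor layers into one record.
    ctx["fused"] = {"lidar": ctx.get("lidar"), "infrared": ctx.get("infrared")}
    return ctx

@service("recommend_inspection")
def recommend_inspection(ctx: dict) -> dict:
    # Toy rule standing in for a knowledge-based recommendation component.
    ctx["recommendation"] = ("schedule detailed inspection"
                             if ctx["fused"]["infrared"] else "routine monitoring")
    return ctx

def compose(pipeline: List[str], ctx: dict) -> dict:
    """Invoke registered services in order, passing a shared context along."""
    for name in pipeline:
        ctx = REGISTRY[name](ctx)
    return ctx

result = compose(["fuse_sensor_data", "recommend_inspection"],
                 {"lidar": "scan_001", "infrared": "ir_042"})
print(result["recommendation"])
```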

  16. Improving scalability with loop transformations and message aggregation in parallel object-oriented frameworks for scientific computing

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-09-01

    Application codes reliably achieve performance far below the advertised capabilities of existing architectures, and this problem is worsening with increasingly parallel machines. For large-scale numerical applications, stencil operations often account for the greater part of the computational cost, and the primary sources of inefficiency are the costs of message passing and poor cache utilization. This paper proposes and demonstrates optimizations for stencil and stencil-like computations, for both serial and parallel environments, that ameliorate these sources of inefficiency. Achieving scalability, the authors believe, requires both algorithm design and compile-time support. The optimizations they present are automatable because the stencil-like computations are implemented at a high level of abstraction using object-oriented parallel array class libraries. These optimizations, which are beyond the capabilities of today's compilers, may be performed automatically by a preprocessor such as the one they are currently developing.
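    To make the cache-utilization point concrete, here is a minimal NumPy sketch contrasting a plain 5-point Jacobi sweep with a tiled (cache-blocked) variant of the kind of loop transformation the abstract describes. The grid, tile size, and function names are illustrative assumptions; the paper itself targets C++ array class libraries and adds message aggregation for the parallel case.

```python
import numpy as np

def jacobi_step_naive(u):
    """One 5-point stencil sweep over the whole interior at once."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def jacobi_step_tiled(u, tile=64):
    """Same sweep processed tile by tile, so each block of the grid stays
    cache-resident while all four neighbour reads for that block are done."""
    v = u.copy()
    n, m = u.shape
    for i0 in range(1, n - 1, tile):
        for j0 in range(1, m - 1, tile):
            i1 = min(i0 + tile, n - 1)
            j1 = min(j0 + tile, m - 1)
            v[i0:i1, j0:j1] = 0.25 * (u[i0-1:i1-1, j0:j1] + u[i0+1:i1+1, j0:j1] +
                                      u[i0:i1, j0-1:j1-1] + u[i0:i1, j0+1:j1+1])
    return v

# Both variants compute the same result; only the traversal order differs.
u = np.random.rand(512, 512)
assert np.allclose(jacobi_step_naive(u), jacobi_step_tiled(u))
```

    In a distributed-memory setting the analogous transformation exchanges whole boundary rows or columns in a single message per neighbour rather than many small messages, which is the message-aggregation side of the same idea.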

  17. SIAM Conference on Parallel Processing for Scientific Computing, 4th, Chicago, IL, Dec. 11-13, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)

    1990-01-01

    Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data-parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercubes vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.

  18. Computed torque control of an under-actuated service robot platform modeled by natural coordinates

    NASA Astrophysics Data System (ADS)

    Zelei, Ambrus; Kovács, László L.; Stépán, Gábor

    2011-05-01

    The paper investigates the motion planning of a suspended service robot platform equipped with ducted fan actuators. The platform consists of an RRT robot and a cable-suspended swinging actuator that form a subsequent parallel kinematic chain. In spite of the complementary ducted fan actuators, the system is under-actuated. The method of computed torques is applied to control the motion of the robot. Under-actuated systems have fewer control inputs than degrees of freedom. We assume that the investigated under-actuated system has as many desired outputs as inputs. In spite of the fact that the inverse dynamical calculation leads to the solution of a system of differential-algebraic equations (DAE), the desired control inputs can be determined uniquely by the method of computed torques. We use natural (Cartesian) coordinates to describe the configuration of the robot, while a set of algebraic equations represents the geometric constraints. In this modeling approach the mathematical model of the dynamical system itself is also a DAE. The paper discusses the inverse dynamics problem of the complex hybrid robotic system. The results include the desired actuator forces as well as the nominal coordinates corresponding to the desired motion of the carried payload. The method of computed torque control with a PD controller is applied to the under-actuated system described by natural coordinates, while the inverse dynamics is solved via backward Euler discretization of the DAE system, for which a general formalism is proposed. The results are compared with the closed-form results obtained from simplified models of the system. Numerical simulation and experiments demonstrate the applicability of the presented concepts.
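    The core computed-torque-plus-PD idea can be illustrated on a much simpler, fully actuated system: a single pendulum tracking a sinusoidal reference. This is only a toy stand-in for the paper's under-actuated DAE formulation in natural coordinates; the model parameters, gains, and reference trajectory below are assumptions chosen for the example.

```python
import numpy as np

# Computed-torque (inverse-dynamics) PD control of a single pendulum:
#   tau = M(q) * (qdd_des + Kd*de + Kp*e) + g(q)
# so the closed-loop tracking error obeys  e'' + Kd*e' + Kp*e = 0.
m, l, g0 = 1.0, 0.5, 9.81          # mass, length, gravity (illustrative values)
Kp, Kd = 25.0, 10.0                # PD gains (critically damped for these values)

def inertia(q):  return m * l**2
def gravity(q):  return m * g0 * l * np.sin(q)

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    e, de = q_des - q, qd_des - qd
    return inertia(q) * (qdd_des + Kd * de + Kp * e) + gravity(q)

# Semi-implicit Euler simulation of the closed loop tracking q_des(t) = sin(t).
dt, q, qd = 1e-3, 0.0, 0.0
for k in range(5000):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    tau = computed_torque(q, qd, q_des, qd_des, qdd_des)
    qdd = (tau - gravity(q)) / inertia(q)   # pendulum dynamics: M*qdd + g = tau
    qd += dt * qdd
    q  += dt * qd
print(f"tracking error near t = 5 s: {abs(np.sin(5.0) - q):.2e}")
```

    In the paper's setting the same cancellation is performed on a constrained model, so the torque computation requires solving a DAE (here handled by backward Euler) rather than a single algebraic expression.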

  19. Changing Pre-Service Mathematics Teachers' Beliefs about Using Computers for Teaching and Learning Mathematics: The Effect of Three Different Models

    ERIC Educational Resources Information Center

    Karatas, Ilhan

    2014-01-01

    This study examines the effect of three different computer integration models on pre-service mathematics teachers' beliefs about using computers in mathematics education. Participants included 104 pre-service mathematics teachers (36 second-year students in the Computer Oriented Model group, 35 fourth-year students in the Integrated Model (IM)…

  20. Enabling Scientific and Technological Improvements to Meet Core Partner Service Requirements in Alaska - An Arctic Test Bed

    NASA Astrophysics Data System (ADS)

    Petrescu, E. M.; Scott, C. A.

    2014-12-01

    NOAA/NWS test beds, such as the Joint Hurricane Test Bed (Miami, FL) and the Hazardous Weather Test Bed (Norman, OK), have been highly effective in meeting unique or pressing science and service challenges for the NWS. NWS Alaska Region leadership has developed plans for a significant enhancement to our operational forecast and decision support capabilities in Alaska to address the emerging requirements of the Arctic: an Arctic Test Bed. Historically, the complexity of forecast operations and the inherent challenges in Alaska have not been addressed well by the R&D programs and projects that support the CONUS regions of the NWS. In addition, there are unique science, technology, and support challenges (e.g., sea ice forecasts and arctic drilling prospects) and opportunities (bilateral agreements with Canada, Russia, and Norway) that would best be worked through Alaska operations. A dedicated test bed will provide a mechanism to transfer technology, research results, and observational advances into operations in a timely and effective manner in support of Weather Ready Nation goals and to enhance decision support services in Alaska. A NOAA Arctic Test Bed will provide a crucial nexus for ensuring NOAA's developers understand Alaska's needs, which are often cross-disciplinary (atmosphere, ocean, cryosphere, and hydrologic), to improve NOAA's responsiveness to its Arctic-related science and service priorities among the NWS and OAR (CPO and ESRL), and to enable better leveraging of other research initiatives and data sources external to NOAA, including academia, other government agencies, and the private sector, which are particular to the polar region (e.g., the WWRP Polar Prediction Project). Organization, capabilities, and opportunities will be presented.

  1. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  2. Health services research as a scientific process: the metamorphosis of an empirical research project from grant proposal to final report.

    PubMed Central

    Luft, H S

    1986-01-01

    The process of health services research is rarely examined; attention is usually focused on results and policy implications. Large and small decisions made during the execution of a study, however, can have major impacts on its outcomes. This article describes a project that underwent major changes because of problems discovered in the basic data and threats to the valid interpretation of econometric results uncovered by qualitative case studies. Although the combination of difficulties encountered in this project may be unusual, it is likely that many similar problems and opportunities occur in other empirical studies. PMID:3771233

  3. Supporting Scientific Modeling Practices in Atmospheric Sciences: Intended and Actual Affordances of a Computer-Based Modeling Tool

    ERIC Educational Resources Information Center

    Wu, Pai-Hsing; Wu, Hsin-Kai; Kuo, Che-Yu; Hsu, Ying-Shao

    2015-01-01

    Computer-based learning tools include design features to enhance learning, but learners may not always perceive the existence of these features or use them in desirable ways. There might be a gap between what the tool features are designed to offer (intended affordances) and how they are actually used (actual affordances). This study thus aims at…

  4. COMPUTER-AIDED INDEXING OF A SCIENTIFIC ABSTRACTS JOURNAL BY THE UDC WITH UNIDEK--A CASE STUDY.

    ERIC Educational Resources Information Center

    FREEMAN, ROBERT R.; RUSSELL, MARTIN

    This paper is a case study of the adoption by Geoscience Abstracts of UNIDEK, a computer-compiled systematic subject index based on the Universal Decimal Classification (UDC). Events leading to a decision to adopt the system, some theory of indexes, problems involved in conversion, and some of the results achieved are reviewed. UNIDEK makes…

  5. Application of a Micro Computer-Based Management Information System to Improve the USAF Service Reporting Process

    DTIC Science & Technology

    1990-09-01

    This study conducted research into the development, implementation and evaluation of a personal computer based Service Reporting (SR) Management Information System (MIS)... MIS experts and System Program Office (SPO) acquisition managers; and software prototyping with an Aeronautical Systems Division (ASD) SPO. The Service Reporting Management Information System (SRMIS) was implemented and evaluated in the Air Force One (AF-1) Replacement Aircraft SPO during a five month trial period....

  6. Facilitating Preschoolers' Scientific Knowledge Construction via Computer Games Regarding Light and Shadow: The Effect of the Prediction-Observation-Explanation (POE) Strategy

    NASA Astrophysics Data System (ADS)

    Hsu, Chung-Yuan; Tsai, Chin-Chung; Liang, Jyh-Chong

    2011-10-01

    Educational researchers have suggested that computer games have a profound influence on students' motivation, knowledge construction, and learning performance, but little empirical research has targeted preschoolers. Thus, the purpose of the present study was to investigate the effects of implementing a computer game that integrates the prediction-observation-explanation (POE) strategy (White and Gunstone in Probing understanding. Routledge, New York, 1992) on facilitating preschoolers' acquisition of scientific concepts regarding light and shadow. The children's alternative conceptions were explored as well. Fifty participants were randomly assigned into either an experimental group that played a computer game integrating the POE model or a control group that played a non-POE computer game. By assessing the students' conceptual understanding through interviews, this study revealed that the students in the experimental group significantly outperformed their counterparts in the concepts regarding "shadow formation in daylight" and "shadow orientation." However, children in both groups, after playing the games, still expressed some alternative conceptions such as "Shadows always appear behind a person" and "Shadows should be on the same side as the sun."

  7. Feasibility of ensuring confidentiality and security of computer-based patient records. Council on Scientific Affairs, American Medical Association.

    PubMed

    1993-05-01

    Legal and ethical precepts that apply to paper-based medical records, including requirements that patient records be kept confidential, accurate and legible, secure, and free from unauthorized access, should also apply to computer-based patient records. Sources of these precepts include federal regulations, state medical practice acts, licensing statutes and the regulations that implement them, accreditation standards, and professional codes of ethics. While the legal and ethical principles may not change, the risks to confidentiality and security of patient records appear to differ between paper- and computer-based records. Breaches of system security, the potential for faulty performance that may result in inaccessibility or loss of records, the increased technical ability to collect, store, and retrieve large quantities of data, and the ability to access records from multiple and (sometimes) remote locations are among the risk factors unique to computer-based record systems. Managing these risks will require a combination of reliable technological measures, appropriate institutional policies and governmental regulations, and adequate penalties to serve as a dependable deterrent against the infringement of these precepts.

  8. Amplify scientific discovery with artificial intelligence

    SciTech Connect

    Gil, Yolanda; Greaves, Mark T.; Hendler, James; Hirsch, Hyam

    2014-10-10

    Computing innovations have fundamentally changed many aspects of scientific inquiry. For example, advances in robotics, high-end computing, networking, and databases now underlie much of what we do in science such as gene sequencing, general number crunching, sharing information between scientists, and analyzing large amounts of data. As computing has evolved at a rapid pace, so too has its impact in science, with the most recent computing innovations repeatedly being brought to bear to facilitate new forms of inquiry. Recently, advances in Artificial Intelligence (AI) have deeply penetrated many consumer sectors, including for example Apple’s Siri™ speech recognition system, real-time automated language translation services, and a new generation of self-driving cars and self-navigating drones. However, AI has yet to achieve comparable levels of penetration in scientific inquiry, despite its tremendous potential in aiding computers to help scientists tackle tasks that require scientific reasoning. We contend that advances in AI will transform the practice of science as we are increasingly able to effectively and jointly harness human and machine intelligence in the pursuit of major scientific challenges.

  9. Development and use of the computer software package for planning the 12 GHz broadcasting-satellite service at RARC '83

    NASA Technical Reports Server (NTRS)

    Bowen, R. R.; Brown, K. E.; Hothi, H. S.; Miller, E. F.

    1985-01-01

    The 1983 Regional Administrative Radio Conference (RARC '83) had as its main objective to draw up a plan of detailed frequency assignments and orbital positions for the 12 GHz broadcasting-satellite service (BSS) in ITU Region 2 (the Western Hemisphere) and associated feeder links (earth-to-space) in the 17 GHz band. It was found that RARC '83 would require new planning methods and procedures, and these new requirements made it necessary to develop a new generation of planning software. Attention is given to the development of the computer programs used at the conference, the package of computer programs itself, and the use of the programs.

  10. Students Upgrading through Computer and Career Education System Services (Project SUCCESS). Final Evaluation Report 1992-93. OER Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Office of Educational Research.

    Student Upgrading through Computer and Career Education System Services (Project SUCCESS) was an Elementary and Secondary Education Act Title VII-funded project in its third year of operation. Project SUCCESS served 460 students of limited English proficiency at two high schools in Brooklyn and one high school in Manhattan (New York City).…

  11. The Students Upgrading through Computer and Career Education Systems Services (Project SUCCESS). 1990-91 Final Evaluation Profile. OREA Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Office of Research, Evaluation, and Assessment.

    An evaluation was done of the New York City Public Schools' Student Upgrading through Computer and Career Education Systems Services Program (Project SUCCESS). Project SUCCESS operated at 3 high schools in Brooklyn and Manhattan (Murry Bergtraum High School, Edward R. Murrow High School, and John Dewey High School). It enrolled limited English…

  12. Students Upgrading through Computer and Career Education System Services (Project SUCCESS). Final Evaluation Report 1993-94. OER Report.

    ERIC Educational Resources Information Center

    Greene, Judy

    Students Upgrading through Computer and Career Education System Services (Project SUCCESS) was an Elementary and Secondary Education Act Title VII-funded project in its fourth year of operation. The project operated at two high schools in Brooklyn and one in Manhattan (New York). In the 1993-94 school year, the project served 393 students of…

  13. An Exploratory Study of Factors Affecting Usage of an On-Line Computer-Based Bibliographic Retrieval Service.

    ERIC Educational Resources Information Center

    Watt, James H., Jr.; Stefanov, Rebecca

    The situational and personal factors affecting the usage of a computer based information service by health care professionals are described. Of 166 health care professionals who were identified as potential users of MEDLINE at the University of Connecticut, 126 carried out one or more online searches. Pre-use and post-use questionnaires were…

  14. Using Social Network Analysis To Examine the Time of Adoption of Computer-Related Services among University Faculty.

    ERIC Educational Resources Information Center

    Durrington, Vance A.; Repman, Judi; Valente, Thomas W.

    2000-01-01

    Proposes using social network analysis and diffusion research to study the diffusion of two computer-based administrative services within a university faculty network. Examines the relationship between time of adoption and the number of network nominations received, closeness centrality, spatial proximity, and organizational unit proximity. Results…
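    The network measures named in the abstract are straightforward to compute; the sketch below, using NetworkX, relates nominations received (in-degree) to a made-up adoption time. The faculty names, edges, and adoption months are entirely fictional and stand in for whatever nomination survey the study actually used.

```python
import networkx as nx
import numpy as np

# Hypothetical nomination network: an edge u -> v means "u nominates v as a
# source of computing advice".  All names and adoption times are made up.
G = nx.DiGraph([("ann", "bob"), ("cho", "bob"), ("dee", "ann"), ("bob", "ann")])
adoption_month = {"ann": 3, "bob": 1, "cho": 9, "dee": 12}

in_degree = dict(G.in_degree())                      # nominations received
closeness = nx.closeness_centrality(G.to_undirected())

names = sorted(adoption_month)
x = np.array([in_degree.get(n, 0) for n in names], dtype=float)
y = np.array([adoption_month[n] for n in names], dtype=float)
print("corr(nominations received, adoption time):", np.corrcoef(x, y)[0, 1])
print("closeness centrality:", closeness)
```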

  15. 5 CFR 847.906 - How is the present value of a deferred annuity without credit for NAFI service computed?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 847.906 How is the present value of a deferred annuity without credit for NAFI service computed? (a) The present value of a deferred annuity equals the present value of the deferred annuity…
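    As a rough illustration of the arithmetic behind a deferred-annuity present value (not the actuarial present value factors OPM actually prescribes under 5 CFR part 847), a textbook discounting sketch looks like this; the payment amount, interest rate, and horizons are made-up inputs.

```python
def deferred_annuity_pv(payment, rate, n_years, deferral_years):
    """Present value of a level annuity-immediate of `payment` per year,
    paid for n_years, with the first payment stream deferred `deferral_years`.
    Plain compound-interest discounting only (illustrative, not regulatory)."""
    v = 1.0 / (1.0 + rate)                        # one-year discount factor
    annuity_factor = (1.0 - v ** n_years) / rate  # a-angle-n at the given rate
    return payment * (v ** deferral_years) * annuity_factor

# Example: $10,000/year for 20 years, deferred 5 years, at 4% interest.
print(round(deferred_annuity_pv(10_000, 0.04, 20, 5), 2))
```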

  16. An Investigation into the Secondary Schools In-Service Teachers' Selected Variables on Interactive Computer Technology (ICT) Competency

    ERIC Educational Resources Information Center

    Adodo, S. O.

    2012-01-01

    The use of computer technologies has come to stay; an individual, a group of individuals, or a society that is yet to recognize this fact is merely living. The introduction of Information and Communication Technology (ICT) into the education industry has caused a transformation in the instructional process. The study investigated the in-service teachers…

  17. Designing and Implementing a Faculty Internet Workshop: A Collaborative Effort of Academic Computing Services and the University Library.

    ERIC Educational Resources Information Center

    Bradford, Jane T.; And Others

    1996-01-01

    Academic Computing Services staff and University librarians at Stetson University (DeLand, Florida) designed and implemented a three-day Internet workshop for interested faculty. The workshop included both hands-on lab sessions and discussions covering e-mail, telnet, ftp, Gopher, and World Wide Web. The planning, preparation of the lab and…

  18. 75 FR 41522 - Novell, Inc., Including On-Site Leased Workers From Affiliated Computer Services, Inc., (ACS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-16

    ... Employment and Training Administration. Novell, Inc., Including On-Site Leased Workers From Affiliated Computer Services, Inc. (ACS), Provo, UT; Amended Certification Regarding Eligibility To Apply for... (ACS) working on-site at the Provo, Utah location of Novell, Inc. The amended notice applicable to TA-W...

  19. Concept of computer-assisted clinical diagnostic documentation systems for the practice with the option of later scientific evaluations.

    PubMed

    Ahlers, M O; Jaeger, D; Jakstat, H A

    2010-01-01

    Treatment data from practices and specialization centers, especially in the increasingly specialized areas which university clinics do not cover, are very important for evaluating the effectiveness and efficiency of dental examination and treatment methods. In the case of paper-based documentation, the evaluation of these data usually fails because of the cost it entails. With the use of electronic medical records, this expense can be markedly lower, provided the data acquisition and storage are structured accordingly. Since access to sensitive person-related data is simplified considerably by this method, such health data are specially protected, particularly at the European level. Contrary to what is generally assumed, this protection is not restricted solely to the confidentiality principle, but also comprises the power of disposition over the data (data protection). The result is that, from a legal point of view, the treatment data cannot be readily used for scientific studies, not even by dentists and physicians who have collected the data legally during the course of their therapeutic work. The technical separation of treatment data from the personal data offers a legally acceptable solution to this problem. It must ensure that a later assignment to individual persons will not be feasible at a realistic expense ("effective anonymization"). This article describes the legal and information technology principles and their practical implementation, as illustrated by the concept of a compliant IT architecture for the dentaConcept CMD fact diagnostic software. Here, a special export function automatically separates the anonymized treatment data and thus facilitates multicentric studies within an institution and among dental practices.
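    The "effective anonymization" export described here boils down to dropping identifying fields and replacing them with an identifier that cannot be traced back to the person. The sketch below shows that pattern in Python; the record layout, field names, and output file are hypothetical and are not taken from the CMD fact software.

```python
import csv
import secrets

# Hypothetical patient records; field names are illustrative assumptions.
patients = [
    {"name": "A. Example", "birthdate": "1980-01-01",
     "diagnosis": "CMD group IIa", "finding": "joint sound, right side"},
]

def export_anonymized(records, out_path):
    """Write treatment data only: identifying fields are dropped and each
    record gets a random export ID that is not derived from the identity."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["export_id", "diagnosis", "finding"])
        writer.writeheader()
        for rec in records:
            writer.writerow({
                "export_id": secrets.token_hex(8),  # random, non-reversible label
                "diagnosis": rec["diagnosis"],
                "finding": rec["finding"],
            })

export_anonymized(patients, "anonymized_export.csv")
```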

  20. Data Intensive Scientific Workflows on a Federated Cloud: CRADA Final Report

    SciTech Connect

    Garzoglio, Gabriele

    2015-10-31

    The Fermilab Scientific Computing Division and the KISTI Global Science Experimental Data Hub Center have built a prototypical large-scale infrastructure to handle scientific workflows of stakeholders to run on multiple cloud resources. The demonstrations have been in the areas of (a) Data-Intensive Scientific Workflows on Federated Clouds, (b) Interoperability and Federation of Cloud Resources, and (c) Virtual Infrastructure Automation to enable On-Demand Services.