Sample records for additional computational resources

  1. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
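
    The kind of decision described above can be pictured with a minimal Python sketch (all names and numbers are invented for illustration, not CMS code): pilots are requested from opportunistic sites only for the demand that dedicated resources cannot absorb.

```python
# Minimal sketch: plan how many pilot jobs ("glideins") to request from
# opportunistic sites when queued demand exceeds dedicated capacity.
# Site names, slot counts and the greedy fill order are all illustrative.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    opportunistic: bool   # not owned by, or pre-configured for, CMS
    free_slots: int

def plan_glideins(queued_jobs: int, dedicated_slots: int, sites: list) -> dict:
    """Cover the demand peak from opportunistic sites, dedicated slots first."""
    deficit = max(0, queued_jobs - dedicated_slots)
    plan = {}
    for site in sites:
        if deficit == 0:
            break
        if site.opportunistic and site.free_slots > 0:
            n = min(site.free_slots, deficit)
            plan[site.name] = n   # each pilot would carry the CMS environment
            deficit -= n          # (e.g. via Bosco/Parrot) onto the resource
    return plan

sites = [Site("osg_site", True, 500), Site("hpc_center", True, 2000)]
print(plan_glideins(queued_jobs=3000, dedicated_slots=1200, sites=sites))
```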

  2. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  3. Enabling opportunistic resources for CMS Computing Operations

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  4. Laboratory Computing Resource Center

    Science.gov Websites

    Laboratory Computing Resource Center. Site sections: Systems; Computing and Data Resources; Purchasing Resources; Future Plans; For Users (Getting Started, Using LCRC, Software, Best Practices and Policies); Getting Help and Support. Latest Announcements: April 27, 2018, John Low.

  5. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors need to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different from the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.

  6. Using OSG Computing Resources with (iLC)Dirac

    NASA Astrophysics Data System (ADS)

    Sailer, A.; Petric, M.; CLICdp Collaboration

    2017-10-01

    CPU cycles for small experiments and projects can be scarce, thus making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which directly submit to the local batch system. This in turn requires additional dedicated effort for small experiments on the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them from within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG-based virtual organisations, but it also allows LCG grid sites to move to the HTCondor-CE software without shutting DIRAC-based VOs out of their site. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the encountered obstacles and the solutions developed, and describe how the linear collider community uses resources in the OSG.
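
    The wrapper-class idea can be sketched as follows (these are not the actual DIRAC classes; the grid_resource strings follow HTCondor's grid-universe conventions, and hostnames are placeholders): one abstract interface per computing-element type keeps the rest of the system CE-agnostic.

```python
# Sketch of CE wrapper classes: submission details differ per CE type but are
# hidden behind one interface, so pilots can target either kind of site.
from abc import ABC, abstractmethod

class ComputingElement(ABC):
    def __init__(self, host: str):
        self.host = host

    @abstractmethod
    def grid_resource(self) -> str:
        """HTCondor grid-universe 'grid_resource' string for this CE type."""

class HTCondorCE(ComputingElement):
    def grid_resource(self) -> str:
        # an HTCondor-CE is addressed as a remote schedd/collector pair
        return f"condor {self.host} {self.host}:9619"

class GlobusCE(ComputingElement):
    def grid_resource(self) -> str:
        # legacy Globus GRAM endpoint
        return f"gt5 {self.host}/jobmanager-condor"

def pilot_submit_description(ce: ComputingElement) -> dict:
    return {"universe": "grid",
            "grid_resource": ce.grid_resource(),
            "executable": "pilot.sh"}

for ce in (HTCondorCE("ce.example.org"), GlobusCE("gram.example.org")):
    print(pilot_submit_description(ce))
```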

  7. Research on elastic resource management for multi-queue under cloud computing environment

    NASA Astrophysics Data System (ADS)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of the virtual computing nodes on the basis of the job queues in HTCondor, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. Practical runs show that virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
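
    The dual-threshold behaviour can be condensed into a few lines (threshold values, quota and parameter names are assumptions for illustration, not the IHEP implementation):

```python
# Dual-threshold elasticity for one job queue: expand the VM pool under high
# utilisation with waiting jobs, shrink it when utilisation drops low.
def rebalance(idle_jobs: int, running_vms: int, busy_vms: int,
              high: float = 0.9, low: float = 0.3, quota: int = 100) -> int:
    """Return a VM-count delta (positive: boot VMs, negative: retire VMs)."""
    utilisation = busy_vms / running_vms if running_vms else 1.0
    if idle_jobs > 0 and utilisation >= high:
        return min(idle_jobs, quota - running_vms)   # expand, bounded by quota
    if idle_jobs == 0 and utilisation <= low:
        return -(running_vms - busy_vms)             # retire only idle VMs
    return 0

print(rebalance(idle_jobs=40, running_vms=50, busy_vms=48))  # 40 -> expand
print(rebalance(idle_jobs=0, running_vms=50, busy_vms=10))   # -40 -> shrink
```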

  8. SOCR: Statistics Online Computational Resource

    PubMed Central

    Dinov, Ivo D.

    2011-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741

  9. Statistics Online Computational Resource for Education

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  10. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    NASA Astrophysics Data System (ADS)

    Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.

    2011-12-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and conclude that it is most cost-effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage occur.
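
    The cost conclusion can be illustrated with a back-of-the-envelope comparison (all prices are placeholders, not measured EC2 or site costs): owning covers steady base-line load more cheaply, while bursting pays off for short spikes.

```python
# Toy cost model: own everything vs. own the base line and burst spikes to
# the cloud. Placeholder prices; the crossover depends on spike duration.
def yearly_cost(base_slots, spike_slots, spike_hours,
                owned_per_slot_year=300.0,    # placeholder amortised cost
                cloud_per_slot_hour=0.10):    # placeholder on-demand rate
    own_all = (base_slots + spike_slots) * owned_per_slot_year
    own_base_and_burst = (base_slots * owned_per_slot_year
                          + spike_slots * spike_hours * cloud_per_slot_hour)
    return own_all, own_base_and_burst

for hours in (400, 4000):   # short spike vs. near-continuous extra load
    own_all, burst = yearly_cost(1000, 500, hours)
    print(f"spike {hours:>4} h/yr: own all {own_all:>9,.0f}  "
          f"own base + burst {burst:>9,.0f}")
```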

  11. COMPUTATIONAL RESOURCES FOR BIOFUEL FEEDSTOCK SPECIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buell, Carol Robin; Childs, Kevin L

    2013-05-07

    While current production of ethanol as a biofuel relies on starch and sugar inputs, it is anticipated that sustainable production of ethanol for biofuel use will utilize lignocellulosic feedstocks. Candidate plant species to be used for lignocellulosic ethanol production include a large number of species within the Grass, Pine and Birch plant families. For these biofuel feedstock species, there are variable amounts of genome sequence resources available, ranging from complete genome sequences (e.g. sorghum, poplar) to transcriptome data sets (e.g. switchgrass, pine). These data sets are not only dispersed in location but also disparate in content. It will be essential to leverage and improve these genomic data sets for the improvement of biofuel feedstock production. The objectives of this project were to provide computational tools and resources for data-mining genome sequence/annotation and large-scale functional genomic datasets available for biofuel feedstock species. We have created a Bioenergy Feedstock Genomics Resource that provides a web-based portal or clearing house for genomic data for plant species relevant to biofuel feedstock production. Sequence data from a total of 54 plant species are included in the Bioenergy Feedstock Genomics Resource, including model plant species that permit leveraging of knowledge across taxa to biofuel feedstock species. We have generated additional computational analyses of these data, including uniform annotation, to facilitate genomic approaches to improved biofuel feedstock production. These data have been centralized in the publicly available Bioenergy Feedstock Genomics Resource (http://bfgr.plantbiology.msu.edu/).

  12. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events and for Monte Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT participates in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution reports on the incorporation of the institute's desktop machines into a private OpenStack cloud. The additional compute resources provisioned via the virtual machines have been used for Monte Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows is presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences are discussed.
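
    As an illustration, here is a hedged sketch of the scale-out/scale-in loop such a setup might run, written against the openstacksdk Python client (the cloud entry, image, flavor and network IDs are placeholders; this is not the EKP code):

```python
# Boot worker VMs on the institute's private OpenStack cloud while the batch
# queue has idle jobs; delete them again when demand drops.
import openstack

def scale_workers(conn, idle_jobs, workers, image_id, flavor_id, network_id,
                  max_workers=20):
    if idle_jobs > 0 and len(workers) < max_workers:
        server = conn.compute.create_server(
            name=f"worker-{len(workers)}",
            image_id=image_id,          # VM image with the HEP software stack
            flavor_id=flavor_id,
            networks=[{"uuid": network_id}])
        workers.append(server)
    elif idle_jobs == 0 and workers:
        conn.compute.delete_server(workers.pop())

# Requires a matching entry in clouds.yaml; "ekp-desktops" is a placeholder.
conn = openstack.connect(cloud="ekp-desktops")
```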

  13. Production scheduling with discrete and renewable additional resources

    NASA Astrophysics Data System (ADS)

    Kalinowski, K.; Grabowik, C.; Paprocka, I.; Kempa, W.

    2015-11-01

    In this paper an approach to planning additional resources when scheduling operations is discussed. The considered resources are assumed to be discrete and renewable. In most research in the scheduling domain, the basic, and often the only, type of resource considered is a workstation. It can be understood as a machine, a device or even a separated space on the shop floor. In many cases, the detailed scheduling of an operation reveals the need for more than one resource for its execution. Resource requirements for an operation may relate to different resources or to resources of the same type. Additional resources most often refer to human resources, tools or equipment whose limited availability in the manufacturing system may influence the execution dates of some operations. The paper presents the concept of the division into basic and additional resources and a method for planning them. A situation in which the sets of basic and additional resources are not separable (the same additional resource may be a basic resource for another operation) is also considered. Scheduling operations that involve a greater number of resources can cause many difficulties, depending on whether a resource is involved for the entire duration of the operation, only in selected parts of it (e.g. as auxiliary staff at setup time), or cyclically, e.g. when an operator supports more than one machine or supervises the execution of several operations. For this reason the dates and durations of resource participation in an operation can differ. The presented issues are crucial when modelling the production scheduling environment and designing data structures for scheduling software development.
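
    The distinction between a basic resource held for the whole operation and an additional resource needed only during part of it can be sketched as follows (the interval-booking scheme and all names are illustrative):

```python
# A workstation is booked for the whole operation; an additional resource
# (here an operator) is booked only for the setup interval.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    busy: list = field(default_factory=list)   # booked (start, end) intervals

    def free(self, start: float, end: float) -> bool:
        return all(e <= start or s >= end for s, e in self.busy)

    def book(self, start: float, end: float):
        self.busy.append((start, end))

def try_schedule(start, setup, duration, machine: Resource, operator: Resource):
    end = start + duration
    if machine.free(start, end) and operator.free(start, start + setup):
        machine.book(start, end)
        operator.book(start, start + setup)
        return True
    return False

mill, operator = Resource("mill"), Resource("operator")
print(try_schedule(0, 1, 5, mill, operator))   # True: both resources free
print(try_schedule(2, 1, 5, mill, operator))   # False: mill still busy
```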

  14. Classical multiparty computation using quantum resources

    NASA Astrophysics Data System (ADS)

    Clementi, Marco; Pappa, Anna; Eckstein, Andreas; Walmsley, Ian A.; Kashefi, Elham; Barz, Stefanie

    2017-12-01

    In this work, we demonstrate a way to perform classical multiparty computing among parties with limited computational resources. Our method harnesses quantum resources to increase the computational power of the individual parties. We show how a set of clients restricted to linear classical processing are able to jointly compute a nonlinear multivariable function that lies beyond their individual capabilities. The clients are only allowed to perform classical XOR gates and single-qubit gates on quantum states. We also examine the type of security that can be achieved in this limited setting. Finally, we provide a proof-of-concept implementation using photonic qubits that allows four clients to compute a specific example of a multiparty function, the pairwise AND.
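
    Why XOR-restricted clients cannot do this alone can be checked directly: every circuit built from XOR gates computes an affine function over GF(2), and pairwise AND fails the affinity test. A small exhaustive check:

```python
# f is affine over GF(2) iff f(x XOR y) XOR f(0) == f(x) XOR f(y) for all x, y.
from itertools import product

def is_affine(f, nbits=2):
    zero = (0,) * nbits
    inputs = list(product((0, 1), repeat=nbits))
    for x, y in product(inputs, repeat=2):
        xy = tuple(a ^ b for a, b in zip(x, y))
        if f(xy) ^ f(zero) != f(x) ^ f(y):
            return False
    return True

print(is_affine(lambda v: v[0] ^ v[1]))  # XOR: True, within the clients' reach
print(is_affine(lambda v: v[0] & v[1]))  # AND: False, needs extra resources
```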

  15. Economic models for management of resources in peer-to-peer and grid computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments constitute a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include the commodity market, posted price, tender and auction models. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the infrastructure necessary to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
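
    A toy illustration of deadline- and cost-based brokering in the Nimrod/G spirit (offers, rates and prices are invented): buy the cheapest mix of resource offers that still meets the deadline.

```python
# Greedy deadline/budget broker: take offers cheapest-first until the combined
# throughput can finish the job batch before the deadline.
def choose_resources(jobs, deadline_h, offers):
    """offers: (name, jobs_per_hour, price_per_job) tuples."""
    chosen, rate = [], 0.0
    for name, speed, price in sorted(offers, key=lambda o: o[2]):
        if rate * deadline_h >= jobs:
            break
        chosen.append(name)
        rate += speed
    return chosen if rate * deadline_h >= jobs else []   # [] means infeasible

offers = [("commodity", 50, 0.05), ("posted", 80, 0.08), ("auction", 200, 0.20)]
print(choose_resources(1000, deadline_h=10, offers=offers))   # cheap mix suffices
print(choose_resources(1000, deadline_h=3.5, offers=offers))  # tight deadline needs all
```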

  16. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality-of-service (QoS) requirements and a fee agreed between a customer and an application service provider. It plays an important role in e-business applications. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application, which often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and fee negotiated between the customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
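
    The provisioning question, the smallest cluster whose percentile response time still meets the SLA, can be sketched with a small queueing simulation (arrival rate, service rate and the SLA target are illustrative, not from the paper):

```python
# Find the minimum number of servers whose simulated 95th-percentile response
# time stays below the SLA target.
import heapq, random

def p95_response(servers, arrival_rate, service_rate, n=20000):
    random.seed(1)
    free_at = [0.0] * servers        # min-heap of per-server next-free times
    t, responses = 0.0, []
    for _ in range(n):
        t += random.expovariate(arrival_rate)
        start = max(t, heapq.heappop(free_at))
        finish = start + random.expovariate(service_rate)
        heapq.heappush(free_at, finish)
        responses.append(finish - t)
    return sorted(responses)[int(0.95 * n)]

servers = 1
while p95_response(servers, arrival_rate=8.0, service_rate=2.0) > 2.0:
    servers += 1
print("smallest cluster meeting the 95th-percentile SLA:", servers)
```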

  17. Computing the Envelope for Stepwise-Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Computing tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. A staged maximum-flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. Each stage has the same computational complexity as solving a maximum-flow problem on the entire flow network. This makes the method computationally feasible and promising for use in the inner loop of flexible-time scheduling algorithms.
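
    The flavor of one flow stage can be sketched with networkx (the tiny event network and the exact edge orientation are simplified for illustration and omit much of the real algorithm):

```python
# Producers hang off the source, consumers drain to the sink, and infinite-
# capacity edges encode "producer P cannot be counted without consumer C".
# The min cut (= max flow) is the production unavoidably cancelled out.
import networkx as nx

def max_net_production(events, forced_pairs):
    """events: {name: signed amount}; forced_pairs: (producer, consumer)."""
    g = nx.DiGraph()
    for name, amount in events.items():
        if amount > 0:
            g.add_edge("s", name, capacity=amount)
        else:
            g.add_edge(name, "t", capacity=-amount)
    for producer, consumer in forced_pairs:
        g.add_edge(producer, consumer, capacity=float("inf"))
    flow_value, _ = nx.maximum_flow(g, "s", "t")
    produced = sum(a for a in events.values() if a > 0)
    return produced - flow_value

events = {"p1": 3, "p2": 2, "c1": -2, "c2": -4}
print(max_net_production(events, [("p1", "c1")]))   # 3 + 2 - 2 = 3
```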

  18. Software and resources for computational medicinal chemistry

    PubMed Central

    Liao, Chenzhong; Sitzmann, Markus; Pugliese, Angelo; Nicklaus, Marc C

    2011-01-01

    Computer-aided drug design plays a vital role in drug discovery and development and has become an indispensable tool in the pharmaceutical industry. Computational medicinal chemists can take advantage of all kinds of software and resources in the computer-aided drug design field for the purposes of discovering and optimizing biologically active compounds. This article reviews software and other resources related to computer-aided drug design approaches, putting particular emphasis on structure-based drug design, ligand-based drug design, chemical databases and chemoinformatics tools. PMID:21707404

  19. System Resource Allocations | High-Performance Computing | NREL

    Science.gov Websites

    System Resource Allocations. To use NREL's high-performance computing (HPC) resources, allocations are required for: compute hours on NREL HPC systems, including Peregrine and Eagle, and storage space (in terabytes) on Peregrine, Eagle and Gyrfalcon. Allocations are principally made in response to an annual call for allocations.

  20. Computing the Envelope for Stepwise Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Estimating tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. The incremental solution of a staged maximum-flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum-flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.

  1. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of a Cloud Site. We report on the operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  2. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  3. Computation in Physics: Resources and Support

    NASA Astrophysics Data System (ADS)

    Engelhardt, Larry; Caballero, Marcos; Chonacky, Norman; Hilborn, Robert; Lopez Del Puerto, Marie; Roos, Kelly

    We will describe exciting new resources and support opportunities that have been developed by "PICUP" to help faculty integrate computation into their physics courses. ("PICUP" is the "Partnership for Integration of Computation into Undergraduate Physics".) These resources include editable curricular materials that can be downloaded from the PICUP Collection of the ComPADRE Digital Library: www.compadre.org/PICUP. Support opportunities include week-long workshops during the summer and single-day workshops at national AAPT and APS meetings. This project is funded by the National Science Foundation under DUE IUSE Grants 1524128, 1524493, 1524963, 1525062, and 1525525.

  4. ACToR: Aggregated Computational Toxicology Resource ...

    EPA Pesticide Factsheets

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  5. Computer Technology Resources for Literacy Projects.

    ERIC Educational Resources Information Center

    Florida State Council on Aging, Tallahassee.

    This resource booklet was prepared to assist literacy projects and community adult education programs in determining the technology they need to serve more older persons. Section 1 contains the following reprinted articles: "The Human Touch in the Computer Age: Seniors Learn Computer Skills from Schoolkids" (Suzanne Kashuba);…

  6. Computing arrival times of firefighting resources for initial attack

    Treesearch

    Romain M. Mees

    1978-01-01

    Dispatching of firefighting resources requires instantaneous or precalculated decisions. A FORTRAN computer program has been developed that can provide a list of resources in order of computed arrival time for initial attack on a fire. The program requires an accurate description of the existing road system and a list of all resources available on a planning unit....

  7. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscvettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan. Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions) relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan, because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute the looser bounds currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm whose complexity is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The asymptotic complexity of the algorithm is O(N · maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x, and maxflow(N) is the complexity (and thus the cost) of a maximum-flow computation.

  8. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  9. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    NASA Astrophysics Data System (ADS)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains to provide wireless communications services on demand. Each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupation and that a tradeoff exists between cluster size and algorithm complexity.
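
    One of the simple strategies such tools could compare can be sketched as follows (cluster names, capacities in abstract operations per second, and the least-loaded-first rule are illustrative):

```python
# Least-loaded feasible-cluster placement for one session's DSP chain.
def allocate(session_load, clusters):
    """clusters: name -> [capacity, used]. Returns the chosen cluster or None."""
    feasible = [(used / cap, name) for name, (cap, used) in clusters.items()
                if cap - used >= session_load]
    if not feasible:
        return None                    # session request is blocked
    _, name = min(feasible)            # least-occupied cluster with room
    clusters[name][1] += session_load
    return name

clusters = {"c0": [1000.0, 900.0], "c1": [1000.0, 200.0]}
print(allocate(300.0, clusters))   # 'c1'
print(clusters)                    # c1 usage rises to 500.0
```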

  10. A study of computer graphics technology in application of communication resource management

    NASA Astrophysics Data System (ADS)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has become widely used. In particular, the success of object-oriented and multimedia technology has promoted the development of graphics technology in computer software systems. Computer graphics theory and application technology have therefore become an important topic in the computing field, and graphics technology is applied ever more extensively across many domains. In recent years, with the development of the economy and especially the rapid development of information technology, the traditional way of managing communication resources can no longer meet management needs effectively. Communication resource management still relies on the original tools and methods for equipment management and maintenance, which causes many problems: it is very difficult for non-professionals to understand the equipment and the state of communication resources, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. To address these problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  11. Computer Network Resources for Physical Geography Instruction.

    ERIC Educational Resources Information Center

    Bishop, Michael P.; And Others

    1993-01-01

    Asserts that the use of computer networks provides an important and effective resource for geography instruction. Describes the use of the Internet network in physical geography instruction. Provides an example of the use of Internet resources in a climatology/meteorology course. (CFR)

  12. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
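
    The stratified-sampling idea, spending the calibration budget where the data vary most, can be sketched as follows (the variance-proportional allocation is one illustrative choice, not the report's exact procedure):

```python
# Allocate a fixed calibration budget across strata in proportion to each
# stratum's output variance, so flat regions get few points.
import random

def stratified_sample(points, stratum_of, budget):
    strata = {}
    for p in points:
        strata.setdefault(stratum_of(p), []).append(p)
    def variance(members):
        ys = [y for _, y in members]
        mean = sum(ys) / len(ys)
        return sum((y - mean) ** 2 for y in ys) / len(ys)
    weight = {k: variance(v) + 1e-9 for k, v in strata.items()}
    total = sum(weight.values())
    sample = []
    for k, members in strata.items():
        n = max(1, round(budget * weight[k] / total))
        sample += random.sample(members, min(n, len(members)))
    return sample

random.seed(0)
pts = [(x, (x // 10) + random.random() * (x % 3)) for x in range(100)]
print(len(stratified_sample(pts, lambda p: p[0] // 10, budget=30)))
```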

  13. NMRbox: A Resource for Biomolecular NMR Computation.

    PubMed

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds, of software packages. Discovery, acquisition, installation, and maintenance of all these packages is a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users. Copyright © 2017 Biophysical Society. All rights reserved.

  14. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    DTIC Science & Technology

    1991-06-01

    Interim Report: Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles. In Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh.

  15. ACToR: Aggregated Computational Toxicology Resource (S) ...

    EPA Pesticide Factsheets

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  16. Shared-resource computing for small research labs.

    PubMed

    Ackerman, M J

    1982-04-01

    A real time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off the shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11-M multi-user real-time operating system. The cost effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  17. A resource-sharing model based on a repeated game in fog computing.

    PubMed

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.
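
    The incentive can be illustrated with a toy repeated game (payoff and reputation numbers are invented): completing tasks builds reputation and future work share, so defection loses over repeated rounds.

```python
# Repeated game: each round an owner either completes its assigned tasks or
# defects; reputation scales how much future work (and pay) it receives.
def play_rounds(strategies, rounds=50, reward=3.0):
    reputation = {n: 1.0 for n in strategies}
    payoff = {n: 0.0 for n in strategies}
    for r in range(rounds):
        for name, completes in strategies.items():
            share = reputation[name]            # work assigned grows with trust
            if completes(r):
                payoff[name] += reward * share
                reputation[name] = min(2.0, reputation[name] + 0.1)
            else:
                # partial pay plus saved effort now, reputation loss later
                payoff[name] += reward * share * 0.5 + 1.0
                reputation[name] = max(0.1, reputation[name] - 0.4)
    return {n: round(p, 1) for n, p in payoff.items()}

print(play_rounds({"cooperator": lambda r: True,
                   "free_rider": lambda r: r % 2 == 1}))
```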

  18. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  19. A resource management architecture based on complex network theory in cloud computing federation

    NASA Astrophysics Data System (ADS)

    Zhang, Zehua; Zhang, Xuejie

    2011-10-01

    Cloud Computing Federation is a main trend of Cloud Computing. Resource management has a significant effect on the design, realization, and efficiency of a Cloud Computing Federation. A Cloud Computing Federation has the typical characteristics of a complex system; therefore, in this paper we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC), with a detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers for the evolution of the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance and adaptive ability. The results of the model experiment confirm the advantages of RMABC in resource discovery performance.

  20. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    NASA Astrophysics Data System (ADS)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

    To support teachers in implementing Computer Science curricula in classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children with the intention of engaging children and increasing interest, rather than formally teaching concepts and skills. What is the educational quality of existing Computer Science resources, and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study. The findings reveal a predominance of quality resources; however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.

  1. Contextuality as a Resource for Models of Quantum Computation with Qubits

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert

    2017-09-01

    A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.

  2. A multipurpose computing center with distributed resources

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other user groups use the local batch system directly. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of the MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic, with a capacity of more than 12000 cores in total.

  3. Exploiting volatile opportunistic computing resources with Lobster

    NASA Astrophysics Data System (ADS)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  4. iTools: a framework for classification, categorization and integration of computational biology resources.

    PubMed

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management.

  5. iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    PubMed Central

    Dinov, Ivo D.; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H. V.; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D. Stott; Toga, Arthur W.

    2008-01-01

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources – data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management.

  6. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    ERIC Educational Resources Information Center

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  7. Enabling Grid Computing resources within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers, with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method by which KM3NeT users can utilize the EGI computing resources in a simulation-driven use-case.

  8. Incorporating computational resources in a cancer research program

    PubMed Central

    Woods, Nicholas T.; Jhuraney, Ankita; Monteiro, Alvaro N.A.

    2015-01-01

    Recent technological advances have transformed cancer genetics research. These advances have served as the basis for the generation of a number of richly annotated datasets relevant to the cancer geneticist. In addition, many of these technologies are now within reach of smaller laboratories to answer specific biological questions. Thus, one of the most pressing issues facing an experimental cancer biology research program in genetics is incorporating data from multiple sources to annotate, visualize, and analyze the system under study. Fortunately, there are several computational resources to aid in this process. However, a significant effort is required to adapt a molecular biology-based research program to take advantage of these datasets. Here, we discuss the lessons learned in our laboratory and share several recommendations to make this transition effectively. This article is not meant to be a comprehensive evaluation of all the available resources, but rather to highlight those that we have incorporated into our laboratory and how to choose the most appropriate ones for your research program. PMID:25324189

  9. Computers as learning resources in the health sciences: impact and issues.

    PubMed Central

    Ellis, L B; Hannigan, G G

    1986-01-01

    Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843

  10. Automating usability of ATLAS Distributed Computing resources

    NASA Astrophysics Data System (ADS)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions which improve the reliability of the system. A crucial case in this perspective is the automatic handling of outages of the storage resources of ATLAS computing sites, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool provides a suitable solution by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of taking automatic actions on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of storage-area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board, with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time-granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Human actions are thus restricted to reporting and following up problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
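
    The abstract does not spell out SAAB's inference algorithm, so the following is only a minimal sketch of the general idea: classifying a storage area from the failure rate over a window of recent monitoring tests. The window size, thresholds and names are hypothetical, not SAAB's actual parameters.

    ```python
    # Minimal sketch of history-based storage blacklisting in the spirit of SAAB.
    # Window size and thresholds are hypothetical; the real inference algorithm
    # is not described in the abstract.
    from collections import deque

    OK, DEGRADED, BLACKLISTED = "ok", "degraded", "blacklisted"

    def infer_status(test_history, window=10, warn=0.3, fail=0.7):
        """Classify a storage area from the failure rate of its last `window` tests."""
        recent = list(test_history)[-window:]
        if not recent:
            return OK                      # no evidence yet: assume usable
        failure_rate = recent.count(False) / len(recent)
        if failure_rate >= fail:
            return BLACKLISTED             # exclude the storage area from brokering
        if failure_rate >= warn:
            return DEGRADED                # flag for shifter follow-up
        return OK

    # Example: 8 failures among the last 10 tests trigger blacklisting.
    history = deque([True, True] + [False] * 8)
    print(infer_status(history))           # -> "blacklisted"
    ```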

  11. Performance Evaluation of Resource Management in Cloud Computing Environments.

    PubMed

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
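
    As a rough illustration of the kind of on-the-fly scaling decision such a module makes (not the authors' actual implementation), the sketch below doubles or halves a VM's core allocation when the measured response time violates, or comfortably meets, a service-level limit, with the price tracking the allocation. All names and coefficients are hypothetical.

    ```python
    # Illustrative sketch of SLA-driven resource scaling with proportional pricing.
    # Thresholds, prices and the doubling policy are hypothetical.
    def rescale(cpu_cores, response_time_s, sla_limit_s,
                price_per_core=0.05, max_cores=32):
        """Return (new_cores, new_hourly_price) after one scaling decision."""
        if response_time_s > sla_limit_s and cpu_cores < max_cores:
            cpu_cores *= 2                 # scale up to restore quality of service
        elif response_time_s < 0.5 * sla_limit_s and cpu_cores > 1:
            cpu_cores //= 2                # scale down to save the client money
        return cpu_cores, cpu_cores * price_per_core

    print(rescale(4, response_time_s=2.4, sla_limit_s=1.0))   # -> (8, 0.4)
    print(rescale(8, response_time_s=0.2, sla_limit_s=1.0))   # -> (4, 0.2)
    ```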

  12. Performance Evaluation of Resource Management in Cloud Computing Environments

    PubMed Central

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  13. Development of Computer-Based Resources for Textile Education.

    ERIC Educational Resources Information Center

    Hopkins, Teresa; Thomas, Andrew; Bailey, Mike

    1998-01-01

    Describes the production of computer-based resources for students of textiles and engineering in the United Kingdom. Highlights include funding by the Teaching and Learning Technology Programme (TLTP), courseware author/subject expert interaction, usage test and evaluation, authoring software, graphics, computer-aided design simulation, self-test…

  14. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    NASA Astrophysics Data System (ADS)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea

    2017-10-01

    The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources must be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in the various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges: experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation, and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.
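
    As an illustration of the consolidation problem CRIC addresses (this is not CRIC's actual code), here is a minimal sketch that merges per-site records from several providers and flags inconsistent descriptions; the record fields and provider data are hypothetical.

    ```python
    # Sketch of the aggregate-and-validate idea behind a configuration service
    # such as CRIC: merge per-provider site records, rejecting inconsistent ones.
    def consolidate(sources):
        """Merge {site -> attributes} dicts from several providers,
        flagging sites whose descriptions disagree across providers."""
        merged, conflicts = {}, []
        for provider, records in sources.items():
            for site, attrs in records.items():
                if site in merged and merged[site] != attrs:
                    conflicts.append((site, provider))   # inconsistent description
                else:
                    merged.setdefault(site, attrs)
        return merged, conflicts

    gocdb = {"CERN-PROD": {"cpu": 10000, "tier": 0}}
    oim   = {"CERN-PROD": {"cpu": 9500,  "tier": 0}}     # disagrees on capacity
    topology, bad = consolidate({"GOCDB": gocdb, "OIM": oim})
    print(bad)   # -> [('CERN-PROD', 'OIM')]
    ```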

  15. Computer-Based Resource Accounting Model for Automobile Technology Impact Assessment

    DOT National Transportation Integrated Search

    1976-10-01

    A computer-implemented resource accounting model has been developed for assessing resource impacts of future automobile technology options. The resources tracked are materials, energy, capital, and labor. The model has been used in support of the Int...

  16. Exploiting multicore compute resources in the CMS experiment

    NASA Astrophysics Data System (ADS)

    Ramírez, J. E.; Pérez-Calero Yzquierdo, A.; Hernández, J. M.; CMS Collaboration

    2016-10-01

    CMS has developed a strategy to efficiently exploit the multicore architecture of the compute resources accessible to the experiment. A coherent use of the multiple cores available in a compute node yields substantial gains in terms of resource utilization. The implemented approach makes use of the multithreading support of the event processing framework and the multicore scheduling capabilities of the resource provisioning system. Multicore slots are acquired and provisioned by means of multicore pilot agents which internally schedule and execute single and multicore payloads. Multicore scheduling and multithreaded processing are currently used in production for online event selection and prompt data reconstruction. More workflows are being adapted to run in multicore mode. This paper presents a review of the experience gained in the deployment and operation of the multicore scheduling and processing system, the current status and future plans.
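
    As a toy illustration of the internal scheduling step mentioned above (not the actual glideinWMS logic), the sketch below greedily packs single-core and multicore payloads into one acquired multicore slot; the payload sizes are hypothetical.

    ```python
    # Sketch of the packing step inside a multicore pilot: fill an acquired
    # N-core slot with a mix of multicore and single-core payloads.
    def fill_slot(slot_cores, payload_queue):
        """Greedily pack payloads (each a core count) into one multicore slot."""
        scheduled, free = [], slot_cores
        for cores in sorted(payload_queue, reverse=True):  # big payloads first
            if cores <= free:
                scheduled.append(cores)
                free -= cores
        return scheduled, free

    jobs, idle_cores = fill_slot(8, [4, 4, 1, 1, 1, 2])
    print(jobs, idle_cores)   # -> [4, 4] 0  (two 4-core payloads fill the slot)
    ```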

  17. Modern Trends of Additional Professional Education Development for Mineral Resource Extracting

    NASA Astrophysics Data System (ADS)

    Borisova, Olga; Frolova, Victoria; Merzlikina, Elena

    2017-11-01

    The article presents the results of research on the development of additional professional education, including in the field of mineral resource extraction in Russia. The paper describes the levels of education available in the Russian Federation and determines the place and role of additional professional education among them. Key factors influencing the development of additional professional education are identified. As a result of the research, the authors substantiate the necessity of introducing additional professional education programs on educational Internet platforms for mineral resource extraction.

  18. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources to other sites can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics of the workload-adaptive reconfiguration behavior.
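
    A minimal sketch of such a load-dependent reconfiguration policy, assuming hypothetical utilization thresholds (the paper's actual exchange policies are not reproduced here):

    ```python
    # Sketch of load-dependent resource reconfiguration: lease nodes from peers
    # when over-utilized, grant idle nodes away when under-utilized.
    def reconfigure(own_nodes, queued_load, lease_at=0.9, grant_at=0.3):
        """Return the number of nodes to lease (+) or grant away (-)."""
        utilization = queued_load / own_nodes
        if utilization > lease_at:
            return round(queued_load - lease_at * own_nodes)      # borrow from peers
        if utilization < grant_at:
            return -round((grant_at - utilization) * own_nodes)   # offer idle nodes
        return 0

    print(reconfigure(100, queued_load=120))  # -> 30  (lease 30 nodes)
    print(reconfigure(100, queued_load=10))   # -> -20 (grant 20 idle nodes away)
    ```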

  19. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
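
    The abstract does not give SCOPE's model form, so the sketch below only illustrates the calibration idea: predicting CPU and I/O cost from problem-size features with coefficients fitted once per machine and analysis code. The linear form and all numbers are hypothetical.

    ```python
    # Sketch of calibration-based cost prediction in the spirit of SCOPE.
    # Coefficients would come from one-off calibration runs on the target
    # machine/code pair; the linear model and values here are invented.
    def predict_cost(n_elements, n_dof, coeff):
        """Predict (CPU seconds, I/O megabytes) for a finite element run."""
        cpu_s = coeff["cpu0"] + coeff["cpu_el"] * n_elements + coeff["cpu_dof"] * n_dof
        io_mb = coeff["io0"] + coeff["io_el"] * n_elements
        return cpu_s, io_mb

    calib = {"cpu0": 2.0, "cpu_el": 0.004, "cpu_dof": 0.001,
             "io0": 5.0, "io_el": 0.02}
    print(predict_cost(50_000, 150_000, calib))   # -> (352.0, 1005.0)
    ```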

  20. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE PAGES

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; ...

    2017-10-01

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources must be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in the various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges: experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation, and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.

  1. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources must be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in the various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges: experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation, and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.

  2. ACToR - Aggregated Computational Toxicology Resource

    EPA Science Inventory

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  3. Addition of multiple limiting resources reduces grassland diversity.

    PubMed

    Harpole, W Stanley; Sullivan, Lauren L; Lind, Eric M; Firn, Jennifer; Adler, Peter B; Borer, Elizabeth T; Chase, Jonathan; Fay, Philip A; Hautier, Yann; Hillebrand, Helmut; MacDougall, Andrew S; Seabloom, Eric W; Williams, Ryan; Bakker, Jonathan D; Cadotte, Marc W; Chaneton, Enrique J; Chu, Chengjin; Cleland, Elsa E; D'Antonio, Carla; Davies, Kendi F; Gruner, Daniel S; Hagenah, Nicole; Kirkman, Kevin; Knops, Johannes M H; La Pierre, Kimberly J; McCulley, Rebecca L; Moore, Joslin L; Morgan, John W; Prober, Suzanne M; Risch, Anita C; Schuetz, Martin; Stevens, Carly J; Wragg, Peter D

    2016-09-01

    Niche dimensionality provides a general theoretical explanation for biodiversity: more niches, defined by more limiting factors, allow for more ways that species can coexist. Because plant species compete for the same set of limiting resources, theory predicts that addition of a limiting resource eliminates potential trade-offs, reducing the number of species that can coexist. Multiple nutrient limitation of plant production is common and therefore fertilization may reduce diversity by reducing the number or dimensionality of belowground limiting factors. At the same time, nutrient addition, by increasing biomass, should ultimately shift competition from belowground nutrients towards a one-dimensional competitive trade-off for light. Here we show that plant species diversity decreased when a greater number of limiting nutrients were added across 45 grassland sites from a multi-continent experimental network. The number of added nutrients predicted diversity loss, even after controlling for effects of plant biomass, and even where biomass production was not nutrient-limited. We found that elevated resource supply reduced niche dimensionality and diversity and increased both productivity and compositional turnover. Our results point to the importance of understanding dimensionality in ecological systems that are undergoing diversity loss in response to multiple global change factors.

  4. Aggregation of Cricket Activity in Response to Resource Addition Increases Local Diversity.

    PubMed

    Szinwelski, Neucir; Rosa, Cassiano Sousa; Solar, Ricardo Ribeiro de Castro; Sperber, Carlos Frankl

    2015-01-01

    Crickets are often found feeding on fallen fruits among forest litter. Fruits and other sugar-rich resources are not homogeneously distributed, nor are they always available. We therefore expect that crickets dwelling in forest litter have a limited supply of sugar-rich resource, and will perceive this and displace towards resource-supplemented sites. Here we evaluate how sugar availability affects cricket species richness and abundance in old-growth Atlantic forest by spraying sugarcane syrup on leaf litter, simulating increasing availability, and collecting crickets via pitfall trapping. We found an asymptotic positive association between resource addition and species richness, and an interaction between resource addition and species identity on cricket abundance, which indicates differential effects of resource addition among cricket species. Our results indicate that 12 of the 13 cricket species present in forest litter are maintained at low densities by resource scarcity; this highlights sugar-rich resource as a short-term driver of litter cricket community structure in tropical forests. When resource was experimentally increased, species richness increased due to behavioral displacement. We present evidence that the density of many species is limited by resource scarcity and, when resources are added, behavioral displacement promotes increased species packing and alters species composition. Further, our findings have technical applicability for increasing sampling efficiency of local cricket diversity in studies aiming to estimate species richness, but with no regard to local environmental drivers or species-abundance characteristics.

  5. Aggregated Computational Toxicology Online Resource

    EPA Pesticide Factsheets

    Aggregated Computational Toxicology Online Resource (ACToR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data from over 1,000 public sources on over 500,000 chemicals and is searchable by chemical name, other identifiers and by chemical structure. It can be used to query a specific chemical and find all publicly available hazard, exposure and risk assessment data. It also provides access to EPA's ToxCast, ToxRefDB, DSSTox and Dashboard data.

  6. The Computer Explosion: Implications for Educational Equity. Resource Notebook.

    ERIC Educational Resources Information Center

    Denbo, Sheryl, Comp.

    This notebook was prepared to provide resources for educators interested in using computers to increase opportunities for all students. The notebook contains specially prepared materials and selected newspaper and journal articles. The first section reviews the issues related to computer equity (equal access, tracking through different…

  7. ACToR: Aggregated Computational Toxicology Resource (T)

    EPA Science Inventory

    The EPA Aggregated Computational Toxicology Resource (ACToR) is a set of databases compiling information on chemicals in the environment from a large number of public and in-house EPA sources. ACToR has 3 main goals: (1) To serve as a repository of public toxicology information ...

  8. QMC Goes BOINC: Using Public Resource Computing to Perform Quantum Monte Carlo Calculations

    NASA Astrophysics Data System (ADS)

    Rainey, Cameron; Engelhardt, Larry; Schröder, Christian; Hilbig, Thomas

    2008-10-01

    Theoretical modeling of magnetic molecules traditionally involves the diagonalization of quantum Hamiltonian matrices. However, as the complexity of these molecules increases, the matrices become so large that this process becomes unusable. An additional challenge to this modeling is that many repetitive calculations must be performed, further increasing the need for computing power. Both of these obstacles can be overcome by using a quantum Monte Carlo (QMC) method and a distributed computing project. We have recently implemented a QMC method within the Spinhenge@home project, which is a Public Resource Computing (PRC) project where private citizens allow part-time usage of their PCs for scientific computing. The use of PRC for scientific computing will be described in detail, as well as how you can contribute to the project. See, e.g., L. Engelhardt, et al., Angew. Chem. Int. Ed. 47, 924 (2008). C. Schröder, in Distributed & Grid Computing - Science Made Transparent for Everyone. Principles, Applications and Supporting Communities. (Weber, M.H.W., ed., 2008). Project URL: http://spin.fh-bielefeld.de
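
    To give a flavor of the repetitive stochastic sampling that such volunteer projects parallelize, here is a toy classical Metropolis Monte Carlo for a two-spin Ising "molecule". This classical toy is emphatically not the project's QMC method; it is only a minimal illustration of Monte Carlo sampling of a spin system.

    ```python
    # Toy Metropolis Monte Carlo for a two-spin Ising system, H = -J * s1 * s2.
    # Illustrates the style of computation only; not the Spinhenge@home QMC code.
    import math, random

    def metropolis_energy(J=1.0, T=1.0, steps=100_000, seed=42):
        """Estimate the mean energy <E> at temperature T (units with k_B = 1)."""
        rng = random.Random(seed)
        s = [1, 1]
        total_e = 0.0
        for _ in range(steps):
            i = rng.randrange(2)
            dE = 2 * J * s[0] * s[1]            # energy change if either spin flips
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i] = -s[i]                    # accept the flip
            total_e += -J * s[0] * s[1]
        return total_e / steps

    print(round(metropolis_energy(), 2))        # close to -tanh(1) ≈ -0.76
    ```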

  9. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    PubMed

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we allocate the revenues based on each cooperator's contribution, according to the concept of the "Shapley value", to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
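
    The Shapley value itself is well defined: each cooperator receives its marginal contribution averaged over all orders in which the coalition could form. Below is a minimal sketch with a hypothetical coalition-value function in which revenue saturates at the job's demand; the devices, core counts and pay rate are invented for illustration.

    ```python
    # Exact Shapley value by averaging marginal contributions over all orderings.
    # The coalition-value function v() is a hypothetical example, not the paper's.
    from itertools import permutations

    def shapley(players, v):
        """Return {player: Shapley value} for coalition-value function v."""
        phi = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            coalition = []
            for p in order:
                before = v(frozenset(coalition))
                coalition.append(p)
                phi[p] += v(frozenset(coalition)) - before   # marginal contribution
        return {p: val / len(orders) for p, val in phi.items()}

    cores = {"A": 4, "B": 2, "C": 2}      # spare cores each device offers
    demand, rate = 6, 1.0                 # the job needs 6 cores, pays 1.0 per core
    v = lambda s: rate * min(demand, sum(cores[p] for p in s))
    print(shapley(list(cores), v))        # A earns ~3.33; B and C earn ~1.33 each
    ```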

  10. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing

    PubMed Central

    Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we allocate the revenues based on each cooperator's contribution, according to the concept of the "Shapley value", to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network. PMID:28030553

  11. Junior High Computer Studies: Teacher Resource Manual.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton. Curriculum Branch.

    This manual is designed to help classroom teachers in Alberta, Canada implement the Junior High Computer Studies Program. The first eight sections cover the following material: (1) introduction to the teacher resource manual; (2) program rationale and philosophy; (3) general learner expectations; (4) program framework and flexibility; (5) program…

  12. ACToR – Aggregated Computational Toxicology Resource ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a collection of databases collated or developed by the US EPA National Center for Computational Toxicology (NCCT). More than 200 sources of publicly available data on environmental chemicals have been brought together and made searchable by chemical name and other identifiers, and by chemical structure. Data includes chemical structure, physico-chemical values, in vitro assay data and in vivo toxicology data. Chemicals include, but are not limited to, high and medium production volume industrial chemicals, pesticides (active and inert ingredients), and potential ground and drinking water contaminants.

  13. Diversity in computing technologies and strategies for dynamic resource allocation

    DOE PAGES

    Garzoglio, G.; Gutsche, O.

    2015-12-23

    Here, High Energy Physics (HEP) is a very data-intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer detail, requiring ever-increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP has provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers like commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.

  14. Applications of computer-aided text analysis in natural resources.

    Treesearch

    David N. Bengston

    2000-01-01

    Ten contributed papers describe the use of a variety of approaches to computer-aided text analysis and their application to a wide range of research questions related to natural resources and the environment. Taken together, these papers paint a picture of a growing and vital area of research on the human dimensions of natural resource management.

  15. The Job Demands-Resources Model: An Analysis of Additive and Joint Effects of Demands and Resources

    ERIC Educational Resources Information Center

    Hu, Qiao; Schaufeli, Wilmar B.; Taris, Toon W.

    2011-01-01

    The present study investigated the additive, synergistic, and moderating effects of job demands and job resources on well-being (burnout and work engagement) and organizational outcomes, as specified by the Job Demands-Resources (JD-R) model. A survey was conducted among two Chinese samples: 625 blue collar workers and 761 health professionals. A…

  16. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    ERIC Educational Resources Information Center

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  17. VECTR: Virtual Environment Computational Training Resource

    NASA Technical Reports Server (NTRS)

    Little, William L.

    2018-01-01

    The Westridge Middle School Curriculum and Community Night is an annual event designed to introduce students and parents to potential employers in the Central Florida area. NASA participated in the event in 2017, and has been asked to come back for the 2018 event on January 25. We will be demonstrating our Microsoft Hololens Virtual Rovers project, and the Virtual Environment Computational Training Resource (VECTR) virtual reality tool.

  18. A Review of Resources for Evaluating K-12 Computer Science Education Programs

    ERIC Educational Resources Information Center

    Randolph, Justus J.; Hartikainen, Elina

    2004-01-01

    Since computer science education is a key to preparing students for a technologically-oriented future, it makes sense to have high quality resources for conducting summative and formative evaluation of those programs. This paper describes the results of a critical analysis of the resources for evaluating K-12 computer science education projects.…

  19. ACToR - Aggregated Computational Toxicology Resource (S)

    EPA Science Inventory

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  20. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    NASA Astrophysics Data System (ADS)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. Recent HEP experiments have used grid middleware to organize their services and resources; however, this middleware relies heavily on X.509 authentication, which is at odds with the untrusted nature of volunteer computing resources. One big challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge in this respect, as its pilot is more closely coupled with operations requiring X.509 authentication than the pilot implementations of its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach which detaches the payload execution from the Belle II DIRAC pilot (a customized pilot that pulls and processes jobs from the Belle II distributed computing platform), so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service, running on a trusted server, which handles all the operations requiring X.509 authentication. So far, we have developed and deployed the prototype of BelleII@home and tested its full workflow, which proves the feasibility of this approach. This approach can also be applied to HPC systems whose worker nodes do not have the outbound connectivity generally needed to interact with the DIRAC system.
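
    A schematic sketch of the detachment idea (not the BelleII@home code): the volunteer host runs only an unauthenticated pull-run-push loop, while the trusted gateway performs every operation that needs X.509 credentials. The URL, endpoints and payload format below are invented for illustration.

    ```python
    # Architectural sketch: the volunteer side of a gateway-mediated workflow.
    # No grid certificate ever reaches the volunteer machine; the gateway
    # (a trusted server, hypothetical URL) holds the X.509 credentials.
    import json, urllib.request

    GATEWAY = "https://gateway.example.org"     # hypothetical trusted gateway

    def simulate(payload):
        """Stand-in for the real CPU-bound simulation payload."""
        return sum(i * i for i in range(payload["events"]))

    def volunteer_loop():
        """Pull one payload from the gateway, run it, push the result back."""
        with urllib.request.urlopen(f"{GATEWAY}/next-payload") as resp:
            payload = json.load(resp)           # e.g. {"id": 7, "events": 100}
        result = simulate(payload)
        body = json.dumps({"id": payload["id"], "result": result}).encode()
        urllib.request.urlopen(f"{GATEWAY}/upload", data=body)  # POST result
    ```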

  1. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    NASA Astrophysics Data System (ADS)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

    In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm which converges all base stations' computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRHs). A precondition for centralized processing in the BBU pool is an interconnecting fronthaul network with high capacity and low delay. However, the interaction between RRH and BBU, and the resource scheduling among BBUs in the cloud, have become more complex and frequent. A cloud radio over fiber network has already been proposed in our previous work. In order to overcome the complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  2. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of an application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
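
    The paper's framework is not reproduced here, but the sketch below shows the kind of offloading decision such frameworks make, comparing estimated turnaround time and device energy for local versus cloud execution. All device and network parameters are hypothetical.

    ```python
    # Sketch of an offload-or-run-local decision for one application component.
    # MIPS ratings, bandwidth and energy coefficients are invented for illustration.
    def should_offload(cycles, data_mb, local_mips=1_000, cloud_mips=20_000,
                       bandwidth_mbps=40, joule_per_local_s=2.0, joule_per_tx_mb=0.5):
        local_time = cycles / (local_mips * 1e6)
        cloud_time = cycles / (cloud_mips * 1e6) + (data_mb * 8) / bandwidth_mbps
        local_energy = local_time * joule_per_local_s       # CPU energy on device
        cloud_energy = data_mb * joule_per_tx_mb            # radio energy to transmit
        # Offload only if it wins on both turnaround time and device energy.
        return cloud_time < local_time and cloud_energy < local_energy

    print(should_offload(cycles=5e10, data_mb=2))    # True: heavy compute, small state
    print(should_offload(cycles=1e8,  data_mb=50))   # False: light compute, big state
    ```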

  3. ACToR-Aggregated Computational Resource | Science ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food & Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high throughput environmental chemical screening and prioritization program called ToxCast(TM).

  4. Perspectives on Sharing Models and Related Resources in Computational Biomechanics Research.

    PubMed

    Erdemir, Ahmet; Hunter, Peter J; Holzapfel, Gerhard A; Loew, Leslie M; Middleton, John; Jacobs, Christopher R; Nithiarasu, Perumal; Löhner, Rainald; Wei, Guowei; Winkelstein, Beth A; Barocas, Victor H; Guilak, Farshid; Ku, Joy P; Hicks, Jennifer L; Delp, Scott L; Sacks, Michael; Weiss, Jeffrey A; Ateshian, Gerard A; Maas, Steve A; McCulloch, Andrew D; Peng, Grace C Y

    2018-02-01

    The role of computational modeling for biomechanics research and related clinical care will be increasingly prominent. The biomechanics community has been developing computational models routinely for exploration of the mechanics and mechanobiology of diverse biological structures. As a result, a large array of models, data, and discipline-specific simulation software has emerged to support endeavors in computational biomechanics. Sharing computational models and related data and simulation software has first become a utilitarian interest, and now, it is a necessity. Exchange of models, in support of knowledge exchange provided by scholarly publishing, has important implications. Specifically, model sharing can facilitate assessment of reproducibility in computational biomechanics and can provide an opportunity for repurposing and reuse, and a venue for medical training. The community's desire to investigate biological and biomechanical phenomena crossing multiple systems, scales, and physical domains, also motivates sharing of modeling resources as blending of models developed by domain experts will be a required step for comprehensive simulation studies as well as the enhancement of their rigor and reproducibility. The goal of this paper is to understand current perspectives in the biomechanics community for the sharing of computational models and related resources. Opinions on opportunities, challenges, and pathways to model sharing, particularly as part of the scholarly publishing workflow, were sought. A group of journal editors and a handful of investigators active in computational biomechanics were approached to collect short opinion pieces as a part of a larger effort of the IEEE EMBS Computational Biology and the Physiome Technical Committee to address model reproducibility through publications. A synthesis of these opinion pieces indicates that the community recognizes the necessity and usefulness of model sharing. There is a strong will to facilitate

  5. ACToR - Aggregated Computational Toxicology Resource

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judson, Richard; Richard, Ann; Dix, David

    2008-11-15

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast(TM).

  6. Computer-aided resource planning and scheduling for radiological services

    NASA Astrophysics Data System (ADS)

    Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.

    1996-05-01

    There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study is conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing the turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information system efficiency and human intelligence in improving radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.
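
    As a toy illustration of multi-exam scheduling of this kind (not the Constrained Resource Planning model itself), the sketch below picks the earliest feasible slot on each required modality while respecting the exam order; the modalities, durations and slots are hypothetical.

    ```python
    # Toy scheduler: place a patient's ordered exams into earliest feasible slots.
    def schedule_exams(exams, free_slots):
        """exams: ordered list of (modality, duration_hours);
        free_slots: modality -> sorted list of start times (hours).
        Returns an earliest-feasible [(modality, start)] respecting exam order."""
        plan, earliest = [], 0
        for modality, duration in exams:
            start = next(t for t in free_slots[modality] if t >= earliest)
            plan.append((modality, start))
            earliest = start + duration    # next exam cannot begin before this
        return plan

    slots = {"CT": [9, 10, 14], "MRI": [9, 11, 13]}
    print(schedule_exams([("CT", 1), ("MRI", 1)], slots))
    # -> [('CT', 9), ('MRI', 11)]  : done by noon instead of mid-afternoon
    ```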

  7. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of an application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  8. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    NASA Astrophysics Data System (ADS)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is twofold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16-week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise, students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent T-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in

  9. Guidelines for Developing Computer Based Resource Units. Revised.

    ERIC Educational Resources Information Center

    State Univ. of New York, Buffalo. Coll. at Buffalo. Educational Research and Development Complex.

    Presented for use with normal and handicapped children are guidelines for the development of computer based resource units organized into two operations: one of which is the production of software which includes the writing of instructional objectives, content, activities, materials, and measuring devices; and the other the coding of the software…

  10. Geology and mineral and energy resources, Roswell Resource Area, New Mexico; an interactive computer presentation

    USGS Publications Warehouse

    Tidball, Ronald R.; Bartsch-Winkler, S. B.

    1995-01-01

    This Compact Disc-Read Only Memory (CD-ROM) contains a program illustrating the geology and mineral and energy resources of the Roswell Resource Area, an administrative unit of the U.S. Bureau of Land Management in east-central New Mexico. The program enables the user to access information on the geology, geochemistry, geophysics, mining history, metallic and industrial mineral commodities, hydrocarbons, and assessments of the area. The program was created with the display software, SuperCard, version 1.5, by Aldus. The program will run only on a Macintosh personal computer. This CD-ROM was produced in accordance with Macintosh HFS standards. The program was developed on a Macintosh II-series computer with system 7.0.1. The program is a compiled, executable form that is nonproprietary and does not require the presence of the SuperCard software.

  11. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    PubMed

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimental, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results of the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We have used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  12. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses

    PubMed Central

    Dinov, Ivo D.; Sanchez, Juana; Christou, Nicolas

    2009-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimental, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results of the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We have used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  13. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, college information construction mainly comprises the building of college networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing and grid computing: data are stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and they can be accessed through all kinds of equipment. This article introduces cloud computing and its functions, then analyzes the existing problems of college network resource management; the technologies and methods of cloud computing are then applied to the construction of a college information sharing platform.

  14. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    This work covers computational process and material modeling of powder-bed additive manufacturing (AM) of IN 718. The goals are to optimize material build parameters with reduced time and cost through modeling, increase understanding of build properties, increase reliability of builds, decrease the time to adoption of the process for critical hardware, and potentially decrease post-build heat treatments. The approach is to conduct single-track and coupon builds at various build parameters; record build parameter information and QM Meltpool data; refine the Applied Optimization powder-bed AM process model using these data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; and validate the modeling with an additional build. Findings to date: photodiode intensity measurements are highly linear with power input; melt pool intensity is highly correlated with melt pool size; and melt pool size and intensity increase with power. Applied Optimization will use the data to develop a powder-bed additive manufacturing process model.

  15. Integration of Openstack cloud resources in BES III computing cluster

    NASA Astrophysics Data System (ADS)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for data processing in high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and resource usage is static. In order to make the system simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) that integrates IHEPCloud with different batch systems such as Torque and HTCondor. Vpmanager schedules virtual machines dynamically according to the job queue. The BES III use case shows that resource efficiency is greatly improved.
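
    The abstract describes growing and shrinking a virtual cluster to match the depth of the batch queue. As a rough illustration of that scale-out/scale-in logic, here is a minimal, self-contained Python sketch; the class and function names are hypothetical stand-ins for the actual vpmanager/IHEPCloud interfaces, which the abstract does not specify.

        class Cloud:
            """Toy stand-in for an OpenStack-backed VM pool (illustrative only)."""
            def __init__(self):
                self.vms = 0

            def boot(self, n):
                self.vms += n
                print(f"booting {n} VMs -> {self.vms} running")

            def shutdown(self, n):
                self.vms -= n
                print(f"shutting down {n} VMs -> {self.vms} running")

        def reconcile(cloud, pending_jobs, max_vms=100):
            """Grow or shrink the virtual cluster to match the batch queue depth."""
            target = min(pending_jobs, max_vms)
            if target > cloud.vms:
                cloud.boot(target - cloud.vms)
            elif target < cloud.vms:
                cloud.shutdown(cloud.vms - target)

        cloud = Cloud()
        for pending in [5, 42, 17, 0]:  # pending-job counts sampled from the queue
            reconcile(cloud, pending)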

  16. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  17. Learning with Computers. AECA Resource Book Series, Volume 3, Number 2.

    ERIC Educational Resources Information Center

    Elliott, Alison

    1996-01-01

    Research has supported the idea that the use of computers in the education of young children promotes social interaction and academic achievement. This resource booklet provides an introduction to computers in early childhood settings to enrich learning opportunities and provides guidance to teachers to find developmentally appropriate software…

  18. Computational resources for ribosome profiling: from database to Web server and software.

    PubMed

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This benefits not only from the power of ribosome profiling itself but also from the extensive range of computational resources available for it. At present, however, a comprehensive review of these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand.

  19. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    NASA Astrophysics Data System (ADS)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests demonstrated the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  20. Function Package for Computing Quantum Resource Measures

    NASA Astrophysics Data System (ADS)

    Huang, Zhiming

    2018-05-01

    In this paper, we present a function package to calculate quantum resource measures and the dynamics of open systems. Our package includes common operators and operator lists, and frequently used functions for computing quantum entanglement, quantum correlation, quantum coherence, quantum Fisher information and dynamics in noisy environments. We briefly explain the functions of the package and illustrate how to use it with several typical examples. We expect that this package will be a useful tool for future research and education.
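
    The abstract does not list the package's actual function names, but one frequently used coherence measure of the kind it covers is the l1-norm of coherence, C_l1(rho) = sum over i != j of |rho_ij|. A minimal NumPy sketch, with an illustrative function name:

        import numpy as np

        def l1_norm_coherence(rho):
            """l1-norm of coherence: sum of |off-diagonal| entries of a density matrix."""
            off_diag = rho - np.diag(np.diag(rho))
            return float(np.sum(np.abs(off_diag)))

        # The single-qubit |+> state has maximal coherence 1 in the computational basis.
        plus = np.array([[0.5, 0.5],
                         [0.5, 0.5]])
        print(l1_norm_coherence(plus))  # 1.0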

  1. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    USGS Publications Warehouse

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously

  2. The Relative Effectiveness of Computer-Based and Traditional Resources for Education in Anatomy

    ERIC Educational Resources Information Center

    Khot, Zaid; Quinlan, Kaitlyn; Norman, Geoffrey R.; Wainman, Bruce

    2013-01-01

    There is increasing use of computer-based resources to teach anatomy, although no study has compared computer-based learning to traditional methods. In this study, we examine the effectiveness of three formats of anatomy learning: (1) a virtual reality (VR) computer-based module, (2) a static computer-based module providing Key Views (KV), (3) a plastic…

  3. Additional development of the XTRAN3S computer program

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.

  4. Desktop Computing Integration Project

    NASA Technical Reports Server (NTRS)

    Tureman, Robert L., Jr.

    1992-01-01

    The Desktop Computing Integration Project for the Human Resources Management Division (HRMD) of LaRC was designed to help division personnel use personal computing resources to perform job tasks. The three goals of the project were to involve HRMD personnel in desktop computing, link mainframe data to desktop capabilities, and to estimate training needs for the division. The project resulted in increased usage of personal computers by Awards specialists, an increased awareness of LaRC resources to help perform tasks, and personal computer output that was used in presentation of information to center personnel. In addition, the necessary skills for HRMD personal computer users were identified. The Awards Office was chosen for the project because of the consistency of their data requests and the desire of employees in that area to use the personal computer.

  5. Resource Manual on the Use of Computers in Schooling.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Bureau of Technology Applications.

    This resource manual is designed to provide educators with timely information on the use of computers and related technology in schools. Section one includes a review of the new Bureau of Technology Applications' goal, functions, and major programs and activities; a description of the Model Schools Program, which has been conceptually derived from…

  6. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    PubMed Central

    Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a management layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the management layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experimental results show that the strategy is effective and stable. PMID:24883353

  7. A novel resource management method of providing operating system as a service for mobile transparent computing.

    PubMed

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a management layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the management layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experimental results show that the strategy is effective and stable.

  8. The Center for Computational Biology: resources, achievements, and challenges

    PubMed Central

    Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2011-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221

  9. The Center for Computational Biology: resources, achievements, and challenges.

    PubMed

    Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2012-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.

  10. Radiotherapy infrastructure and human resources in Switzerland : Present status and projected computations for 2020.

    PubMed

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-09-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8 % increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland.
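
    The projections follow from multiplying cancer incidence by radiotherapy utilization rates and dividing by machine throughput. A back-of-the-envelope sketch using the abstract's 2020 figures; the throughput of 450 courses per TRT unit per year is an assumed QUARTS-style planning number, not a figure taken from this paper:

        from math import ceil

        incidence_2020 = 50427      # projected cancer incidence (from the abstract)
        rt_patients = 34041         # patients needing radiotherapy (from the abstract)
        courses_per_unit = 450      # assumed throughput per TRT unit per year

        rtu_rate = rt_patients / incidence_2020
        units_needed = ceil(rt_patients / courses_per_unit)
        print(f"overall RTU rate: {rtu_rate:.1%}")   # ~67.5%
        print(f"TRT units needed: {units_needed}")   # ~76 under this assumption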

  11. Measuring the impact of computer resource quality on the software development process and product

    NASA Technical Reports Server (NTRS)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process was speculated to have measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product as exemplified by the subject NASA data was examined. Based upon the results, a number of computer resource-related implications are provided.

  12. Common Accounting System for Monitoring the ATLAS Distributed Computing Resources

    NASA Astrophysics Data System (ADS)

    Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration

    2014-06-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  13. Mineral resources of the Whipple Mountains and Whipple Mountains Addition Wilderness Study Areas, San Bernardino County, California

    USGS Publications Warehouse

    Marsh, Sherman P.; Raines, Gary L.; Diggles, Michael F.; Howard, Keith A.; Simpson, Robert W.; Hoover, Donald B.; Ridenour, James; Moyle, Phillip R.; Willett, Spencee L.

    1988-01-01

    At the request of the U.S. Bureau of Land Management, approximately 85,100 acres of the Whipple Mountains Wilderness Study Area (CDCA-312) and 1,380 acres of the Whipple Mountains Addition Wilderness Study Area (AZ-050-010) were evaluated for identified mineral resources (known) and mineral resource potential (undiscovered). In this report, the Whipple Mountains and Whipple Mountains Addition Wilderness Study Areas are referred to as simply "the study area." Most of the mines and prospects with identified resources in the Whipple Mountains Wilderness Study Area are within areas designated as having mineral resource potential. The area in and around the Turk Silver mine and the Lucky Green group and the area near the northwest boundary of the study area have high mineral resource potential for copper, lead, zinc, gold, and silver. An area along the west boundary of the study area has moderate resource potential for copper, lead, zinc, gold, and silver. An area in the east adjacent to the Whipple Mountains Addition Wilderness Study Area has moderate resource potential for copper, gold, and silver resources. One area on the north boundary and one on the southeast boundary of the study area have low mineral resource potential for copper, lead, zinc, gold, and silver. Two areas, one on the north boundary and one inside the east boundary of the study area, have moderate resource potential for manganese. A small area inside the south boundary of the study area has high resource potential for decorative building stone, and the entire study area has low resource potential for sand and gravel and other rock products suitable for construction. Two areas in the eastern part of the study area have low resource potential for uranium. There is no resource potential for oil and gas or geothermal resources in the Whipple Mountains Wilderness Study Area. Sites within the Whipple Mountains Wilderness Study Area with identified resources of copper, gold, silver, manganese and (or

  14. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2017-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws and geometric features were inspected using a 2-megavolt linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  15. The Ever-Present Demand for Public Computing Resources. CDS Spotlight

    ERIC Educational Resources Information Center

    Pirani, Judith A.

    2014-01-01

    This Core Data Service (CDS) Spotlight focuses on public computing resources, including lab/cluster workstations in buildings, virtual lab/cluster workstations, kiosks, laptop and tablet checkout programs, and workstation access in unscheduled classrooms. The findings are derived from 758 CDS 2012 participating institutions. A dataset of 529…

  16. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  17. The importance of employing computational resources for the automation of drug discovery.

    PubMed

    Rosales-Hernández, Martha Cecilia; Correa-Basurto, José

    2015-03-01

    The application of computational tools to drug discovery helps researchers to design and evaluate new drugs swiftly and with reduced economic resources. To discover new potential drugs, computational chemistry incorporates automatization for obtaining biological data such as absorption, distribution, metabolism, excretion and toxicity (ADMET), as well as drug mechanisms of action. This editorial looks at examples of these computational tools, including docking, molecular dynamics simulation, virtual screening, quantum chemistry, quantitative structure-activity relationships, principal component analysis and drug screening workflow systems. The authors then provide their perspectives on the importance of these techniques for drug discovery. Computational tools help researchers to design and discover new drugs for the treatment of several human diseases without side effects, thus allowing for the evaluation of millions of compounds at a reduced cost in both time and economic resources. The problem is that operating each program is difficult; one is required to use several programs and understand each of the properties being tested. In the future, it is possible that a single computer and software program will be capable of evaluating the complete properties (mechanisms of action and ADMET properties) of ligands. It is also possible that after submitting one target, this computer software will be capable of suggesting potential compounds along with ways to synthesize them, and presenting biological models for testing.

  18. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    PubMed

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.

  19. GANGA: A tool for computational-task management and easy access to Grid resources

    NASA Astrophysics Data System (ADS)

    Mościcki, J. T.; Brochu, F.; Ebke, J.; Egede, U.; Elmsheuser, J.; Harrison, K.; Jones, R. W. L.; Lee, H. C.; Liko, D.; Maier, A.; Muraru, A.; Patrick, G. N.; Pajchel, K.; Reece, W.; Samset, B. H.; Slater, M. W.; Soroko, A.; Tan, C. L.; van der Ster, D. C.; Williams, M.

    2009-11-01

    In this paper, we present the computational task-management tool GANGA, which allows for the specification, submission, bookkeeping and post-processing of computational tasks on a wide set of distributed resources. GANGA has been developed to solve a problem increasingly common in scientific projects, which is that researchers must regularly switch between different processing systems, each with its own command set, to complete their computational tasks. GANGA provides a homogeneous environment for processing data on heterogeneous resources. We give examples from High Energy Physics, demonstrating how an analysis can be developed on a local system and then transparently moved to a Grid system for processing of all available data. GANGA has an API that can be used via an interactive interface, in scripts, or through a GUI. Specific knowledge about types of tasks or computational resources is provided at run-time through a plugin system, making new developments easy to integrate. We give an overview of the GANGA architecture, give examples of current use, and demonstrate how GANGA can be used in many different areas of science.
    Catalogue identifier: AEEN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEN_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL
    No. of lines in distributed program, including test data, etc.: 224 590
    No. of bytes in distributed program, including test data, etc.: 14 365 315
    Distribution format: tar.gz
    Programming language: Python
    Computer: personal computers, laptops
    Operating system: Linux/Unix
    RAM: 1 MB
    Classification: 6.2, 6.5
    Nature of problem: Management of computational tasks for scientific applications on heterogeneous distributed systems, including local, batch farms, opportunistic clusters and
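
    The abstract describes developing an analysis locally and then transparently moving it to a Grid backend. A minimal sketch of what that can look like in a scripted Ganga (GPI) session; option names may vary between Ganga versions, so treat this as illustrative rather than exact:

        # Inside an interactive Ganga session (the GPI pre-defines Job, Local, LCG, ...)
        j = Job()                              # default Executable application
        j.application.exe = 'echo'
        j.application.args = ['hello from Ganga']
        j.backend = Local()                    # develop and test on the local machine
        j.submit()

        j2 = j.copy()                          # same task, different resource
        j2.backend = LCG()                     # resubmit to a Grid backend
        j2.submit()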

  20. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  1. 15 CFR 270.204 - Provision of additional resources and services needed by a Team.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... services needed by a Team. 270.204 Section 270.204 Commerce and Foreign Trade Regulations Relating to... CONSTRUCTION SAFETY TEAMS NATIONAL CONSTRUCTION SAFETY TEAMS Investigations § 270.204 Provision of additional resources and services needed by a Team. The Director will determine the appropriate resources that a Team...

  2. 15 CFR 270.204 - Provision of additional resources and services needed by a Team.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... services needed by a Team. 270.204 Section 270.204 Commerce and Foreign Trade Regulations Relating to... CONSTRUCTION SAFETY TEAMS NATIONAL CONSTRUCTION SAFETY TEAMS Investigations § 270.204 Provision of additional resources and services needed by a Team. The Director will determine the appropriate resources that a Team...

  3. 15 CFR 270.204 - Provision of additional resources and services needed by a Team.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... services needed by a Team. 270.204 Section 270.204 Commerce and Foreign Trade Regulations Relating to... CONSTRUCTION SAFETY TEAMS NATIONAL CONSTRUCTION SAFETY TEAMS Investigations § 270.204 Provision of additional resources and services needed by a Team. The Director will determine the appropriate resources that a Team...

  4. 15 CFR 270.204 - Provision of additional resources and services needed by a Team.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... services needed by a Team. 270.204 Section 270.204 Commerce and Foreign Trade Regulations Relating to... CONSTRUCTION SAFETY TEAMS NATIONAL CONSTRUCTION SAFETY TEAMS Investigations § 270.204 Provision of additional resources and services needed by a Team. The Director will determine the appropriate resources that a Team...

  5. 15 CFR 270.204 - Provision of additional resources and services needed by a Team.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... services needed by a Team. 270.204 Section 270.204 Commerce and Foreign Trade Regulations Relating to... CONSTRUCTION SAFETY TEAMS NATIONAL CONSTRUCTION SAFETY TEAMS Investigations § 270.204 Provision of additional resources and services needed by a Team. The Director will determine the appropriate resources that a Team...

  6. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    NASA Astrophysics Data System (ADS)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States; based on Internet data centers, it provides a standard and open approach to network-shared services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of actual teaching needs. Cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital education applications in current higher education. Based on a cloud computing environment, this paper analyzes the existing problems in the sharing of digital educational resources among the independent colleges of Jiangxi Province. Drawing on the cloud computing characteristics of mass storage, efficient operation and low cost, the author explores the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the designed sharing model is put into practical application.

  7. An integrated system for land resources supervision based on the IoT and cloud computing

    NASA Astrophysics Data System (ADS)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.

  8. An interactive computer approach to performing resource analysis for a multi-resource/multi-project problem. [Spacelab inventory procurement planning

    NASA Technical Reports Server (NTRS)

    Schlagheck, R. A.

    1977-01-01

    New planning techniques and supporting computer tools are needed for the optimization of resources and costs for space transportation and payload systems. Heavy emphasis on cost effective utilization of resources has caused NASA program planners to look at the impact of various independent variables that affect procurement buying. A description is presented of a category of resource planning which deals with Spacelab inventory procurement analysis. Spacelab is a joint payload project between NASA and the European Space Agency and will be flown aboard the Space Shuttle starting in 1980. In order to respond rapidly to the various procurement planning exercises, a system was built that could perform resource analysis in a quick and efficient manner. This system is known as the Interactive Resource Utilization Program (IRUP). Attention is given to aspects of problem definition, an IRUP system description, questions of data base entry, the approach used for project scheduling, and problems of resource allocation.

  9. Testing a computer-based ostomy care training resource for staff nurses.

    PubMed

    Bales, Isabel

    2010-05-01

    Fragmented teaching and ostomy care provided by nonspecialized clinicians unfamiliar with state-of-the-art care and products have been identified as problems in teaching ostomy care to the new ostomate. After conducting a literature review of theories and concepts related to the impact of nurse behaviors and confidence on ostomy care, the author developed a computer-based learning resource and assessed its effect on staff nurse confidence. Of 189 staff nurses with a minimum of 1 year of acute-care experience employed in the acute care, emergency, and rehabilitation departments of an acute care facility in the Midwestern US, 103 agreed to participate and returned completed pre- and post-tests, each comprising the same eight statements about providing ostomy care. F and P values were computed for differences between pre- and post-test scores. Based on a scale where 1 = totally disagree and 5 = totally agree with the statement, baseline confidence and perceived knowledge scores averaged 3.8; after viewing the resource program, post-test mean scores averaged 4.51, a statistically significant improvement (P = 0.000). The largest difference between pre- and post-test scores involved feeling confident in having the resources to learn ostomy skills independently. The availability of an electronic ostomy care resource was rated highly in both pre- and post-testing. Studies to assess the effects of increased confidence and knowledge on the quality and provision of care are warranted.

  10. The Need for Computer Science

    ERIC Educational Resources Information Center

    Margolis, Jane; Goode, Joanna; Bernier, David

    2011-01-01

    Broadening computer science learning to include more students is a crucial item on the United States' education agenda, these authors say. Although policymakers advocate more computer science expertise, computer science offerings in high schools are few--and actually shrinking. In addition, poorly resourced schools with a high percentage of…

  11. Pricing the Computing Resources: Reading Between the Lines and Beyond

    NASA Technical Reports Server (NTRS)

    Nakai, Junko; Veronico, Nick (Editor); Thigpen, William W. (Technical Monitor)

    2001-01-01

    Distributed computing systems have the potential to increase the usefulness of existing facilities for computation without adding anything physical, but that is realized only when necessary administrative features are in place. In a distributed environment, the best match is sought between a computing job to be run and a computer to run the job (global scheduling), which is a function that has not been required by conventional systems. Viewing the computers as 'suppliers' and the users as 'consumers' of computing services, markets for computing services/resources have been examined as one of the most promising mechanisms for global scheduling. We first establish why economics can contribute to scheduling. We further define the criterion for a scheme to qualify as an application of economics. Many studies to date have claimed to have applied economics to scheduling. If their scheduling mechanisms do not utilize economics, contrary to their claims, their favorable results do not contribute to the assertion that markets provide the best framework for global scheduling. We examine the well-known scheduling schemes, which concern pricing and markets, using our criterion of what application of economics is. Our conclusion is that none of the schemes examined makes full use of economics.

  12. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based and designed to work on a single computer, which is a major limitation in many ways, from limited processing and storage capacity to accessibility and availability. The only feasible solution lies in the web and the cloud. This abstract presents the research and development of a cloud computing geospatial application for water resources, based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first one (VM1) runs on Amazon Web Services (AWS) and the second one (VM2) runs on a Xen cloud platform. The cloud application is developed using free and open source software, open standards and prototype code, and presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaborative geospatial platform, because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time and accessible from everywhere; it is scalable, works in a distributed computing environment, creates a real-time multi-user collaboration platform, uses interoperable programming-language code and components, and is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with multiple concurrent users. The application is a state

  13. Adolescents, Health Education, and Computers: The Body Awareness Resource Network (BARN).

    ERIC Educational Resources Information Center

    Bosworth, Kris; And Others

    1983-01-01

    The Body Awareness Resource Network (BARN) is a computer-based system designed as a confidential, nonjudgmental source of health information for adolescents. Topics include alcohol and other drugs, diet and activity, family communication, human sexuality, smoking, and stress management; programs are available for high school and middle school…

  14. Justification of Filter Selection for Robot Balancing in Conditions of Limited Computational Resources

    NASA Astrophysics Data System (ADS)

    Momot, M. V.; Politsinskaia, E. V.; Sushko, A. V.; Semerenko, I. A.

    2016-08-01

    The paper considers the problem of selecting a mathematical filter for balancing a wheeled robot under limited computational resources. A solution based on a complementary filter is proposed.
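
    The abstract does not give the paper's exact formulation, but a standard complementary filter, the kind of computationally cheap sensor fusion the title refers to, blends the integrated gyro rate with the accelerometer tilt estimate. A minimal sketch with illustrative constants:

        import math

        ALPHA = 0.98   # trust the gyro short-term, the accelerometer long-term
        DT = 0.01      # control-loop period, seconds

        def complementary_filter(angle, gyro_rate, ax, az):
            """Fuse gyro integration with the accelerometer tilt estimate."""
            accel_angle = math.atan2(ax, az)  # tilt angle from the gravity vector
            return ALPHA * (angle + gyro_rate * DT) + (1.0 - ALPHA) * accel_angle

        # One update step: previous estimate 0.02 rad, gyro 0.5 rad/s, accel reading
        print(complementary_filter(0.02, 0.5, 0.1, 9.7))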

  15. SARANA: language, compiler and run-time system support for spatially aware and resource-aware mobile computing.

    PubMed

    Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei

    2008-10-28

    Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and to smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. Although we are still early in the system development, initial versions have been demonstrated on a nine-node system prototype.
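
    As a concrete reading of the QoR/latency/cost brokering the abstract describes, the following sketch picks the cheapest node that still satisfies the user's constraints; it is an illustration of the idea, not the SARANA API, and all names and numbers are made up:

        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            qor: float      # expected quality of result, 0..1
            latency: float  # seconds
            cost: float     # price units

        def broker(nodes, min_qor, max_latency):
            """Cheapest node meeting the QoR and latency constraints, if any."""
            feasible = [n for n in nodes
                        if n.qor >= min_qor and n.latency <= max_latency]
            return min(feasible, key=lambda n: n.cost) if feasible else None

        nodes = [Node('phone', 0.60, 0.2, 1.0),
                 Node('pda', 0.70, 0.5, 2.0),
                 Node('laptop', 0.95, 1.0, 5.0)]
        print(broker(nodes, min_qor=0.65, max_latency=0.8))  # -> the PDA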

  16. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    PubMed

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience, approaching subjective behavior as the result of mental computations instantiated in the brain, to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.
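
    The abstract's point that confidence is "an estimate of the probability that a hypothesis is correct" has a standard Bayesian rendering, which the abstract itself does not spell out: given evidence x and hypothesis H,

        \mathrm{confidence} = P(H \mid x)
                            = \frac{P(x \mid H)\,P(H)}{P(x \mid H)\,P(H) + P(x \mid \neg H)\,P(\neg H)}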

  17. Emotor control: computations underlying bodily resource allocation, emotions, and confidence

    PubMed Central

    Kepecs, Adam; Mensh, Brett D.

    2015-01-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience—approaching subjective behavior as the result of mental computations instantiated in the brain—to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This “emotor” control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on “confidence.” Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior. PMID:26869840

  18. Additional Resources - Naval Oceanography Portal

    Science.gov Websites

    A portal page of additional resources, including research and development results. Subject categories include Astronomy and Space, as well as Earth and Ocean Sciences. Astronomy resources include the Union List of Astronomy Serials (ULAS), a bibliographic resource.

  19. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    NASA Astrophysics Data System (ADS)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, the management of their resources is a key point that guarantees acceptable performance and proper operation. This is a very broad problem that poses many challenges in the financial, transport, water and food, health, and other areas. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that connects the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The obtained analytical results are related to a practical experiment, showing interesting and valuable results.
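
    For orientation, the functional forms named in the title and abstract are, in standard notation (the paper's specific modification is not given in the abstract), the stretched exponential and the Tsallis q-exponential underlying non-extensive entropy:

        f(t) = \exp\!\left[-\left(t/\tau\right)^{\beta}\right], \quad 0 < \beta \le 1,
        \qquad
        e_q(x) = \left[1 + (1-q)\,x\right]_{+}^{1/(1-q)},

    with e_q(x) reducing to the ordinary exponential in the limit q -> 1.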

  20. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.
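
    The adaptive idea in the abstract, monitoring the real-time path from instrument to user and reallocating when performance degrades, can be caricatured in a few lines. This is an illustrative control loop, not the DeSiDeRaTa middleware; all names and numbers are made up:

        import random

        HOSTS = {'ground1': 0.3, 'ground2': 0.7, 'spacecraft': 0.5}  # host -> load
        DEADLINE = 0.5  # seconds, end-to-end instrument-to-user deadline

        def measure_path_latency():
            return random.uniform(0.2, 0.9)  # stand-in for real telemetry

        for _ in range(3):
            latency = measure_path_latency()
            if latency > DEADLINE:
                target = min(HOSTS, key=HOSTS.get)  # least-loaded host
                print(f"latency {latency:.2f}s over deadline: migrate stage to {target}")
            else:
                print(f"latency {latency:.2f}s within deadline")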

  1. Addition of multiple limiting resources reduces grassland diversity

    USDA-ARS?s Scientific Manuscript database

    Niche dimensionality is the most general theoretical explanation for biodiversity: more niches allow for more ecological tradeoffs between species and thus greater opportunities for coexistence. Resource competition theory predicts that removing resource limitations, by increasing resource availabil...

  2. A Modeling Framework for Optimal Computational Resource Allocation Estimation: Considering the Trade-offs between Physical Resolutions, Uncertainty and Computational Costs

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.; Rajagopal, R.

    2014-12-01

    Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the site. The physical resolution (e.g. the grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: the time associated with computational costs, the statistical convergence of the model predictions and the physical errors corresponding to the numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally intensive examples. Having this framework at hand helps hydrogeologists choose the optimal physical and statistical resolutions that minimize the error for a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational
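
    One generic form such an error model can take (shown here for concreteness; the paper's actual expression is not given in the abstract) combines a discretization term that shrinks with grid spacing h and a Monte Carlo sampling term that shrinks with the number of realizations N, minimized under a fixed budget:

        E(h, N) \approx C_1\,h^{p} + C_2\,N^{-1/2},
        \qquad
        \min_{h,\,N} E(h, N) \quad \text{subject to} \quad N\,c(h) \le T,

    where p is the order of the numerical scheme, c(h) is the cost of one realization (growing as h decreases) and T is the available computational budget.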

  3. The relative effectiveness of computer-based and traditional resources for education in anatomy.

    PubMed

    Khot, Zaid; Quinlan, Kaitlyn; Norman, Geoffrey R; Wainman, Bruce

    2013-01-01

    There is increasing use of computer-based resources to teach anatomy, although no study has compared computer-based learning with traditional resources. In this study, we examine the effectiveness of three formats of anatomy learning: (1) a virtual reality (VR) computer-based module, (2) a static computer-based module providing Key Views (KV), (3) a plastic model. We conducted a controlled trial in which 60 undergraduate students had ten minutes to study the names of 20 different pelvic structures. The outcome measure was a 25-item short-answer test consisting of 15 nominal and 10 functional questions, based on a cadaveric pelvis. All subjects also took a brief mental rotations test (MRT) as a measure of spatial ability, used as a covariate in the analysis. Data were analyzed with repeated measures ANOVA. The group learning from the model performed significantly better than the other two groups on the nominal questions (Model 67%; KV 40%; VR 41%; effect sizes 1.19 and 1.29, respectively). There was no difference between the KV and VR groups. There was no difference between the groups on the functional questions (Model 28%; KV 23%; VR 25%). Computer-based learning resources appear to have significant disadvantages compared to traditional specimens in learning nominal anatomy. Consistent with previous research, virtual reality shows no advantage over static presentation of key views. © 2013 American Association of Anatomists.

  4. Discussing sexual and relationship health with young people in a children's hospital: evaluation of a computer-based resource.

    PubMed

    Bray, Lucy; Sanders, Caroline; McKenna, Jacqueline

    2013-12-01

    To investigate health professionals' evaluation of a computer-based resource designed to improve discussions about sexual and relationship health with young people. Evidence suggests that some health professionals can experience discomfort discussing sexual health and relationship issues with young people. Professionals within hospital settings should have the knowledge, competencies and skills to be able to ask young people sexual health questions and provide accurate sexual health education. Despite some educational material being available for community and adult services, there are no resources available that are directly relevant to holding opportunistic discussions with young people within an acute children's hospital. A descriptive survey design. One hundred and fourteen health professionals from a children's hospital in the UK were involved in evaluating a computer-based resource. All completed an online questionnaire survey comprising closed and open questions. The health professionals reported that the computer-based resource had a positive influence on their knowledge and clinical practice. The videos as well as the concise nature of the resource were evaluated highly. Learning was facilitated by professionals being able to control their learning through rerunning and accessing the resource on numerous occasions. An engaging, accessible computer-based resource can positively influence health professionals' knowledge of, and skills in, starting and holding sexual health conversations with young people accessing a children's hospital. Health professionals working with children and young people value accessible, relevant and short computer-based training. This can facilitate knowledge and skill acquisition despite variation in working patterns. Improving the knowledge and skills of professionals working with young people to facilitate appropriate yet opportunistic sexual health discussions is important within the public health agenda.

  5. City University of New York--Availability of Student Computer Resources. Report.

    ERIC Educational Resources Information Center

    McCall, H. Carl

    This audit reports on the availability of computer resources at the City University of New York's (CUNY) senior colleges. CUNY is the largest urban and the third largest public university system in the United States. Of the 19 CUNY campuses located throughout the five boroughs, 11 are senior colleges offering four-year degrees. For the fall 2001…

  6. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    NASA Astrophysics Data System (ADS)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
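
    A minimal Python sketch of the Metropolis search over task-to-node assignments described above; the quadratic congestion cost and the network size are toy assumptions standing in for the paper's latency model:

        import math, random

        # Toy Metropolis exploration of task-to-node assignments.
        random.seed(0)
        n_nodes, n_tasks = 10, 50
        assign = [random.randrange(n_nodes) for _ in range(n_tasks)]
        load = [0] * n_nodes
        for node in assign:
            load[node] += 1

        def metropolis_step(T):
            i = random.randrange(n_tasks)
            old, new = assign[i], random.randrange(n_nodes)
            dE = ((load[new] + 1) ** 2 + (load[old] - 1) ** 2
                  - load[new] ** 2 - load[old] ** 2)    # change in sum of squared loads
            if dE <= 0 or random.random() < math.exp(-dE / T):
                assign[i] = new
                load[old] -= 1
                load[new] += 1

        for T in (10.0, 1.0, 0.1):    # cooling: lower T approaches the optimal assignment
            for _ in range(5000):
                metropolis_step(T)
        print("final congestion cost:", sum(l * l for l in load))

    High temperature samples suboptimal assignments and low temperature concentrates on near-optimal ones, which is the knob the study varies to probe the congested-phase transition.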

  7. NCRETURN Computer Program for Evaluating Investments Revised to Provide Additional Information

    Treesearch

    Allen L. Lundgren; Dennis L. Schweitzer

    1971-01-01

    Reports a modified version of NCRETURN, a computer program for evaluating forestry investments. The revised version, RETURN, provides additional information about each investment, including future net worths and benefit-cost ratios, with no added input.

  8. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    NASA Astrophysics Data System (ADS)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the "Cloud Bursting" of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage. The amount of allocated resources can thus be elastically adjusted to meet the needs of the CMS experiment and local users. Moreover, a direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, we report on the performance of the on-demand allocated resources, and we discuss the lessons learned and the next steps.

  9. National resource for computation in chemistry, phase I: evaluation and recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-05-01

    The National Resource for Computation in Chemistry (NRCC) was inaugurated at the Lawrence Berkeley Laboratory (LBL) in October 1977, with joint funding by the Department of Energy (DOE) and the National Science Foundation (NSF). The chief activities of the NRCC include: assembling a staff of eight postdoctoral computational chemists, establishing an office complex at LBL, purchasing a midi-computer and graphics display system, administering grants of computer time, conducting nine workshops in selected areas of computational chemistry, compiling a library of computer programs with adaptations and improvements, initiating a software distribution system, providing user assistance and consultation on request. This report presents assessments and recommendations of an Ad Hoc Review Committee appointed by the DOE and NSF in January 1980. The recommendations are that NRCC should: (1) not fund grants for computing time or research but leave that to the relevant agencies, (2) continue the Workshop Program in a mode similar to Phase I, (3) abandon in-house program development and establish instead a competitive external postdoctoral program in chemistry software development administered by the Policy Board and Director, and (4) not attempt a software distribution system (leaving that function to the QCPE). Furthermore, (5) DOE should continue to make its computational facilities available to outside users (at normal cost rates) and should find some way to allow the chemical community to gain occasional access to a CRAY-level computer.

  10. ON states as resource units for universal quantum computation with photonic architectures

    NASA Astrophysics Data System (ADS)

    Sabapathy, Krishna Kumar; Weedbrook, Christian

    2018-06-01

    Universal quantum computation using photonic systems requires gates the Hamiltonians of which are of order greater than quadratic in the quadrature operators. We first review previous proposals to implement such gates, where specific non-Gaussian states are used as resources in conjunction with entangling gates such as the continuous-variable versions of controlled-phase and controlled-not gates. We then propose ON states, which are superpositions of the vacuum and the Nth Fock state, for use as non-Gaussian resource states. We show that ON states can be used to implement the cubic and higher-order quadrature phase gates to first order in gate strength. There are several advantages to this method, such as a reduced number of superpositions in the resource state preparation and greater control over the final gate. We also introduce useful figures of merit to characterize gate performance. Utilizing a supply of on-demand resource states, one can potentially scale up the implementation to greater accuracy by repeated application of the basic circuit.
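
    A minimal sketch of the objects the abstract names, in LaTeX; the normalization and the free complex amplitude a are illustrative assumptions rather than the paper's exact conventions:

        |ON\rangle = \frac{1}{\sqrt{1 + |a|^{2}}}\,\bigl(|0\rangle + a\,|N\rangle\bigr),
        \qquad
        V(\gamma) = e^{\,i \gamma \hat{x}^{3}}

    Here V(gamma) denotes the cubic phase gate, the lowest-order gate beyond quadratic Hamiltonians, which such non-Gaussian resource states help enact to first order in the gate strength gamma.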

  11. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Pernice

    2010-09-01

    INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

  12. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design

    PubMed Central

    Vanommeslaeghe, K.

    2014-01-01

    Background Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. Scope of Review As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields' parametrization philosophy and methodology. Major Conclusions Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1 microsecond on proteins, DNA, lipids and carbohydrates. General Significance Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach, while the availability of the Drude polarizable force fields offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics. PMID:25149274
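
    For context, a class I additive force field of the CHARMM type evaluates a potential energy of the standard textbook form (quoted from the general MM literature, not from this abstract):

        E = \sum_{\text{bonds}} k_b (b - b_0)^2
          + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
          + \sum_{\text{Urey-Bradley}} k_{UB} (S - S_0)^2
          + \sum_{\text{dihedrals}} k_\phi \bigl[ 1 + \cos(n\phi - \delta) \bigr]
          + \sum_{\text{impropers}} k_\omega (\omega - \omega_0)^2
          + \sum_{i<j} \left\{ \varepsilon_{ij} \left[ \left( \frac{R_{\min,ij}}{r_{ij}} \right)^{12} - 2 \left( \frac{R_{\min,ij}}{r_{ij}} \right)^{6} \right] + \frac{q_i q_j}{4 \pi \varepsilon_0 \varepsilon r_{ij}} \right\}

    The Drude polarizable variant discussed in the review augments this fixed-charge form with auxiliary charged particles attached to polarizable atoms, so induced dipoles can respond to the local electrostatic environment.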

  13. Infrastructure Systems for Advanced Computing in E-science applications

    NASA Astrophysics Data System (ADS)

    Terzo, Olivier

    2013-04-01

    cloud infrastructure to add additional resources from the public cloud, following the needs in terms of computational and storage resources, and to release them when processes are finished. Following the hybrid model, the scheduling approach is important for managing both cloud models. Thanks to this model, infrastructure resources are available at any time for additional requests in terms of IT capacity that can be used "on demand" for a limited time, without having to purchase additional servers.

  14. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
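
    The recursive equation the abstract refers to is the standard integral-image recursion s(x, y) = i(x, y) + s(x-1, y) + s(x, y-1) - s(x-1, y-1); a minimal NumPy sketch (of the recursion and the resulting constant-time box sum, not of the paper's hardware design) follows:

        import numpy as np

        def integral_image(img):
            # cumulative sums along both axes realize the recursion above
            return img.cumsum(axis=0).cumsum(axis=1)

        def box_sum(ii, r0, c0, r1, c1):
            """Sum of img[r0:r1+1, c0:c1+1] from four lookups, independent of box size."""
            total = ii[r1, c1]
            if r0 > 0:
                total -= ii[r0 - 1, c1]
            if c0 > 0:
                total -= ii[r1, c0 - 1]
            if r0 > 0 and c0 > 0:
                total += ii[r0 - 1, c0 - 1]
            return total

        img = np.arange(16).reshape(4, 4)
        ii = integral_image(img)
        assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()

    The serial dependency the paper decomposes is visible here: each s(x, y) needs its left and upper neighbors, which is what blocks naive parallelization in hardware.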

  15. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  16. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design.

    PubMed

    Vanommeslaeghe, K; MacKerell, A D

    2015-05-01

    Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields' parametrization philosophy and methodology. Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1μs on proteins, DNA, lipids and carbohydrates. Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach while the availability of the Drude polarizable force fields offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics. This article is part of a Special Issue entitled "Recent developments of molecular dynamics". Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    PubMed

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.

  18. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    NASA Astrophysics Data System (ADS)

    Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.

  19. Grid computing in large pharmaceutical molecular modeling.

    PubMed

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.
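
    The coarse-grained parallelization the abstract mentions is, in the simplest case, an embarrassingly parallel map of independent ligand evaluations over workers; a minimal sketch with a hypothetical scoring function (the function name and "score" are placeholders, not any real library's API):

        import multiprocessing as mp

        def score_ligand(smiles):
            # stand-in for an independent, self-contained docking/scoring task
            return smiles, len(smiles)    # hypothetical "score"

        if __name__ == "__main__":
            ligands = ["CCO", "c1ccccc1", "CC(=O)O"]
            with mp.Pool() as pool:
                results = pool.map(score_ligand, ligands)   # one coarse-grained task per ligand
            print(results)

    Fine-grained problems such as molecular dynamics differ precisely in that their tasks communicate every step, which is why they needed the infrastructure advances the abstract describes before they could run on a grid.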

  20. Computational Fluid Dynamics and Additive Manufacturing to Diagnose and Treat Cardiovascular Disease.

    PubMed

    Randles, Amanda; Frakes, David H; Leopold, Jane A

    2017-11-01

    Noninvasive engineering models are now being used for diagnosing and planning the treatment of cardiovascular disease. Techniques in computational modeling and additive manufacturing have matured concurrently, and results from simulations can inform and enable the design and optimization of therapeutic devices and treatment strategies. The emerging synergy between large-scale simulations and 3D printing is having a two-fold benefit: first, 3D printing can be used to validate the complex simulations, and second, the flow models can be used to improve treatment planning for cardiovascular disease. In this review, we summarize and discuss recent methods and findings for leveraging advances in both additive manufacturing and patient-specific computational modeling, with an emphasis on new directions in these fields and remaining open questions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Using a Computer Game to Reinforce Skills in Addition Basic Facts in Second Grade.

    ERIC Educational Resources Information Center

    Kraus, William H.

    1981-01-01

    A computer-generated game called Fish Chase was developed to present drill-and-practice exercises on addition facts. The subjects of the study were 19 second-grade pupils. The results indicate a computer game can be used effectively to increase proficiency with basic facts. (MP)

  2. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less-busy nodes. In accordance with the algorithm (SIDA for short), the load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily-loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to less-burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
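
    A minimal Python sketch of the dual-mode, threshold-based transfer rule described above; the threshold values, the node records and the exact workload indicator are illustrative assumptions:

        HIGH, LOW = 10, 2    # queue-length thresholds (assumed values)

        def workload(node):
            # the patent's indicator combines queue length and service-rate ratio
            return node["queue"] / max(node["service_rate"], 1e-9)

        def maybe_pull_job(receiver, nodes, idle_timer_expired=False):
            """Receiver-initiated pull when a job finishes below LOW or the wakeup timer fires."""
            if receiver["queue"] > LOW and not idle_timer_expired:
                return None
            donors = [n for n in nodes if n["queue"] > HIGH]
            if not donors:
                return None
            donor = max(donors, key=workload)   # take work from the most burdened node
            donor["queue"] -= 1
            receiver["queue"] += 1
            return donor["name"]

        nodes = [{"name": "A", "queue": 12, "service_rate": 1.0},
                 {"name": "B", "queue": 1, "service_rate": 1.0}]
        print(maybe_pull_job(nodes[1], nodes))   # B pulls one job from A

    Because transfers are triggered by lightly loaded receivers rather than by overloaded senders, the scheme adds no coordination work to nodes that are already saturated, which is the design point the abstract emphasizes.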

  3. Resources and Approaches for Teaching Quantitative and Computational Skills in the Geosciences and Allied Fields

    NASA Astrophysics Data System (ADS)

    Orr, C. H.; Mcfadden, R. R.; Manduca, C. A.; Kempler, L. A.

    2016-12-01

    Teaching with data, simulations, and models in the geosciences can increase many facets of student success in the classroom and in the workforce. Teaching undergraduates about programming and improving students' quantitative and computational skills expands their perception of Geoscience beyond field-based studies. Processing data and developing quantitative models are critically important for Geoscience students. Students need to be able to perform calculations, analyze data, create numerical models and visualizations, and more deeply understand complex systems, all essential aspects of modern science. These skills require students to have comfort and skill with languages and tools such as MATLAB. To achieve comfort and skill, computational and quantitative thinking must build over a 4-year degree program across courses and disciplines. However, in courses focused on Geoscience content it can be challenging to get students comfortable with using computational methods to answer Geoscience questions. To help bridge this gap, we have partnered with MathWorks to develop two workshops focused on collecting and developing strategies and resources to help faculty teach students to incorporate data, simulations, and models into the curriculum at the course and program levels. We brought together faculty members from the sciences, including Geoscience and allied fields, who teach computation and quantitative thinking skills using MATLAB to build a resource collection for teaching. These materials and the outcomes of the workshops are freely available on our website. The workshop outcomes include a collection of teaching activities, essays, and course descriptions that can help faculty incorporate computational skills at the course or program level. The teaching activities include in-class assignments, problem sets, labs, projects, and toolboxes. These activities range from programming assignments to creating and using models. The outcomes also include workshop

  4. Communication, Control, and Computer Access for Disabled and Elderly Individuals. ResourceBook 1: Communication Aids. Rehab/Education Technology ResourceBook Series.

    ERIC Educational Resources Information Center

    Brandenburg, Sara A., Ed.; Vanderheiden, Gregg C., Ed.

    One of a series of three resource guides concerned with communication, control, and computer access for disabled and elderly individuals, the directory focuses on communication aids. The book's six chapters each cover products with the same primary function. Cross reference indexes allow access to listings of products by function, input/output…

  5. Administration of Computer Resources.

    ERIC Educational Resources Information Center

    Franklin, Gene F.

    Computing at Stanford University has, until recently, been performed at one of five facilities. The Stanford hospital operates an IBM 370/135 mainly for administrative use. The university business office has an IBM 370/145 for its administrative needs and support of the medical clinic. Under the supervision of the Stanford Computation Center are…

  6. Computer Processing 10-20-30. Teacher's Manual. Senior High School Teacher Resource Manual.

    ERIC Educational Resources Information Center

    Fisher, Mel; Lautt, Ray

    Designed to help teachers meet the program objectives for the computer processing curriculum for senior high schools in the province of Alberta, Canada, this resource manual includes the following sections: (1) program objectives; (2) a flowchart of curriculum modules; (3) suggestions for short- and long-range planning; (4) sample lesson plans;…

  7. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    NASA Astrophysics Data System (ADS)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient

  8. Computer-generated reminders and quality of pediatric HIV care in a resource-limited setting.

    PubMed

    Were, Martin C; Nyandiko, Winstone M; Huang, Kristin T L; Slaven, James E; Shen, Changyu; Tierney, William M; Vreeman, Rachel C

    2013-03-01

    To evaluate the impact of clinician-targeted computer-generated reminders on compliance with HIV care guidelines in a resource-limited setting. We conducted this randomized, controlled trial in an HIV referral clinic in Kenya caring for HIV-infected and HIV-exposed children (<14 years of age). For children randomly assigned to the intervention group, printed patient summaries containing computer-generated patient-specific reminders for overdue care recommendations were provided to the clinician at the time of the child's clinic visit. For children in the control group, clinicians received the summaries, but no computer-generated reminders. We compared differences between the intervention and control groups in completion of overdue tasks, including HIV testing, laboratory monitoring, initiating antiretroviral therapy, and making referrals. During the 5-month study period, 1611 patients (49% female, 70% HIV-infected) were eligible to receive at least 1 computer-generated reminder (ie, had an overdue clinical task). We observed a fourfold increase in the completion of overdue clinical tasks when reminders were made available to providers over the course of the study (68% intervention vs 18% control, P < .001). Orders also occurred earlier for the intervention group (77 days, SD 2.4 days) compared with the control group (104 days, SD 1.2 days) (P < .001). Response rates to reminders varied significantly by type of reminder and between clinicians. Clinician-targeted, computer-generated clinical reminders are associated with a significant increase in completion of overdue clinical tasks for HIV-infected and exposed children in a resource-limited setting.

  9. Contextuality supplies the 'magic' for quantum computation.

    PubMed

    Howard, Mark; Wallman, Joel; Veitch, Victor; Emerson, Joseph

    2014-06-19

    Quantum computers promise dramatic advantages over their classical counterparts, but the source of the power in quantum computing has remained elusive. Here we prove a remarkable equivalence between the onset of contextuality and the possibility of universal quantum computation via 'magic state' distillation, which is the leading model for experimentally realizing a fault-tolerant quantum computer. This is a conceptually satisfying link, because contextuality, which precludes a simple 'hidden variable' model of quantum mechanics, provides one of the fundamental characterizations of uniquely quantum phenomena. Furthermore, this connection suggests a unifying paradigm for the resources of quantum information: the non-locality of quantum theory is a particular kind of contextuality, and non-locality is already known to be a critical resource for achieving advantages with quantum communication. In addition to clarifying these fundamental issues, this work advances the resource framework for quantum computation, which has a number of practical applications, such as characterizing the efficiency and trade-offs between distinct theoretical and experimental schemes for achieving robust quantum computation, and putting bounds on the overhead cost for the classical simulation of quantum algorithms.

  10. Resource quality of a symmetry-protected topologically ordered phase for quantum computation.

    PubMed

    Miller, Jacob; Miyake, Akimasa

    2015-03-27

    We investigate entanglement naturally present in the 1D topologically ordered phase protected with the on-site symmetry group of an octahedron as a potential resource for teleportation-based quantum computation. We show that, as long as certain characteristic lengths are finite, all its ground states have the capability to implement any unit-fidelity one-qubit gate operation asymptotically as a key computational building block. This feature is intrinsic to the entire phase, in that perfect gate fidelity coincides with perfect string order parameters under a state-insensitive renormalization procedure. Our approach may pave the way toward a novel program to classify quantum many-body systems based on their operational use for quantum information processing.

  11. Resource Quality of a Symmetry-Protected Topologically Ordered Phase for Quantum Computation

    NASA Astrophysics Data System (ADS)

    Miller, Jacob; Miyake, Akimasa

    2015-03-01

    We investigate entanglement naturally present in the 1D topologically ordered phase protected with the on-site symmetry group of an octahedron as a potential resource for teleportation-based quantum computation. We show that, as long as certain characteristic lengths are finite, all its ground states have the capability to implement any unit-fidelity one-qubit gate operation asymptotically as a key computational building block. This feature is intrinsic to the entire phase, in that perfect gate fidelity coincides with perfect string order parameters under a state-insensitive renormalization procedure. Our approach may pave the way toward a novel program to classify quantum many-body systems based on their operational use for quantum information processing.

  12. ACToR: Aggregated Computational Toxicology Resource (T) ...

    EPA Pesticide Factsheets

    The EPA Aggregated Computational Toxicology Resource (ACToR) is a set of databases compiling information on chemicals in the environment from a large number of public and in-house EPA sources. ACToR has 3 main goals: (1) To serve as a repository of public toxicology information on chemicals of interest to the EPA, and in particular to be a central source for the testing data on all chemicals regulated by all EPA programs; (2) To be a source of in vivo training data sets for building in vitro to in vivo computational models; (3) To serve as a central source of chemical structure and identity information for the ToxCast and Tox21 programs. There are 4 main databases, all linked through a common set of chemical information and a common structure linking chemicals to assay data: the public ACToR system (available at http://actor.epa.gov), the ToxMiner database holding ToxCast and Tox21 data, along with results from statistical analyses on these data; the Tox21 chemical repository, which manages the ordering and sample tracking process for the larger Tox21 project; and the public version of ToxRefDB. The public ACToR system contains information on ~500K compounds with toxicology, exposure and chemical property information from >400 public sources. The web site is visited by ~1,000 unique users per month and generates ~1,000 page requests per day on average. The databases are built on open source technology, which has allowed us to export them to a number of col

  13. An open-source computational and data resource to analyze digital maps of immunopeptidomes

    DOE PAGES

    Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J.; ...

    2015-07-08

    We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies.

  14. Cloudbus Toolkit for Market-Oriented Cloud Computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.

  15. Increasing processor utilization during parallel computation rundown

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1986-01-01

    Some parallel processing environments provide for asynchronous execution and completion of general purpose parallel computations from a single computational phase. When all the computations from such a phase are complete, a new parallel computational phase is begun. Depending upon the granularity of the parallel computations to be performed, there may be a shortage of available work as a particular computational phase draws to a close (computational rundown). This can result in the waste of computing resources and the delay of the overall problem. In many practical instances, strict sequential ordering of phases of parallel computation is not totally required. In such cases, the beginning of one phase can be correctly computed before the end of a previous phase is completed. This allows additional work to be generated somewhat earlier to keep computing resources busy during each computational rundown. The conditions under which this can occur are identified and the frequency of occurrence of such overlapping in an actual parallel Navier-Stokes code is reported. A language construct is suggested and possible control strategies for the management of such computational phase overlapping are discussed.
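
    The overlap described above can be sketched with a thread pool in which phase-2 tasks are submitted as soon as their phase-1 inputs complete rather than at a phase barrier; the task bodies are stand-ins, and the 1:1 dependency between phases is an illustrative assumption:

        from concurrent.futures import ThreadPoolExecutor, as_completed

        def phase1_task(i):
            return i * i        # stand-in for a phase-1 computation

        def phase2_task(x):
            return x + 1        # stand-in for a dependent phase-2 computation

        with ThreadPoolExecutor(max_workers=4) as pool:
            phase1 = [pool.submit(phase1_task, i) for i in range(8)]
            phase2 = []
            for done in as_completed(phase1):           # no barrier between phases:
                phase2.append(pool.submit(phase2_task, done.result()))
            print(sorted(f.result() for f in phase2))   # workers stay busy during rundown

    As the last few phase-1 tasks straggle, already-finished results are feeding phase-2 work to otherwise idle workers, which is exactly the rundown waste the paper targets.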

  16. Exploiting short-term memory in soft body dynamics as a computational resource

    PubMed Central

    Nakajima, K.; Li, T.; Hauser, H.; Pfeifer, R.

    2014-01-01

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. PMID:25185579
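
    A minimal sketch of the short-term-memory task in the physical reservoir computing style this work follows; an echo-state recurrence stands in for the recorded soft-body sensor states (an assumption: the paper uses time series from a real silicone arm), and only the linear readout is trained:

        import numpy as np

        rng = np.random.default_rng(0)
        T, n, delay = 2000, 50, 3
        u = rng.uniform(-1, 1, T)                                  # input stream
        W_res = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))    # fixed recurrent weights
        w_in = rng.normal(size=n)
        X = np.zeros((n, T))
        for t in range(1, T):
            X[:, t] = np.tanh(W_res @ X[:, t - 1] + w_in * u[t])   # surrogate body dynamics

        A = X[:, delay:]
        y = u[:T - delay]              # memory task: reproduce the input `delay` steps ago
        W_out = np.linalg.solve(A @ A.T + 1e-6 * np.eye(n), A @ y)  # ridge-regularized readout
        print("readout MSE:", np.mean((W_out @ A - y) ** 2))

    The point the paper makes is that the physical body itself can supply the stateful dynamics that the recurrence plays here, so the only computation left to engineer is the cheap linear readout.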

  17. Planning health education: Internet and computer resources in southwestern Nigeria. 2000-2001.

    PubMed

    Oyadoke, Adebola A; Salami, Kabiru K; Brieger, William R

    The use of the Internet as a health education tool and as a resource in health education planning is widely accepted as the norm in industrialized countries. Unfortunately, access to computers and the Internet is quite limited in developing countries. Not all licensed service providers operate, many users are actually foreign nationals, telephone connections are unreliable, and electricity supplies are intermittent. In this context, computer, e-mail, Internet, and CD-Rom use by health and health education program officers in five states in southwestern Nigeria was assessed to document their present access and use. Eight of the 30 organizations visited were government health ministry departments, while the remainder were non-governmental organizations (NGOs). Six NGOs and four State Ministry of Health (MOH) departments had no computers, but nearly two-thirds of both types of agency had e-mail, less than one-third had Web browsing facilities, and six had CD-Roms, all of which were NGOs. Only 25 of the 48 individual respondents had computer use skills. Narrative responses from individual employees showed a qualitative difference between computer and Internet access and use and type of agency. NGO staff in organizations with computers indicated having relatively free access to a computer and the Internet and used these for both program planning and administrative purposes. In government offices it appeared that computers were more likely to be located in administrative or statistics offices and used for management tasks like salaries and correspondence, limiting the access of individual health staff. These two different organizational cultures must be considered when plans are made for increasing computer availability and skills for health education planning.

  18. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k^2 − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses fewer computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
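
    A minimal NumPy sketch of the van Herk/Gil-Werman idea named above (the algorithmic core only, not the paper's hardware architecture): block prefix and suffix maxima give each window maximum in a constant number of comparisons per sample, independent of the kernel size k:

        import numpy as np

        def hgw_max_filter(x, k):
            """Leading-window running max over windows x[i : i+k]."""
            n = len(x)
            pad = (-n) % k
            xp = np.concatenate([x, np.full(pad, -np.inf)])
            blocks = xp.reshape(-1, k)
            pref = np.maximum.accumulate(blocks, axis=1).ravel()   # within-block prefix maxima
            suff = np.maximum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()  # suffix maxima
            out = np.empty(n)
            for i in range(n):
                j = min(i + k - 1, len(pref) - 1)
                out[i] = max(suff[i], pref[j])    # one final comparison per output sample
            return out

        x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0])
        print(hgw_max_filter(x, 3))    # -> [4. 4. 5. 9. 9. 9. 2.]

    Each window spans at most two k-sample blocks, so its maximum is the larger of one suffix maximum and one prefix maximum; that data flow is what the paper decomposes for row-parallel hardware.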

  19. Postprocessing of Voxel-Based Topologies for Additive Manufacturing Using the Computational Geometry Algorithms Library (CGAL)

    DTIC Science & Technology

    2015-06-01

    Postprocessing of 3-dimensional (3-D) topologies that are defined as a set of voxels is performed using the Computational Geometry Algorithms Library (CGAL), which provides computational geometry algorithms, several of which are suited to the task. The workflow described in this report involves first defining a set of

  20. Aggregating Data for Computational Toxicology Applications: The U.S. Environmental Protection Agency (EPA) Aggregated Computational Toxicology Resource (ACToR) System

    PubMed Central

    Judson, Richard S.; Martin, Matthew T.; Egeghy, Peter; Gangwal, Sumit; Reif, David M.; Kothiya, Parth; Wolf, Maritja; Cathey, Tommy; Transue, Thomas; Smith, Doris; Vail, James; Frame, Alicia; Mosher, Shad; Cohen Hubal, Elaine A.; Richard, Ann M.

    2012-01-01

    Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and for predicting toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built using open source tools and is freely available to download. This review describes the organization of the data repository and provides selected examples of use cases. PMID:22408426

  1. Aggregating data for computational toxicology applications: The U.S. Environmental Protection Agency (EPA) Aggregated Computational Toxicology Resource (ACToR) System.

    PubMed

    Judson, Richard S; Martin, Matthew T; Egeghy, Peter; Gangwal, Sumit; Reif, David M; Kothiya, Parth; Wolf, Maritja; Cathey, Tommy; Transue, Thomas; Smith, Doris; Vail, James; Frame, Alicia; Mosher, Shad; Cohen Hubal, Elaine A; Richard, Ann M

    2012-01-01

    Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and for predicting toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built using open source tools and is freely available to download. This review describes the organization of the data repository and provides selected examples of use cases.

  2. An open-source computational and data resource to analyze digital maps of immunopeptidomes

    PubMed Central

    Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J; Schuster, Heiko; Ternette, Nicola; Alpízar, Adán; Schittenhelm, Ralf B; Ramarathinam, Sri H; Lindestam Arlehamn, Cecilia S; Chiek Koh, Ching; Gillet, Ludovic C; Rabsteyn, Armin; Navarro, Pedro; Kim, Sangtae; Lam, Henry; Sturm, Theo; Marcilla, Miguel; Sette, Alessandro; Campbell, David S; Deutsch, Eric W; Moritz, Robert L; Purcell, Anthony W; Rammensee, Hans-Georg; Stevanovic, Stefan; Aebersold, Ruedi

    2015-01-01

    We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies. DOI: http://dx.doi.org/10.7554/eLife.07661.001 PMID:26154972

  3. Exploiting short-term memory in soft body dynamics as a computational resource.

    PubMed

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  4. Computer User's Guide to the Protection of Information Resources. NIST Special Publication 500-171.

    ERIC Educational Resources Information Center

    Helsing, Cheryl; And Others

    Computers have changed the way information resources are handled. Large amounts of information are stored in one central place and can be accessed from remote locations. Users have a personal responsibility for the security of the system and the data stored in it. This document outlines the user's responsibilities and provides security and control…

  5. Communication, Control, and Computer Access for Disabled and Elderly Individuals. ResourceBook 2: Switches and Environmental Controls. Rehab/Education Technology ResourceBook Series.

    ERIC Educational Resources Information Center

    Brandenburg, Sara A., Ed.; Vanderheiden, Gregg C., Ed.

    One of a series of three resource guides concerned with communication, control, and computer access for disabled and elderly individuals, the directory focuses on switches and environmental controls. The book's three chapters each cover products with the same primary function. Cross reference indexes allow access to listings of products by…

  6. ADHydro: A Large-scale High Resolution Multi-Physics Distributed Water Resources Model for Water Resource Simulations in a Parallel Computing Environment

    NASA Astrophysics Data System (ADS)

    lai, W.; Steinke, R. C.; Ogden, F. L.

    2013-12-01

    Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents development of a physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, which is joint between Wyoming and Utah. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain West and includes rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow and water management. The ADHydro model uses the explicit finite volume method to solve PDEs for 2D overland flow and 2D saturated groundwater flow coupled to 1D channel flow. The model has a quasi-3D formulation that couples 2D overland flow and 2D saturated groundwater flow using the 1D Talbot-Ogden finite water-content infiltration and redistribution model. This eliminates difficulties in solving the highly nonlinear 3D Richards equation, while the finite volume Talbot-Ogden infiltration solution is computationally efficient, guaranteed to conserve mass, and allows simulation of the effect of near-surface groundwater tables on runoff generation. The process-level components of the model are being individually tested and validated. The model as a whole will be tested on the Green River basin in Wyoming and ultimately applied to the entire Upper Colorado River basin. ADHydro development has necessitated development of tools for large-scale watershed modeling, including open-source workflow steps to extract hydromorphological information from GIS data, integrate hydrometeorological and water management forcing input, and post-process and visualize large output data
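
    A minimal sketch of an explicit finite-volume overland-flow update of the general kind the abstract describes; the Manning-type interface flux, uniform grid, and time step are illustrative assumptions, not ADHydro's actual formulation:

        import numpy as np

        def overland_step(h, z, dx, dt, n_manning=0.03):
            """One explicit finite-volume update of water depth h on a uniform 2D grid."""
            w = z + h                                   # water-surface elevation
            def face_flux(wL, wR, hL, hR):
                slope = (wL - wR) / dx
                h_up = np.maximum(np.where(slope > 0, hL, hR), 0.0)   # upwind depth
                return np.sign(slope) * h_up ** (5.0 / 3.0) * np.sqrt(np.abs(slope)) / n_manning
            qx = face_flux(w[:, :-1], w[:, 1:], h[:, :-1], h[:, 1:])  # east-west faces
            qy = face_flux(w[:-1, :], w[1:, :], h[:-1, :], h[1:, :])  # north-south faces
            dh = np.zeros_like(h)
            dh[:, :-1] -= qx * dt / dx
            dh[:, 1:] += qx * dt / dx
            dh[:-1, :] -= qy * dt / dx
            dh[1:, :] += qy * dt / dx
            return np.maximum(h + dh, 0.0)              # clip to forbid negative depths

        h = np.zeros((20, 20)); h[10, 10] = 0.5         # initial mound of water
        z = np.zeros_like(h)                            # flat bed
        for _ in range(100):
            h = overland_step(h, z, dx=1.0, dt=1e-3)

    Because each cell exchanges mass only with its four face neighbors, updates like this partition cleanly across processors, which is what makes the explicit finite-volume choice attractive for the massively parallel setting the poster targets.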

  7. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increased time needed to analyse them are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
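
    The workload described is embarrassingly parallel: each model is built independently, so jobs can be fanned out to as many workers (local processes or cloud instances) as are available. A minimal Python sketch of that fan-out pattern follows; the toy ridge-regression job and the data-set sizes are illustrative, not the paper's models.

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def train_model(size):
            """One independent model-building job on a random data set;
            stands in for a ligand-based modeling task (illustrative)."""
            rng = np.random.default_rng(size)
            X = rng.normal(size=(size, 50))
            y = X @ rng.normal(size=50) + rng.normal(scale=0.1, size=size)
            w = np.linalg.solve(X.T @ X + np.eye(50), X.T @ y)
            return size, float(np.mean((X @ w - y) ** 2))

        if __name__ == "__main__":
            # Each data-set size is an independent job; the same pattern
            # scales from local processes to a pool of cloud instances.
            sizes = [1_000, 5_000, 10_000, 50_000]
            with ProcessPoolExecutor() as pool:
                for size, mse in pool.map(train_model, sizes):
                    print(f"n={size}: training MSE={mse:.4f}")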

  8. The Electronic School Library Resource Center: Facilities Planning for the New Information Technologies.

    ERIC Educational Resources Information Center

    Blodgett, Teresa; Repman, Judi

    1995-01-01

    Addresses the necessity of incorporating new computer technologies into school library resource centers and notes some administrative challenges. An extensive checklist is provided for assessing equipment and furniture needs, physical facilities, and rewiring needs. A glossary of 20 terms and 11 additional resources is included. (AEF)

  9. Selecting, Evaluating and Creating Policies for Computer-Based Resources in the Behavioral Sciences and Education.

    ERIC Educational Resources Information Center

    Richardson, Linda B., Comp.; And Others

    This collection includes four handouts: (1) "Selection Criteria Considerations for Computer-Based Resources" (Linda B. Richardson); (2) "Software Collection Policies in Academic Libraries" (a 24-item bibliography, Jane W. Johnson); (3) "Circulation and Security of Software" (a 19-item bibliography, Sara Elizabeth Williams); and (4) "Bibliography of…

  10. A comprehensive overview of computational resources to aid in precision genome editing with engineered nucleases.

    PubMed

    Periwal, Vinita

    2017-07-01

    Genome editing with engineered nucleases (zinc finger nucleases, TAL effector nucleases and clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated nucleases) has recently been shown to have great promise in a variety of therapeutic and biotechnological applications. However, their exploitation in genetic analysis and clinical settings largely depends on their specificity for the intended genomic target. Large and complex genomes often contain highly homologous/repetitive sequences, which limits the specificity of genome editing tools and could result in off-target activity. Over the past few years, various computational approaches have been developed to assist the design process and predict/reduce the off-target activity of these nucleases. These tools could be efficiently used to guide the design of constructs for engineered nucleases and evaluate results after genome editing. This review provides a comprehensive overview of various databases, tools, web servers and resources for genome editing and compares their features and functionalities. It also describes tools that have been developed to analyse post-genome editing results, and discusses important design parameters that could be considered while designing these nucleases. This review is intended to be a quick reference guide for experimentalists as well as computational biologists working in the field of genome editing with engineered nucleases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Additive effect of calcium depletion and low resource quality on Gammarus fossarum (Crustacea, Amphipoda) life history traits.

    PubMed

    Rollin, Marc; Coulaud, Romain; Danger, Michael; Sohm, Bénédicte; Flayac, Justine; Bec, Alexandre; Chaumot, Arnaud; Geffard, Olivier; Felten, Vincent

    2018-04-01

    Gammarus fossarum is an often-abundant crustacean detritivore that contributes importantly to leaf litter breakdown in oligotrophic, mainly heterotrophic, headwater streams. This species requires large amounts of Ca to moult, thus allowing growth and reproduction. Because resource quality is tightly coupled to the organism's growth and physiological status, we hypothesised that low Ca concentration [Ca] and low food resource quality (low phosphorus [P] and/or reduced highly unsaturated fatty acid [HUFA] contents) would interactively impair molecular responses (gene expression) and reproduction of G. fossarum. To investigate the effects of food resource quality, we experimentally manipulated the P content of sycamore leaves and also used diatoms because they contain high amounts of HUFAs. Three resource quality treatments were tested: low quality (LQ; unmanipulated leaves: low P content), high quality 1 (HQ1; P-manipulated leaves: high P content), and high quality 2 (HQ2; unmanipulated leaves supplemented with a pellet containing diatoms: high P and HUFA content). Naturally demineralised stream water was supplemented with CaSO4 to obtain three Ca concentrations (2, 3.5, and 10.5 mg Ca L⁻¹). For 21 days, pairs of G. fossarum were individually exposed to one of the nine treatments (3 [Ca] × 3 resource qualities). At the individual level, strong and significant delays in moult stage were observed in gammarids exposed to lower [Ca] and to lower resource quality, with additive effects lengthening the duration of the reproductive cycle. Effects at the molecular level were investigated by measuring expression of 12 genes involved in energy production, translation, or Ca or P homeostasis. Expression of ATP synthase beta (higher in HQ2), calcified cuticle protein (higher in HQ1 and HQ2), and tropomyosin (higher in HQ2 compared to HQ1) was significantly affected by resource quality, and significant additive effects on Ca transporting ATPase expression were induced by

  12. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    NASA Astrophysics Data System (ADS)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; Sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. In the context of their flagship projects, EGI-Engage and EUDAT2020, EGI and EUDAT started a collaboration in March 2015 to harmonise the two infrastructures, covering technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end users with seamless access to an integrated infrastructure offering both EGI and EUDAT services, thereby pairing data and high-throughput computing resources. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help assign the right priorities to each of them. In this way, the activity has been driven by the end users from the beginning. The identified user communities are

  13. Volunteer Computing Experience with ATLAS@Home

    NASA Astrophysics Data System (ADS)

    Adam-Bourdarios, C.; Bianchi, R.; Cameron, D.; Filipčič, A.; Isacchini, G.; Lançon, E.; Wu, W.; ATLAS Collaboration

    2017-10-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers’ resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment on, for example, university clusters, using multiple cores inside one task to reduce the memory requirements, and running different types of workload such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.

  14. SInCRe—structural interactome computational resource for Mycobacterium tuberculosis

    PubMed Central

    Metri, Rahul; Hariharaputran, Sridhar; Ramakrishnan, Gayatri; Anand, Praveen; Raghavender, Upadhyayula S.; Ochoa-Montaño, Bernardo; Higueruelo, Alicia P.; Sowdhamini, Ramanathan; Chandra, Nagasuma R.; Blundell, Tom L.; Srinivasan, Narayanaswamy

    2015-01-01

    We have developed an integrated database for Mycobacterium tuberculosis H37Rv (Mtb) that collates information on protein sequences, domain assignments, functional annotation and 3D structural information along with protein–protein and protein–small molecule interactions. SInCRe (Structural Interactome Computational Resource) was developed out of the CamBan (Cambridge and Bangalore) collaboration. The motivation for development of this database is to provide an integrated platform that allows easy access to and interpretation of the data and results obtained by all the groups in CamBan in the field of Mtb informatics. In-house algorithms and databases developed independently by various academic groups in CamBan are used to generate Mtb-specific datasets and are integrated in this database to provide a structural dimension to studies on tuberculosis. The SInCRe database readily provides information on identification of functional domains, genome-scale modelling of structures of Mtb proteins and characterization of the small-molecule binding sites within Mtb. The resource also provides structure-based function annotation, information on small-molecule binders including FDA (Food and Drug Administration)-approved drugs, protein–protein interactions (PPIs) and natural compounds that potentially bind to pathogen proteins and weaken or eliminate host–pathogen protein–protein interactions. Together they provide prerequisites for identification of off-target binding. Database URL: http://proline.biochem.iisc.ernet.in/sincre PMID:26130660

  15. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    ERIC Educational Resources Information Center

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

    Utilizing online learning resources (OLR) from multiple channels in learning activities promises benefits beyond traditional learning-centred approaches, extending to collaborative learning-centred approaches that emphasise pervasive learning anywhere and anytime. While compiling big data, cloud computing, and semantic web into OLR offers a broader spectrum of…

  16. Training auscultatory skills: computer simulated heart sounds or additional bedside training? A randomized trial on third-year medical students

    PubMed Central

    2010-01-01

    Background The present study compares the value of additional use of computer simulated heart sounds versus conventional bedside auscultation training on the cardiac auscultation skills of 3rd year medical students at Oslo University Medical School. Methods In addition to their usual curriculum courses, groups of seven students each were randomized to receive four hours of additional auscultation training either employing a computer simulator system or adding on more conventional bedside training. Cardiac auscultation skills were afterwards tested using live patients. Each student gave a written description of the auscultation findings in four selected patients, and was awarded 0-10 points for each patient. Differences between the two study groups were evaluated using Student's t-test. Results At the auscultation test no significant difference in mean score was found between the students who had used additional computer based sound simulation compared to additional bedside training. Conclusions Students at an early stage of their cardiology training demonstrated equal performance of cardiac auscultation whether they had received an additional short auscultation course based on computer simulated training, or had had additional bedside training. PMID:20082701

  17. Unconditionally verifiable blind quantum computation

    NASA Astrophysics Data System (ADS)

    Fitzsimons, Joseph F.; Kashefi, Elham

    2017-07-01

    Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.

  18. Controlling user access to electronic resources without password

    DOEpatents

    Smith, Fred Hewitt

    2015-06-16

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to the associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes receiving a user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
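
    A toy Python sketch of the comparison logic the abstract describes: access is granted when the user-proximal environment is sufficiently similar to the environmental information pre-associated with the resource, and the biometric check passes. The features, weights and threshold here are invented for illustration and are not taken from the patent.

        import math

        # Environmental information pre-associated with the protected resource.
        RESOURCE_ENV = {"wifi_ssids": {"lab-net", "guest-net"}, "temp_c": 21.0}

        def similarity(user_env):
            """Crude similarity between user- and resource-proximal data."""
            ssid_overlap = (len(user_env["wifi_ssids"] & RESOURCE_ENV["wifi_ssids"])
                            / len(RESOURCE_ENV["wifi_ssids"]))
            temp_score = math.exp(-abs(user_env["temp_c"] - RESOURCE_ENV["temp_c"]))
            return 0.7 * ssid_overlap + 0.3 * temp_score

        def grant_access(user_env, biometric_ok, threshold=0.8):
            # Both the environmental comparison and the biometric check must pass.
            return biometric_ok and similarity(user_env) >= threshold

        print(grant_access({"wifi_ssids": {"lab-net", "guest-net"}, "temp_c": 21.4},
                           biometric_ok=True))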

  19. Self managing experiment resources

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Ubeda, M.; Tsaregorodtsev, A.; Romanovskiy, V.; Roiser, S.; Charpentier, P.; Graciani, R.

    2014-06-01

    Within this paper we present an autonomic computing-resource management system, used by LHCb for assessing the status of its Grid resources. Virtual Organization Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has generated the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques to all possible information sources available and assesses the status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We will demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.

  20. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part is built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  1. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv

    In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study first outlines the problem and existing hardware and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  2. An expert fitness diagnosis system based on elastic cloud computing.

    PubMed

    Tseng, Kevin C; Wu, Chia-Chuan

    2014-01-01

    This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level based on supervised machine learning techniques. This system is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on the Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to closely capture the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure the quality of service.
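
    A minimal Python sketch of the elastic part: smooth the observed request stream with an exponential moving average and provision nodes proportionally to the prediction. The smoothing factor, node capacity and demand figures are illustrative assumptions, not values from the paper.

        import math

        def allocate(requests, alpha=0.3, capacity_per_node=100):
            """Predict next-interval load with an EMA and size the node pool."""
            ema = requests[0]
            plan = []
            for r in requests:
                ema = alpha * r + (1 - alpha) * ema   # exponential moving average
                plan.append(max(1, math.ceil(ema / capacity_per_node)))
            return plan

        demand = [80, 120, 250, 400, 390, 150, 90]    # requests per interval
        print(allocate(demand))                       # nodes provisioned per step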

  3. Realizing the Potential of Information Resources: Information, Technology, and Services. Track 8: Academic Computing and Libraries.

    ERIC Educational Resources Information Center

    CAUSE, Boulder, CO.

    Eight papers are presented from the 1995 CAUSE conference track on academic computing and library issues faced by managers of information technology at colleges and universities. The papers include: (1) "Where's the Beef?: Implementation of Discipline-Specific Training on Internet Resources" (Priscilla Hancock and others); (2)…

  4. ATLAS user analysis on private cloud resources at GoeGrid

    NASA Astrophysics Data System (ADS)

    Glaser, F.; Nadal Serrano, J.; Grabowski, J.; Quadt, A.

    2015-12-01

    User analysis job demands can exceed available computing resources, especially before major conferences. ATLAS physics results can potentially be slowed down due to the lack of resources. For these reasons, cloud research and development activities are now included in the skeleton of the ATLAS computing model, which has been extended by using resources from commercial and private cloud providers to satisfy the demands. However, most of these activities are focused on Monte-Carlo production jobs, extending the resources at Tier-2. To evaluate the suitability of the cloud-computing model for user analysis jobs, we developed a framework to launch an ATLAS user analysis cluster in a cloud infrastructure on demand and evaluated two solutions. The first solution is entirely integrated in the Grid infrastructure by using the same mechanism that is already in use at Tier-2: a designated PanDA queue is monitored, and additional worker nodes are launched in a cloud environment and assigned to a corresponding HTCondor queue according to demand. The use of cloud resources is thereby completely transparent to the user. However, using this approach, submitted user analysis jobs can still suffer from a certain delay introduced by waiting time in the queue, and the deployed infrastructure lacks customizability. Therefore, our second solution offers the possibility to easily deploy a totally private, customizable analysis cluster on private cloud resources belonging to the university.

  5. Computational laser intensity stabilisation for organic molecule concentration estimation in low-resource settings

    NASA Astrophysics Data System (ADS)

    Haider, Shahid A.; Kazemzadeh, Farnoud; Wong, Alexander

    2017-03-01

    An ideal laser is a useful tool for the analysis of biological systems. In particular, the polarization property of lasers allows the concentrations of important organic molecules in the human body, such as proteins, amino acids, lipids, and carbohydrates, to be estimated. However, lasers do not always work as intended, and effects such as mode hopping and thermal drift can cause time-varying intensity fluctuations. These effects can originate in the surrounding environment, where either an unstable current source is used or the ambient temperature is not stable over time. This intensity fluctuation can cause bias and error in typical organic molecule concentration estimation techniques. In a low-resource setting where cost must be limited and where environmental factors, like unregulated power supplies and temperature, cannot be controlled, the hardware required to correct for these intensity fluctuations can be prohibitive. We propose a method for computational laser intensity stabilisation that uses Bayesian state estimation to correct for the time-varying intensity fluctuations from electrical and thermal instabilities without the use of additional hardware. This method will allow for consistent intensities across all polarization measurements for accurate estimates of organic molecule concentrations.
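
    In its simplest form, Bayesian state estimation of a slowly varying intensity is a scalar Kalman filter. The sketch below filters a simulated drifting, mode-hopping intensity trace in Python; the noise levels, drift model and hop size are illustrative assumptions, not the paper's method.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated intensity: slow thermal drift plus one abrupt mode hop,
        # observed through sensor noise (all values illustrative).
        true_i = 1.0 + np.cumsum(rng.normal(0, 1e-3, 500))
        true_i[250:] += 0.05
        meas = true_i + rng.normal(0, 0.01, 500)

        # Scalar Kalman filter tracking the time-varying intensity.
        x, p = meas[0], 1.0                  # state estimate and its variance
        q, r = 1e-5, 0.01 ** 2               # process / measurement variance
        est = []
        for z in meas:
            p += q                           # predict: drift grows uncertainty
            k = p / (p + r)                  # Kalman gain
            x += k * (z - x)                 # update with the new measurement
            p *= 1 - k
            est.append(x)

        print("RMS error raw: %.4f, filtered: %.4f"
              % (np.sqrt(np.mean((meas - true_i) ** 2)),
                 np.sqrt(np.mean((np.array(est) - true_i) ** 2))))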

  6. A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems.

    PubMed

    Wang, Lujia; Liu, Ming; Meng, Max Q-H

    2017-02-01

    Cloud computing enables users to share computing resources on-demand. The cloud computing framework cannot be directly mapped to cloud robotic systems with ad hoc networks since cloud robotic systems have additional constraints such as limited bandwidth and dynamic structure. However, most multirobotic applications with cooperative control adopt this decentralized approach to avoid a single point of failure. Robots need to continuously update intensive data to execute tasks in a coordinated manner, which implies real-time requirements. Thus, a resource allocation strategy is required, especially in such resource-constrained environments. This paper proposes a hierarchical auction-based mechanism, namely link quality matrix (LQM) auction, which is suitable for ad hoc networks by introducing a link quality indicator. The proposed algorithm produces a fast and robust method that is accurate and scalable. It reduces both global communication and unnecessary repeated computation. The proposed method is designed for firm real-time resource retrieval for physical multirobot systems. A joint surveillance scenario empirically validates the proposed mechanism by assessing several practical metrics. The results show that the proposed LQM auction outperforms state-of-the-art algorithms for resource allocation.
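
    As a toy illustration of link-quality-aware allocation, the Python sketch below discounts each robot's bid by the quality of its link and assigns resources greedily. The scoring rule and the single-round greedy winner determination are simplifications for illustration, not the paper's hierarchical LQM mechanism.

        import numpy as np

        rng = np.random.default_rng(2)
        n_robots, n_resources = 4, 3
        utility = rng.uniform(0, 1, (n_robots, n_resources))      # robots' bids
        link_quality = rng.uniform(0.2, 1, (n_robots, n_resources))

        score = utility * link_quality    # discount bids by link quality
        assigned = {}
        for _ in range(n_resources):
            r, c = np.unravel_index(np.argmax(score), score.shape)
            assigned[int(c)] = int(r)     # resource c goes to robot r
            score[:, c] = -1              # resource taken
            score[r, :] = -1              # robot serves one resource at most
        print(assigned)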

  7. Design and fabrication of a sleep apnea device using computer-aided design/additive manufacture technologies.

    PubMed

    Al Mortadi, Noor; Eggbeer, Dominic; Lewis, Jeffrey; Williams, Robert J

    2013-04-01

    The aim of this study was to analyze the latest innovations in additive manufacture techniques and uniquely apply them to dentistry, to build a sleep apnea device requiring rotating hinges. Laser scanning was used to capture the three-dimensional topography of an upper and lower dental cast. The data sets were imported into an appropriate computer-aided design software environment, which was used to design a sleep apnea device. This design was then exported as a stereolithography file and transferred for three-dimensional printing by an additive manufacture machine. The results not only revealed that the novel computer-based technique presented provides new design opportunities but also highlighted limitations that must be addressed before the techniques can become clinically viable.

  8. Integration and Exposure of Large Scale Computational Resources Across the Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.

    2015-12-01

    As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate changes, the combination of Earth observation data and global climate model projections is crucial not only to scientists but to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyber infrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data with limited capabilities for server-side processing. Since the amount of data for future AR is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capabilities. This presentation will highlight the WPS API and its capabilities, provide implementation details, and discuss future developments.
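
    Schematically, a client-side call to such a service looks like the Python sketch below. The endpoint URL, operation identifier and parameter encoding follow the general shape of an OGC WPS Execute request but are hypothetical placeholders, not the actual ESGF CWT API.

        import requests  # third-party HTTP client

        endpoint = "https://esgf-node.example.org/wps"   # hypothetical node
        params = {
            "service": "WPS",
            "request": "Execute",
            "identifier": "average",      # server-side reduction to run
            "datainputs": "variable=tas;domain=global;years=1980-2010",
        }
        resp = requests.get(endpoint, params=params, timeout=60)
        print(resp.status_code)
        # The average is computed next to the data and only a small result
        # document comes back, so the large inputs are never downloaded.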

  9. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    PubMed

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  10. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.

    PubMed

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.

  11. Monitoring of computing resource use of active software releases at ATLAS

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for the computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is, however, preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.

  12. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.

  13. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    NASA Astrophysics Data System (ADS)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.

  14. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).

  15. Now and Next-Generation Sequencing Techniques: Future of Sequence Analysis Using Cloud Computing

    PubMed Central

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed “cloud computing”) has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows. PMID:23248640

  16. Reducing usage of the computational resources by event driven approach to model predictive control

    NASA Astrophysics Data System (ADS)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computational resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
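
    A minimal Python sketch of the event-driven flavour of MPC: the finite-horizon optimization is re-run only when the measured state drifts from the model's prediction by more than a threshold, so computation is spent only when the current plan stops matching reality. The scalar plant, brute-force optimizer and every parameter below are illustrative, not the paper's formulation.

        import numpy as np
        from itertools import product

        a, b, horizon = 0.95, 0.5, 4                  # plant x+ = a*x + b*u
        u_grid = np.linspace(-1.0, 1.0, 5)            # coarse admissible inputs

        def plan(x):
            """Brute-force finite-horizon optimization (stands in for a QP)."""
            best_cost, best_seq = np.inf, None
            for seq in product(u_grid, repeat=horizon):
                xx, cost = x, 0.0
                for u in seq:
                    xx = a * xx + b * u
                    cost += xx ** 2 + 0.1 * u ** 2    # quadratic stage cost
                if cost < best_cost:
                    best_cost, best_seq = cost, list(seq)
            return best_seq

        rng = np.random.default_rng(3)
        x, seq, replans = 5.0, [], 0
        for t in range(30):
            if not seq or abs(x - x_pred) > 0.05:     # event: prediction off
                seq, replans = plan(x), replans + 1
            u = seq.pop(0)
            x_pred = a * x + b * u                    # what the model expects
            x = a * x + b * u + rng.normal(0, 0.02)   # true system + disturbance
        print(f"replanned {replans} times in 30 steps, final |x|={abs(x):.3f}")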

  17. Regional geoid computation by least squares modified Hotine's formula with additive corrections

    NASA Astrophysics Data System (ADS)

    Märdla, Silja; Ellmann, Artu; Ågren, Jonas; Sjöberg, Lars E.

    2018-03-01

    Geoid and quasigeoid modelling from gravity anomalies by the method of least squares modification of Stokes's formula with additive corrections is adapted for use with gravity disturbances and Hotine's formula. The biased, unbiased and optimum versions of least squares modification are considered. Equations are presented for the four additive corrections that account for the combined (direct plus indirect) effect of downward continuation (DWC), topographic, atmospheric and ellipsoidal corrections in geoid or quasigeoid modelling. The geoid or quasigeoid modelling scheme by the least squares modified Hotine formula is numerically verified, analysed and compared to the Stokes counterpart in a heterogeneous study area. The resulting geoid models and the additive corrections computed both for use with Stokes's or Hotine's formula differ most in high topography areas. Over the study area (reaching almost 2 km in altitude), the approximate geoid models (before the additive corrections) differ by 7 mm on average with a 3 mm standard deviation (SD) and a maximum of 1.3 cm. The additive corrections, out of which only the DWC correction has a numerically significant difference, improve the agreement between respective geoid or quasigeoid models to an average difference of 5 mm with a 1 mm SD and a maximum of 8 mm.
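
    For orientation, the unmodified Hotine integral and the additive-correction structure referred to above can be written as follows (standard textbook form in LaTeX; the least-squares modification of the kernel and its coefficients are not reproduced here):

        % Hotine's integral: approximate geoid height \tilde N from gravity
        % disturbances \delta g over the sphere, with Hotine's kernel H(\psi).
        \[
          \tilde N = \frac{R}{4\pi\gamma} \iint_{\sigma} H(\psi)\,\delta g \, d\sigma,
          \qquad
          H(\psi) = \frac{1}{\sin(\psi/2)} - \ln\!\left(1 + \frac{1}{\sin(\psi/2)}\right).
        \]
        % The final geoid height adds the four additive corrections named in
        % the abstract: downward continuation, topographic, atmospheric and
        % ellipsoidal.
        \[
          N = \tilde N + \delta N_{\mathrm{DWC}} + \delta N_{\mathrm{topo}}
              + \delta N_{\mathrm{atm}} + \delta N_{\mathrm{ell}}.
        \]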

  18. Near real time water resources data for river basin management

    NASA Technical Reports Server (NTRS)

    Paulson, R. W. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Twenty Data Collection Platforms (DCP) are being field installed on USGS water resources stations in the Delaware River Basin. DCP's have been successfully installed and are operating well on five stream gaging stations, three observation wells, and one water quality monitor in the basin. DCP's have been installed at nine additional water quality monitors, and work is progressing on interfacing the platforms to the monitors. ERTS-related water resources data from the platforms are being provided in near real time by the Goddard Space Flight Center to the Pennsylvania district, Water Resources Division, U.S. Geological Survey. On a daily basis, the data are computer processed by the Survey and provided to the Delaware River Basin Commission. Each daily summary contains data that were relayed during 4 or 5 of the 15 orbits made by ERTS-1 during the previous day. Water resources parameters relayed by the platforms include dissolved oxygen concentrations, temperature, pH, specific conductance, well level, and stream gage height, which is used to compute stream flow for the daily summary.

  19. Computational aerodynamics and artificial intelligence

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.; Kutler, P.

    1984-01-01

    The general principles of artificial intelligence are reviewed and speculations are made concerning how knowledge based systems can accelerate the process of acquiring new knowledge in aerodynamics, how computational fluid dynamics may use expert systems, and how expert systems may speed the design and development process. In addition, the anatomy of an idealized expert system called AERODYNAMICIST is discussed. Resource requirements for using artificial intelligence in computational fluid dynamics and aerodynamics are examined. Three main conclusions are presented. First, there are two related aspects of computational aerodynamics: reasoning and calculating. Second, a substantial portion of reasoning can be achieved with artificial intelligence. It offers the opportunity of using computers as reasoning machines to set the stage for efficient calculating. Third, expert systems are likely to be new assets of institutions involved in aeronautics for various tasks of computational aerodynamics.

  20. Interactive computer methods for generating mineral-resource maps

    USGS Publications Warehouse

    Calkins, James Alfred; Crosby, A.S.; Huffman, T.E.; Clark, A.L.; Mason, G.T.; Bascle, R.J.

    1980-01-01

    Inasmuch as maps are a basic tool of geologists, the U.S. Geological Survey's CRIB (Computerized Resources Information Bank) was constructed so that the data it contains can be used to generate mineral-resource maps. However, by the standard methods used (batch processing and off-line plotting), the production of a finished map commonly takes 2-3 weeks. To produce computer-generated maps more rapidly, cheaply, and easily, and also to provide an effective demonstration tool, we have devised two related methods for plotting maps as alternatives to conventional batch methods. These methods are: 1. Quick-Plot, an interactive program whose output appears on a CRT (cathode-ray-tube) device, and 2. The Interactive CAM (Cartographic Automatic Mapping system), which combines batch and interactive runs. The output of the Interactive CAM system is final compilation (not camera-ready) paper copy. Both methods are designed to use data from the CRIB file in conjunction with a map-plotting program. Quick-Plot retrieves a user-selected subset of data from the CRIB file, immediately produces an image of the desired area on a CRT device, and plots data points according to a limited set of user-selected symbols. This method is useful for immediate evaluation of the map and for demonstrating how trial maps can be made quickly. The Interactive CAM system links the output of an interactive CRIB retrieval to a modified version of the CAM program, which runs in the batch mode and stores plotting instructions on a disk, rather than on a tape. The disk can be accessed by a CRT, and, thus, the user can view and evaluate the map output on a CRT immediately after a batch run, without waiting 1-3 days for an off-line plot. The user can, therefore, do most of the layout and design work in a relatively short time by use of the CRT, before generating a plot tape and having the map plotted on an off-line plotter.

  1. Short-term Temperature Prediction Using Adaptive Computing on Dynamic Scales

    NASA Astrophysics Data System (ADS)

    Hu, W.; Cervone, G.; Jha, S.; Balasubramanian, V.; Turilli, M.

    2017-12-01

    When predicting temperature, there are specific places and times where high-accuracy predictions are harder. For example, not all the sub-regions in the domain require the same amount of computing resources to generate an accurate prediction. Plateau areas might require fewer computing resources than mountainous areas because of the steeper gradient of temperature change in the latter. However, it is difficult to estimate beforehand the optimal allocation of computational resources because several parameters besides orography play a role in determining the accuracy of the forecasts. The allocation of resources to perform simulations can become a bottleneck because it requires human intervention to stop jobs or start new ones. The goal of this project is to design and develop a dynamic approach to generating short-term temperature predictions that automatically determines the required computing resources and the geographic scales of the predictions based on the spatial and temporal uncertainties. The predictions and the prediction quality metrics are computed using a numeric weather prediction model, Analog Ensemble (AnEn), and the parallelization on high performance computing systems is accomplished using Ensemble Toolkit, one component of the RADICAL-Cybertools family of tools. RADICAL-Cybertools decouple the science needs from the computational capabilities by building an intermediate layer to run general ensemble patterns, regardless of the science. In this research, we show how the ensemble toolkit allows generating high resolution temperature forecasts at different spatial and temporal resolutions. The AnEn algorithm is run using NAM analysis and forecast data for the continental United States for a period of 2 years. AnEn results show that temperature forecasts perform well according to different probabilistic and deterministic statistical tests.
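
    The core of the AnEn technique is compact enough to sketch: for each new forecast, find the k most similar past forecasts and use their verifying observations as the predictive ensemble. A minimal Python illustration follows (the toy data, predictors and k are assumptions for the example):

        import numpy as np

        def analog_ensemble(hist_fcst, hist_obs, current_fcst, k=10):
            """Return the observations that verified the k past forecasts
            most similar to the current forecast (simplified AnEn)."""
            dist = np.linalg.norm(hist_fcst - current_fcst, axis=1)
            return hist_obs[np.argsort(dist)[:k]]

        # Toy history: past forecasts (temperature, wind) and verifying temps.
        rng = np.random.default_rng(4)
        hist_fcst = rng.normal(size=(2 * 365, 2))     # two years of forecasts
        hist_obs = hist_fcst[:, 0] + rng.normal(0, 0.5, 2 * 365)

        ens = analog_ensemble(hist_fcst, hist_obs, np.array([1.2, 0.3]))
        print("ensemble mean %.2f, spread %.2f" % (ens.mean(), ens.std()))

    Because each station or grid point is searched independently, the workload decomposes into exactly the kind of ensemble pattern that Ensemble Toolkit schedules across high performance computing resources.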

  2. Fermilab computing at the Intensity Frontier

    DOE PAGES

    Group, Craig; Fuess, S.; Gutsche, O.; ...

    2015-12-23

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  3. A Resource Service Model in the Industrial IoT System Based on Transparent Computing.

    PubMed

    Li, Weimin; Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang

    2018-03-26

    The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which actually constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in the underground metal mine, this paper summarizes the main challenges of industrial IoT from the aspects of heterogeneity in devices and resources, security, reliability, deployment and maintenance costs. Then, a novel resource service model for the industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources including operating system (OS), programs and data on the server-side for the IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: sensing layer, aggregation layer, network layer, service and storage layer and interface and management layer. We also present a detailed analysis on the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experiment prototype system.

  4. A Resource Service Model in the Industrial IoT System Based on Transparent Computing

    PubMed Central

    Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang

    2018-01-01

    The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which actually constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in the underground metal mine, this paper summarizes the main challenges of industrial IoT from the aspects of heterogeneity in devices and resources, security, reliability, deployment and maintenance costs. Then, a novel resource service model for the industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources including operating system (OS), programs and data on the server-side for the IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: sensing layer, aggregation layer, network layer, service and storage layer and interface and management layer. We also present a detailed analysis on the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experiment prototype system. PMID:29587450

  5. Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas

    2012-07-14

    The UCoMS research cluster has spearheaded three research areas since August 2004, including wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts on pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by research investigators in their respective areas of expertise, cooperating on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.

  6. Concurrent negotiation and coordination for grid resource coallocation.

    PubMed

    Sim, Kwang Mong; Shi, Benyun

    2010-06-01

    Bolstering resource coallocation is essential for realizing the Grid vision, because computationally intensive applications often require multiple computing resources from different administrative domains. Given that resource providers and consumers may have different requirements, successfully obtaining commitments through concurrent negotiations with multiple resource providers to simultaneously access several resources is a very challenging task for consumers. The impetus of this paper is that it is one of the earliest works that consider a concurrent negotiation mechanism for Grid resource coallocation. The concurrent negotiation mechanism is designed for 1) managing (de)commitment of contracts through one-to-many negotiations and 2) coordination of multiple concurrent one-to-many negotiations between a consumer and multiple resource providers. The novel contributions of this paper are devising 1) a utility-oriented coordination (UOC) strategy, 2) three classes of commitment management strategies (CMSs) for concurrent negotiation, and 3) the negotiation protocols of consumers and providers. Implementing these ideas in a testbed, three series of experiments were carried out in a variety of settings to compare the following: 1) the CMSs in this paper with the work of others in a single one-to-many negotiation environment for one resource where decommitment is allowed for both provider and consumer agents; 2) the performance of the three classes of CMSs in different resource market types; and 3) the UOC strategy with the work of others [e.g., the patient coordination strategy (PCS)] for coordinating multiple concurrent negotiations. Empirical results show the following: 1) the UOC strategy achieved higher utility, faster negotiation speed, and higher success rates than PCS for different resource market types; and 2) the CMS in this paper achieved higher final utility than the CMS in other works. Additionally, the properties of the three classes of CMSs in
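
    A minimal toy of the commitment-management idea, not the CMSs or the UOC strategy defined in the paper: a consumer holds at most one contract per resource and decommits, paying a penalty, whenever a sufficiently better concurrent offer arrives. All names and the penalty rule below are illustrative assumptions.

        # Toy (de)commitment handling for concurrent one-to-many negotiation.
        def manage_commitments(offers, penalty=0.1):
            """offers: (resource, provider, utility) tuples in arrival order.
            Returns the final contracts and total decommitment penalty paid."""
            contracts, paid = {}, 0.0
            for resource, provider, utility in offers:
                held = contracts.get(resource)
                if held is None:
                    contracts[resource] = (provider, utility)
                elif utility - penalty > held[1]:
                    paid += penalty               # decommit from the old provider
                    contracts[resource] = (provider, utility)
            return contracts, paid

        contracts, paid = manage_commitments(
            [("cpu", "A", 0.6), ("cpu", "B", 0.9), ("disk", "C", 0.5)])
        print(contracts, paid)                    # cpu goes to B, one penalty paid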

  7. 19 CFR 201.14 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    19 Customs Duties, Vol. 3, revised as of 2014-04-01. Computation of time, additional hearings, postponements, continuances, and extensions of time. Section 201.14, Customs Duties, UNITED STATES INTERNATIONAL TRADE COMMISSION, GENERAL RULES OF GENERAL APPLICATION, Initiation and Conduct of Investigations...

  8. 19 CFR 201.14 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    19 Customs Duties, Vol. 3, revised as of 2013-04-01. Computation of time, additional hearings, postponements, continuances, and extensions of time. Section 201.14, Customs Duties, UNITED STATES INTERNATIONAL TRADE COMMISSION, GENERAL RULES OF GENERAL APPLICATION, Initiation and Conduct of Investigations...

  9. 19 CFR 210.6 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    19 Customs Duties, Vol. 3, revised as of 2010-04-01. Computation of time, additional hearings, postponements, continuances, and extensions of time. Section 210.6, Customs Duties, UNITED STATES INTERNATIONAL TRADE COMMISSION, INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE, ADJUDICATION AND ENFORCEMENT...

  10. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials, with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  11. Computer Engineers.

    ERIC Educational Resources Information Center

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  12. The Development of an Individualized Instructional Program in Beginning College Mathematics Utilizing Computer Based Resource Units. Final Report.

    ERIC Educational Resources Information Center

    Rockhill, Theron D.

    Reported is an attempt to develop and evaluate an individualized instructional program in pre-calculus college mathematics. Four computer based resource units were developed in the areas of set theory, relations and functions, algebra, trigonometry, and analytic geometry. Objectives were determined by experienced calculus teachers, and…

  13. Bounding the Resource Availability of Partially Ordered Events with Constant Resource Impact

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy

    2004-01-01

    We compare existing techniques to bound the resource availability of partially ordered events. We first show that, contrary to intuition, two existing techniques, one due to Laborie and one due to Muscettola, are not strictly comparable in terms of the size of the search trees generated under chronological search with a fixed heuristic. We describe a generalization of these techniques called the Flow Balance Constraint to tightly bound the amount of available resource for a set of partially ordered events with piecewise constant resource impact. We prove that the new technique generates smaller proof trees under chronological search with a fixed heuristic, at little increase in computational expense. We then show how to construct tighter resource bounds but at increased computational cost.
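
    A sketch of the naive availability envelope that such propagation techniques tighten, under an assumed encoding of events as signed constant impacts plus must-follow sets; this is not Laborie's or Muscettola's propagator, nor the Flow Balance Constraint itself.

        # Upper bound on the resource level at each event: initial level plus
        # every production not forced to occur strictly after the event.
        def upper_bounds(init, impacts, after):
            """impacts: {event: delta}; after[e]: events constrained to follow e."""
            bounds = {}
            for e in impacts:
                may_precede = [f for f in impacts
                               if f != e and f not in after.get(e, set())]
                gains = sum(max(0, impacts[f]) for f in may_precede)
                bounds[e] = init + gains + impacts[e]
            return bounds

        # c must follow a; b is unordered relative to both
        print(upper_bounds(2, {"a": +1, "b": -2, "c": +3}, {"a": {"c"}}))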

  14. Large Scale Computing and Storage Requirements for High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report

  15. Additional support for the TDK/MABL computer program

    NASA Technical Reports Server (NTRS)

    Nickerson, G. R.; Dunn, Stuart S.

    1993-01-01

    An advanced version of the Two-Dimensional Kinetics (TDK) computer program was developed under contract and released to the propulsion community in early 1989. Exposure of the code to this community indicated a need for improvements in certain areas. In particular, the TDK code needed to be adapted to the special requirements imposed by the Space Transportation Main Engine (STME) development program. This engine utilizes injection of the gas generator exhaust into the primary nozzle by means of a set of slots. The subsequent mixing of this secondary stream with the primary stream, with finite rate chemical reaction, can have a major impact on the engine performance and the thermal protection of the nozzle wall. In attempting to calculate this reacting boundary layer problem, the Mass Addition Boundary Layer (MABL) module of TDK was found to be deficient in several respects. For example, when finite rate chemistry was used to determine gas properties (the MABL-K option), the program run times became excessive because extremely small step sizes were required to maintain numerical stability. A robust solution algorithm was required so that the MABL-K option could be viable as a rocket propulsion industry design tool. Solving this problem was a primary goal of the Phase 1 work effort.

  16. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    PubMed

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
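
    The non-dominated (Pareto) filtering at the heart of NSGA II can be sketched in a few lines for a two-objective minimization such as (average length of stay, resource-waste cost); the simulation-budget side handled by MOCBA is omitted, and all numbers below are invented.

        def dominates(a, b):
            """a dominates b if it is no worse in every objective and strictly
            better in at least one (minimization)."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def pareto_front(solutions):
            """solutions: objective tuples, e.g. (avg_length_of_stay, waste_cost)."""
            return [s for s in solutions
                    if not any(dominates(t, s) for t in solutions if t != s)]

        print(pareto_front([(4.0, 9.0), (5.0, 7.0), (6.0, 8.0), (4.5, 6.5)]))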

  17. Improving hospital bed occupancy and resource utilization through queuing modeling and evolutionary computation.

    PubMed

    Belciug, Smaranda; Gorunescu, Florin

    2015-02-01

    Scarce healthcare resources require carefully made policies ensuring optimal bed allocation, quality healthcare service, and adequate financial support. This paper proposes a complex analysis of the resource allocation in a hospital department by integrating in the same framework a queuing system, a compartmental model, and an evolutionary-based optimization. The queuing system shapes the flow of patients through the hospital, the compartmental model offers a feasible structure of the hospital department in accordance with the queuing characteristics, and the evolutionary paradigm provides the means to optimize the bed-occupancy management and the resource utilization using a genetic algorithm approach. The paper also focuses on a "What-if analysis" providing a flexible tool to explore the effects on the outcomes of the queuing system and resource utilization through systematic changes in the input parameters. The methodology was illustrated using a simulation based on real data collected from the geriatric department of a hospital in London, UK. In addition, the paper explores the possibility of adapting the methodology to different medical departments (surgery, stroke, and mental illness). Moreover, the paper also focuses on the practical use of the model from the healthcare point of view, by presenting a simulated application.
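
    A hedged sketch of the queuing building block in such a framework: steady-state M/M/c (Erlang-C) metrics for a ward with c beds. The compartmental structure and the genetic-algorithm search are not reproduced, and the rates below are invented.

        import math

        def mmc_metrics(lam, mu, c):
            """Arrival rate lam, service rate mu, c parallel beds."""
            rho = lam / (c * mu)                  # occupancy per bed
            if rho >= 1:
                raise ValueError("unstable: offered load exceeds capacity")
            a = lam / mu                          # offered load in Erlangs
            p0 = 1 / (sum(a**k / math.factorial(k) for k in range(c))
                      + a**c / (math.factorial(c) * (1 - rho)))
            p_wait = a**c / (math.factorial(c) * (1 - rho)) * p0   # Erlang C
            wq = p_wait / (c * mu - lam)          # mean wait for a bed
            return {"occupancy": rho, "p_wait": p_wait, "mean_wait": wq}

        # e.g. 8 admissions/day, 2-day mean stay, 20 beds
        print(mmc_metrics(lam=8.0, mu=0.5, c=20))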

  18. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    NASA Astrophysics Data System (ADS)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examines their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the engineering design and prototype iterative cycle, thereby dramatically reducing cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  19. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an icon-driven, Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.
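
    For flavor, a model of this kind can be expressed in a few lines of SimPy, a discrete-event simulation library for Python (an analogue of the Extend model, not the STS Processing Model itself); all durations below are invented.

        import simpy

        PROCESS_DAYS, TURNAROUND_DAYS = 80, 10
        busy_days = 0.0

        def orbiter(env, facility):
            global busy_days
            while True:
                with facility.request() as req:   # queue for the one facility
                    yield req
                    start = env.now
                    yield env.timeout(PROCESS_DAYS)
                    busy_days += env.now - start
                yield env.timeout(TURNAROUND_DAYS)

        env = simpy.Environment()
        facility = simpy.Resource(env, capacity=1)
        for _ in range(3):                        # three orbiters in the flow
            env.process(orbiter(env, facility))
        env.run(until=365)
        print(f"facility utilization: {busy_days / 365:.0%}")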

  20. X-ray computed tomography for additive manufacturing: a review

    NASA Astrophysics Data System (ADS)

    Thompson, A.; Maskery, I.; Leach, R. K.

    2016-07-01

    In this review, the use of x-ray computed tomography (XCT) is examined, identifying the requirement for volumetric dimensional measurements in industrial verification of additively manufactured (AM) parts. The XCT technology and AM processes are summarised, and their historical use is documented. The use of XCT and AM as tools for medical reverse engineering is discussed, and the transition of XCT from a tool used solely for imaging to a vital metrological instrument is documented. The current states of the combined technologies are then examined in detail, separated into porosity measurements and general dimensional measurements. In the conclusions of this review, the limitation of resolution on improvement of porosity measurements and the lack of research regarding the measurement of surface texture are identified as the primary barriers to ongoing adoption of XCT in AM. The limitations of both AM and XCT regarding slow speeds and high costs, when compared to other manufacturing and measurement techniques, are also noted as general barriers to continued adoption of XCT and AM.

  1. NREL: Renewable Resource Data Center - Solar Resource Information

    Science.gov Websites

    Solar Resource Information. The Renewable Resource Data Center (RReDC) offers a collection of data and tools to assist with solar resource research. Learn more about RReDC's solar resource data, models, and siting. In addition, RReDC offers a solar resource glossary, unit conversion information, and an

  2. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, Tom; Yang, Xi

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyber infrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which spanned the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves, from a topology and service availability perspective, within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) the Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) the Resource Computation Engine (RCE), and iii) a Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyber infrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model based computation system, the “RAINS Computation Engine (RCE)”. The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which facilitates a variety of resource controllers to automatically
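
    A minimal sketch of the three element categories the MRS/MRML framework describes (Resources, Services, and Relationships between elements), encoded as plain dataclasses; the field names and values are assumptions for illustration, not the actual MRML schema.

        from dataclasses import dataclass, field

        @dataclass
        class Service:
            name: str
            capacity: str                  # e.g. "1024 cores", "500TB"

        @dataclass
        class Resource:
            name: str
            kind: str                      # "compute" | "storage" | "network"
            services: list = field(default_factory=list)

        @dataclass
        class Relationship:
            source: str
            target: str
            relation: str                  # e.g. "connectedTo", "hasService"

        cluster = Resource("hpc-a", "compute", [Service("batch", "1024 cores")])
        store = Resource("dtn-store", "storage", [Service("posix", "500TB")])
        print(Relationship(cluster.name, store.name, "connectedTo"))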

  3. Spacecraft computer resource margin management. [of Project Galileo Orbiter in-flight reprogramming task

    NASA Technical Reports Server (NTRS)

    Larman, B. T.

    1981-01-01

    The Project Galileo Orbiter, with 18 microcomputers and the equivalent of 360K 8-bit bytes of memory contained within two major engineering subsystems and eight science instruments, requires that the key onboard computer system resources be managed in a very rigorous manner. Attention is given to the rationale behind the project policy, the development stage, the preliminary design stage, the design/implementation stage, and the optimization or 'scrubbing' stage. The implementation of the policy is discussed, taking into account the development of the Attitude and Articulation Control Subsystem (AACS) and the Command and Data Subsystem (CDS), the reporting of margin status, and the response to allocation oversubscription.

  4. Addition of flexible body option to the TOLA computer program, part 1

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

    This report describes a flexible body option that was developed and added to the Takeoff and Landing Analysis (TOLA) computer program. The addition of the flexible body option to TOLA allows it to be used to study essentially any conventional type of airplane in the ground operating environment. It provides the capability to predict the total motion of selected points on the airplane. The analytical methods incorporated in the program and the operating instructions for the option are described. A program listing is included, along with several example problems to aid in interpretation of the operating instructions and to illustrate program usage.

  5. A Web of Resources for Introductory Computer Science.

    ERIC Educational Resources Information Center

    Rebelsky, Samuel A.

    As the field of Computer Science has grown, the syllabus of the introductory Computer Science course has changed significantly. No longer is it a simple introduction to programming or a tutorial on computer concepts and applications. Rather, it has become a survey of the field of Computer Science, touching on a wide variety of topics from digital…

  6. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in
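
    At its simplest, the three-stage performance model reduces to a sum of transfer, queue, and compute terms; the sketch below is that skeleton with invented units and site figures, not the paper's fitted models.

        def workflow_time(data_gb, bandwidth_gbps, queue_wait_s, work, rate):
            transfer = data_gb * 8 / bandwidth_gbps   # (i) storage to compute
            compute = work / rate                     # (iii) reconstruction tasks
            return transfer + queue_wait_s + compute  # (ii) queue wait added

        # site -> (bandwidth Gb/s, queue wait s, compute rate units/s)
        sites = {"local": (1.0, 0.0, 50.0), "hpc": (10.0, 600.0, 2000.0)}

        def total(site):
            bw, wait, rate = sites[site]
            return workflow_time(200, bw, wait, 1e5, rate)

        print(min(sites, key=total))   # the cluster wins despite its queue wait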

  7. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can

  8. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in

  9. Synchronization of Finite State Shared Resources

    DTIC Science & Technology

    1976-03-01

    SYNCHRONIZATION OF FINITE STATE SHARED RESOURCES. Edward A. Schneider, Department of Computer Science. Abstract: The problem of synchronizing a set of operations defined on a shared resource

  10. ERDC MSRC Resource. High Performance Computing for the Warfighter. Spring 2006

    DTIC Science & Technology

    2006-01-01

    ...named Ruby, and the HP/Compaq SC45, named Emerald, continue to add their unique sparkle to the ERDC MSRC computer infrastructure. ERDC invited the... configuration on B-52H... purchased additional memory for the login nodes so that this part of the solution process could be done as a preprocessing step. On... application and system services. Of the service nodes, 10 are login nodes and 23 are input/output (I/O) server nodes for the Lustre file system (i.e., the

  11. Radar Control Optimal Resource Allocation

    DTIC Science & Technology

    2015-07-13

    ...other tunable parameters of radars [17, 18]. Such radar resource scheduling usually demands massive computation. Even myopic... reduced validity of the optimal choice of radar resource. In the non-myopic context, the computational problem becomes exponentially more difficult... the optimal time t* is computed in closed form in terms of α, σ, q and r (Eq. 19). We are only interested in t* > 1 and, solving the inequality, we obtain the

  12. Family History Resources.

    ERIC Educational Resources Information Center

    Bookmark, 1991

    1991-01-01

    The 12 articles in this issue focus on the theme of family history resources: (1) "Introduction: Family History Resources" (Joseph F. Shubert); (2) "Work, Credentials, and Expectations of a Professional Genealogist" (Coreen P. Hallenbeck and Lewis W. Hallenbeck); (3) "Computers and Genealogy" (Theresa C. Strasser);…

  13. SOCR: Statistics Online Computational Resource

    ERIC Educational Resources Information Center

    Dinov, Ivo D.

    2006-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an…

  14. Acausal measurement-based quantum computing

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2014-07-01

    In measurement-based quantum computing, there is a natural "causal cone" among qubits of the resource state, since the measurement angle on a qubit has to depend on previous measurement results in order to correct the effect of by-product operators. If we respect the no-signaling principle, by-product operators cannot be avoided. Here we study the possibility of acausal measurement-based quantum computing by using the process matrix framework [Oreshkov, Costa, and Brukner, Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076]. We construct a resource process matrix for acausal measurement-based quantum computing restricting local operations to projective measurements. The resource process matrix is an analog of the resource state of the standard causal measurement-based quantum computing. We find that if we restrict local operations to projective measurements the resource process matrix is (up to a normalization factor and trivial ancilla qubits) equivalent to the decorated graph state created from the graph state of the corresponding causal measurement-based quantum computing. We also show that it is possible to consider a causal game whose causal inequality is violated by acausal measurement-based quantum computing.
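
    As background to why angles must depend on earlier outcomes, the standard adaptive-angle rule of causal measurement-based quantum computing can be written out (the textbook cluster-state rule, not the paper's process-matrix construction). A qubit intended to be measured at angle \theta in the basis (|0> ± e^{i\theta}|1>)/\sqrt{2}, but preceded by the by-product operator X^s Z^t, is instead measured at

        \[ \theta' \,=\, (-1)^{s}\,\theta \,+\, t\,\pi , \]

    which is precisely the causal ordering of measurements that the process-matrix framework relaxes.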

  15. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic
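
    A generic standard-library analogue of the parallelization pattern described (fanning independent work units, such as per-event computations, across local cores); NRM and JPPF are Java-side systems whose APIs are not reproduced here, so everything below is an invented stand-in.

        from concurrent.futures import ProcessPoolExecutor
        import math

        def trace_event(event_id):            # placeholder per-event work unit
            x = float(event_id)
            return sum(math.sin(x + i) for i in range(100_000))

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:       # one worker per core
                results = list(pool.map(trace_event, range(32)))
            print(len(results), "events processed")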

  16. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged and pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.

  17. Professional Computer Education Organizations--A Resource for Administrators.

    ERIC Educational Resources Information Center

    Ricketts, Dick

    Professional computer education organizations serve a valuable function by generating, collecting, and disseminating information concerning the role of the computer in education. This report touches briefly on the reasons for the rapid and successful development of professional computer education organizations. A number of attributes of effective…

  18. Computational challenges, tools and resources for analyzing co- and post-transcriptional events in high throughput

    PubMed Central

    Bahrami-Samani, Emad; Vo, Dat T.; de Araujo, Patricia Rosa; Vogel, Christine; Smith, Andrew D.; Penalva, Luiz O. F.; Uren, Philip J.

    2014-01-01

    Co- and post-transcriptional regulation of gene expression is complex and multi-faceted, spanning the complete RNA lifecycle from genesis to decay. High-throughput profiling of the constituent events and processes is achieved through a range of technologies that continue to expand and evolve. Fully leveraging the resulting data is non-trivial, and requires the use of computational methods and tools carefully crafted for specific data sources and often intended to probe particular biological processes. Drawing upon databases of information pre-compiled by other researchers can further elevate analyses. Within this review, we describe the major co- and post-transcriptional events in the RNA lifecycle that are amenable to high-throughput profiling. We place specific emphasis on the analysis of the resulting data, in particular the computational tools and resources available, as well as looking towards future challenges that remain to be addressed. PMID:25515586

  19. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, IT requirements are immense due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters, including very fast network interconnects, within grid infrastructures now allows parallel high-performance grid computing efficiently, and thus combines the benefits of dedicated super-computing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond this, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage, but, more importantly, are also able to increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for, e.g., the life-science and health-care sectors, as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  20. A Reliable TTP-Based Infrastructure with Low Sensor Resource Consumption for the Smart Home Multi-Platform

    PubMed Central

    Kang, Jungho; Kim, Mansik; Park, Jong Hyuk

    2016-01-01

    With ICT technology making great progress in the smart home environment, the ubiquitous environment is rapidly emerging all over the world, but problems are also increasing in proportion to the rapid growth of the smart home market, such as multiplatform heterogeneity and new security threats. In addition, smart home sensors have such low computing resources that they cannot process the complicated computation tasks required to create a proper security environment. A service provider also faces overhead in processing data from a rapidly increasing number of sensors. This paper proposes a scheme to build an infrastructure in which communication entities can securely authenticate and establish secure channels using physically unclonable functions (PUFs) and a trusted third party (TTP) that smart home communication entities can rely on. In addition, we analyze and evaluate the proposed scheme for security and performance and prove that it can build secure channels with low resources. Finally, we expect that the proposed scheme can be helpful for secure communication with low resources in future smart home multiplatforms. PMID:27399699
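
    A toy of the enrollment and authentication flow that TTP-plus-PUF schemes build on: the TTP stores challenge-response pairs (CRPs) from each device's PUF and later verifies the device without the device storing any key. The PUF is simulated with a per-device secret, and the whole flow is an illustrative assumption, not the paper's protocol.

        import hashlib, hmac, os

        class SimulatedPUF:
            def __init__(self):
                self._silicon = os.urandom(16)   # stands in for physical variation
            def respond(self, challenge: bytes) -> bytes:
                return hmac.new(self._silicon, challenge, hashlib.sha256).digest()

        class TTP:
            def __init__(self):
                self.crps = {}                   # device_id -> {challenge: response}
            def enroll(self, device_id, puf, n=4):
                table = {}
                for _ in range(n):
                    c = os.urandom(8)
                    table[c] = puf.respond(c)
                self.crps[device_id] = table
            def authenticate(self, device_id, puf) -> bool:
                c, expected = self.crps[device_id].popitem()  # one CRP per use
                return hmac.compare_digest(puf.respond(c), expected)

        sensor = SimulatedPUF()
        ttp = TTP()
        ttp.enroll("sensor-01", sensor)
        print(ttp.authenticate("sensor-01", sensor))          # True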

  1. A Reliable TTP-Based Infrastructure with Low Sensor Resource Consumption for the Smart Home Multi-Platform.

    PubMed

    Kang, Jungho; Kim, Mansik; Park, Jong Hyuk

    2016-07-05

    With ICT technology making great progress in the smart home environment, the ubiquitous environment is rapidly emerging all over the world, but problems are also increasing in proportion to the rapid growth of the smart home market, such as multiplatform heterogeneity and new security threats. In addition, smart home sensors have such low computing resources that they cannot process the complicated computation tasks required to create a proper security environment. A service provider also faces overhead in processing data from a rapidly increasing number of sensors. This paper proposes a scheme to build an infrastructure in which communication entities can securely authenticate and establish secure channels using physically unclonable functions (PUFs) and a trusted third party (TTP) that smart home communication entities can rely on. In addition, we analyze and evaluate the proposed scheme for security and performance and prove that it can build secure channels with low resources. Finally, we expect that the proposed scheme can be helpful for secure communication with low resources in future smart home multiplatforms.

  2. omniClassifier: a Desktop Grid Computing System for Big Data Prediction Modeling

    PubMed Central

    Phan, John H.; Kothari, Sonal; Wang, May D.

    2016-01-01

    Robust prediction models are important for numerous science, engineering, and biomedical applications. However, best-practice procedures for optimizing prediction models can be computationally complex, especially when choosing models from among hundreds or thousands of parameter choices. Computational complexity has further increased with the growth of data in these fields, concurrent with the era of “Big Data”. Grid computing is a potential solution to the computational challenges of Big Data. Desktop grid computing, which uses idle CPU cycles of commodity desktop machines, coupled with commercial cloud computing resources can enable research labs to gain easier and more cost effective access to vast computing resources. We have developed omniClassifier, a multi-purpose prediction modeling application that provides researchers with a tool for conducting machine learning research within the guidelines of recommended best-practices. omniClassifier is implemented as a desktop grid computing system using the Berkeley Open Infrastructure for Network Computing (BOINC) middleware. In addition to describing implementation details, we use various gene expression datasets to demonstrate the potential scalability of omniClassifier for efficient and robust Big Data prediction modeling. A prototype of omniClassifier can be accessed at http://omniclassifier.bme.gatech.edu/. PMID:27532062
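
    The expensive inner loop that a desktop grid of this kind distributes is, in essence, cross-validated evaluation of many parameter choices; below is a serial sketch of that loop, where each (parameters, fold) cell corresponds to one work unit. The evaluator is a placeholder, not omniClassifier's.

        def cross_val_score(evaluate, data, params, k=5):
            scores = []
            for fold in range(k):
                test = data[fold::k]                          # every k-th sample
                train = [d for i, d in enumerate(data) if i % k != fold]
                scores.append(evaluate(train, test, params))
            return sum(scores) / k

        def select_model(evaluate, data, grid):
            return max(grid, key=lambda p: cross_val_score(evaluate, data, p))

        # trivial placeholder evaluator: prefers C nearest 1.0
        best = select_model(lambda tr, te, p: -abs(p["C"] - 1.0),
                            data=list(range(100)),
                            grid=[{"C": c} for c in (0.1, 1.0, 10.0)])
        print(best)   # {'C': 1.0}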

  3. Allocating Tactical High-Performance Computer (HPC) Resources to Offloaded Computation in Battlefield Scenarios

    DTIC Science & Technology

    2013-12-01

    The authors present a Computing on Dissemination with predictable contacts (pCoD) algorithm, since it is impossible to reserve task execution time in advance... From the report's glossary: Computing While Charging; DAG, Directed Acyclic Graph; TTL, Time-to-live; pCoD, predictable contacts; CoD, Computing on Dissemination; upCoD, unpredictable

  4. Perspectives on the utilization of aquaculture coproduct in Europe and Asia: prospects for value addition and improved resource efficiency.

    PubMed

    Newton, Richard; Telfer, Trevor; Little, Dave

    2014-01-01

    Aquaculture has often been criticized for its environmental impacts, especially the efficiency with which global fisheries resources are used in aquafeeds, among other concerns. However, little attention has been paid to the contribution of coproducts from aquaculture, which can vary between 40% and 70% of the production. These have often been underutilized and could be redirected to maximize the efficient use of resource inputs, including reducing the burden on fisheries resources. In this review, we identify strategies to enhance the overall value of the harvested yield, including noneffluent processing coproducts, for three of the most important global aquaculture species, and discuss the current and prospective utilization of these resources for value addition and environmental impact reduction. The review concludes that in Europe coproducts are often underutilized because of logistical reasons, such as the disconnected nature of the value chain and perceived legislative barriers. However, in Asia, most coproducts are used, often innovatively, but not to their full economic potential and sometimes with possible human health and biosecurity risks. These include the possible spread of diseased material and low traceability in some circumstances. Full economic and environmental appraisal is long overdue for the current and potential strategies available for coproduct utilization.

  5. ATLAS Distributed Computing Experience and Performance During the LHC Run-2

    NASA Astrophysics Data System (ADS)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. All stable

  6. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    NASA Astrophysics Data System (ADS)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to the TV audience of various video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.
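
    A skeleton of the benchmark-driven allocation described: measured resource profiles, keyed by input characteristics, are looked up to size each transcoding task. All keys and figures below are invented.

        # (input codec, resolution, motion, fps) -> (cpu cores, memory GB)
        BENCHMARKS = {
            ("h264", "1080p", "high", 30): (4, 2.0),
            ("h264", "720p", "low", 30): (1, 0.5),
        }

        def allocate(request, free_cores, free_mem_gb):
            cores, mem = BENCHMARKS[request]
            if cores <= free_cores and mem <= free_mem_gb:
                return {"cores": cores, "mem_gb": mem}
            return None          # defer the job or scale out a new instance

        print(allocate(("h264", "1080p", "high", 30), free_cores=8, free_mem_gb=4))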

  7. An emulator for minimizing finite element analysis implementation resources

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.

    1982-01-01

    A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading off analysis implementation options to arrive at the best strategy. The models contained in SCOPE lead to micro-operation counts for each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.
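
    A back-of-envelope analogue of what such an emulator estimates, using textbook costs for a banded symmetric solver (roughly n*b**2/2 factorization operations and n*b stored coefficients for n equations of half-bandwidth b); these are generic formulas, not SCOPE's micro-operation models.

        def banded_solver_cost(n_eqns, bandwidth, flops_per_s=1e9, bytes_per_coeff=8):
            flops = n_eqns * bandwidth**2 / 2            # factorization work
            storage = n_eqns * bandwidth * bytes_per_coeff
            return {"cpu_seconds": flops / flops_per_s,
                    "storage_mb": storage / 1e6}

        print(banded_solver_cost(n_eqns=50_000, bandwidth=400))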

  8. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive, they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
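
    The flavor of what CondorPy automates can be shown by generating an HTCondor submit description with one queued job per parameter set; the submit keywords are standard HTCondor syntax, while the helper itself is a hypothetical stand-in rather than the CondorPy API.

        def submit_description(executable, runs):
            lines = [f"executable = {executable}",
                     "log = job.log",
                     "output = out.$(Process)",
                     "error = err.$(Process)"]
            for args in runs:
                lines.append(f"arguments = {args}")   # one queued proc per run
                lines.append("queue")
            return "\n".join(lines)

        print(submit_description("run_model.sh", ["--storm 1", "--storm 2"]))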

  9. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
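
    A minimal numerical sketch, assuming the "ecological diffusion" form u_t = \Delta(\mu u): one explicit forward-Euler step in one dimension, plus a coarse-grid (homogenized) coefficient taken here, as an assumption, to be the harmonic mean of the fine-scale motility \mu. Illustrative only, not the paper's statistical implementation.

        import numpy as np

        def ecological_diffusion_step(u, mu, dx, dt):
            v = mu * u                                # diffuse the product mu*u
            lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
            return u + dt * lap                       # periodic boundaries

        def homogenized_mu(mu_fine, block):
            """Harmonic mean of mu over non-overlapping blocks (coarser support)."""
            mu_blocks = mu_fine.reshape(-1, block)
            return block / (1.0 / mu_blocks).sum(axis=1)

        mu = np.where(np.arange(64) % 2 == 0, 0.1, 1.0)   # patchy landscape
        u = ecological_diffusion_step(np.ones(64), mu, dx=1.0, dt=0.1)
        print(homogenized_mu(mu, block=8))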

  10. Basic Skills Resource Center: Documentation and Phaseover Report for the Military Educators Resource NETWORK

    DTIC Science & Technology

    1985-01-01

    ...Describe the subject of your request in 3 or 4 precise terms (e.g., reading skills, computer assisted instruction, adult literacy). Research Product 85-03: BASIC SKILLS RESOURCE CENTER: DOCUMENTATION AND PHASEOVER REPORT FOR THE MILITARY EDUCATORS RESOURCE NETWORK. Interim Report, Feb 1982 - Sept 1984.

  11. COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...

    EPA Pesticide Factsheets

    This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).

  12. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated, scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper does not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource-independent and interoperable across heterogeneous Grid resources.

  13. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    NASA Astrophysics Data System (ADS)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  14. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  15. Scheduling Nonconsumable Resources

    NASA Technical Reports Server (NTRS)

    Porta, Harry J.

    1990-01-01

    Users manual describes computer program SWITCH, which schedules the use of resources by appliances that are switched on and off and that consume resources while on. Plans schedules according to predetermined goals; revises schedules when new goals are imposed. Program works by depth-first searching with strict chronological backtracking, evaluating alternatives as necessary and sometimes interacting with user.
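
    A toy sketch of the search strategy described above, depth-first assignment with strict chronological backtracking, follows; the appliances, power draws, and capacity limit are invented for illustration, and this is not the SWITCH program itself.

        # Depth-first scheduling with strict chronological backtracking (toy example).
        CAPACITY = 10  # maximum total power draw per time slot (invented)
        SLOTS = 6      # planning horizon in time slots

        appliances = [('heater', 4, 3), ('pump', 5, 2), ('dryer', 6, 2)]  # (name, draw, duration)

        def schedule(i, load, plan):
            """Assign a start slot to appliance i; backtrack chronologically on failure."""
            if i == len(appliances):
                return plan
            name, draw, dur = appliances[i]
            for start in range(SLOTS - dur + 1):  # try start times in chronological order
                slots = range(start, start + dur)
                if all(load[t] + draw <= CAPACITY for t in slots):
                    for t in slots:
                        load[t] += draw
                    result = schedule(i + 1, load, plan + [(name, start)])
                    if result is not None:
                        return result
                    for t in slots:               # undo and try the next start time
                        load[t] -= draw
            return None                           # no feasible start: backtrack in caller

        print(schedule(0, [0] * SLOTS, []))       # e.g. [('heater', 0), ('pump', 0), ('dryer', 2)]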

  16. Aviation & Space Education: A Teacher's Resource Guide.

    ERIC Educational Resources Information Center

    Texas State Dept. of Aviation, Austin.

    This resource guide contains information on curriculum guides, resources for teachers, computer software and computer related programs, audio/visual presentations, model aircraft and demonstration aids, training seminars and career education, and an aerospace bibliography for primary grades. Each entry includes all or some of the following items:…

  17. A strategy for mineral and energy resource independence

    USGS Publications Warehouse

    Carter, W.D.

    1983-01-01

    Data acquired by Landsats 1, 2, and 3 are beginning to provide the information on which an improved mineral and energy resource exploration strategy can be based. Landsat 4 is expected to augment this capability with its higher resolution (30 m) and additional spectral bands in the Thematic Mapper (TM), designed specifically to discriminate clay minerals associated with mineral alteration. In addition, a new global magnetic anomaly map, derived from the recent Magsat mission, has recently been compiled by the National Aeronautics and Space Administration (NASA), the U.S. Geological Survey (USGS), and others. Preliminary, extremely small-scale renditions of this map indicate that global coverage is nearly complete and that the map will improve upon a previous one derived from Polar Orbiting Geophysical Observatory (POGO) data. Digital processing of the Landsat image data and Magsat geophysical data can be used to create three-dimensional stereoscopic models for which Landsat images provide surface reference to deep structural anomalies. Comparative studies of national Landsat lineament maps, Magsat stereoscopic models, and metallogenic information derived from the Computerized Resources Information Bank (CRIB) inventory of U.S. mineral resources provide a way of identifying and selecting exploration areas that have mineral resource potential. Landsat images and computer-compatible tapes can provide new and better mosaics and also provide the capability for a closer look at promising sites. © 1983.

  18. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

    Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole-genome, and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources, and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105

  19. Resources for Teaching Astronomy.

    ERIC Educational Resources Information Center

    Grafton, Teresa; Suggett, Martin

    1991-01-01

    Resources that are available for teachers presenting astronomy in the National Curriculum are listed. Included are societies and organizations, resource centers and places to visit, planetaria, telescopes and binoculars, planispheres, star charts, night sky diaries, equipment, audiovisual materials, computer software, books, and magazines. (KR)

  20. Automation of the CFD Process on Distributed Computing Systems

    NASA Technical Reports Server (NTRS)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.

  1. Fast and Scalable Computation of the Forward and Inverse Discrete Periodic Radon Transform.

    PubMed

    Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios

    2016-01-01

    The discrete periodic Radon transform (DPRT) has been used extensively in applications that involve image reconstruction from projections. Beyond classic applications, the DPRT can also be used to compute fast convolutions that avoid the floating-point arithmetic associated with the fast Fourier transform. Unfortunately, the use of the DPRT has been limited by the need to compute a large number of additions and the need for a large number of memory accesses. This paper introduces a fast and scalable approach for computing the forward and inverse DPRT that is based on the use of: a parallel array of fixed-point adder trees; circular shift registers to remove the need for accessing external memory components when selecting the input data for the adder trees; an image block-based approach to DPRT computation that can fit the proposed architecture to available resources; and fast transpositions that are computed in one or a few clock cycles and do not depend on the size of the input image. As a result, for an N × N image (N prime), the proposed approach can compute up to N² additions per clock cycle. Compared with previous approaches, the scalable approach provides the fastest known implementations for different amounts of computational resources. For example, for a 251 × 251 image, using approximately 25% fewer flip-flops than a systolic implementation, the scalable DPRT is computed 36 times faster. For the fastest case, we introduce optimized architectures that can compute the DPRT and its inverse in just 2N + ⌈log₂ N⌉ + 1 and 2N + 3⌈log₂ N⌉ + B + 2 cycles, respectively, where B is the number of bits used to represent each input pixel. On the other hand, the scalable DPRT approach requires more one-bit additions than the systolic implementation, providing a tradeoff between speed and additional one-bit additions. All of the proposed DPRT architectures were implemented in VHSIC Hardware Description Language (VHDL).
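
    For reference, a direct (non-fast) computation of the forward DPRT is sketched below, in one common formulation for N prime; indexing conventions vary between papers, and the fast architectures above compute these same sums with adder trees and circular shifts.

        # Direct O(N^3) reference computation of the forward DPRT (N prime).
        import numpy as np

        def dprt(f):
            N = f.shape[0]                      # f is an N x N image, N prime
            r = np.zeros((N + 1, N), dtype=f.dtype)
            for m in range(N):                  # N "sloped" projections
                for d in range(N):
                    r[m, d] = sum(f[(d + m * j) % N, j] for j in range(N))
            r[N, :] = f.sum(axis=1)             # one additional row-sum projection
            return r

        img = np.arange(25).reshape(5, 5)       # N = 5 (prime), toy data
        print(dprt(img))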

  2. Exploring a Multiphysics Resolution Approach for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Estupinan Donoso, Alvaro Antonio; Peters, Bernhard

    2018-06-01

    Metal additive manufacturing (AM) is a fast-evolving technology aiming to efficiently produce complex parts while saving resources. Worldwide, active research is being performed to solve the existing challenges of this growing technique. Constant computational advances have enabled multiscale and multiphysics numerical tools that complement traditional physical experimentation. In this contribution, an advanced discrete-continuous concept is proposed to address the physical phenomena involved during laser powder bed fusion. The concept treats powder as discrete via the extended discrete element method, which predicts the thermodynamic state and phase change for each particle. The surrounding fluid is solved with multiphase computational fluid dynamics techniques to determine momentum, heat, gas, and liquid transfer. Thus, results track the positions and thermochemical history of individual particles in conjunction with the temperature and composition of the prevailing fluid phases. It is believed that this methodology can complement experimental research through analysis of the comprehensive results that can be extracted from it, enabling AM process optimization for parts qualification.

  3. Computer Literacy Project. A General Orientation in Basic Computer Concepts and Applications.

    ERIC Educational Resources Information Center

    Murray, David R.

    This paper proposes a two-part, basic computer literacy program for university faculty, staff, and students with no prior exposure to computers. The program described would introduce basic computer concepts and computing center service programs and resources; provide fundamental preparation for other computer courses; and orient faculty towards…

  4. 18 CFR 1314.10 - Additional provisions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Additional provisions. 1314.10 Section 1314.10 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY BOOK-ENTRY PROCEDURES FOR TVA POWER SECURITIES ISSUED THROUGH THE FEDERAL RESERVE BANKS § 1314.10 Additional provisions...

  5. 18 CFR 1314.10 - Additional provisions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Additional provisions. 1314.10 Section 1314.10 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY BOOK-ENTRY PROCEDURES FOR TVA POWER SECURITIES ISSUED THROUGH THE FEDERAL RESERVE BANKS § 1314.10 Additional provisions...

  6. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Additional information. 33.10 Section 33.10 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... § 33.10 Additional information. The Director of the Office of Energy Market Regulation, or his designee...

  7. Computational Science at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  8. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    PubMed

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.
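
    Up to notational convention, the robustness-of-magic monotone referenced above is the minimal L1 norm over real decompositions of a state into pure stabilizer states:

        \mathcal{R}(\rho) \;=\; \min_{x}\left\{\, \lVert x \rVert_{1} \;:\; \rho = \sum_{i} x_{i}\,\sigma_{i},\;\; \sigma_{i} \in \mathrm{STAB} \,\right\}
        % Normalization forces \sum_i x_i = 1, since \rho and each \sigma_i have unit trace;
        % larger \mathcal{R}(\rho) means a larger classical simulation overhead.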

  9. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    NASA Technical Reports Server (NTRS)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.

  10. Electronic neural network for dynamic resource allocation

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Eberhardt, S. P.; Daud, T.

    1991-01-01

    A VLSI implementable neural network architecture for dynamic assignment is presented. The resource allocation problems involve assigning members of one set (e.g. resources) to those of another (e.g. consumers) such that the global 'cost' of the associations is minimized. The network consists of a matrix of sigmoidal processing elements (neurons), where the rows of the matrix represent resources and columns represent consumers. Unlike previous neural implementations, however, association costs are applied directly to the neurons, reducing connectivity of the network to a VLSI-compatible O(number of neurons). Each row (and column) has an additional neuron associated with it to independently oversee activations of all the neurons in each row (and each column), providing a programmable 'k-winner-take-all' function. This function simultaneously enforces blocking (excitatory/inhibitory) constraints during convergence to control the number of active elements in each row and column within desired boundary conditions. Simulations show that the network, when implemented in fully parallel VLSI hardware, offers optimal (or near-optimal) solutions within only a fraction of a millisecond, for problems up to 128 resources and 128 consumers, orders of magnitude faster than conventional computing or heuristic search methods.
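
    For comparison with the conventional methods mentioned above, the same global-cost assignment problem can be solved with an off-the-shelf Hungarian-style solver; the cost matrix below is invented for illustration.

        # Conventional solution of the resource-to-consumer assignment problem.
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        cost = np.array([[4, 1, 3],
                         [2, 0, 5],
                         [3, 2, 2]])               # cost[r, c]: resource r -> consumer c

        rows, cols = linear_sum_assignment(cost)   # minimizes total assignment cost
        print(list(zip(rows, cols)), cost[rows, cols].sum())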

  11. Community-Authored Resources for Education

    ERIC Educational Resources Information Center

    Tinker, Robert; Linn, Marcia; Gerard, Libby; Staudt, Carolyn

    2010-01-01

    Textbooks are resources for learning that provide the structure, content, assessments, and teacher guidance for an entire course. Technology can provide a far better resource that provides the same functions, but takes full advantage of computers. This resource would be much more than text on a screen. It would use the best technology; it would be…

  12. Computer Literacy and Societal Impact of Computers: Education and Manpower.

    ERIC Educational Resources Information Center

    Hamblen, John W.

    The use of computers in education and general society is continually expanding, but the growing need for highly educated computer technologists will continue unless universities take major steps toward the reallocation of their resources in favor of the burgeoning computer science departments, and government and industry provide a significant…

  13. Resource utilization and costs during the initial years of lung cancer screening with computed tomography in Canada.

    PubMed

    Cressman, Sonya; Lam, Stephen; Tammemagi, Martin C; Evans, William K; Leighl, Natasha B; Regier, Dean A; Bolbocean, Corneliu; Shepherd, Frances A; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R; Mayo, John R; McWilliams, Annette; Couture, Christian; English, John C; Goffin, John; Hwang, David M; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J; Goss, Glenwood D; Nicholas, Garth; Seely, Jean M; Sekhon, Harmanjatinder S; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D; Tan, Wan C; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J

    2014-10-01

    It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer's perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400-$505) for the initial 18 months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553-$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone ($47,792; 95% CI, $43,254-$52,200; p = 0.061). In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative-intent treatment were lower than the average per-person cost of treating advanced-stage lung cancer, which infrequently results in a cure.

  14. Resource Utilization and Costs during the Initial Years of Lung Cancer Screening with Computed Tomography in Canada

    PubMed Central

    Lam, Stephen; Tammemagi, Martin C.; Evans, William K.; Leighl, Natasha B.; Regier, Dean A.; Bolbocean, Corneliu; Shepherd, Frances A.; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R.; Mayo, John R.; McWilliams, Annette; Couture, Christian; English, John C.; Goffin, John; Hwang, David M.; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J.; Goss, Glenwood D.; Nicholas, Garth; Seely, Jean M.; Sekhon, Harmanjatinder S.; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N.; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D.; Tan, Wan C.; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J.

    2014-01-01

    Background: It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Methods: Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer’s perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. Results: The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400–$505) for the initial 18 months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553–$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone ($47,792; 95% CI, $43,254–$52,200; p = 0.061). Conclusion: In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative-intent treatment were lower than the average per-person cost of treating advanced-stage lung cancer, which infrequently results in a cure. PMID:25105438

  15. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    PubMed Central

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and guaranteeing service for the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms. PMID:26473166
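
    A minimal sketch of the minimum-completion-time (MCT) family of baselines compared against above follows; the VM speeds and job lengths are invented, and the paper's V-MCT variant differs in detail.

        # MCT-style dispatch: send each arriving job to the VM that finishes it earliest.
        vm_speed = [1.0, 2.0, 4.0]   # instructions per unit time (invented)
        vm_ready = [0.0, 0.0, 0.0]   # time at which each VM becomes free

        def dispatch(job_len):
            # completion time on VM i = time it frees up + execution time there
            i = min(range(len(vm_speed)),
                    key=lambda i: vm_ready[i] + job_len / vm_speed[i])
            vm_ready[i] += job_len / vm_speed[i]
            return i

        for job in [8.0, 3.0, 5.0, 2.0]:
            print('job', job, '-> vm', dispatch(job))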

  16. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter.

    PubMed

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and guaranteeing service for the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms.

  17. A cross-sectional evaluation of computer literacy among medical students at a tertiary care teaching hospital in Mumbai (Bombay).

    PubMed

    Panchabhai, T S; Dangayach, N S; Mehta, V S; Patankar, C V; Rege, N N

    2011-01-01

    The computer usage capabilities of medical students, needed for the introduction of computer-aided learning, have not been adequately assessed. Cross-sectional study to evaluate computer literacy among medical students. Tertiary care teaching hospital in Mumbai, India. Participants were administered a 52-question questionnaire, designed to study their background, computer resources, computer usage, activities enhancing computer skills, and attitudes toward computer-aided learning (CAL). The data were classified on the basis of sex, native place, and year of medical school, and the computer resources were compared. Computer usage and attitudes toward CAL were assessed on a five-point Likert scale to calculate a computer usage score (CUS; maximum 55, minimum 11) and an attitude score (AS; maximum 60, minimum 12). The quartile distribution among the groups with respect to the CUS and AS was compared by chi-squared tests. The correlation between CUS and AS was then tested. Eight hundred and seventy-five students agreed to participate in the study and 832 completed the questionnaire. One hundred and twenty-eight questionnaires were excluded and 704 were analyzed. Outstation students had significantly fewer computer resources than local students (P<0.0001). The mean CUS for local students (27.0±9.2, mean±SD) was significantly higher than that for outstation students (23.2±9.05). No such difference was observed for the AS. The means of CUS and AS did not differ between males and females. The CUS and AS had positive but weak correlations for all subgroups. The weak correlation between AS and CUS for all students could be explained by the lack of computer resources or inadequate training to use computers for learning. Providing additional resources would benefit the subset of outstation students with fewer computer resources. This weak correlation between the attitudes and practices of all students needs to be investigated. We believe that this gap can be bridged
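
    A small sketch of the scoring and correlation analysis described above, using simulated Likert responses (the item counts follow the abstract; the data and sample size are illustrative):

        # Compute CUS (11 items) and AS (12 items) from 1-5 Likert responses,
        # then their Pearson correlation across respondents.
        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        usage = rng.integers(1, 6, size=(100, 11))     # CUS items -> score range 11-55
        attitude = rng.integers(1, 6, size=(100, 12))  # AS items  -> score range 12-60

        cus = usage.sum(axis=1)
        as_score = attitude.sum(axis=1)
        r, p = pearsonr(cus, as_score)
        print(f'CUS mean {cus.mean():.1f}, AS mean {as_score.mean():.1f}, r = {r:.2f}')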

  18. Computing with a single qubit faster than the computation quantum speed limit

    NASA Astrophysics Data System (ADS)

    Sinitsyn, Nikolai A.

    2018-02-01

    The possibility to save and process information in fundamentally indistinguishable states is the quantum mechanical resource that is not encountered in classical computing. I demonstrate that, if energy constraints are imposed, this resource can be used to accelerate information-processing without relying on entanglement or any other type of quantum correlations. In fact, there are computational problems that can be solved much faster, in comparison to currently used classical schemes, by saving intermediate information in nonorthogonal states of just a single qubit. There are also error correction strategies that protect such computations.

  19. Menu-driven cloud computing and resource sharing for R and Bioconductor.

    PubMed

    Bolouri, Hamid; Dulepet, Rajiv; Angerman, Michael

    2011-08-15

    We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. Contact: hbolouri@fhcrc.org.
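
    CRdata hides these steps behind point-and-click menus; roughly, the underlying AWS calls resemble the boto3 sketch below (the AMI ID, bucket, and key names are placeholders, and CRdata itself is written in Ruby, not Python).

        # Sketch of the AWS operations a service like CRdata automates.
        import boto3

        s3 = boto3.client('s3')
        ec2 = boto3.client('ec2')

        # Store a private R script on S3 (the bucket's policy governs access rights).
        s3.upload_file('analysis.R', 'my-private-bucket', 'scripts/analysis.R')

        # Launch a private compute node to run it.
        ec2.run_instances(
            ImageId='ami-0123456789abcdef0',   # placeholder machine image
            InstanceType='t2.micro',
            MinCount=1,
            MaxCount=1,
        )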

  20. Telescience Resource Kit

    NASA Technical Reports Server (NTRS)

    Schneider, Michelle

    2003-01-01

    This viewgraph presentation provides an overview of the Telescience Resource Kit, a PC-based telemetry and command system that will be used by scientists and engineers to monitor and control experiments located on board the International Space Station (ISS). Topics covered include: ISS payload telemetry and command flow, kit computer applications, kit telemetry capabilities, command capabilities, and training/testing capabilities.

  1. Computed Tomography (CT) - Spine

    MedlinePlus

    Computed tomography (CT) of the spine is ... What is CT Scanning of the Spine? Computed tomography, more commonly ...

  2. Towards a Cloud Computing Environment: Near Real-time Cloud Product Processing and Distribution for Next Generation Satellites

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Chee, T.; Minnis, P.; Palikonda, R.; Smith, W. L., Jr.; Spangenberg, D.

    2016-12-01

    The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) processes and derives near real-time (NRT) global cloud products from operational geostationary satellite imager datasets. These products are used in NRT to improve forecast models and aircraft icing warnings, and to support aircraft field campaigns. Next-generation satellites, such as the Japanese Himawari-8 and the upcoming NOAA GOES-R, present challenges for NRT data processing and product dissemination due to the increase in temporal and spatial resolution. The volume of data is expected to increase approximately 10-fold. This increase in data volume will require additional IT resources to keep up with the processing demands to satisfy NRT requirements, and such resources are not readily available due to cost and other technical limitations. To anticipate and meet these computing resource requirements, we have employed a hybrid cloud computing environment to augment the generation of SatCORPS products. This paper will describe the workflow to ingest, process, and distribute SatCORPS products and the technologies used. Lessons learned from working on both AWS clouds and GovCloud will be discussed: benefits, similarities, and differences that could impact the decision to use cloud computing and storage. A detailed cost analysis will be presented. In addition, future cloud utilization, parallelization, and architecture layout will be discussed for GOES-R.

  3. The Effects of Computer-Assisted Instruction on Student Achievement in Addition and Subtraction at First Grade Level.

    ERIC Educational Resources Information Center

    Spivey, Patsy M.

    This study was conducted to determine whether the traditional classroom approach to instruction involving the addition and subtraction of number facts (digits 0-6) is more or less effective than the traditional classroom approach plus a commercially-prepared computer game. A pretest-posttest control group design was used with two groups of first…

  4. Automatic Computer Mapping of Terrain

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.

    1971-01-01

    Computer processing of 17 wavelength bands of visible, reflective infrared, and thermal infrared scanner spectrometer data, and of three wavelength bands derived from color aerial film has resulted in successful automatic computer mapping of eight or more terrain classes in a Yellowstone National Park test site. The tests involved: (1) supervised and non-supervised computer programs; (2) special preprocessing of the scanner data to reduce computer processing time and cost, and improve the accuracy; and (3) studies of the effectiveness of the proposed Earth Resources Technology Satellite (ERTS) data channels in the automatic mapping of the same terrain, based on simulations, using the same set of scanner data. The following terrain classes have been mapped with greater than 80 percent accuracy in a 12-square-mile area with 1,800 feet of relief: (1) bedrock exposures, (2) vegetated rock rubble, (3) talus, (4) glacial kame meadow, (5) glacial till meadow, (6) forest, (7) bog, and (8) water. In addition, shadows of clouds and cliffs are depicted, but were greatly reduced by using preprocessing techniques.

  5. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    PubMed Central

    Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.

    2016-01-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy-to-use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676
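
    As a flavor of the discrete stochastic simulation StochSS provides, below is a minimal Gillespie direct-method sketch for an invented birth-death model (0 -> X at rate k1, X -> 0 at rate k2*X); it illustrates the algorithm only and is not StochSS's own API.

        # Gillespie direct method for a birth-death process (stationary mean ~ k1/k2).
        import random

        def ssa(k1=10.0, k2=0.1, x=0, t=0.0, t_end=50.0):
            while t < t_end:
                a1, a2 = k1, k2 * x          # reaction propensities
                a0 = a1 + a2
                t += random.expovariate(a0)  # exponential waiting time to next event
                if random.random() < a1 / a0:
                    x += 1                   # birth: 0 -> X
                else:
                    x -= 1                   # death: X -> 0
            return x

        print([ssa() for _ in range(5)])     # a few end-state samples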

  6. Computer Aided Manufacturing.

    ERIC Educational Resources Information Center

    Insolia, Gerard

    This document contains course outlines in computer-aided manufacturing developed for a business-industry technology resource center for firms in eastern Pennsylvania by Northampton Community College. The four units of the course cover the following: (1) introduction to computer-assisted design (CAD)/computer-assisted manufacturing (CAM); (2) CAM…

  7. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  8. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  9. MAROON BELLS-SNOWMASS WILDERNESS AND ADDITIONS, COLORADO.

    USGS Publications Warehouse

    Freeman, Val L.; Weisner, Robert C.

    1984-01-01

    The Maroon Bells-Snowmass Wilderness and Additions, located in western Colorado, was examined for mineral potential. Evidence of mineralization is widespread and numerous areas have either probable or substantiated mineral-resource potential for one or more of the following metals: gold, silver, lead, zinc, copper, and molybdenum. In addition, part of the wilderness has substantiated coal resource potential. There is little promise for the occurrence of oil and gas or geothermal resources.

  10. Resource Allocation Planning Helper (RALPH): Lessons learned

    NASA Technical Reports Server (NTRS)

    Durham, Ralph; Reilly, Norman B.; Springer, Joe B.

    1990-01-01

    The current task of the Resource Allocation Process includes the planning and apportionment of JPL's Ground Data System, composed of the Deep Space Network and Mission Control and Computing Center facilities. The addition of the data-driven, rule-based planning system RALPH has expanded the planning horizon from 8 weeks to 10 years and has resulted in large labor savings. Use of the system has also resulted in important improvements in science return through enhanced resource utilization. In addition, RALPH has been instrumental in supporting rapid turnaround for an increased volume of special what-if studies. The status of RALPH is briefly reviewed, with a focus on important lessons learned from the creation of a highly functional design team, through an evolutionary design and implementation period in which an AI shell was selected, prototyped, and ultimately abandoned, and through the fundamental changes to the very process that spawned the tool kit. Principal topics include proper integration of software tools within the planning environment, transition from prototype to delivered software, changes in the planning methodology as a result of evolving software capabilities, and creation of the ability to develop and process generic requirements to allow planning flexibility.

  11. Menu-driven cloud computing and resource sharing for R and Bioconductor

    PubMed Central

    Bolouri, Hamid; Angerman, Michael

    2011-01-01

    Summary: We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. Availability and Implementation: CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. Contact: hbolouri@fhcrc.org PMID:21685055

  12. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require just a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  13. ``Carbon Credits'' for Resource-Bounded Computations Using Amortised Analysis

    NASA Astrophysics Data System (ADS)

    Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin

    Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
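
    The potential method that underlies such amortised analyses can be stated in two lines (the standard formulation, not anything specific to this paper): with a non-negative potential \Phi on program states, the amortised cost of a step with actual cost c from state s to s' is

        \hat{c} \;=\; c \;+\; \Phi(s') \;-\; \Phi(s),
        % and summing over an execution telescopes, bounding the total actual cost:
        \sum_{i} c_{i} \;=\; \Phi(s_{0}) - \Phi(s_{n}) + \sum_{i} \hat{c}_{i} \;\le\; \Phi(s_{0}) + \sum_{i} \hat{c}_{i}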

  14. [Research strategies for feed additives and veterinary medicines from side products of Chinese medicine resources industrialization].

    PubMed

    Zhao, Ming; Duan, Jin-Ao; Zhang, Sen; Guo, Sheng; Su, Shu-Lan; Wu, Qi-Nan; Tang, Yu-Ping; Zeng, Jian-Guo

    2017-09-01

    Global antimicrobial resistance has been a big challenge to human health for years. A balance has to be struck between the safety of animal products and the use of antimicrobials in animal husbandry, and any methods that can minimize or even phase out the use of antimicrobials in animal husbandry should be encouraged. We herein describe research strategies for feed additives and veterinary medicines derived from the side products of Chinese medicine resources industrialization. These strategies kill two birds with one stone: besides the major purposes, the rational utilization of non-medicinal parts and wastes from the industrialization of Chinese herbal medicines is also achieved. Copyright© by the Chinese Pharmaceutical Association.

  15. Projections of costs, financing, and additional resource requirements for low- and lower middle-income country immunization programs over the decade, 2011-2020.

    PubMed

    Gandhi, Gian; Lydon, Patrick; Cornejo, Santiago; Brenzel, Logan; Wrobel, Sandra; Chang, Hugh

    2013-04-18

    The Decade of Vaccines Global Vaccine Action Plan has outlined a set of ambitious goals to broaden the impact and reach of immunization across the globe. A projections exercise has been undertaken to assess the costs, financing availability, and additional resource requirements to achieve these goals through the delivery of vaccines against 19 diseases across 94 low- and middle-income countries for the period 2011-2020. The exercise draws upon data from existing published and unpublished global forecasts, country immunization plans, and costing studies. A combination of an ingredients-based approach and use of approximations based on past spending has been used to generate vaccine and non-vaccine delivery costs for routine programs, as well as supplementary immunization activities (SIAs). Financing projections focused primarily on support from governments and the GAVI Alliance. Cost and financing projections are presented in constant 2010 US dollars (US$). Cumulative total costs for the decade are projected to be US$57.5 billion, with 85% for routine programs and the remaining 15% for SIAs. Delivery costs account for 54% of total cumulative costs, and vaccine costs make up the remainder. A conservative estimate of total financing for immunization programs is projected to be $34.3 billion over the decade, with country governments financing 65%. These projections imply a cumulative funding gap of $23.2 billion. About 57% of the total resources required to close the funding gap are needed just to maintain existing programs and scale up other currently available vaccines (i.e., before adding in the additional costs of vaccines still in development). Efforts to mobilize additional resources, manage program costs, and establish mutual accountability between countries and development partners will all be necessary to ensure the goals of the Decade of Vaccines are achieved. Establishing or building on existing mechanisms to more comprehensively track resources and

  16. Utility functions and resource management in an oversubscribed heterogeneous computing environment

    DOE PAGES

    Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...

    2014-09-26

    We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate high oversubscription as well as earn significant utility.
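
    A toy version of the idea described above is sketched below; the heuristic, decay model, and numbers are invented for illustration and are much simpler than the paper's heuristics.

        # Utility-aware dispatch with dropping: serve high-rate tasks first and
        # drop tasks whose utility at completion would fall below a threshold.
        def utility(u0, decay, t):
            return u0 * (decay ** t)             # time-varying utility if completed at time t

        def run(tasks, drop_threshold=0.5):
            # tasks: list of (initial_utility, decay_per_unit_time, duration)
            t, earned = 0.0, 0.0
            # order by utility earned per unit time, estimated at each task's own completion
            queue = sorted(tasks, key=lambda k: -utility(k[0], k[1], k[2]) / k[2])
            for u0, decay, dur in queue:
                u = utility(u0, decay, t + dur)  # utility if started now
                if u < drop_threshold:           # drop low utility-earning tasks
                    continue
                t += dur
                earned += u
            return earned

        print(run([(10, 0.9, 2.0), (4, 0.99, 1.0), (8, 0.5, 3.0)]))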

  17. Data Resources for the Computer-Guided Discovery of Bioactive Natural Products.

    PubMed

    Chen, Ya; de Bruyn Kops, Christina; Kirchmair, Johannes

    2017-09-25

    Natural products from plants, animals, marine life, fungi, bacteria, and other organisms are an important resource for modern drug discovery. Their biological relevance and structural diversity make natural products good starting points for drug design. Natural product-based drug discovery can benefit greatly from computational approaches, which are a valuable precursor or supplementary method to in vitro testing. We present an overview of 25 virtual and 31 physical natural product libraries that are useful for applications in cheminformatics, in particular virtual screening. The overview includes detailed information about each library, the extent of its structural information, and the overlap between different sources of natural products. In terms of chemical structures, there is a large overlap between freely available and commercial virtual natural product libraries. Of particular interest for drug discovery is that at least ten percent of known natural products are readily purchasable and many more natural products and derivatives are available through on-demand sourcing, extraction and synthesis services. Many of the readily purchasable natural products are of small size and hence of relevance to fragment-based drug discovery. There are also an increasing number of macrocyclic natural products and derivatives becoming available for screening.

  18. Techniques for computer-aided analysis of ERTS-1 data, useful in geologic, forest and water resource surveys. [Colorado Rocky Mountains

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M.

    1974-01-01

    Forestry, geology, and water resource applications were the focus of this study, which involved the use of computer-implemented pattern-recognition techniques to analyze ERTS-1 data. The results have proven the value of computer-aided analysis techniques, even in areas of mountainous terrain. Several analysis capabilities were developed during these ERTS-1 investigations. A procedure to rotate, deskew, and geometrically scale the MSS data results in 1:24,000 scale printouts that can be directly overlaid on 7.5-minute U.S.G.S. topographic maps. Several scales of computer-enhanced "false color-infrared" composites of MSS data can be obtained from a digital display unit and emphasize the tremendous detail present in the ERTS-1 data. A grid can also be superimposed on the displayed data to aid in specifying areas of interest.

  19. Working with Computers: Computer Orientation for Foreign Students.

    ERIC Educational Resources Information Center

    Barlow, Michael

    Designed as a resource for foreign students, this book includes instructions not only on how to use computers, but also on how to use them to complete academic work more efficiently. Part I introduces the basic operations of mainframes and microcomputers and the major areas of computing, i.e., file management, editing, communications, databases,…

  20. A Computer Security Course in the Undergraduate Computer Science Curriculum.

    ERIC Educational Resources Information Center

    Spillman, Richard

    1992-01-01

    Discusses the importance of computer security and considers criminal, national security, and personal privacy threats posed by security breakdown. Several examples are given, including incidents involving computer viruses. Objectives, content, instructional strategies, resources, and a sample examination for an experimental undergraduate computer…

  1. Campus Computing Environment: University of Kentucky.

    ERIC Educational Resources Information Center

    CAUSE/EFFECT, 1989

    1989-01-01

    A dramatic growth in computing and communications was precipitated largely by the leadership of President David Roselle at the University of Kentucky. A new operational structure of information resource management includes not only computing (academic and administrative) and communications, instructional resources, and printing/mailing services,…

  2. Control-display mapping in brain-computer interfaces.

    PubMed

    Thurlings, Marieke E; van Erp, Jan B F; Brouwer, Anne-Marie; Blankertz, Benjamin; Werkhoven, Peter

    2012-01-01

    Event-related potential (ERP) based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. When using a tactile ERP-BCI for navigation, mapping is required between navigation directions on a visual display and unambiguously corresponding tactile stimuli (tactors) from a tactile control device: control-display mapping (CDM). We investigated the effect of congruent (both display and control horizontal, or both vertical) and incongruent (vertical display, horizontal control) CDMs on task performance, the ERP, and potential BCI performance. Ten participants attended to a target (determined via CDM) in a stream of sequentially vibrating tactors. We show that congruent CDM yields the best task performance, enhances the P300, and results in increased estimated BCI performance. This suggests a reduced availability of attentional resources when operating an ERP-BCI with incongruent CDM. Additionally, we found an enhanced N2 for incongruent CDM, which indicates a conflict between visual display and tactile control orientations. In short, incongruency in control-display mapping reduces task performance. In this study, brain responses and task and system performance are related to (in)congruent mapping of command options and the corresponding stimuli in a brain-computer interface (BCI). Directional congruency reduces task errors, increases available attentional resources, improves BCI performance, and thus facilitates human-computer interaction.

  3. BNL ATLAS Grid Computing

    ScienceCinema

    Michael Ernst

    2017-12-09

    As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

  4. Using additive manufacturing in accuracy evaluation of reconstructions from computed tomography.

    PubMed

    Smith, Erin J; Anstey, Joseph A; Venne, Gabriel; Ellis, Randy E

    2013-05-01

    Bone models derived from patient imaging and fabricated using additive manufacturing technology have many potential uses including surgical planning, training, and research. This study evaluated the accuracy of bone surface reconstruction of two diarthrodial joints, the hip and shoulder, from computed tomography. Image segmentation of the tomographic series was used to develop a three-dimensional virtual model, which was fabricated using fused deposition modelling. Laser scanning was used to compare cadaver bones, printed models, and intermediate segmentations. The overall bone reconstruction process had a reproducibility of 0.3 ± 0.4 mm. Production of the model had an accuracy of 0.1 ± 0.1 mm, while the segmentation had an accuracy of 0.3 ± 0.4 mm, indicating that segmentation accuracy was the key factor in reconstruction. Generally, the shape of the articular surfaces was reproduced accurately, with poorer accuracy near the periphery of the articular surfaces, particularly in regions with periosteum covering and where osteophytes were apparent.
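
    A common way to quantify the kind of surface deviation this study reports (mean ± SD between laser-scanned models and reference geometry) is a nearest-neighbour distance between point clouds. The sketch below, using scipy's cKDTree and synthetic data, illustrates the general approach; it is not the study's actual pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(reference_pts, test_pts):
    """Mean +/- SD of nearest-neighbour distances from test to reference cloud,
    the kind of summary reported as 'accuracy 0.1 +/- 0.1 mm' in the record."""
    tree = cKDTree(reference_pts)
    d, _ = tree.query(test_pts)   # distance of each test point to the reference
    return d.mean(), d.std()

# Synthetic example: a unit sphere standing in for a bone surface, plus a
# slightly noisy copy standing in for a laser scan of the printed model.
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
scan = pts + rng.normal(scale=0.002, size=pts.shape)
print(surface_deviation(pts, scan))
```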

  5. Computer Simulation and Digital Resources for Plastic Surgery Psychomotor Education.

    PubMed

    Diaz-Siso, J Rodrigo; Plana, Natalie M; Stranix, John T; Cutting, Court B; McCarthy, Joseph G; Flores, Roberto L

    2016-10-01

    Contemporary plastic surgery residents are increasingly challenged to learn a greater number of complex surgical techniques within a limited period. Surgical simulation and digital education resources have the potential to address some limitations of the traditional training model, and have been shown to accelerate knowledge and skills acquisition. Although animal, cadaver, and bench models are widely used for skills and procedure-specific training, digital simulation has not been fully embraced within plastic surgery. Digital educational resources may play a future role in a multistage strategy for skills and procedures training. The authors present two virtual surgical simulators addressing procedural cognition for cleft repair and craniofacial surgery. Furthermore, the authors describe how partnerships among surgical educators, industry, and philanthropy can be a successful strategy for the development and maintenance of digital simulators and educational resources relevant to plastic surgery training. It is our responsibility as surgical educators not only to create these resources, but to demonstrate their utility for enhanced trainee knowledge and technical skills development. Currently available digital resources should be evaluated in partnership with plastic surgery educational societies to guide trainees and practitioners toward effective digital content.

  6. Infrastructures for Distributed Computing: the case of BESIII

    NASA Astrophysics Data System (ADS)

    Pellegrino, J.

    2018-05-01

    BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types, such as cluster, grid, cloud, and volunteer computing. About 15 sites of the BESIII Collaboration from all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDIRAC, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources from commercial clouds, the computing capacity can scale accordingly in order to deal with any burst demand. General computing models are addressed herewith, with particular focus on the BESIII infrastructure, along with new computing tools and upcoming infrastructures.

  7. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    DOE PAGES

    Drawert, Brian; Hellander, Andreas; Bales, Ben; ...

    2016-12-08

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy-to-use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
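
    Discrete stochastic simulation engines of the kind StochSS wraps are typically built on Gillespie's stochastic simulation algorithm (SSA). The following minimal direct-method SSA for a toy birth-death system illustrates the underlying technique; it is not StochSS's engine or API, and the reaction model is invented for illustration.

```python
import numpy as np

def ssa(x0, stoich, rates, t_end, seed=0):
    """Gillespie direct method: exact trajectory of a discrete reaction system.

    x0     : initial copy numbers, shape (n_species,)
    stoich : state change per reaction, shape (n_reactions, n_species)
    rates  : function mapping state -> propensity of each reaction
    """
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = rates(x)
        a0 = a.sum()
        if a0 == 0:                       # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)  # which reaction fires
        x += stoich[j]
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)

# Toy birth-death model: 0 -> S at rate 10, S -> 0 at rate 0.1 * S.
stoich = np.array([[+1], [-1]])
rates = lambda x: np.array([10.0, 0.1 * x[0]])
t, s = ssa(x0=[0], stoich=stoich, rates=rates, t_end=100.0)
print(f"final copy number: {s[-1, 0]:.0f}")   # fluctuates around 10/0.1 = 100
```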

  8. Tracking the Flow of Resources in Electronic Waste - The Case of End-of-Life Computer Hard Disk Drives.

    PubMed

    Habib, Komal; Parajuly, Keshav; Wenzel, Henrik

    2015-10-20

    Recovery of resources, in particular metals, from waste flows is widely seen as a prioritized option to reduce their potential supply constraints in the future. The current waste electrical and electronic equipment (WEEE) treatment system is more focused on bulk metals, where the recycling rate of specialty metals, such as rare earths, is negligible compared to their increasing use in modern products, such as electronics. This study investigates the challenges in recovering these resources in the existing WEEE treatment system, illustrated by following the material flows of resources in a conventional WEEE treatment plant in Denmark. Computer hard disk drives (HDDs) containing neodymium-iron-boron (NdFeB) magnets were selected as the case product for this experiment. The resulting output fractions were tracked until their final treatment in order to estimate the recovery potential of rare earth elements (REEs) and other resources contained in HDDs. The results show that out of the 244 kg of HDDs treated, 212 kg, comprising mainly aluminum and steel, can ultimately be recovered from the metallurgic process. The results further demonstrate the complete loss of REEs in the existing shredding-based WEEE treatment processes. Dismantling and separate processing of NdFeB magnets from their end-use products can be a preferable option to shredding; however, it remains a technological and logistic challenge for the existing system.

  9. Undergraduate computational physics projects on quantum computing

    NASA Astrophysics Data System (ADS)

    Candela, D.

    2015-08-01

    Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge of introductory quantum mechanics through the properties of spin 1/2 is assumed. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
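
    To give a flavor of such a student project, the sketch below simulates Grover's search on a four-qubit register with plain NumPy state vectors. The uniform superposition, sign-flip oracle, and inversion-about-the-mean diffusion step are the textbook algorithm; the code itself is an illustration rather than material from the article.

```python
import numpy as np

n_qubits, marked = 4, 11            # search over 2**4 = 16 items for index 11
N = 2 ** n_qubits

state = np.full(N, 1 / np.sqrt(N))  # uniform superposition (Hadamard on all qubits)

oracle = np.ones(N); oracle[marked] = -1          # phase flip on the marked item
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state *= oracle                               # oracle: flip sign of marked amplitude
    state = 2 * state.mean() - state              # diffusion: inversion about the mean

probs = state ** 2                                # amplitudes are real here
print(f"P(marked) after {iterations} iterations: {probs[marked]:.3f}")
```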

  10. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  11. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
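
    A discrete event simulation of this kind reduces to an event queue plus resource counters. The minimal M/M/c-style sketch below (exponential arrivals and service times, a fixed pool of servers) illustrates the approach; the distributions and parameters are illustrative assumptions, not the authors' model.

```python
import heapq, random

def simulate(arrival_rate, service_rate, servers, n_requests, seed=1):
    """Minimal discrete event simulation of a c-server pool (M/M/c-style)."""
    random.seed(seed)
    events = []                                  # (time, kind) min-heap
    t = 0.0
    for _ in range(n_requests):                  # pre-generate Poisson arrivals
        t += random.expovariate(arrival_rate)
        heapq.heappush(events, (t, "arrival"))
    busy, queue, waits = 0, [], []
    while events:
        now, kind = heapq.heappop(events)
        if kind == "arrival":
            if busy < servers:
                busy += 1
                heapq.heappush(events, (now + random.expovariate(service_rate), "done"))
                waits.append(0.0)
            else:
                queue.append(now)                # request waits for a free server
        else:                                    # a service completion
            if queue:
                waits.append(now - queue.pop(0))
                heapq.heappush(events, (now + random.expovariate(service_rate), "done"))
            else:
                busy -= 1
    return sum(waits) / len(waits)

print(f"mean wait: {simulate(8.0, 1.0, servers=10, n_requests=20000):.3f}")
```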

  12. GapMap: Enabling Comprehensive Autism Resource Epidemiology

    PubMed Central

    Albert, Nikhila; Schwartz, Jessey; Du, Michael

    2017-01-01

    Background: For individuals with autism spectrum disorder (ASD), finding resources can be a lengthy and difficult process. The difficulty in obtaining global, fine-grained autism epidemiological data hinders researchers from quickly and efficiently studying large-scale correlations among ASD, environmental factors, and geographical and cultural factors. Objective: The objective of this study was to define resource load and resource availability for families affected by autism and subsequently create a platform to enable a more accurate representation of prevalence rates and resource epidemiology. Methods: We created a mobile application, GapMap, to collect locational, diagnostic, and resource use information from individuals with autism to compute accurate prevalence rates and better understand autism resource epidemiology. GapMap is hosted on AWS S3, running on a React and Redux front-end framework. The backend framework comprises an AWS API Gateway and Lambda Function setup, with secure and scalable endpoints for retrieving prevalence and resource data and for submitting participant data. Measures of autism resource scarcity, including resource load, resource availability, and resource gaps, were defined and preliminarily computed using simulated or scraped data. Results: The average distance from an individual in the United States to the nearest diagnostic center is approximately 182 km (50 miles), with a standard deviation of 235 km (146 miles). The average distance from an individual with ASD to the nearest diagnostic center, however, is only 32 km (20 miles), suggesting that individuals who live closer to diagnostic services are more likely to be diagnosed. Conclusions: This study confirmed that individuals closer to diagnostic services are more likely to be diagnosed and proposes GapMap, a means to measure and enable the alleviation of increasingly overburdened diagnostic centers and resource-poor areas where parents are unable to diagnose their children.
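
    The distance statistics in the record come down to computing, for each individual, the great-circle distance to the nearest diagnostic center. A standard haversine-plus-minimum sketch follows; the coordinates are made up for illustration, and none of this is GapMap's actual code.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def nearest_center_km(people, centers):
    """For each (lat, lon) person, distance to the closest diagnostic center."""
    people, centers = np.asarray(people), np.asarray(centers)
    d = haversine_km(people[:, None, 0], people[:, None, 1],
                     centers[None, :, 0], centers[None, :, 1])
    return d.min(axis=1)

people = [(40.71, -74.01), (34.05, -118.24), (41.88, -87.63)]   # made-up points
centers = [(42.36, -71.06), (37.77, -122.42)]                   # made-up centers
d = nearest_center_km(people, centers)
print(f"mean {d.mean():.0f} km, sd {d.std():.0f} km")
```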

  13. Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Jayatilaka, B.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mohapatra, A.; Marra Da Silva, J.; Mason, D.; Perez-Calero Yzquierdo, A.; Piperov, S.; Tiradani, A.; Verguilov, V.; CMS Collaboration

    2017-10-01

    The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users of resources at CMS sites beyond those pledged to WLCG. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.

  14. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet, and to use that power to run large-scale environmental simulations and models serving the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
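
    The core of such a platform is a coordinator that hands out small work units over HTTP and collects results. The sketch below shows that queue-and-collect pattern using only Python's standard library; the record's framework is JavaScript-based and its protocol is not described, so every endpoint and payload here is an assumption.

```python
import json, queue
from http.server import BaseHTTPRequestHandler, HTTPServer

# Work units: small spatial chunks of a model run (placeholder payloads).
work = queue.Queue()
for chunk_id in range(100):
    work.put({"chunk": chunk_id, "cells": list(range(chunk_id * 10, chunk_id * 10 + 10))})
results = {}

class Coordinator(BaseHTTPRequestHandler):
    def do_GET(self):                       # a volunteer asks for a work unit
        try:
            unit = work.get_nowait()
        except queue.Empty:
            unit = None                     # nothing left to hand out
        self._reply(unit)

    def do_POST(self):                      # a volunteer posts a finished result
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        results[body["chunk"]] = body["value"]
        self._reply({"ok": True})

    def _reply(self, obj):
        data = json.dumps(obj).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), Coordinator).serve_forever()
```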

  15. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support, and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the scalability of the Navy's Hybrid Coordinate Ocean Model (HYCOM), by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise, and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will

  16. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources has increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to use the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  17. Neutron Characterization for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Watkins, Thomas; Bilheux, Hassina; An, Ke; Payzant, Andrew; DeHoff, Ryan; Duty, Chad; Peter, William; Blue, Craig; Brice, Craig A.

    2013-01-01

    Oak Ridge National Laboratory (ORNL) is leveraging decades of experience in neutron characterization of advanced materials together with resources such as the Spallation Neutron Source (SNS) and the High Flux Isotope Reactor (HFIR) shown in Fig. 1 to solve challenging problems in additive manufacturing (AM). Additive manufacturing, or three-dimensional (3-D) printing, is a rapidly maturing technology wherein components are built by selectively adding feedstock material at locations specified by a computer model. The majority of these technologies use thermally driven phase change mechanisms to convert the feedstock into functioning material. As the molten material cools and solidifies, the component is subjected to significant thermal gradients, generating significant internal stresses throughout the part (Fig. 2). As layers are added, inherent residual stresses cause warping and distortions that lead to geometrical differences between the final part and the original computer-generated design. This effect also limits the geometries that can be fabricated using AM, such as thin-walled, high-aspect-ratio, and overhanging structures. Distortion may be minimized by intelligent toolpath planning or strategic placement of support structures, but these approaches are not well understood and are often "Edisonian" in nature. Residual stresses can also impact component performance during operation. For example, in a thermally cycled environment such as a high-pressure turbine engine, residual stresses can cause components to distort unpredictably. Different thermal treatments on as-fabricated AM components have been used to minimize residual stress, but components still retain a nonhomogeneous stress state and/or demonstrate a relaxation-derived geometric distortion. Industry, federal laboratory, and university collaboration is needed to address these challenges and enable the U.S. to compete in the global market. Work is currently being conducted on AM technologies at the ORNL

  18. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    NASA Astrophysics Data System (ADS)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA used the same familiar services as for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper

  19. Reconciling resource utilization and resource selection functions

    USGS Publications Warehouse

    Hooten, Mevin B.; Hanks, Ephraim M.; Johnson, Devin S.; Alldredge, Mat W.

    2013-01-01

    Summary: 1. Analyses based on utilization distributions (UDs) have been ubiquitous in animal space use studies, largely because they are computationally straightforward and relatively easy to employ. Conventional applications of resource utilization functions (RUFs) suggest that estimates of UDs can be used as response variables in a regression involving spatial covariates of interest. 2. It has been claimed that contemporary implementations of RUFs can yield inference about resource selection, although to our knowledge, an explicit connection has not been described. 3. We explore the relationships between RUFs and resource selection functions (RSFs) from a heuristic and simulation perspective. We investigate several sources of potential bias in the estimation of resource selection coefficients using RUFs (e.g. the spatial covariance modelling that is often used in RUF analyses). 4. Our findings illustrate that RUFs can, in fact, serve as approximations to RSFs and are capable of providing inference about resource selection, but only with some modification and under specific circumstances. 5. Using real telemetry data as an example, we provide guidance on which methods for estimating resource selection may be more appropriate and in which situations. In general, if telemetry data are assumed to arise as a point process, then RSF methods may be preferable to RUFs; however, modified RUFs may provide less biased parameter estimates when the data are subject to location error.

  20. An Overview of Cloud Computing in Distributed Systems

    NASA Astrophysics Data System (ADS)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. The cloud plays an important role in large organizations for maintaining huge volumes of data with limited resources. The cloud also enables resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and some of the basic security issues pertaining to the cloud.

  1. A data management system to enable urgent natural disaster computing

    NASA Astrophysics Data System (ADS)

    Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton

    2014-05-01

    Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes, and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and to reduce casualties. Computer simulations can generate information and provide predictions to facilitate this decision-making process. Getting the data to the required resources is a critical requirement for enabling the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resource. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to changes, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that deadlines can be met, a fault tolerance manager to increase the reliability of the platform, and a data manager to initiate and perform the data activities. These managers enable the selection of the most appropriate resource, transfer protocol, etc., such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline will render the computation less useful, resulting in a cost that can have severe

  2. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of the cloud computing platform, which is in essence a substitution and exchange of resource service models that meets users' needs for different resources after changes and adjustments in multiple aspects. Cloud computing has advantages in many respects: it not only reduces the difficulty of operating the system but also makes it easy for users to search for, acquire, and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in the operation process. The popularization and promotion of computer technology drive people to create digital library models, whose core idea is to strengthen the optimal management of library resource information through computers and to construct a high-performance inquiry and search platform, allowing users to access the necessary information resources at any time. Cloud computing makes it possible to distribute computations across a large number of distributed computers and hence implement a connected service over multiple computers. Digital libraries, as a typical representative of cloud computing applications, can thus be used to carry out an analysis of the key technologies of cloud computing.

  3. Inspection of additive manufactured parts using laser ultrasonics

    NASA Astrophysics Data System (ADS)

    Lévesque, D.; Bescond, C.; Lord, M.; Cao, X.; Wanjara, P.; Monchalin, J.-P.

    2016-02-01

    Additive manufacturing is a novel technology of high importance for the global sustainability of resources. As additive manufacturing typically involves layer-by-layer fusion of the feedstock (wire or powder), an important characteristic of the fabricated metallic structural parts, such as those used in aero-engines, is their performance, which is strongly related to the presence of defects such as cracks, lack of fusion or bonding between layers, and porosity. For this purpose, laser ultrasonics is very attractive due to its non-contact nature and is especially suited for the analysis of parts of complex geometry. In addition, the technique is well adapted to online implementation and real-time measurement during the manufacturing process. The inspection can be performed from either the top deposited layer or the underside of the substrate, and the defects can be visualized using laser ultrasonics combined with the synthetic aperture focusing technique (SAFT). In this work, a variety of results obtained off-line on INCONEL® 718 and Ti-6Al-4V coupons that were manufactured using laser powder, laser wire, or electron beam wire deposition are reported, and most detected defects were further confirmed by X-ray micro-computed tomography.
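
    At its core, the SAFT reconstruction mentioned here is a delay-and-sum: each image pixel accumulates, over scan positions, the A-scan amplitude at the corresponding round-trip time. The sketch below implements that basic idea on synthetic data; real laser-ultrasonic SAFT processing involves refinements (aperture limits, apodization, signal conditioning) not shown.

```python
import numpy as np

def saft(ascans, scan_x, z_grid, x_grid, c, fs):
    """Basic delay-and-sum SAFT: ascans has shape (n_positions, n_samples)."""
    image = np.zeros((len(z_grid), len(x_grid)))
    n_samples = ascans.shape[1]
    for i, z in enumerate(z_grid):
        for j, x in enumerate(x_grid):
            # round-trip time from each scan position to the pixel and back
            t = 2 * np.sqrt((scan_x - x) ** 2 + z ** 2) / c
            idx = np.round(t * fs).astype(int)
            valid = idx < n_samples
            image[i, j] = ascans[valid, idx[valid]].sum()
    return image

# Synthetic data: one point scatterer at (x=0 mm, z=10 mm), c ~ 6.3 mm/us (steel).
c, fs = 6.3, 100.0                        # mm/us and samples/us
scan_x = np.linspace(-10, 10, 41)         # scan positions in mm
ascans = np.zeros((scan_x.size, 512))
tof = 2 * np.sqrt(scan_x ** 2 + 10.0 ** 2) / c
ascans[np.arange(scan_x.size), np.round(tof * fs).astype(int)] = 1.0

z_grid, x_grid = np.linspace(5, 15, 51), np.linspace(-5, 5, 51)
img = saft(ascans, scan_x, z_grid, x_grid, c, fs)
zi, xi = np.unravel_index(img.argmax(), img.shape)
print(f"peak at z={z_grid[zi]:.1f} mm, x={x_grid[xi]:.1f} mm")  # expect z=10, x=0
```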

  4. Quantum computing with incoherent resources and quantum jumps.

    PubMed

    Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R

    2012-04-27

    Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.

  5. Research on Computers in Mathematics Education, IV. The Use of Computers in Mathematics Education Resource Series.

    ERIC Educational Resources Information Center

    Kieren, Thomas E.

    This last paper in a set of four reviews research on a wide variety of computer applications in the mathematics classroom. It covers computer-based instruction, especially drill-and-practice and tutorial modes; computer-managed instruction; and computer-augmented problem-solving. Analytical comments on the findings and status of the research are…

  6. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet, and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space, and software services according to demand. It can concentrate all the computing resources and manage them automatically through software without human intervention. This frees application providers from tedious details and lets them concentrate on their business, which is advantageous for innovation and cost reduction. The ultimate goal of cloud computing is to provide calculation, services, and applications as a public facility, so that people can use computer resources just like water, electricity, gas, and the telephone. Currently, the understanding of cloud computing is developing and changing constantly, and cloud computing still has no unanimous definition. This paper describes the three main service forms of cloud computing: SaaS, PaaS, and IaaS; compares the definitions of cloud computing given by Google, Amazon, IBM, and other companies; summarizes the basic characteristics of cloud computing; and emphasizes key technologies such as data storage, data management, virtualization, and the programming model.

  7. Natural resource protection on buffer lands: integrating resource evaluation and economics.

    PubMed

    Burger, Joanna; Gochfeld, Michael; Greenberg, Michael

    2008-07-01

    Environmental managers are faced with the wise management, sustainability, and stewardship of their land for natural resource values. This task requires the integration of ecological evaluation with economics. Using the Department of Energy (DOE) as a case study, we examine the why, who, what, where, when, and how questions about assessment and natural resource protection of buffer lands. We suggest that managers evaluate natural resources for a variety of reasons that revolve around land use, remediation/restoration, protection of natural environments, and natural resource damage assessment (NRDA). While DOE is the manager of its lands, and thus its natural resources, a range of natural resource trustees and public officials have co-responsibility. We distinguish four types of natural resource evaluations: (1) the resources themselves (to the ecosystem), (2) the value of specific resources to people (e.g. hunting/fishing/bird-watching/herbal medicines), (3) the value of ecological resources to services for communities (e.g. clean air/water), and (4) the value of the intact ecosystems (e.g. forests or estuaries). Resource evaluations should occur initially to provide information about the status of those resources, and continued evaluation is required to provide trends data. Additional natural resource evaluation is required before, during and immediately following changes in land use, and remediation or restoration. Afterwards, additional monitoring and evaluations are required to evaluate the effects of the land use change or the efficacy of remediation/restoration. There are a wide range of economic methods available to evaluate natural resources, but the methods chosen depend upon the nature of the resource being evaluated, the purpose of the evaluation, and the needs of the agencies, natural resource trustees, public officials, and the public. We discuss the uses, and the advantages and disadvantages of different evaluation methods for natural resources.

  8. Natural resource protection on buffer lands: integrating resource evaluation and economics

    PubMed Central

    Gochfeld, Michael; Greenberg, Michael

    2014-01-01

    Environmental managers are faced with the wise management, sustainability, and stewardship of their land for natural resource values. This task requires the integration of ecological evaluation with economics. Using the Department of Energy (DOE) as a case study, we examine the why, who, what, where, when, and how questions about assessment and natural resource protection of buffer lands. We suggest that managers evaluate natural resources for a variety of reasons that revolve around land use, remediation/restoration, protection of natural environments, and natural resource damage assessment (NRDA). While DOE is the manager of its lands, and thus its natural resources, a range of natural resource trustees and public officials have co-responsibility. We distinguish four types of natural resource evaluations: (1) the resources themselves (to the ecosystem), (2) the value of specific resources to people (e.g. hunting/fishing/bird-watching/herbal medicines), (3) the value of ecological resources to services for communities (e.g. clean air/water), and (4) the value of the intact ecosystems (e.g. forests or estuaries). Resource evaluations should occur initially to provide information about the status of those resources, and continued evaluation is required to provide trends data. Additional natural resource evaluation is required before, during and immediately following changes in land use, and remediation or restoration. Afterwards, additional monitoring and evaluations are required to evaluate the effects of the land use change or the efficacy of remediation/restoration. There are a wide range of economic methods available to evaluate natural resources, but the methods chosen depend upon the nature of the resource being evaluated, the purpose of the evaluation, and the needs of the agencies, natural resource trustees, public officials, and the public. We discuss the uses, and the advantages and disadvantages of different evaluation methods for natural resources.

  9. Self-guaranteed measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Hajdušek, Michal

    2018-05-01

    In order to guarantee the output of a quantum computation, we usually assume that the component devices are trusted. However, when the total computation process is large, it is not easy to guarantee the whole system in the presence of scaling effects, unexpected noise, or unaccounted-for correlations between several subsystems. If we do not trust the measurement basis or the prepared entangled state, we need to worry about such uncertainties. To this end, we propose a self-guaranteed protocol for verification of quantum computation under the scheme of measurement-based quantum computation, where no prior-trusted devices (measurement basis or entangled state) are needed. The approach we present enables the implementation of verifiable quantum computation using the measurement-based model in the context of a particular instance of delegated quantum computation, where the server prepares the initial computational resource and sends it to the client, who drives the computation by single-qubit measurements. Applying self-testing procedures, we are able to verify the initial resource as well as the operation of the quantum devices, and hence the computation itself. The overhead of our protocol scales with the size of the initial resource state to the power of 4 times the natural logarithm of the initial state's size.

  10. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    NASA Astrophysics Data System (ADS)

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-01

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  11. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    DOE PAGES

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; ...

    2015-12-29

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  12. Nonthermal Quantum Channels as a Thermodynamical Resource.

    PubMed

    Navascués, Miguel; García-Pintos, Luis Pedro

    2015-07-03

    Quantum thermodynamics can be understood as a resource theory, whereby thermal states are free and the only allowed operations are unitary transformations commuting with the total Hamiltonian of the system. Previous literature on the subject has just focused on transformations between different state resources, overlooking the fact that quantum operations which do not commute with the total energy also constitute a potentially valuable resource. In this Letter, given a number of nonthermal quantum channels, we study the problem of how to integrate them in a thermal engine so as to distill a maximum amount of work. We find that, in the limit of asymptotically many uses of each channel, the distillable work is an additive function of the considered channels, computable for both finite dimensional quantum operations and bosonic channels. We apply our results to bound the amount of distillable work due to the natural nonthermal processes postulated in the Ghirardi-Rimini-Weber (GRW) collapse model. We find that, although GRW theory predicts the possibility of extracting work from the vacuum at no cost, the power which a collapse engine could, in principle, generate is extremely low.

  13. Nonthermal Quantum Channels as a Thermodynamical Resource

    NASA Astrophysics Data System (ADS)

    Navascués, Miguel; García-Pintos, Luis Pedro

    2015-07-01

    Quantum thermodynamics can be understood as a resource theory, whereby thermal states are free and the only allowed operations are unitary transformations commuting with the total Hamiltonian of the system. Previous literature on the subject has just focused on transformations between different state resources, overlooking the fact that quantum operations which do not commute with the total energy also constitute a potentially valuable resource. In this Letter, given a number of nonthermal quantum channels, we study the problem of how to integrate them in a thermal engine so as to distill a maximum amount of work. We find that, in the limit of asymptotically many uses of each channel, the distillable work is an additive function of the considered channels, computable for both finite dimensional quantum operations and bosonic channels. We apply our results to bound the amount of distillable work due to the natural nonthermal processes postulated in the Ghirardi-Rimini-Weber (GRW) collapse model. We find that, although GRW theory predicts the possibility of extracting work from the vacuum at no cost, the power which a collapse engine could, in principle, generate is extremely low.

  14. Resource Letter RPS-1: Research in problem solving

    NASA Astrophysics Data System (ADS)

    Hsu, Leonardo; Brewe, Eric; Foster, Thomas M.; Harper, Kathleen A.

    2004-09-01

    This Resource Letter provides a guide to the literature on research in problem solving, especially in physics. The references were compiled with two audiences in mind: physicists who are (or might become) engaged in research on problem solving, and physics instructors who are interested in using research results to improve their students' learning of problem solving. In addition to general references, journal articles and books are cited for the following topics: cognitive aspects of problem solving, expert-novice problem-solver characteristics, problem solving in mathematics, alternative problem types, curricular interventions, and the use of computers in problem solving.

  15. Computation Directorate 2008 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, D L

    2009-03-25

    Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

  16. Quantum resource theory of non-stabilizer states in the one-shot regime

    NASA Astrophysics Data System (ADS)

    Ahmadi, Mehdi; Dang, Hoan; Gour, Gilad; Sanders, Barry

    Universal quantum computing is known to be impossible using only stabilizer states and stabilizer operations. However, the addition of non-stabilizer states (also known as magic states) to quantum circuits enables us to achieve universality. The resource theory of non-stabilizer states aims at quantifying the usefulness of non-stabilizer states. Here, we focus on a fundamental question in this resource theory in the so-called one-shot regime: given two resource states, is there a free quantum channel that will (approximately or exactly) convert one to the other? To provide an answer, we phrase the question as a semidefinite program with constraints on the Choi matrix of the corresponding channel. Then, we use the semidefinite version of the Farkas lemma to derive the necessary and sufficient conditions for the conversion between two arbitrary resource states via a free quantum channel. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter.
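
    To make the phrase "a semidefinite program with constraints on the Choi matrix" concrete, the sketch below checks feasibility of a completely positive, trace-preserving map sending one qubit state exactly to another. The free-channel (stabilizer-preserving) constraints of the actual work are omitted, and the use of cvxpy (with an SDP solver such as SCS installed) and the explicit partial-trace construction are my choices, not the authors'.

```python
import numpy as np
import cvxpy as cp

d = 2                                                # qubit in, qubit out
rho = np.array([[0.5, 0.5], [0.5, 0.5]])             # |+><+|, example input
sigma = np.array([[0.75, 0.0], [0.0, 0.25]])         # example target output

# Choi matrix J on (in ⊗ out); channel output is Tr_in[(rho^T ⊗ I) J].
J = cp.Variable((d * d, d * d), hermitian=True)

kets = np.eye(d)

def ptrace_out(M):
    """Partial trace over the second (output) factor, as a cvxpy expression."""
    return sum(np.kron(np.eye(d), k[None, :]) @ M @ np.kron(np.eye(d), k[:, None])
               for k in kets)

def ptrace_in(M):
    """Partial trace over the first (input) factor."""
    return sum(np.kron(k[None, :], np.eye(d)) @ M @ np.kron(k[:, None], np.eye(d))
               for k in kets)

constraints = [
    J >> 0,                                              # complete positivity
    ptrace_out(J) == np.eye(d),                          # trace preservation
    ptrace_in(np.kron(rho.T, np.eye(d)) @ J) == sigma,   # exact conversion
]
prob = cp.Problem(cp.Minimize(0), constraints)           # pure feasibility check
prob.solve()
print("conversion feasible:", prob.status == cp.OPTIMAL)
```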

  17. Mission Critical Computer Resources Management Guide.

    DTIC Science & Technology

    1990-01-01

    … Ada fixes priorities at compilation time and is therefore inflexible … The goal of the early developers of Ada was to … may support the use of computer-aided design and manufacturing (CAD) … are investigating the technical and management advances required to … the Commerce Business Daily (CBD). It will be used by procurement personnel to determine potential … intended, and special contract clauses to be used.

  18. Bringing Federated Identity to Grid Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teheran, Jeny

    The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.

  19. Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M.

    2009-09-09

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  20. Computing at H1 - Experience and Future

    NASA Astrophysics Data System (ADS)

    Eckerlin, G.; Gerhards, R.; Kleinwort, C.; Krüner-Marquis, U.; Egli, S.; Niebergall, F.

    The H1 experiment has now been successfully operating at the electron-proton collider HERA at DESY for three years. During this time the computing environment has gradually shifted from a mainframe-oriented environment to the distributed client/server Unix world. This transition is now almost complete. Computing needs are largely determined by the present amount of 1.5 TB of reconstructed data per year (1994), corresponding to 1.2 × 10^7 accepted events. All data are centrally available at DESY. In addition to data analysis, which is done in all collaborating institutes, most of the centrally organized Monte Carlo production is performed outside of DESY. New software tools to cope with offline computing needs include CENTIPEDE, a tool for the use of distributed batch and interactive resources for Monte Carlo production, and H1 UNIX, a software package for automatic updates of H1 software on all UNIX platforms.

  1. The Computer Aided Aircraft-design Package (CAAP)

    NASA Technical Reports Server (NTRS)

    Yalif, Guy U.

    1994-01-01

    The preliminary design of an aircraft is a complex, labor-intensive, and creative process. Since the 1970's, many computer programs have been written to help automate preliminary airplane design. Time and resource analyses have identified 'a substantial decrease in project duration with the introduction of an automated design capability'. Proof-of-concept studies have been completed which establish 'a foundation for a computer-based airframe design capability'. Unfortunately, today's design codes exist in many different languages on many, often expensive, hardware platforms. Through the use of a module-based system architecture, the Computer Aided Aircraft-design Package (CAAP) will eventually bring together many of the most useful features of existing programs. Through the use of an expert system, it will add a feature that could be described as indispensable to entry-level engineers and students: the incorporation of 'expert' knowledge into the automated design process.

  2. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grids, and clouds. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources over HTTP or HTTPS. We discuss SCEAPI from several aspects, including architecture, implementation, and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer, and job management for creating, submitting, and monitoring jobs, and how SCEAPI can be used in an easy-to-use way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for adding opportunistic HPC resources.
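
    The paper's interface details are not reproduced here, but a RESTful job-submission service of this kind is typically driven as sketched below. The base URL, endpoint paths, payload fields, and bearer-token header are illustrative assumptions, not the published SCEAPI specification.

        import requests

        # Illustrative values only; not the real SCEAPI endpoints.
        BASE = "https://sce.example.cn/api/v1"
        HEADERS = {"Authorization": "Bearer <token-from-authentication-step>"}

        def submit_job(script: str, center: str) -> str:
            """Create and submit a job; returns the server-assigned job id."""
            resp = requests.post(
                f"{BASE}/jobs",
                json={"center": center, "script": script},  # hypothetical payload shape
                headers=HEADERS,
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()["jobId"]

        def job_status(job_id: str) -> str:
            """Poll a submitted job for its current state."""
            resp = requests.get(f"{BASE}/jobs/{job_id}", headers=HEADERS, timeout=30)
            resp.raise_for_status()
            return resp.json()["status"]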

  3. Teaching Computer Literacy with Freeware and Shareware.

    ERIC Educational Resources Information Center

    Hobart, R. Dale; And Others

    1988-01-01

    Describes workshops given at Ferris State University for faculty and staff who want to acquire computer skills. Considered are a computer literacy and a software toolkit distributed to participants made from public domain/shareware resources. Stresses the benefits of shareware as an educational resource. (CW)

  4. Computers Don't Byte. A Starting Point for Teachers Using Computers. A Resource Booklet.

    ERIC Educational Resources Information Center

    Lieberman, Michael; And Others

    Designed to provide a starting point for the teacher without computer experience, this booklet deals with both the "how" and the "when" of computers in education. Educational applications described include classroom uses with the student as a passive or an active user and programs for the handicapped; the purpose of computers…

  5. Additive Manufactured Product Integrity

    NASA Technical Reports Server (NTRS)

    Waller, Jess; Wells, Doug; James, Steve; Nichols, Charles

    2017-01-01

    NASA is providing key leadership in an international effort linking NASA and non-NASA resources to speed adoption of additive manufacturing (AM) to meet NASA's mission goals. Participants include industry, NASA's space partners, other government agencies, standards organizations and academia. Nondestructive Evaluation (NDE) is identified as a universal need for all aspects of additive manufacturing.

  6. TSARINA: A Computer Model for Assessing Conventional and Chemical Attacks on Airbases

    DTIC Science & Technology

    1990-09-01

    …IV, and has been updated to FORTRAN 77; it has been adapted to various computer systems, as was the widely used AIDA model and the previous versions of… conventional and chemical attacks on sortie generation. In the first version of TSARINA [1, 2], several key additions were made to the AIDA model so that (1)… various on-base resources, in addition to the estimates of hits and facility damage that are generated by the original AIDA model. The second version…

  7. HPC on Competitive Cloud Resources

    NASA Astrophysics Data System (ADS)

    Bientinesi, Paolo; Iakymchuk, Roman; Napper, Jeff

    Computing as a utility has reached the mainstream. Scientists can now easily rent time on large commercial clusters that can be expanded and reduced on-demand in real time. However, current commercial cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of the web applications that have been the focus of cloud computing vendors. In this chapter we demonstrate through empirical evaluation the computational efficiency of high-performance numerical applications in a commercial cloud environment when resources are shared under high contention. Using the Linpack benchmark as a case study, we show that cache utilization becomes highly unpredictable and that computation time is affected accordingly. For some problems, not only is it more efficient to underutilize resources, but the solution can be reached sooner in real time (wall time). We also show that the smallest, cheapest (64-bit) instance on the studied environment is the best in terms of price-to-performance ratio. In light of the high contention we witness, we believe that alternative definitions of efficiency for commercial cloud environments should be introduced where strong performance guarantees do not exist. Concepts like average and expected performance and execution time, expected cost to completion, and variance measures--traditionally ignored in the high-performance computing context--should now complement or even substitute the standard definitions of efficiency.
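
    The closing suggestion can be made concrete with a few lines of arithmetic: under contention a run's wall-time is effectively a random variable, so expected time, variance, and expected cost replace a single efficiency number. The timings and hourly price below are invented for illustration.

        import statistics

        # Hypothetical wall-times (hours) for repeated runs of the same
        # Linpack-style job under contention, plus an invented instance price.
        observed_hours = [1.8, 2.4, 3.1, 2.0, 4.2]
        price_per_hour = 0.12

        expected_time = statistics.mean(observed_hours)      # expected execution time
        time_variance = statistics.variance(observed_hours)  # variance measure
        expected_cost = expected_time * price_per_hour       # expected cost to completion

        print(f"expected wall-time : {expected_time:.2f} h")
        print(f"wall-time variance : {time_variance:.2f} h^2")
        print(f"expected cost      : ${expected_cost:.3f} per run")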

  8. Additional Cultural Resources Investigations at Selected Portions of the State-Road Coulee - Pammel Creek Flood Control Project at La Crosse, Wisconsin

    DTIC Science & Technology

    1986-05-01

    Mammals: Ten mammal taxa are represented in the Lc176 assemblage. Two of these, the short-tailed shrew (Blarina brevicauda) and a vole (Microtus sp.)…

  9. The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuess, S.; Garzoglio, G.; Holzman, B.

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at the providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 25 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption.

  10. SCANIT: centralized digitizing of forest resource maps or photographs

    Treesearch

    Elliot L. Amidon; E. Joyce Dye

    1981-01-01

    Spatial data on wildland resource maps and aerial photographs can be analyzed by computer after digitizing. SCANIT is a computerized system for encoding such data in digital form. The system, consisting of a collection of computer programs and subroutines, provides a powerful and versatile tool for a variety of resource analyses. SCANIT also may be converted easily to...

  11. Science and Technology Resources on the Internet: Computer Security.

    ERIC Educational Resources Information Center

    Kinkus, Jane F.

    2002-01-01

    Discusses issues related to computer security, including confidentiality, integrity, and authentication or availability; and presents a selected list of Web sites that cover the basic issues of computer security under subject headings that include ethics, privacy, kids, antivirus, policies, cryptography, operating system security, and biometrics.…

  12. Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Fitzsimons, Joseph; Kashefi, Elham

    2012-02-01

    Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's inputs, outputs and computation remain private. Recently we proposed a universal unconditionally secure BQC scheme, based on the conceptual framework of the measurement-based quantum computing model, where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Here we present a refinement of the scheme which vastly expands the class of quantum circuits which can be directly implemented as a blind computation, by introducing a new class of resource states which we term dotted-complete graph states and expanding the set of single qubit states the client is required to prepare. These two modifications significantly simplify the overall protocol and remove the previously present restriction that only nearest-neighbor circuits could be implemented as blind computations directly. As an added benefit, the refined protocol admits a substantially more intuitive and simplified verification mechanism, allowing the correctness of a blind computation to be verified with arbitrarily small probability of error.

  13. Benefits of Exchange Between Computer Scientists and Perceptual Scientists: A Panel Discussion

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    We have established several major goals for this panel: 1) Introduce the computer graphics community to some specific leaders in the use of perceptual psychology relating to computer graphics; 2) Enumerate the major results that are known, and provide a set of resources for finding others; 3) Identify research areas where knowledge of perceptual psychology can help computer system designers improve their systems; and 4) Provide advice to researchers on how they can establish collaborations in their own research programs. We believe this will be a very important panel. In addition to generating lively discussion, we hope to point out some of the fundamental issues that occur at the boundary between computer science and perception, and possibly help researchers avoid some of the common pitfalls.

  14. Large Data at Small Universities: Astronomical processing using a computer classroom

    NASA Astrophysics Data System (ADS)

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen

    2016-06-01

    The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the University, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to (1) photometry of large-format images and (2) statistical significance tests for X-ray light-curve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
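
    On a single multi-core machine, the "embarrassingly parallel" pattern the authors describe reduces to mapping an analysis function over independent files; spreading the same pattern across classroom machines adds only a network transport. The sketch below is a minimal single-machine version with a placeholder measurement function, not the authors' code.

        from multiprocessing import Pool

        def photometry(image_path: str) -> float:
            """Placeholder per-image measurement; a real routine would open
            the frame and measure source fluxes."""
            return float(len(image_path))

        if __name__ == "__main__":
            images = [f"frame_{i:04d}.fits" for i in range(100)]  # hypothetical file list
            with Pool() as pool:                      # one worker per available core
                results = pool.map(photometry, images)  # independent tasks, no cross-talk
            print(f"processed {len(results)} frames")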

  15. The aquatic animals' transcriptome resource for comparative functional analysis.

    PubMed

    Chou, Chih-Hung; Huang, Hsi-Yuan; Huang, Wei-Chih; Hsu, Sheng-Da; Hsiao, Chung-Der; Liu, Chia-Yu; Chen, Yu-Hung; Liu, Yu-Chen; Huang, Wei-Yun; Lee, Meng-Lin; Chen, Yi-Chang; Huang, Hsien-Da

    2018-05-09

    Aquatic animals have great economic and ecological importance. Among them, non-model organisms have been studied regarding eco-toxicity, stress biology, and environmental adaptation. Due to recent advances in next-generation sequencing techniques, large amounts of RNA-seq data for aquatic animals are publicly available. However, no comprehensive resource currently exists for the analysis, unification, and integration of these datasets. This study utilizes computational approaches to build a new resource of transcriptomic maps for aquatic animals. This aquatic animal transcriptome map database (dbATM) provides de novo transcriptome assembly, gene annotation, and comparative analysis of more than twenty aquatic organisms without a draft genome. To improve assembly quality, three computational tools (Trinity, Oases and SOAPdenovo-Trans) were employed to enhance individual transcriptome assembly, and CAP3 and CD-HIT-EST software were then used to merge these three assembled transcriptomes. In addition, functional annotation analysis provides valuable clues to gene characteristics, including full-length transcript coding regions, conserved domains, gene ontology and KEGG pathways. Furthermore, all aquatic animal genes are available for comparative genomics tasks such as constructing homologous gene groups, building BLAST databases, and phylogenetic analysis. In conclusion, we establish a resource for non-model aquatic animals, which are of great economic and ecological importance, providing transcriptomic information including functional annotation and comparative transcriptome analysis. The database is now publicly accessible at http://dbATM.mbc.nctu.edu.tw/.

  16. DynaMed Plus®: An Evidence-Based Clinical Reference Resource.

    PubMed

    Charbonneau, Deborah H; James, LaTeesa N

    2018-01-01

    DynaMed Plus ® from EBSCO Health is an evidence-based tool that health professionals can use to inform clinical care. DynaMed Plus content undergoes a review process, and the evidence is synthesized in detailed topic overviews. A unique three-level rating scale is used to assess the quality of available evidence. Topic overviews summarize current evidence and provide recommendations to support health providers at the point-of-care. Additionally, DynaMed Plus content can be accessed via a desktop computer or mobile platforms. Given this, DynaMed Plus can be a time-saving resource for health providers. Overall, DynaMed Plus provides evidence summaries using an easy-to-read bullet format, and the resource incorporates images, clinical calculators, patient handouts, and practice guidelines in one place.

  17. Using multiple metaphors and multimodalities as a semiotic resource when teaching year 2 students computational strategies

    NASA Astrophysics Data System (ADS)

    Mildenhall, Paula; Sherriff, Barbara

    2017-06-01

    Recent research indicates that using multimodal learning experiences can be effective in teaching mathematics. Using a social semiotic lens within a participationist framework, this paper reports on a professional learning collaboration with a primary school teacher designed to explore the use of metaphors and modalities in mathematics instruction. This video case study was conducted in a year 2 classroom over two terms, with the focus on building children's understanding of computational strategies. The findings revealed that the teacher was able to successfully plan both multimodal and multiple metaphor learning experiences that acted as semiotic resources to support the children's understanding of abstract mathematics. The study also led to implications for teaching when using multiple metaphors and multimodalities.

  18. A computer software system for integration and analysis of grid-based remote sensing data with other natural resource data. Remote Sensing Project

    NASA Technical Reports Server (NTRS)

    Tilmann, S. E.; Enslin, W. R.; Hill-Rowley, R.

    1977-01-01

    A computer-based information system designed to assist in the integration of commonly available spatial data for regional planning and resource analysis is described. The Resource Analysis Program (RAP) provides a variety of analytical and mapping phases for single-factor or multi-factor analyses. The unique analytical and graphic capabilities of RAP are demonstrated with a study conducted in Windsor Township, Eaton County, Michigan. Soil, land cover/use, topographic and geological maps were used as a data base to develop an eleven-map portfolio. The major themes of the portfolio are land cover/use, non-point water pollution, waste disposal, and ground water recharge.

  19. GapMap: Enabling Comprehensive Autism Resource Epidemiology.

    PubMed

    Albert, Nikhila; Daniels, Jena; Schwartz, Jessey; Du, Michael; Wall, Dennis P

    2017-05-04

    For individuals with autism spectrum disorder (ASD), finding resources can be a lengthy and difficult process. The difficulty in obtaining global, fine-grained autism epidemiological data hinders researchers from quickly and efficiently studying large-scale correlations among ASD, environmental factors, and geographical and cultural factors. The objective of this study was to define resource load and resource availability for families affected by autism and subsequently create a platform to enable a more accurate representation of prevalence rates and resource epidemiology. We created a mobile application, GapMap, to collect locational, diagnostic, and resource use information from individuals with autism to compute accurate prevalence rates and better understand autism resource epidemiology. GapMap is hosted on AWS S3, running on a React and Redux front-end framework. The backend comprises an AWS API Gateway and Lambda Function setup, with secure and scalable endpoints for retrieving prevalence and resource data and for submitting participant data. Measures of autism resource scarcity, including resource load, resource availability, and resource gaps, were defined and preliminarily computed using simulated or scraped data. The average distance from an individual in the United States to the nearest diagnostic center is approximately 182 km (50 miles), with a standard deviation of 235 km (146 miles). The average distance from an individual with ASD to the nearest diagnostic center, however, is only 32 km (20 miles), suggesting that individuals who live closer to diagnostic services are more likely to be diagnosed. This study confirmed that individuals closer to diagnostic services are more likely to be diagnosed and proposes GapMap, a means to measure and enable the alleviation of increasingly overburdened diagnostic centers and resource-poor areas where parents are unable to diagnose their children as quickly and easily as needed.
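
    A distance statistic like the one reported (average distance to the nearest diagnostic center) can be computed with a plain haversine formula; the coordinates below are toy values, not GapMap data.

        from math import asin, cos, radians, sin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two (lat, lon) points in kilometres."""
            dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
            return 2 * 6371.0 * asin(sqrt(a))

        def nearest_center_km(person, centers):
            """Distance from one individual to the closest diagnostic center."""
            return min(haversine_km(*person, *c) for c in centers)

        centers = [(40.44, -79.99), (34.05, -118.24)]   # toy center locations
        print(f"{nearest_center_km((39.95, -75.16), centers):.1f} km")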

  20. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Current research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in the grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in the grid environment, this paper presents a systematic study of the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification of the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work in this environment is completed.
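
    A WPS computing node of this kind is exercised through the standard OGC request types (GetCapabilities, DescribeProcess, Execute). In the sketch below, the node URL, process identifier, and input value are placeholders; only the key-value-pair parameter names come from the WPS 1.0.0 specification.

        import requests

        WPS_URL = "http://example.org/wps"   # hypothetical Spatial Computing Node

        params = {
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "buffer",          # the spatial process to run (example)
            "datainputs": "distance=100",    # process-specific inputs (example)
        }

        resp = requests.get(WPS_URL, params=params, timeout=60)
        resp.raise_for_status()
        print(resp.text[:200])               # the node replies with an XML Execute response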

  1. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases over time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
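
    As a toy offline illustration of the statistic, the function below scans a trace of per-step degradation costs and finds the interval length n minimizing W(n), i.e. accumulated degradation plus the one-off remapping cost, averaged over n steps. The trace and remap cost are invented; the paper's policy estimates this minimum online rather than from a complete trace.

        def best_remap_interval(step_costs, remap_cost):
            """Return the n minimizing W(n) = (sum of first n step costs + remap_cost) / n."""
            best_n, best_w = None, float("inf")
            total = 0.0
            for n, c in enumerate(step_costs, start=1):
                total += c
                w = (total + remap_cost) / n
                if w < best_w:
                    best_n, best_w = n, w
            return best_n, best_w

        # Degradation that grows over time, plus a fixed remapping delay.
        trace = [0.1 * k for k in range(1, 30)]
        print(best_remap_interval(trace, remap_cost=5.0))   # -> (10, 1.05)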

  2. Advanced networks and computing in healthcare

    PubMed Central

    Ackerman, Michael

    2011-01-01

    As computing and network capabilities continue to rise, it becomes increasingly important to understand the varied applications for using them to provide healthcare. The objective of this review is to identify key characteristics and attributes of healthcare applications involving the use of advanced computing and communication technologies, drawing upon 45 research and development projects in telemedicine and other aspects of healthcare funded by the National Library of Medicine over the past 12 years. Only projects publishing in the professional literature were included in the review. Four projects did not publish beyond their final reports. In addition, the authors drew on their first-hand experience as project officers, reviewers and monitors of the work. Major themes in the corpus of work were identified, characterizing key attributes of advanced computing and network applications in healthcare. Advanced computing and network applications are relevant to a range of healthcare settings and specialties, but they are most appropriate for solving a narrower range of problems in each. Healthcare projects undertaken primarily to explore potential have also demonstrated effectiveness and depend on the quality of network service as much as bandwidth. Many applications are enabling, making it possible to provide service or conduct research that previously was not possible or to achieve outcomes in addition to those for which projects were undertaken. Most notable are advances in imaging and visualization, collaboration and sense of presence, and mobility in communication and information-resource use. PMID:21486877

  3. The Merit Computer Network

    ERIC Educational Resources Information Center

    Aupperle, Eric M.; Davis, Donna L.

    1978-01-01

    The successful Merit Computer Network is examined in terms of both technology and operational management. The network is fully operational and has a significant and rapidly increasing usage, with three major institutions currently sharing computer resources. (Author/CMV)

  4. Networking Micro-Processors for Effective Computer Utilization in Nursing

    PubMed Central

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes one process of networking complementary resources at three institutions: Prairie View A&M University, Texas A&M University, and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in the development of computer resources.

  5. Optimizing Resource Utilization in Grid Batch Systems

    NASA Astrophysics Data System (ADS)

    Gellrich, Andreas

    2012-12-01

    On Grid sites, the requirements of the computing tasks (jobs) for computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to POSIX UID/GID according to the VO, VOMS group, and role information contained in the VOMS proxy. The UID/GID then makes it possible to distinguish jobs, provided users are using VOMS proxies as planned by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the MAUI scheduler. In tests these limitations could be overcome with a home-made scheduler.
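
    The mapping idea reduces to a simple rule: derive a scheduling class from the VOMS role carried in the proxy. A minimal sketch, with invented queue names and a simplified FQAN check:

        def queue_for(fqan: str) -> str:
            """Map a VOMS FQAN to a batch queue; names are invented examples."""
            if fqan.endswith("Role=production"):
                return "cpu-bound"    # Monte Carlo production: schedule for CPU
            return "io-bound"         # analysis jobs: schedule near storage

        print(queue_for("/cms/Role=production"))   # -> cpu-bound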

  6. Responses of medial and ventrolateral prefrontal cortex to interpersonal conflict for resources.

    PubMed

    Koban, Leonie; Pichon, Swann; Vuilleumier, Patrik

    2014-05-01

    Little is known about brain mechanisms recruited during the monitoring and appraisal of social conflicts--for instance, when individuals compete with each other for the same resources. We designed a novel experimental task inducing resource conflicts between two individuals. In an event-related functional magnetic resonance imaging (fMRI) design, participants played with another human participant or against a computer, who across trials chose either different (no-conflict) or the same tokens (conflict trials) in order to obtain monetary gains. In conflict trials, the participants could decide whether they would share the token, and the resulting gain, with the other person or instead keep all points for themselves. Behaviorally, participants shared much more often when playing with a human partner than with a computer. fMRI results demonstrated that the dorsal mediofrontal cortex was selectively activated during human conflicts. This region might play a key role in detecting situations in which self- and social interest are incompatible and require behavioral adjustment. In addition, we found a conflict-related response in the right ventrolateral prefrontal cortex that correlated with measures of social relationship and individual sharing behavior. Taken together, these findings reveal a key role of these prefrontal areas for the appraisal and resolution of interpersonal resource conflicts.

  7. Responses of medial and ventrolateral prefrontal cortex to interpersonal conflict for resources

    PubMed Central

    Koban, Leonie; Pichon, Swann; Vuilleumier, Patrik

    2014-01-01

    Little is known about brain mechanisms recruited during the monitoring and appraisal of social conflicts—for instance, when individuals compete with each other for the same resources. We designed a novel experimental task inducing resource conflicts between two individuals. In an event-related functional magnetic resonance imaging (fMRI) design, participants played with another human participant or against a computer, who across trials chose either different (no-conflict) or the same tokens (conflict trials) in order to obtain monetary gains. In conflict trials, the participants could decide whether they would share the token, and the resulting gain, with the other person or instead keep all points for themselves. Behaviorally, participants shared much more often when playing with a human partner than with a computer. fMRI results demonstrated that the dorsal mediofrontal cortex was selectively activated during human conflicts. This region might play a key role in detecting situations in which self- and social interest are incompatible and require behavioral adjustment. In addition, we found a conflict-related response in the right ventrolateral prefrontal cortex that correlated with measures of social relationship and individual sharing behavior. Taken together, these findings reveal a key role of these prefrontal areas for the appraisal and resolution of interpersonal resource conflicts. PMID:23460073

  8. Elucidating reaction mechanisms on quantum computers.

    PubMed

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias

    2017-07-18

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  9. Elucidating reaction mechanisms on quantum computers

    PubMed Central

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-01-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources. PMID:28674011

  10. Elucidating reaction mechanisms on quantum computers

    NASA Astrophysics Data System (ADS)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-07-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  11. Impact of remote sensing upon the planning, management, and development of water resources

    NASA Technical Reports Server (NTRS)

    Loats, H. L.; Fowler, T. R.; Frech, S. L.

    1974-01-01

    A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.

  12. OUR's: Optimum Utilization of Resources; A Guide to Instructional Resources in Occupational Education. Research Pub. 77-1.

    ERIC Educational Resources Information Center

    Beamish, Eric; And Others

    This resource guide contains over 300 entries which are available through the Optimum Utilization of Resources (OUR's) exchange system. The entries describe learning materials, such as slides, video tapes, audio tapes, films, print material, and computer assisted instructional programs, which have been developed primarily by faculty of the…

  13. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction

    PubMed Central

    Nezarat, Amin; Dastghaibifard, GH

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability and, on the other hand, users expect to have the best resources at their disposal given their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is based on economic methods, using such methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game theory mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer’s utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the fewest service level agreement violations, and provides the most utility to the provider. PMID:26431035

  14. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction.

    PubMed

    Nezarat, Amin; Dastghaibifard, G H

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability and, on the other hand, users expect to have the best resources at their disposal given their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is based on economic methods, using such methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game theory mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the fewest service level agreement violations, and provides the most utility to the provider.
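
    The two entries above describe the same repeated-bidding scheme; a stripped-down ascending version of such a game is sketched below, with an invented step size and valuations and a single indivisible resource. Losing players raise their bids only while utility (valuation minus bid) stays nonnegative, and play stops when nobody wants to move, which is a pure-strategy Nash equilibrium of this toy game. This is an illustration of the idea, not the paper's algorithm.

        def repeated_auction(valuations, step=1.0, rounds=1000):
            """Naive best-response bidding for one indivisible resource."""
            bids = [0.0] * len(valuations)
            winner = 0
            for _ in range(rounds):
                changed = False
                for i, v in enumerate(valuations):
                    if i == winner:
                        continue
                    needed = bids[winner] + step   # smallest bid that takes the lead
                    if needed <= v:                # bid only while utility stays >= 0
                        bids[i] = needed
                        winner = i
                        changed = True
                if not changed:                    # no player wants to deviate
                    break
            return winner, bids[winner]

        # The highest-value bidder wins at roughly the runner-up's valuation.
        print(repeated_auction([10.0, 7.0, 4.0]))   # -> (0, 7.0)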

  15. Computer Software Reviews.

    ERIC Educational Resources Information Center

    Hawaii State Dept. of Education, Honolulu. Office of Instructional Services.

    Intended to provide guidance in the selection of the best computer software available to support instruction and to make optimal use of schools' financial resources, this publication provides a listing of computer software programs that have been evaluated according to their currency, relevance, and value to Hawaii's educational programs. The…

  16. Ground data systems resource allocation process

    NASA Technical Reports Server (NTRS)

    Berner, Carol A.; Durham, Ralph; Reilly, Norman B.

    1989-01-01

    The Ground Data Systems Resource Allocation Process at the Jet Propulsion Laboratory provides medium- and long-range planning for the use of Deep Space Network and Mission Control and Computing Center resources in support of NASA's deep space missions and Earth-based science. Resources consist of radio antenna complexes and associated data processing and control computer networks. A semi-automated system was developed that allows operations personnel to interactively generate, edit, and revise allocation plans spanning periods of up to ten years (as opposed to only two or three weeks under the manual system) based on the relative merit of mission events. It also enhances scientific data return. A software system known as the Resource Allocation and Planning Helper (RALPH) merges the conventional methods of operations research, rule-based knowledge engineering, and advanced data base structures. RALPH employs a generic, highly modular architecture capable of solving a wide variety of scheduling and resource sequencing problems. The rule-based RALPH system has saved significant labor in resource allocation. Its successful use affirms the importance of establishing and applying event priorities based on scientific merit, and the benefit of continuity in planning provided by knowledge-based engineering. The RALPH system exhibits a strong potential for minimizing development cycles of resource and payload planning systems throughout NASA and the private sector.

  17. Computer assisted audit techniques for UNIX (UNIX-CAATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polk, W.T.

    1991-12-31

    Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.

  18. Computer assisted audit techniques for UNIX (UNIX-CAATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polk, W.T.

    1991-01-01

    Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.
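
    File-system checks of the kind UNIX-CAATS performs can be approximated in a few lines; the sketch below flags world-writable files and screens account names against a tiny wordlist. It is a simplification for illustration, not the NIST tool; a real password audit would hash candidate guesses and compare them against the password file.

        import os
        import stat

        def world_writable(root="/tmp"):
            """Flag files that any user on the system can modify."""
            findings = []
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        mode = os.stat(path).st_mode
                    except OSError:
                        continue                    # unreadable or vanished; skip
                    if mode & stat.S_IWOTH:         # world-writable bit set
                        findings.append(path)
            return findings

        WEAK = {"root", "password", "123456", "qwerty"}   # tiny illustrative wordlist

        def weak_candidates(usernames):
            """Accounts whose names appear in the wordlist deserve a closer look."""
            return [u for u in usernames if u.lower() in WEAK]

        if __name__ == "__main__":
            print(world_writable()[:5])
            print(weak_candidates(["alice", "root"]))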

  19. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    PubMed

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, theoretic analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game and the condition of Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm can scale well with increasing IoT sensors.
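
    The finite improvement property the authors exploit can be seen in a toy congestion model: each sensor offloads only if the offload delay, which grows with the number of offloaders, beats its local cost, and asynchronous best responses settle at a Nash equilibrium. All costs below are invented; this is the same game-theoretic idea, not the paper's COD algorithm.

        def best_response_offloading(local_cost, base_cost, congestion, rounds=100):
            """Iterate best responses in a congestion-style offloading game."""
            n = len(local_cost)
            offload = [False] * n
            for _ in range(rounds):
                changed = False
                for i in range(n):
                    others = sum(offload) - offload[i]
                    # Offload delay grows with the number of other offloaders.
                    cost_off = base_cost + congestion * (others + 1)
                    best = cost_off < local_cost[i]
                    if best != offload[i]:
                        offload[i] = best
                        changed = True
                if not changed:      # finite improvement property: iteration terminates
                    break
            return offload

        print(best_response_offloading([5.0, 3.0, 8.0, 2.0],
                                       base_cost=1.0, congestion=1.5))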

  20. Hard-real-time resource management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Gat, E.

    2000-01-01

    This paper describes tickets, a computational mechanism for hard-real-time autonomous resource management. Autonomous spacecraft control can be considered abstractly as a computational process whose outputs are spacecraft commands.

  1. Computers in the Classroom: Teacher's Resource Manual for Algebra.

    ERIC Educational Resources Information Center

    Koetke, Walter

    Demonstration programs, possible assignments for students (with solutions), and remedial drill programs for students to use are presented to aid teachers using a computer or a computer terminal in the teaching of algebra. The text can be followed page by page or used as a well-indexed reference work, and specific suggestions are made on how and…

  2. Probabilistic resource allocation system with self-adaptive capability

    NASA Technical Reports Server (NTRS)

    Yufik, Yan M. (Inventor)

    1996-01-01

    A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and directed links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Reliability values are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves in time due to modification of network parameters and partitioning criteria based on the performance feedback.

  3. Probabilistic resource allocation system with self-adaptive capability

    NASA Technical Reports Server (NTRS)

    Yufik, Yan M. (Inventor)

    1998-01-01

    A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and weighted links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Weights are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves in time due to modification of network parameters and partitioning criteria based on the performance feedback.
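
    A toy version of the reliability bookkeeping described in these two patents might look as follows: each link's score moves toward 1 on a successful allocation and toward 0 otherwise, and only links above a threshold are offered to the short-term module. The update rule and threshold are assumptions for illustration, not the patented mechanism.

        class AssociativeLTM:
            """Minimal long-term-memory table of link reliabilities."""

            def __init__(self):
                self.reliability = {}   # (resource_a, resource_b) -> score in [0, 1]

            def record(self, link, success, rate=0.2):
                """Exponentially move the link's score toward the outcome."""
                r = self.reliability.get(link, 0.5)
                target = 1.0 if success else 0.0
                self.reliability[link] = r + rate * (target - r)

            def candidates(self, resource, threshold=0.6):
                """Associations strong enough to hand to the STM for a fast decision."""
                return [b for (a, b), r in self.reliability.items()
                        if a == resource and r >= threshold]

        ltm = AssociativeLTM()
        for outcome in [True, True, False, True, True]:
            ltm.record(("cpu", "disk"), success=outcome)
        print(ltm.candidates("cpu"))   # -> ['disk']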

  4. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    NASA Astrophysics Data System (ADS)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

    This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method used was a mixed method with sequential explanatory design. The subjects of this research were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects was good on each indicator. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with the computer program, was imprecise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps. The W-BPS (Weak Bottom Problem Solving) subject met almost none of the problem-solving indicators and could not correctly construct the initial completion table, so the completion phase following Polya's steps was constrained.

  5. Human Resource Management, Computers, and Organization Theory.

    ERIC Educational Resources Information Center

    Garson, G. David

    In an attempt to provide a framework for research and theory building in public management information systems (PMIS), state officials responsible for computing in personnel operations were surveyed. The data were applied to hypotheses arising from a recent model by Bozeman and Bretschneider, attempting to relate organization theory to management…

  6. Nutritional supplementation: the additional costs of managing children infected with HIV in resource-constrained settings.

    PubMed

    Cobb, G; Bland, R M

    2013-01-01

    To explore the financial implications of applying the WHO guidelines for the nutritional management of HIV-infected children in a rural South African HIV programme. WHO guidelines describe Nutritional Care Plans (NCPs) for three categories of HIV-infected children: NCP-A: growing adequately; NCP-B: weight-for-age z-score (WAZ) ≤-2 but no evidence of severe acute malnutrition (SAM), confirmed weight loss/growth curve flattening, or a condition with increased nutritional needs (e.g. tuberculosis); NCP-C: SAM. In resource-constrained settings, children requiring NCP-B or NCP-C usually need supplementation to achieve the additional energy recommendation. We estimated the proportion of children initiating antiretroviral treatment (ART) in the Hlabisa HIV Programme who would have been eligible for supplementation in 2010. The cost of supplying 26 weeks of supplementation as a proportion of the cost of supplying ART to the same group was calculated. A total of 251 children aged 6 months to 14 years initiated ART. Eighty-eight required 6 months of NCP-B, including 41 with a WAZ ≤-2 (no evidence of SAM) and 47 with a WAZ >-2 with co-existing morbidities including tuberculosis. Additionally, 25 children had SAM and required 10 weeks of NCP-C followed by 16 weeks of NCP-B. Thus, 113 of 251 (45%) children were eligible for nutritional supplementation, at an estimated overall cost of $11,136 using 2010 exchange rates. This is an estimated additional 11.6% on top of the cost of supplying 26 weeks of ART to the 251 children initiated. It is essential to address the nutritional needs of HIV-infected children to optimise their health outcomes. Nutritional supplementation should be integral to, and budgeted for in, HIV programmes. © 2012 Blackwell Publishing Ltd.
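
    A back-of-envelope check of the stated figures, using only numbers quoted in the abstract:

        # Figures quoted above, 2010 US dollars.
        supplement_cost = 11_136    # 26 weeks of supplementation for 113 children
        art_fraction = 0.116        # supplementation adds an estimated 11.6% to ART cost

        implied_art_cost = supplement_cost / art_fraction
        print(f"implied 26-week ART cost for 251 children: ${implied_art_cost:,.0f}")
        print(f"supplementation cost per eligible child:   ${supplement_cost / 113:,.0f}")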

  7. Resources for GCSE.

    ERIC Educational Resources Information Center

    Anderton, Alain

    1987-01-01

    Argues that new resources are needed to help teachers prepare students for the new General Certificate in Secondary Education (GCSE) examination. Compares previous examinations with new examinations to illustrate the problem. Presents textbooks, workbooks, computer programs, and other curriculum materials to demonstrate the gap between resources…

  8. Computer Viruses: Pathology and Detection.

    ERIC Educational Resources Information Center

    Maxwell, John R.; Lamon, William E.

    1992-01-01

    Explains how computer viruses were originally created, how a computer can become infected by a virus, how viruses operate, symptoms that indicate a computer is infected, how to detect and remove viruses, and how to prevent a reinfection. A sidebar lists eight antivirus resources. (four references) (LRW)

  9. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Additional requirements. 154.400 Section 154.400 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER NATURAL GAS ACT RATE SCHEDULES AND TARIFFS Limited Rate Changes § 154...

  10. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Additional requirements. 154.400 Section 154.400 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER NATURAL GAS ACT RATE SCHEDULES AND TARIFFS Limited Rate Changes § 154...

  11. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Additional requirements. 154.400 Section 154.400 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER NATURAL GAS ACT RATE SCHEDULES AND TARIFFS Limited Rate Changes § 154...

  12. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Additional requirements. 154.400 Section 154.400 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER NATURAL GAS ACT RATE SCHEDULES AND TARIFFS Limited Rate Changes § 154...

  13. 18 CFR 154.400 - Additional requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Additional requirements. 154.400 Section 154.400 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER NATURAL GAS ACT RATE SCHEDULES AND TARIFFS Limited Rate Changes § 154...

  14. 18 CFR 5.21 - Additional information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Additional information. 5.21 Section 5.21 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS § 5.21...

  15. 18 CFR 5.21 - Additional information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Additional information. 5.21 Section 5.21 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS § 5.21...

  16. Graphic analysis of resources by numerical evaluation techniques (Garnet)

    USGS Publications Warehouse

    Olson, A.C.

    1977-01-01

    An interactive computer program for graphical analysis has been developed by the U.S. Geological Survey. The program embodies five goals: (1) economical use of computer resources, (2) simplicity for user applications, (3) interactive on-line use, (4) minimal core requirements, and (5) portability. It is designed to aid (1) the rapid analysis of point-located data, (2) structural mapping, and (3) estimation of area resources. © 1977.

  17. Cyber-workstation for computational neuroscience.

    PubMed

    Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C

    2010-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
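
    The block-diagram specification the abstract describes can be illustrated with a toy graph executor. The class names and wiring API below are hypothetical stand-ins, not the actual Cyber-Workstation middleware:

      # Hypothetical block-diagram wiring in the spirit of the CW middleware.
      from collections import defaultdict

      class Block:
          def __init__(self, name, fn):
              self.name, self.fn = name, fn

          def run(self, inputs):
              return self.fn(inputs)

      class Diagram:
          def __init__(self):
              self.blocks, self.edges = {}, defaultdict(list)

          def add(self, block):
              self.blocks[block.name] = block

          def connect(self, src, dst):          # src output feeds dst input
              self.edges[src].append(dst)

          def step(self, source_name, data):    # push one sample through the graph
              outputs = {source_name: data}
              frontier = [source_name]
              while frontier:
                  name = frontier.pop()
                  for nxt in self.edges[name]:
                      outputs[nxt] = self.blocks[nxt].run(outputs[name])
                      frontier.append(nxt)
              return outputs

      # Wire spikes -> decoder -> arm command, echoing the co-adaptive BMI example.
      d = Diagram()
      d.add(Block("spikes", lambda x: x))
      d.add(Block("decoder", lambda x: [0.1 * v for v in x]))   # stand-in for the RLS regressor
      d.add(Block("arm_command", lambda x: {"velocity": x}))
      d.connect("spikes", "decoder")
      d.connect("decoder", "arm_command")
      print(d.step("spikes", [1.0, 2.0, 3.0]))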

  18. Cyber-Workstation for Computational Neuroscience

    PubMed Central

    DiGiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C.; Fortes, Jose; Sanchez, Justin C.

    2009-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface. PMID:20126436

  19. Review of the Water Resources Information System of Argentina

    USGS Publications Warehouse

    Hutchison, N.E.

    1987-01-01

    A representative of the U.S. Geological Survey traveled to Buenos Aires, Argentina, in November 1986, to discuss water information systems and data bank implementation in the Argentine Government Center for Water Resources Information. Software has been written by Center personnel for a minicomputer to be used to manage inventory (index) data and water quality data. Additional hardware and software have been ordered to upgrade the existing computer. Four microcomputers, statistical and data base management software, and network hardware and software for linking the computers have also been ordered. The Center plans to develop a nationwide distributed data base for Argentina that will include the major regional offices as nodes. Needs for continued development of the water resources information system for Argentina were reviewed. Identified needs include: (1) conducting a requirements analysis to define the content of the data base and ensure that all user requirements are met, (2) preparing a plan for the development, implementation, and operation of the data base, and (3) developing a conceptual design to inform all development personnel and users of the basic functionality planned for the system. A quality assurance and configuration management program to provide oversight to the development process was also discussed. (USGS)

  20. Mouse Genome Informatics (MGI) Resource: Genetic, Genomic, and Biological Knowledgebase for the Laboratory Mouse.

    PubMed

    Eppig, Janan T

    2017-07-01

    The Mouse Genome Informatics (MGI) Resource supports basic, translational, and computational research by providing high-quality, integrated data on the genetics, genomics, and biology of the laboratory mouse. MGI serves a strategic role for the scientific community in facilitating biomedical, experimental, and computational studies investigating the genetics and processes of diseases and enabling the development and testing of new disease models and therapeutic interventions. This review describes the nexus of the body of growing genetic and biological data and the advances in computer technology in the late 1980s, including the World Wide Web, that together launched the beginnings of MGI. MGI develops and maintains a gold-standard resource that reflects the current state of knowledge, provides semantic and contextual data integration that fosters hypothesis testing, continually develops new and improved tools for searching and analysis, and partners with the scientific community to assure research data needs are met. Here we describe one slice of MGI relating to the development of community-wide large-scale mutagenesis and phenotyping projects and introduce ways to access and use these MGI data. References and links to additional MGI aspects are provided. © The Author 2017. Published by Oxford University Press.

  1. Mouse Genome Informatics (MGI) Resource: Genetic, Genomic, and Biological Knowledgebase for the Laboratory Mouse

    PubMed Central

    Eppig, Janan T.

    2017-01-01

    Abstract The Mouse Genome Informatics (MGI) Resource supports basic, translational, and computational research by providing high-quality, integrated data on the genetics, genomics, and biology of the laboratory mouse. MGI serves a strategic role for the scientific community in facilitating biomedical, experimental, and computational studies investigating the genetics and processes of diseases and enabling the development and testing of new disease models and therapeutic interventions. This review describes the nexus of the body of growing genetic and biological data and the advances in computer technology in the late 1980s, including the World Wide Web, that together launched the beginnings of MGI. MGI develops and maintains a gold-standard resource that reflects the current state of knowledge, provides semantic and contextual data integration that fosters hypothesis testing, continually develops new and improved tools for searching and analysis, and partners with the scientific community to assure research data needs are met. Here we describe one slice of MGI relating to the development of community-wide large-scale mutagenesis and phenotyping projects and introduce ways to access and use these MGI data. References and links to additional MGI aspects are provided. PMID:28838066

  2. Miscellaneous Topics in Computer-Aided Drug Design: Synthetic Accessibility and GPU Computing, and Other Topics.

    PubMed

    Fukunishi, Yoshifumi; Mashimo, Tadaaki; Misoo, Kiyotaka; Wakabayashi, Yoshinori; Miyaki, Toshiaki; Ohta, Seiji; Nakamura, Mayu; Ikeda, Kazuyoshi

    2016-01-01

    Computer-aided drug design is still a state-of-the-art process in medicinal chemistry, and the main topics in this field have been extensively studied and well reviewed. These topics include compound databases, ligand-binding pocket prediction, protein-compound docking, virtual screening, target/off-target prediction, physical property prediction, molecular simulation and pharmacokinetics/pharmacodynamics (PK/PD) prediction. Message and Conclusion: However, there are also a number of secondary or miscellaneous topics that have been less well covered. For example, methods for synthesizing and predicting the synthetic accessibility (SA) of designed compounds are important in practical drug development, and hardware/software resources for performing the computations in computer-aided drug design are crucial. Cloud computing and general purpose graphics processing unit (GPGPU) computing have been used in virtual screening and molecular dynamics simulations. Not surprisingly, there is a growing demand for computer systems which combine these resources. In the present review, we summarize and discuss these various topics of drug design.

  3. Miscellaneous Topics in Computer-Aided Drug Design: Synthetic Accessibility and GPU Computing, and Other Topics

    PubMed Central

    Fukunishi, Yoshifumi; Mashimo, Tadaaki; Misoo, Kiyotaka; Wakabayashi, Yoshinori; Miyaki, Toshiaki; Ohta, Seiji; Nakamura, Mayu; Ikeda, Kazuyoshi

    2016-01-01

    Background: Computer-aided drug design is still a state-of-the-art process in medicinal chemistry, and the main topics in this field have been extensively studied and well reviewed. These topics include compound databases, ligand-binding pocket prediction, protein-compound docking, virtual screening, target/off-target prediction, physical property prediction, molecular simulation and pharmacokinetics/pharmacodynamics (PK/PD) prediction. Message and Conclusion: However, there are also a number of secondary or miscellaneous topics that have been less well covered. For example, methods for synthesizing and predicting the synthetic accessibility (SA) of designed compounds are important in practical drug development, and hardware/software resources for performing the computations in computer-aided drug design are crucial. Cloud computing and general purpose graphics processing unit (GPGPU) computing have been used in virtual screening and molecular dynamics simulations. Not surprisingly, there is a growing demand for computer systems which combine these resources. In the present review, we summarize and discuss these various topics of drug design. PMID:27075578

  4. CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.

    2013-12-01

    As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors (SGTs), and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites in a region can be used to calculate a seismic hazard map, representing the seismic hazard for a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region. For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over
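
    The final combination step of PSHA described above (folding per-rupture forecast rates and ground-motion exceedance probabilities into a site-specific hazard curve) can be sketched as follows; the rupture rates, medians, and lognormal variability are illustrative values, not CyberShake data:

      # Toy hazard-curve combination: annual exceedance rate at level x is the sum
      # over ruptures of rate_k * P(IM > x | rupture k). Values are illustrative.
      import numpy as np
      from scipy.stats import norm

      im_levels = np.logspace(-2, 0.5, 50)        # spectral acceleration levels (g)
      ruptures = [                                # (annual rate, median IM in g, ln-std)
          (1e-2, 0.05, 0.6),
          (1e-3, 0.20, 0.6),
          (1e-4, 0.60, 0.6),
      ]

      rate = np.zeros_like(im_levels)
      for r, med, beta in ruptures:
          rate += r * norm.sf(np.log(im_levels), loc=np.log(med), scale=beta)

      prob_50yr = 1.0 - np.exp(-rate * 50.0)      # Poissonian exceedance in 50 years
      for x, p in zip(im_levels[::10], prob_50yr[::10]):
          print(f"SA={x:6.3f} g   P(exceed in 50 yr)={p:.4f}")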

  5. Natural resources information system.

    NASA Technical Reports Server (NTRS)

    Leachtenauer, J. C.; Woll, A. M.

    1972-01-01

    A computer-based Natural Resources Information System was developed for the Bureaus of Indian Affairs and Land Management. The system stores, processes and displays data useful to the land manager in the decision-making process. Emphasis is placed on the use of remote sensing as a data source. Data input consists of maps, imagery overlays, and on-site data. Maps and overlays are entered using a digitizer and stored as irregular polygons, lines and points. Processing functions include set intersection, union, and difference, as well as area, length, and value computations. Data output consists of computer tabulations and overlays prepared on a drum plotter.
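
    The overlay operations listed above map directly onto polygon set algebra. A minimal sketch using the shapely library (a modern stand-in, since the original system predates it) with made-up coordinates:

      # Polygon set operations and area computation, as described for the overlays.
      from shapely.geometry import Polygon

      timber = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])    # timber-stand overlay
      burn = Polygon([(2, 1), (6, 1), (6, 4), (2, 4)])      # burn-area overlay

      print("intersection area:", timber.intersection(burn).area)  # timber that burned
      print("union area:       ", timber.union(burn).area)
      print("difference area:  ", timber.difference(burn).area)    # unburned timber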

  6. Advancing Cyberinfrastructure to support high resolution water resources modeling

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Ogden, F. L.; Jones, N.; Horsburgh, J. S.

    2012-12-01

    Addressing the problem of how the availability and quality of water resources at large scales are sensitive to climate variability, watershed alterations and management activities requires computational resources that combine data from multiple sources and support integrated modeling. Related cyberinfrastructure challenges include: 1) how can we best structure data and computer models to address this scientific problem through the use of high-performance and data-intensive computing, and 2) how can we do this in a way that discipline scientists without extensive computational and algorithmic knowledge and experience can take advantage of advances in cyberinfrastructure? This presentation will describe a new system called CI-WATER that is being developed to address these challenges and advance high resolution water resources modeling in the Western U.S. We are building on existing tools that enable collaboration to develop model and data interfaces that link integrated system models running within an HPC environment to multiple data sources. Our goal is to enhance the use of computational simulation and data-intensive modeling to better understand water resources. Addressing water resource problems in the Western U.S. requires simulation of natural and engineered systems, as well as representation of legal (water rights) and institutional constraints alongside the representation of physical processes. We are establishing data services to represent the engineered infrastructure and legal and institutional systems in a way that they can be used with high resolution multi-physics watershed modeling at high spatial resolution. These services will enable incorporation of location-specific information on water management infrastructure and systems into the assessment of regional water availability in the face of growing demands, uncertain future meteorological forcings, and existing prior-appropriations water rights. This presentation will discuss the informatics

  7. Probability calculations for three-part mineral resource assessments

    USGS Publications Warehouse

    Ellefsen, Karl J.

    2017-06-27

    Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with newly implemented computer software. Compared to the previous implementation, the new implementation includes new features for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely the assumptions inherent in them. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.
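
    The core probability calculation, convolving an elicited distribution for the number of undiscovered deposits with a tonnage model, can be sketched as a small Monte Carlo simulation; the distributions below are illustrative assumptions, not USGS values:

      # Monte Carlo sketch of a three-part assessment: draw a deposit count, then a
      # tonnage for each deposit, and accumulate totals. Illustrative numbers only.
      import numpy as np

      rng = np.random.default_rng(42)
      n_sims = 20_000

      # Elicited probability mass function for the number of undiscovered deposits.
      counts = rng.choice([0, 1, 2, 5, 10], p=[0.2, 0.3, 0.3, 0.15, 0.05], size=n_sims)

      # Lognormal tonnage model: median 1 Mt, ln-scale sigma of about 1.15.
      totals = np.array([rng.lognormal(mean=np.log(1.0), sigma=1.15, size=c).sum()
                         for c in counts])

      print("mean total undiscovered resource (Mt):", round(totals.mean(), 2))
      print("P(total > 5 Mt):", (totals > 5.0).mean())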

  8. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... § 33.10 Additional information. The Director of the Office of Energy Market Regulation, or his designee, may, by letter, require the applicant to submit additional information as is needed for analysis of an... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Additional information...

  9. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... § 33.10 Additional information. The Director of the Office of Energy Market Regulation, or his designee, may, by letter, require the applicant to submit additional information as is needed for analysis of an... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Additional information...

  10. 18 CFR 33.10 - Additional information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... § 33.10 Additional information. The Director of the Office of Energy Market Regulation, or his designee, may, by letter, require the applicant to submit additional information as is needed for analysis of an... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Additional information...

  11. Curriculum and Resources: Computer Provision in a CTC.

    ERIC Educational Resources Information Center

    Denholm, Lawrence

    The program for City Technical Colleges (CTCs) draws on ideas and resources from government, private industry, and education to focus on the educational needs of inner city and urban children. Mathematics, science, and technology are at the center of the CTCs' mission, in a context which includes economic awareness and a commitment to enterprise…

  12. Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Virtual National Airspace Simulation (VNAS) will improve the safety of Air Transportation. In 2001, using simulation and information management software running over a distributed network of super-computers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation, supporting the development of strategies for improving aviation safety and identifying precursors to component failure.

  13. Polyphony: A Workflow Orchestration Framework for Cloud Computing

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom

    2010-01-01

    Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
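
    The resilience property claimed above, that a task interrupted by a node failure is retried rather than lost, can be illustrated with a toy work queue; this is a sketch of the pattern, not Polyphony itself:

      # Resilient work-queue sketch: failed tasks are re-queued rather than lost.
      import queue, random, threading

      tasks = queue.Queue()
      for tile in range(8):
          tasks.put(tile)                        # e.g. image tiles to process

      results, lock = [], threading.Lock()

      def worker(name):
          while True:
              try:
                  tile = tasks.get_nowait()
              except queue.Empty:
                  return                         # no work left for this node
              if random.random() < 0.3:          # simulated mid-task node failure
                  tasks.put(tile)                # re-queue so the task is not lost
                  continue
              with lock:
                  results.append((name, tile))

      threads = [threading.Thread(target=worker, args=(f"w{i}",)) for i in range(3)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print(sorted(tile for _, tile in results))  # all 8 tiles finish despite failures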

  14. Integrating Xgrid into the HENP distributed computing model

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  15. Orchestrating Distributed Resource Ensembles for Petascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldin, Ilya; Mandal, Anirban; Ruth, Paul

    2014-04-24

    Distributed, data-intensive computational science applications of interest to DOE scientific communities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi-layered networks across multiple domains. It is highly desirable that mechanisms are designed that provide this type of resource provisioning capability to a broad class of applications. It is also important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive applications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allocations (elasticity). This was enabled through development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.

  16. Focus issue: series on computational and systems biology.

    PubMed

    Gough, Nancy R

    2011-09-06

    The application of computational biology and systems biology is yielding quantitative insight into cellular regulatory phenomena. For the month of September, Science Signaling highlights research featuring computational approaches to understanding cell signaling and investigation of signaling networks, a series of Teaching Resources from a course in systems biology, and various other articles and resources relevant to the application of computational biology and systems biology to the study of signal transduction.

  17. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  18. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, with GPU-based technology maturing in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative microprocessor, has outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as specific graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE

  19. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    and LHAASO. The result indicates that multi-group and preemptable resource scheduling is efficient in supporting multiple groups and soft preemption. Additionally, the permission-controlling component has been used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and the scale will be expanded to more experiments in the first half of the year, including DYW, BES and others. This is evidence that the permission controlling is efficient.
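
    The scheduling idea (each group fills its own quota first, idle slots are lent to other groups, and lent slots are preempted when the owning group returns) can be restated as a toy allocator. The group names and quotas below are hypothetical, and the paper's real implementation lives inside HTCondor's negotiator:

      # Toy multi-group scheduler with soft preemption; illustrative only.
      quotas = {"juno": 2, "cms": 2, "lhaaso": 1}       # slots owned per group
      running = []                                      # entries: (group, job, borrowed)

      def used(group):
          return sum(1 for g, _, _ in running if g == group)

      def submit(group, job):
          total = sum(quotas.values())
          if used(group) < quotas[group]:               # job fits inside its own quota
              if len(running) == total:                 # cluster full: reclaim a lent slot
                  for i, (g, j, borrowed) in enumerate(running):
                      if borrowed:
                          print(f"preempt {g}:{j} to admit {group}:{job}")
                          del running[i]
                          break
                  else:                                 # nothing preemptable
                      print(f"queue {group}:{job}")
                      return
              running.append((group, job, False))
          elif len(running) < total:                    # over quota: borrow an idle slot
              running.append((group, job, True))
          else:
              print(f"queue {group}:{job} (over quota, cluster full)")

      for job in ("a", "b", "c"):
          submit("cms", job)                            # third CMS job borrows a slot
      submit("juno", "x")
      submit("juno", "y")
      submit("lhaaso", "z")                             # reclaims the borrowed CMS slot
      print(running)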

  20. Impact of Classroom Computer Use on Computer Anxiety.

    ERIC Educational Resources Information Center

    Lambert, Matthew E.; And Others

    Increasing use of computer programs for undergraduate psychology education has raised concern over the impact of computer anxiety on educational performance. Additionally, some researchers have indicated that classroom computer use can exacerbate pre-existing computer anxiety. To evaluate the relationship between in-class computer use and computer…

  1. Telescience Resource Kit Software Capabilities and Future Enhancements

    NASA Technical Reports Server (NTRS)

    Schneider, Michelle

    2004-01-01

    The Telescience Resource Kit (TReK) is a suite of PC-based software applications that can be used to monitor and control a payload on board the International Space Station (ISS). This software provides a way for payload users to operate their payloads from their home sites. It can be used by an individual or a team of people. TReK provides both local ground support system services and an interface to utilize remote services provided by the Payload Operations Integration Center (POIC). For example, TReK can be used to receive payload data distributed by the POIC and to perform local data functions such as processing the data, storing it in local files, and forwarding it to other computer systems. TReK can also be used to build, send, and track payload commands. In addition to these features, work is in progress to add a new command management capability. This capability will provide a way to manage a multi-platform command environment that can include geographically distributed computers. This is intended to help those teams that need to manage a shared on-board resource such as a facility class payload. The environment can be configured such that one individual can manage all the command activities associated with that payload. This paper will provide a summary of existing TReK capabilities and a description of the new command management capability.

  2. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocol declaration required for the PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  3. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  4. Surfer: An Extensible Pull-Based Framework for Resource Selection and Ranking

    NASA Technical Reports Server (NTRS)

    Zolano, Paul Z.

    2004-01-01

    Grid computing aims to connect large numbers of geographically and organizationally distributed resources to increase computational power, resource utilization, and resource accessibility. In order to effectively utilize grids, users need to be connected to the best available resources at any given time. As grids are in constant flux, users cannot be expected to keep up with the configuration and status of the grid, thus they must be provided with automatic resource brokering for selecting and ranking resources meeting constraints and preferences they specify. This paper presents a new OGSI-compliant resource selection and ranking framework called Surfer that has been implemented as part of NASA's Information Power Grid (IPG) project. Surfer is highly extensible and may be integrated into any grid environment by adding information providers knowledgeable about that environment.
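
    The broker's two-phase behavior, filtering by hard constraints and then ranking by preference, can be sketched in a few lines; the resource attributes and preference function below are hypothetical, not Surfer's actual information-provider schema:

      # Toy resource selection/ranking: filter on hard constraints, rank by preference.
      resources = [
          {"name": "hpc1", "cpus": 512, "mem_gb": 2048, "load": 0.7},
          {"name": "hpc2", "cpus": 128, "mem_gb": 512, "load": 0.2},
          {"name": "lab3", "cpus": 16, "mem_gb": 64, "load": 0.1},
      ]
      constraints = lambda r: r["cpus"] >= 64 and r["mem_gb"] >= 256   # hard requirements
      preference = lambda r: r["load"]                                 # prefer idle systems

      ranked = sorted(filter(constraints, resources), key=preference)
      print([r["name"] for r in ranked])    # ['hpc2', 'hpc1']; lab3 fails the constraints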

  5. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    NASA Astrophysics Data System (ADS)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has widely evolved over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products; rather, it comprises a set of capabilities virtually within any kind of software, used to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active application field of grid computing is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using industrial, widely adopted standards.

  6. Extensible Computational Chemistry Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-08-09

    ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework, enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high-performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem-solving environment for all phases of computational chemistry research: setting up calculations with a sophisticated GUI and direct-manipulation visualization tools, submitting and monitoring calculations on remote high-performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis, including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

  7. Satellite on-board processing for earth resources data

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.

    1975-01-01

    Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and for data bulk reduction are presented, along with a candidate data format. The computational requirements for implementing the data analysis algorithms are included, along with a review of computer architectures and organizations. Computer architectures capable of handling the algorithm computational requirements are suggested, and the environmental effects of an on-board processor are discussed. By relating performance parameters to the system requirements of each user, the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.

  8. Water, Energy, and Food Nexus: Modeling of Inter-Basin Resources Trading

    NASA Astrophysics Data System (ADS)

    KIm, T. W.; Kang, D.; Wicaksono, A.; Jeong, G.; Jang, B. J.; Ahn, J.

    2016-12-01

    The water, energy, and food (WEF) nexus is an emerging issue in the concern of fulfilling human requirements with a lack of available resources. The WEF nexus concept arises from the need to develop sustainable resources planning and management. In the concept, the three valuable resources (i.e. water, energy, and food) are inevitably interconnected, and it is therefore a challenge for researchers to understand the complicated interdependency. A few studies have been devoted to interpreting and implementing the WEF nexus using a computer-based simulation model. Some of them note that trading is one alternative that can be taken to secure the available resources. Taking the concept of inter-basin water transfer, this study introduces an idea for developing a WEF nexus model for inter-basin resources trading simulation. Using the trading option among regions (e.g., cities, basins, or even countries), the model provides an opportunity to increase overall resources availability without draining local resources. The proposed model adopts the calculation process for amounts of water, energy, and food from a nation-wide model, with additional input and analysis steps to simulate resources trading between regions. The proposed model is applied to a hypothetical test area in South Korea for demonstration purposes. It is anticipated that the developed model can be a decision tool for efficient resources allocation for sustainable resources management. Acknowledgements: This study was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
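
    A minimal version of the trading idea can be posed as a linear program: choose an inter-basin transfer that minimizes the receiving basin's shortage, subject to a conveyance capacity. The numbers below are illustrative, not the paper's Korean test-case data:

      # Toy inter-basin water trade: basin A has surplus, basin B a deficit.
      from scipy.optimize import linprog

      supply = {"A": 120.0, "B": 60.0}     # available water (Mm^3)
      demand = {"A": 80.0, "B": 100.0}
      capacity = 50.0                      # max inter-basin transfer (Mm^3)

      # Variables: [transfer t, shortage s in B]; minimize s.
      # B's balance: supply_B + t + s >= demand_B  <=>  -t - s <= supply_B - demand_B
      res = linprog(
          c=[0.0, 1.0],
          A_ub=[[-1.0, -1.0],              # B's water balance (see above)
                [1.0, 0.0]],               # t limited by capacity and A's surplus
          b_ub=[supply["B"] - demand["B"],
                min(capacity, supply["A"] - demand["A"])],
          bounds=[(0, None), (0, None)],
      )
      t, shortage = res.x
      print(f"transfer {t:.1f} Mm^3 from A to B; remaining shortage {shortage:.1f}")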

  9. Cloud Computing. Technology Briefing. Number 1

    ERIC Educational Resources Information Center

    Alberta Education, 2013

    2013-01-01

    Cloud computing is Internet-based computing in which shared resources, software and information are delivered as a service that computers or mobile devices can access on demand. Cloud computing is already used extensively in education. Free or low-cost cloud-based services are used daily by learners and educators to support learning, social…

  10. Calculation of coal resources using ARC/INFO and Earth Vision; methodology for the National Coal Resource Assessment

    USGS Publications Warehouse

    Roberts, L.N.; Biewick, L.R.

    1999-01-01

    This report documents a comparison of two methods of resource calculation that are being used in the National Coal Resource Assessment project of the U.S. Geological Survey (USGS). Tewalt (1998) discusses the history of using computer software packages such as GARNET (Graphic Analysis of Resources using Numerical Evaluation Techniques), GRASS (Geographic Resource Analysis Support System), and the vector-based geographic information system (GIS) ARC/INFO (ESRI, 1998) to calculate coal resources within the USGS. The study discussed here compares resource calculations using ARC/INFO (ESRI, 1998) and EarthVision (EV) (Dynamic Graphics, Inc., 1997) for the coal-bearing John Henry Member of the Straight Cliffs Formation of Late Cretaceous age in the Kaiparowits Plateau of southern Utah. Coal resource estimates in the Kaiparowits Plateau using ARC/INFO are reported in Hettinger and others (1996).

  11. Computer-Based Resource Accounting Model for Generating Aggregate Resource Impacts of Alternative Automobile Technologies : Volume 1. Fleet Attributes Model

    DOT National Transportation Integrated Search

    1977-01-01

    Auto production and operation consume energy, material, capital and labor resources. Numerous substitution possibilities exist within and between resource sectors, corresponding to the broad spectrum of potential design technologies. Alternative auto...

  12. Cloud computing basics for librarians.

    PubMed

    Hoy, Matthew B

    2012-01-01

    "Cloud computing" is the name for the recent trend of moving software and computing resources to an online, shared-service model. This article briefly defines cloud computing, discusses different models, explores the advantages and disadvantages, and describes some of the ways cloud computing can be used in libraries. Examples of cloud services are included at the end of the article. Copyright © Taylor & Francis Group, LLC

  13. Understanding the Performance and Potential of Cloud Computing for Scientific Applications

    DOE PAGES

    Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin; ...

    2015-02-19

    Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources, however not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud Computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in context to price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.
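
    A probe in the spirit of the paper's storage benchmarks might time a single large PUT/GET against S3. This sketch assumes configured AWS credentials and an existing bucket (the bucket name is a placeholder), and it measures one transfer rather than a full evaluation:

      # Minimal S3 throughput probe; illustrative, not the paper's benchmark suite.
      import os, time
      import boto3

      BUCKET = "my-benchmark-bucket"        # placeholder: replace with a real bucket
      KEY, SIZE = "probe.bin", 64 * 1024 * 1024

      with open("probe.bin", "wb") as f:    # 64 MiB of zeros as the test object
          f.write(b"\0" * SIZE)

      s3 = boto3.client("s3")
      t0 = time.time()
      s3.upload_file("probe.bin", BUCKET, KEY)
      up = SIZE / (time.time() - t0) / 1e6
      t0 = time.time()
      s3.download_file(BUCKET, KEY, "probe_down.bin")
      down = SIZE / (time.time() - t0) / 1e6
      print(f"S3 PUT ~{up:.1f} MB/s, GET ~{down:.1f} MB/s")
      os.remove("probe.bin"); os.remove("probe_down.bin")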

  14. Understanding the Performance and Potential of Cloud Computing for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin

    Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources, however not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud Computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in context to price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.

  15. Outlook Bright for Computers in Chemistry.

    ERIC Educational Resources Information Center

    Baum, Rudy M.

    1981-01-01

    Discusses the recent decision to close down the National Resource for Computation in Chemistry (NRCC), implications of that decision, and various alternatives in the field of computational chemistry. (CS)

  16. Decentralized Resource Management in Distributed Computer Systems.

    DTIC Science & Technology

    1982-02-01

    directly exchanging user state information. Eventcounts and sequencers correspond to semaphores in the sense that synchronization primitives are used to...and techniques are required to achieve synchronization in distributed computers without reliance on any centralized entity such as a semaphore ...known solutions to the access synchronization problem was Dijkstra’s semaphore [12]. The importance of the semaphore is that it correctly addresses the

  17. An improved resource management model based on MDS

    NASA Astrophysics Data System (ADS)

    Yuan, Man; Sun, Changying; Li, Pengfei; Sun, Yongdong; He, Rui

    2005-11-01

    GRID technology provides a convenient method for managing GRID resources through the so-called Monitoring and Discovery Service (MDS), proposed by the Globus Alliance. In this GRID environment, all kinds of resources, such as computational resources, storage resources and others, can be organized according to MDS specifications. However, MDS is a theoretical framework, and in a small intranet with limited resources it has its own limitations. Based on MDS, an improved lightweight method (IMDS) for managing corporate computational and storage resources in an intranet is proposed. First, in MDS, resource description information is stored in LDAP. Although LDAP is a lightweight directory access protocol, in practice programmers rarely master how to access and store resource information in an LDAP store, which limits the adoption of MDS. In an intranet, resource descriptions can instead be stored in an RDBMS, which programmers and users can access with standard SQL. Second, how MDS monitors the various resources in a GRID is not transparent to programmers and users, which limits its application scope. Resource monitoring based on SNMP is widely employed in intranets, so an SNMP-based resource monitoring method is integrated into MDS. Finally, all kinds of resources in the intranet are described in XML, their description information is stored in an RDBMS such as MySQL and retrieved with standard SQL, and dynamic information for the resources is sent to the resource store via SNMP. A prototype for resource description and monitoring is designed and implemented in an intranet.
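
    The proposed substitution, an SQL store for resource descriptions refreshed by SNMP polling, can be sketched as below; the table layout and the poller stub are illustrative assumptions (a real poller would issue SNMP GETs, e.g. via pysnmp):

      # Sketch of an RDBMS-backed resource catalogue in place of LDAP.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE resources (
          host TEXT PRIMARY KEY, kind TEXT,    -- 'compute' or 'storage'
          cpus INTEGER, free_disk_gb REAL, load REAL)""")

      def snmp_poll(host):
          # Stand-in for a real SNMP GET of load/disk OIDs.
          return {"load": 0.42, "free_disk_gb": 812.5}

      def register(host, kind, cpus):
          m = snmp_poll(host)
          db.execute("INSERT OR REPLACE INTO resources VALUES (?,?,?,?,?)",
                     (host, kind, cpus, m["free_disk_gb"], m["load"]))

      register("node01.intranet", "compute", 16)
      register("nas01.intranet", "storage", 4)

      # Standard SQL retrieval, the accessibility argument made in the abstract.
      for row in db.execute("SELECT host, load FROM resources WHERE kind='compute'"):
          print(row)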

  18. Integrating Computers into Michigan Education.

    ERIC Educational Resources Information Center

    Lentz, Linda P.

    Computer use in Michigan schools has evolved in three stages over the past decade. In the first, computers were new and few, and professional development was typically self-initiated. The Michigan Association of Computer Users in Learning (MACUL) was formed at this time to provide resources to local districts which they were unable to provide…

  19. Computer Operating System Maintenance.

    DTIC Science & Technology

    1982-06-01

    FACILITY The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  20. CTserver: A Computational Thermodynamics Server for the Geoscience Community

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2006-12-01

    The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser- based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed
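
    Client access via remote procedure calls can be illustrated compactly. For brevity this sketch substitutes Python's standard xmlrpc for CORBA, and the method name and thermodynamic routine are hypothetical placeholders, not the real CTserver interface:

      # RPC-style access sketch (xmlrpc standing in for the CORBA interface).
      from xmlrpc.server import SimpleXMLRPCServer
      import threading, xmlrpc.client

      def gibbs_energy(phase, T, P):
          # Placeholder routine; a real server would call the tested
          # phase-property library described in the abstract.
          return -1_000_000.0 + 250.0 * T + 1.0e-5 * P

      server = SimpleXMLRPCServer(("localhost", 8765), logRequests=False)
      server.register_function(gibbs_energy)
      threading.Thread(target=server.serve_forever, daemon=True).start()

      client = xmlrpc.client.ServerProxy("http://localhost:8765")
      print("G(forsterite, 1673 K, 1 GPa) =",
            client.gibbs_energy("forsterite", 1673.0, 1.0e9))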

  1. Make It and Take It: Computer-Based Resources for Lesson Planning.

    ERIC Educational Resources Information Center

    Brown, Tasha; Cargill, Debby; Hostetler, Jan; Joyner, Susan; Phillips, Vanessa

    This document is part lesson planner and idea resource and part annotated bibliography of electronic resources. The lesson planner is divided into four parts. Part one, "Tables to Go," contains different tables that can be used for a variety of exercises at all levels of the English-as-a-Second-Language (ESL) classroom. Part two, "Exploring the…

  2. Controlling user access to electronic resources without password

    DOEpatents

    Smith, Fred Hewitt

    2017-08-22

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes obtaining an image from a communication device of a user. An individual and a landmark are identified within the image. Determinations are made that the individual is the user and that the landmark is a predetermined landmark. Access to a restricted computing resource is granted based on the determining that the individual is the user and that the landmark is the predetermined landmark. Other embodiments are disclosed.
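
    The claimed decision reduces to a conjunction of two image checks. A schematic sketch follows; the recognition helpers are hypothetical placeholders, since the patent does not commit to particular algorithms.

```python
def grant_access(image, enrolled_user, expected_landmark,
                 identify_person, identify_landmark):
    """Grant access only if the image shows the enrolled user AND the
    predetermined landmark (both recognizers are placeholders)."""
    person_ok = identify_person(image) == enrolled_user
    landmark_ok = identify_landmark(image) == expected_landmark
    return person_ok and landmark_ok

# Usage with stub recognizers, for illustration only:
decision = grant_access(
    image=b"...jpeg bytes...",
    enrolled_user="alice",
    expected_landmark="office_doorway",
    identify_person=lambda img: "alice",
    identify_landmark=lambda img: "office_doorway",
)
print("access granted" if decision else "access denied")
```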

  3. Water resources scientific information center

    USGS Publications Warehouse

    Cardin, C. William; Campbell, J.T.

    1986-01-01

    The Water Resources Scientific Information Center (WRSIC) acquires, abstracts and indexes the major water resources related literature of the world, and makes information available to the water resources community and the public. A component of the Water Resources Division of the US Geological Survey, the Center maintains a searchable computerized bibliographic data base, and publishers a monthly journal of abstracts. Through its services, the Center is able to provide reliable scientific and technical information about the most recent water resources developments, as well as long-term trends and changes. WRSIC was established in 1966 by the Secretary of the Interior to further the objectives of the Water Resources Research Act of 1964--legislation that encouraged research in water resources and the prevention of needless duplication of research efforts. It was determined the WRSIC should be the national center for information on water resources, covering research reports, scientific journals, and other water resources literature of the world. WRSIC would evaluate all water resources literature, catalog selected articles, and make the information available in publications or by computer access. In this way WRSIC would increase the availability and awareness of water related scientific and technical information. (Lantz-PTT)

  4. Resource Economics

    NASA Astrophysics Data System (ADS)

    Conrad, Jon M.

    1999-10-01

    Resource Economics is a text for students with a background in calculus, intermediate microeconomics, and a familiarity with the spreadsheet software Excel. The book covers basic concepts, shows how to set up spreadsheets to solve dynamic allocation problems, and presents economic models for fisheries, forestry, nonrenewable resources, stock pollutants, option value, and sustainable development. Within the text, numerical examples are posed and solved using Excel's Solver. Through these examples and additional exercises at the end of each chapter, students can make dynamic models operational, develop their economic intuition, and learn how to set up spreadsheets for the simulation and optimization of resource and environmental systems.
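
    The kind of dynamic allocation exercise the book sets up in Excel's Solver can be sketched just as well in Python; below is a minimal two-period nonrenewable-resource problem (all numbers invented for illustration) solved with scipy.optimize.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative two-period extraction problem (numbers invented):
# maximize ln(q1) + ln(q2)/(1 + delta) subject to q1 + q2 <= R.
R, delta = 100.0, 0.05

objective = lambda q: -(np.log(q[0]) + np.log(q[1]) / (1 + delta))
stock_left = {"type": "ineq", "fun": lambda q: R - q[0] - q[1]}

res = minimize(objective, x0=[50.0, 50.0],
               bounds=[(1e-6, R), (1e-6, R)], constraints=[stock_left])
print(res.x)  # roughly [51.2, 48.8]: extract slightly more in period 1
```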

  5. Current Issues for Higher Education Information Resources Management.

    ERIC Educational Resources Information Center

    CAUSE/EFFECT, 1996

    1996-01-01

    Issues identified as important to the future of information resources management and use in higher education include information policy in a networked environment, distributed computing, integrating information resources and college planning, benchmarking information technology, integrated digital libraries, technology integration in teaching,…

  6. The national coal-resources data system of the U.S. geological survey

    USGS Publications Warehouse

    Carter, M.D.

    1976-01-01

    The National Coal Resources Data System (NCRDS) was designed by the U.S. Geological Survey (USGS) to meet the increasing demands for rapid retrieval of information on coal location, quantity, quality, and accessibility. An interactive conversational query system devised by the USGS retrieves information from the data bank through a standard computer terminal. The system is being developed in two phases. Phase I, which currently is available on a limited basis, contains published areal resource and chemical data. The primary objective of this phase is to retrieve, calculate, and tabulate coal-resource data by area on a local, regional, or national scale. Factors available for retrieval include: state, county, quadrangle, township, coal field, coal bed, formation, geologic age, source and reliability of data, and coal-bed rank, thickness, overburden, and tonnage, or any combinations of variables. In addition, the chemical data items include individual values for proximate and ultimate analyses, BTU value, and several other physical and chemical tests. Information will be validated and deleted or updated as needed. Phase II is being developed to store, retrieve, and manipulate basic point source coal data (e.g., field observations, drill-hole logs), including geodetic location; bed thickness; depth of burial; moisture; ash; sulfur; major-, minor-, and trace-element content; heat value; and characteristics of overburden, roof rocks, and floor rocks. The computer system may be used to generate interactively structure-contour or isoline maps of the physical and chemical characteristics of a coal bed or to calculate coal resources. © 1976.
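
    The Phase I retrieval operations described (tabulating resources by area and filtering on bed attributes) map naturally onto SQL. The schema and data below are a hypothetical reconstruction for illustration only, not NCRDS's actual data model.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE coal_beds (          -- hypothetical schema
        state TEXT, county TEXT, bed TEXT, rank TEXT,
        thickness_ft REAL, overburden_ft REAL,
        tonnage_mt REAL, btu_per_lb REAL
    );
    INSERT INTO coal_beds VALUES
        ('WV', 'Boone',    'Coalburg', 'bituminous',     4.5, 300, 120.0, 12500),
        ('WV', 'Logan',    'Stockton', 'bituminous',     3.2, 450,  80.0, 12900),
        ('WY', 'Campbell', 'Wyodak',   'subbituminous', 70.0, 200, 950.0,  8300);
""")

# Phase-I style query: tabulate tonnage by state for beds thicker than 3 ft.
for state, total in conn.execute("""
    SELECT state, SUM(tonnage_mt) FROM coal_beds
    WHERE thickness_ft > 3.0 GROUP BY state
"""):
    print(state, total)
```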

  7. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

  8. Resources monitoring and automatic management system for multi-VO distributed computing system

    NASA Astrophysics Data System (ADS)

    Chen, J.; Pelevanyuk, I.; Sun, Y.; Zhemchugov, A.; Yan, T.; Zhao, X. H.; Zhang, X. M.

    2017-10-01

    Multi-VO support based on DIRAC has been set up to provide workload and data management for several high energy experiments at IHEP. To monitor and manage the heterogeneous resources belonging to different Virtual Organizations in a uniform way, this paper presents a resource monitoring and automatic management system based on the Resource Status System (RSS) of DIRAC. The system is composed of three parts: information collection, status decision and automatic control, and information display. Information collection includes active and passive ways of gathering status from different sources and storing it in databases. Status decision and automatic control evaluates resource status and takes control actions on resources automatically through pre-defined policies and actions. The monitoring information is displayed on a web portal, from which both real-time and historical information can be obtained. All the implementations are based on the DIRAC framework. The information and control, including sites, policies, and web portals for different VOs, can be well defined and distinguished within the DIRAC user and group management infrastructure.
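
    The decide-then-act cycle outlined above (pre-defined policies evaluated against collected status, followed by automatic control actions) can be sketched generically; the policy predicates and action names below are invented for illustration and do not reflect DIRAC's actual RSS interfaces.

```python
# Hypothetical policy table: a predicate over collected metrics
# mapped to the name of a control action.
POLICIES = [
    (lambda m: m["failure_rate"] > 0.5, "ban_site"),
    (lambda m: m["failure_rate"] < 0.1, "unban_site"),
]

def evaluate(site, metrics, actions):
    """Apply the first matching policy to a site (illustrative only)."""
    for predicate, action_name in POLICIES:
        if predicate(metrics):
            actions[action_name](site)
            return action_name
    return "no_action"

actions = {
    "ban_site":   lambda s: print(f"banning {s}"),
    "unban_site": lambda s: print(f"unbanning {s}"),
}
evaluate("SITE.Example.org", {"failure_rate": 0.7}, actions)
```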

  9. Consolidation of cloud computing in ATLAS

    NASA Astrophysics Data System (ADS)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  10. A Formative Analysis of Resources Used to Learn Software

    ERIC Educational Resources Information Center

    Kay, Robin

    2007-01-01

    A comprehensive, formal comparison of resources used to learn computer software has yet to be researched. Understanding the relative strengths and weakness of resources would provide useful guidance to teachers and students. The purpose of the current study was to explore the effectiveness of seven key resources: human assistance, the manual, the…

  11. Dynamic Extension of a Virtualized Cluster by using Cloud Resources

    NASA Astrophysics Data System (ADS)

    Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter

    2012-12-01

    The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.
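
    The elasticity decision at the core of such a setup is simple to state: when the local queue outgrows the locally virtualized slots, boot extra virtual worker nodes in the cloud and attach them to the batch system. A schematic sketch follows, with the cloud and batch handles left as hypothetical placeholders rather than ViBatch's or OpenNebula's real interfaces.

```python
def scale_cluster(queued_jobs, local_slots, cloud, batch,
                  max_cloud_nodes=10, slots_per_node=4):
    """Start cloud worker VMs when the local queue outgrows local slots.
    'cloud' and 'batch' are hypothetical handles, not real APIs."""
    deficit = queued_jobs - local_slots
    if deficit <= 0:
        return 0
    # Ceiling division: how many extra nodes cover the deficit.
    nodes_needed = min(-(-deficit // slots_per_node), max_cloud_nodes)
    for _ in range(nodes_needed):
        vm = cloud.start_worker_vm(image="worker-node")  # hypothetical
        batch.attach_node(vm.address)                    # hypothetical
    return nodes_needed
```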

  12. LHC Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.

  13. NASA CORE (Central Operation of Resources for Educators) Educational Materials Catalog

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This educational materials catalog presents NASA CORE (Central Operation of Resources for Educators). The topics include: 1) Videocassettes (Aeronautics, Earth Resources, Weather, Space Exploration/Satellites, Life Sciences, Careers); 2) Slide Programs; 3) Computer Materials; 4) NASA Memorabilia/Miscellaneous; 5) NASA Educator Resource Centers; and 6) NASA Resources.

  14. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, fault-tolerant quantum computation requires impractically large computational resources for useful applications, which is currently a major obstacle to the realization of a quantum computer. In particular, magic state distillation, the standard approach to universality, consumes the most resources in fault-tolerant quantum computation. To address this resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than in previous approaches based on distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  15. Synthetic analog computation in living cells.

    PubMed

    Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K

    2013-05-30

    A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
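
    The log-domain trick exploited here can be stated compactly: if each sensor's output is approximately logarithmic in its input over a wide dynamic range, then simple arithmetic on those outputs computes products, ratios and power laws of the inputs. A generic sketch (the constants are illustrative, not the paper's fitted parameters):

```latex
% If each sensor responds log-linearly to its input over a wide range,
%     y_i \approx a \ln x_i + b ,
% then simple signal arithmetic in the log domain yields:
\begin{align*}
  y_1 + y_2    &\approx a \ln (x_1 x_2) + 2b
               && \text{(sum computes a product)} \\
  y_1 - y_2    &\approx a \ln (x_1 / x_2)
               && \text{(ratiometric computation)} \\
  \alpha\, y_1 &\approx a \ln\!\bigl(x_1^{\alpha}\bigr) + \alpha b
               && \text{(power-law computation)}
\end{align*}
```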

  16. Effects of a Structured Resource-Based Web Issue-Quest Approach on Students' Learning Performances in Computer Programming Courses

    ERIC Educational Resources Information Center

    Hsu, Ting-Chia; Hwang, Gwo-Jen

    2017-01-01

    Programming concepts are important and challenging to novices who are beginning to study computer programming skills. In addition to the textbook content, students usually learn the concepts of programming from the web; however, it could be difficult for novice learners to effectively derive helpful information from such non-structured open…

  17. Infrastructure for Training and Partnerships: California Water and Coastal Ocean Resources

    NASA Technical Reports Server (NTRS)

    Siegel, David A.; Dozier, Jeffrey; Gautier, Catherine; Davis, Frank; Dickey, Tommy; Dunne, Thomas; Frew, James; Keller, Arturo; MacIntyre, Sally; Melack, John

    2000-01-01

    The purpose of this project was to advance the existing ICESS/Bren School computing infrastructure to allow scientists, students, and research trainees the opportunity to interact with environmental data and simulations in near-real time. Improvements made with the funding from this project have helped to strengthen the research efforts within both units, fostered graduate research training, and helped fortify partnerships with government and industry. With this funding, we were able to expand our computational environment in which computer resources, software, and data sets are shared by ICESS/Bren School faculty researchers in all areas of Earth system science. All of the graduate and undergraduate students associated with the Donald Bren School of Environmental Science and Management and the Institute for Computational Earth System Science have benefited from the infrastructure upgrades accomplished by this project. Additionally, the upgrades fostered a significant number of research projects (attached is a list of the projects that benefited from the upgrades). As originally proposed, funding for this project provided the following infrastructure upgrades: 1) a modern file management system capable of interoperating UNIX and NT file systems that can scale to 6.7 TB, 2) a Qualstar 40-slot tape library with two AIT tape drives and Legato Networker backup/archive software, 3) previously unavailable import/export capability for data sets on Zip, Jaz, DAT, 8mm, CD, and DLT media in addition to a 622Mb/s Internet 2 connection, 4) network switches capable of 100 Mbps to 128 desktop workstations, 5) a Portable Batch System (PBS) computational task scheduler, and 6) two Compaq/Digital Alpha XP1000 compute servers each with 1.5 GB of RAM, along with an SGI Origin 2000 (purchased partially using funds from this project along with funding from various other sources) to be used for very large computations, as required for simulation of mesoscale meteorology or climate.

  18. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
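
    The computational independence the authors exploit is easy to see in outline: each permutation iteration touches only shared read-only data and its own random seed, so iterations can be farmed out to cluster or cloud workers. A generic sketch with synthetic data (not the paper's actual microarray pipeline):

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, 50)   # synthetic expression values
group_b = rng.normal(0.3, 1.0, 50)
observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])

def one_permutation(seed):
    """One independent iteration: shuffle labels, recompute the statistic."""
    r = np.random.default_rng(seed)
    shuffled = r.permutation(pooled)
    return shuffled[:50].mean() - shuffled[50:].mean()

if __name__ == "__main__":
    # Each iteration is independent, so the map parallelizes trivially;
    # a cluster or cloud node would run the same function.
    with Pool() as pool:
        null = pool.map(one_permutation, range(10_000))
    p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
    print(f"permutation p-value: {p:.4f}")
```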

  19. Mediagraphy: Print and Nonprint Resources.

    ERIC Educational Resources Information Center

    Educational Media and Technology Yearbook, 1998

    1998-01-01

    Lists educational media-related journals, books, ERIC documents, journal articles, and nonprint resources classified by Artificial Intelligence, Robotics, Electronic Performance Support Systems; Computer-Assisted Instruction; Distance Education; Educational Research; Educational Technology; Electronic Publishing; Information Science and…

  20. Renewable Fuel Standard (RFS2): Final Rule Additional Resources

    EPA Pesticide Factsheets

    The final rule for fuels and fuel additives under the Renewable Fuel Standard program was published on March 26, 2010 and became effective on July 1, 2010. Links are provided to this final rule and to the technical amendments supporting it.

  1. Quantum computation with realistic magic-state factories

    NASA Astrophysics Data System (ADS)

    O'Gorman, Joe; Campbell, Earl T.

    2017-03-01

    Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.

  2. 76 FR 22680 - Procurement List; Proposed Addition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-22

    ... (41 U.S.C. 46- 48c) in connection with the service proposed for addition to the Procurement List... Type/Location: Contact Center Service. Human Resources Command Contact Center, Fort Knox, KY. NPAs...). Contracting Activity: Department of the Army, Human Resource Command, Fort Knox, KY. Barry S. Lineback...

  3. Capitalizing upon Rural Resources.

    ERIC Educational Resources Information Center

    Worth, Charles E.

    1987-01-01

    Offers three examples of how low cost and innovative methods can be planned to bring expertise and resources to rural school districts: hosting a state/regional educational conference, arranging local evening classes from nearby small colleges/universities, and building/maintaining working contacts with computer software representatives. (NEC)

  4. Avoiding Split Attention in Computer-Based Testing: Is Neglecting Additional Information Facilitative?

    ERIC Educational Resources Information Center

    Jarodzka, Halszka; Janssen, Noortje; Kirschner, Paul A.; Erkens, Gijsbert

    2015-01-01

    This study investigated whether design guidelines for computer-based learning can be applied to computer-based testing (CBT). Twenty-two students completed a CBT exam with half of the questions presented in a split-screen format that was analogous to the original paper-and-pencil version and half in an integrated format. Results show that students…

  5. Novel schemes for measurement-based quantum computation.

    PubMed

    Gross, D; Eisert, J

    2007-06-01

    We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.

  6. Informatics for RNA Sequencing: A Web Resource for Analysis on the Cloud

    PubMed Central

    Griffith, Malachi; Walker, Jason R.; Spies, Nicholas C.; Ainscough, Benjamin J.; Griffith, Obi L.

    2015-01-01

    Massively parallel RNA sequencing (RNA-seq) has rapidly become the assay of choice for interrogating RNA transcript abundance and diversity. This article provides a detailed introduction to fundamental RNA-seq molecular biology and informatics concepts. We make available open-access RNA-seq tutorials that cover cloud computing, tool installation, relevant file formats, reference genomes, transcriptome annotations, quality-control strategies, expression, differential expression, and alternative splicing analysis methods. These tutorials and additional training resources are accompanied by complete analysis pipelines and test datasets made available without encumbrance at www.rnaseq.wiki. PMID:26248053

  7. Computational Complexity and Human Decision-Making.

    PubMed

    Bossaerts, Peter; Murawski, Carsten

    2017-12-01

    The rationality principle postulates that decision-makers always choose the best action available to them. It underlies most modern theories of decision-making. The principle does not take into account the difficulty of finding the best option. Here, we propose that computational complexity theory (CCT) provides a framework for defining and quantifying the difficulty of decisions. We review evidence showing that human decision-making is affected by computational complexity. Building on this evidence, we argue that most models of decision-making, and metacognition, are intractable from a computational perspective. To be plausible, future theories of decision-making will need to take into account both the resources required for implementing the computations implied by the theory, and the resource constraints imposed on the decision-maker by biology. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Semantic computing and language knowledge bases

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Wang, Houfeng; Yu, Shiwen

    2017-09-01

    With the Semantic Web proposed as the next-generation Web, semantic computing has been drawing more and more attention in both academia and industry. A lot of research has been conducted on the theory and methodology of the subject, and potential applications have also been investigated and proposed in many fields. The progress made so far in semantic computing cannot be detached from its supporting pivot: language resources, for instance, language knowledge bases. This paper proposes three perspectives of semantic computing from a macro view and describes the current state of affairs in the construction of language knowledge bases and the related research and applications carried out on the basis of these resources, via a case study in the Institute of Computational Linguistics at Peking University.

  9. One-way quantum computing in superconducting circuits

    NASA Astrophysics Data System (ADS)

    Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.

    2018-03-01

    We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.
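
    The initial resource referred to here, a cluster state, is generated exactly as the abstract says: prepare every qubit in |+> and apply controlled-phase gates between neighbors. A minimal linear-cluster sketch of the circuit in Qiskit (the abstract circuit only, not the superconducting-hardware implementation):

```python
from qiskit import QuantumCircuit

n = 5                      # linear cluster of 5 qubits
qc = QuantumCircuit(n)
qc.h(range(n))             # prepare |+> on every qubit
for i in range(n - 1):
    qc.cz(i, i + 1)        # controlled-phase between nearest neighbors
print(qc.draw())
```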

  10. Workflow Management Systems for Molecular Dynamics on Leadership Computers

    NASA Astrophysics Data System (ADS)

    Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu

    Molecular Dynamics (MD) simulations play an important role in a range of disciplines from materials science to biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production-level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.

  11. Sustaining flows of crucial watershed resources

    Treesearch

    J. E. de Steiguer

    2000-01-01

    Watersheds are the source of a number of resources which are of benefit to society. These resources include water, timber, grazing, recreation, wildlife and others, often described as multiple-use resources. In addition, however, watersheds also produce a number of less tangible resources and uses, which are also socially important. These include amenity, option values...

  12. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to use these local resources together to solve larger geospatial information processing problems related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a construct termed the Equivalent Distributed Program of a global geospatial query to solve geospatial distributed computing problems in heterogeneous GIS environments. First, we present a geospatial query process schema for distributed computing, together with a method for equivalent transformation from a global geospatial query to distributed local queries at the SQL (Structured Query Language) level, to solve the coordination problem among heterogeneous resources. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment consisting of autonomous geospatial information resources, to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
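
    The core idea of an Equivalent Distributed Program, rewriting one global query as identical or specialized local SQL queries whose partial results a coordinator merges, can be sketched as follows; the schema, data and merge step are illustrative assumptions.

```python
import sqlite3

def make_peer(features):
    """Each peer holds an autonomous local table (illustrative schema)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE features (name TEXT, kind TEXT, area REAL)")
    db.executemany("INSERT INTO features VALUES (?, ?, ?)", features)
    return db

peers = [
    make_peer([("lake_a", "lake", 12.0), ("road_1", "road", 0.0)]),
    make_peer([("lake_b", "lake", 7.5)]),
]

# The global query "all lakes larger than 5" becomes the same local SQL
# on every peer; the coordinator merges the partial results.
local_sql = "SELECT name, area FROM features WHERE kind='lake' AND area > 5"
merged = [row for db in peers for row in db.execute(local_sql)]
print(merged)
```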

  13. Development of a web application for water resources based on open source software

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.

    2014-01-01

    This article presents research and development of a prototype web application for water resources using the latest advancements in Information and Communication Technologies (ICT), open source software and web GIS. The web application has three web services for: (1) managing, presenting and storing geospatial data, (2) supporting water resources modeling and (3) water resources optimization. The web application is developed using several programming languages (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, JQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available all the time, it is accessible from everywhere, it creates a real-time multi-user collaboration platform, the programming language code and components are interoperable and designed to work in a distributed computer environment, it is flexible in allowing additional components and services to be added, and it is scalable with the workload. The application was successfully tested on a case study with concurrent multi-user access.

  14. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  15. Mediagraphy: Print and Nonprint Resources.

    ERIC Educational Resources Information Center

    Educational Media and Technology Yearbook, 1999

    1999-01-01

    Provides annotated listings for current journals, books, ERIC documents, articles, and nonprint resources in the following categories: artificial intelligence/robotics/electronic performance support systems; computer-assisted instruction; distance education; educational research; educational technology; information science and technology;…

  16. Atmospheric numerical modeling resource enhancement and model convective parameterization/scale interaction studies

    NASA Technical Reports Server (NTRS)

    Cushman, Paula P.

    1993-01-01

    Research under this contract will be undertaken in the area of Modeling Resource and Facilities Enhancement, to include computer, technical and educational support to NASA investigators to facilitate model implementation, execution and analysis of output; to provide facilities linking USRA and the NASA/EADS Computer System as well as resident workstations in ESAD; and to provide a centralized location for documentation, archival and dissemination of modeling information pertaining to NASA's program. Additional research will be undertaken in the area of Numerical Model Scale Interaction/Convective Parameterization Studies, to include comparison of cloud and rain systems and convective-scale processes between the model simulations and observations, and to incorporate these and related research findings in at least two refereed journal articles.

  17. LINCS: Livermore's network architecture. [Octopus computing network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.

    1982-01-01

    Octopus, a local computing network that has been evolving at the Lawrence Livermore National Laboratory for over fifteen years, is currently undergoing a major revision. The primary purpose of the revision is to consolidate and redefine the variety of conventions and formats, which have grown up over the years, into a single standard family of protocols, the Livermore Interactive Network Communication Standard (LINCS). This standard treats the entire network as a single distributed operating system such that access to a computing resource is obtained in a single way, whether that resource is local (on the same computer as the accessing process) or remote (on another computer). LINCS encompasses not only communication but also such issues as the relationship of customer to server processes and the structure, naming, and protection of resources. The discussion includes: an overview of the Livermore user community and computing hardware, the functions and structure of each of the seven layers of LINCS protocol, the reasons why we have designed our own protocols and why we are dissatisfied with the directions that current protocol standards are taking.

  18. Galaxy CloudMan: delivering cloud compute clusters.

    PubMed

    Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James

    2010-12-21

    Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
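
    CloudMan drives provisioning through its web interface; purely to illustrate the underlying step of composing a compute cluster on EC2, here is how worker instances might be started programmatically with boto3. The AMI ID and instance type are placeholders, and this is not CloudMan's API.

```python
import boto3

ec2 = boto3.resource("ec2")   # credentials/region taken from the environment

# Placeholder AMI and instance type; a real cluster image would bundle
# the analysis tools, as the Bio-Linux image does for CloudMan.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical machine image
    MinCount=1,
    MaxCount=4,                        # size of the worker pool
    InstanceType="t3.large",
)
for inst in instances:
    print("launched", inst.id)
```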

  19. Disposal of waste computer hard disk drive: data destruction and resources recycling.

    PubMed

    Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming

    2013-06-01

    An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information that is closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study the degaussing method was adopted to destroy the residual data on the waste hard disk drives, and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operating points determined for degaussing were: (1) keep the platter plate parallel with the magnetic field direction; and (2) increasing the magnetic field intensity B and the action time t significantly improves the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drive housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.

  20. Utilitarianism and the disabled: distribution of resources.

    PubMed

    Stein, Mark S

    2002-02-01

    Utilitarianism is more convincing than resource egalitarianism or welfare egalitarianism as a theory of how resources should be distributed between disabled people and nondisabled people. Unlike resource egalitarianism, utilitarianism can redistribute resources to the disabled when they would benefit more from those resources than nondisabled people. Unlike welfare egalitarianism, utilitarianism can halt redistribution when the disabled would no longer benefit more than the nondisabled from additional resources. The author considers one objection to this view: it has been argued, by Sen and others, that there are circumstances under which utilitarianism would unfairly distribute fewer resources to the physically disabled than to nondisabled people, on the ground that the disabled would derive less benefit from those resources. In response, the author claims that critics of utilitarianism have fallaciously exaggerated the circumstances under which the disabled would benefit less than the nondisabled from additional resources. In those limited circumstances in which the disabled really would benefit less from resources, the author argues, it does not seem unfair to distribute fewer resources to them.