Sample records for reasonable computer resources

  1. An Architecture for Cross-Cloud System Management

    NASA Astrophysics Data System (ADS)

    Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad

    The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.

  2. Professional Computer Education Organizations--A Resource for Administrators.

    ERIC Educational Resources Information Center

    Ricketts, Dick

    Professional computer education organizations serve a valuable function by generating, collecting, and disseminating information concerning the role of the computer in education. This report touches briefly on the reasons for the rapid and successful development of professional computer education organizations. A number of attributes of effective…

  3. Benefits and Challenges in Using Computers and the Internet with Adult English Learners.

    ERIC Educational Resources Information Center

    Terrill, Lynda

    Although resources and training vary from program to program, adult English as a Second or Other Language (ESOL) teachers and English learners across the country are integrating computers and Internet use with ESOL instruction. This can be seen in the growing number of ESOL resources available on the World Wide Web. There are very good reasons for…

  4. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  5. Computational aerodynamics and artificial intelligence

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.; Kutler, P.

    1984-01-01

    The general principles of artificial intelligence are reviewed and speculations are made concerning how knowledge based systems can accelerate the process of acquiring new knowledge in aerodynamics, how computational fluid dynamics may use expert systems, and how expert systems may speed the design and development process. In addition, the anatomy of an idealized expert system called AERODYNAMICIST is discussed. Resource requirements for using artificial intelligence in computational fluid dynamics and aerodynamics are examined. Three main conclusions are presented. First, there are two related aspects of computational aerodynamics: reasoning and calculating. Second, a substantial portion of reasoning can be achieved with artificial intelligence. It offers the opportunity of using computers as reasoning machines to set the stage for efficient calculating. Third, expert systems are likely to be new assets of institutions involved in aeronautics for various tasks of computational aerodynamics.

  6. Teaching World History With Computers: Why Do I Do It and What's Involved.

    ERIC Educational Resources Information Center

    Tucker, Sara W.

    2002-01-01

    Identifies reasons for using computers to teach world history. Discusses how instructors can acquire and use digital classroom resources. Describes how to develop and use online courses and course Web pages. (PAL)

  7. Probabilistic Reasoning for Robustness in Automated Planning

    NASA Technical Reports Server (NTRS)

    Schaffer, Steven; Clement, Bradley; Chien, Steve

    2007-01-01

    A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as times taken to perform tasks and amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain a probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified aversion to risk and other measures of optimality.
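
    As an illustration of the scoring step described in this record, the short Python sketch below combines independent Gaussian models of per-action resource use and integrates the tail above a capacity limit to obtain a conflict probability for ranking candidate plans. It is a minimal stand-in, not the system described above; the independence assumption, function names and numbers are illustrative only.

      import math

      def conflict_probability(actions, capacity):
          """actions: list of (mean_use, std_use); capacity: resource limit."""
          mean_total = sum(m for m, _ in actions)
          var_total = sum(s * s for _, s in actions)   # assumes independent actions
          std_total = math.sqrt(var_total)
          if std_total == 0.0:
              return 1.0 if mean_total > capacity else 0.0
          z = (capacity - mean_total) / std_total
          return 0.5 * math.erfc(z / math.sqrt(2.0))   # P(total use > capacity)

      plan_a = [(3.0, 0.5), (4.0, 1.0), (2.5, 0.4)]    # hypothetical power draws (W)
      plan_b = [(3.5, 0.2), (4.5, 0.3), (2.0, 0.2)]
      for name, plan in (("A", plan_a), ("B", plan_b)):
          print(name, round(conflict_probability(plan, capacity=11.0), 3))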

  8. A Menu-Driven Interface to Unix-Based Resources

    PubMed Central

    Evans, Elizabeth A.

    1989-01-01

    Unix has often been overlooked in the past as a viable operating system for anyone other than computer scientists. Its terseness, the non-mnemonic nature of its commands, and the lack of user-friendly software to run under it are but a few of the user-related reasons that have been cited. It is, nevertheless, the operating system of choice in many cases. This paper describes a menu-driven interface to Unix which provides more user-friendly access to the software resources available on computers running under Unix.

  9. PST and PARR: Plan specification tools and a planning and resource reasoning shell for use in satellite mission planning

    NASA Technical Reports Server (NTRS)

    Mclean, David; Yen, Wen

    1989-01-01

    Plan Specification Tools (PST) are tools that allow the user to specify satellite mission plans in terms of satellite activities, relevant orbital events, and targets for observation. The output of these tools is a set of knowledge bases and environmental events which can then be used by a Planning And Resource Reasoning (PARR) shell to build a schedule. PARR is a reactive planning shell which is capable of reasoning about actions in the satellite mission planning domain. The PST tools and PARR are described, as well as the use of PARR for scheduling computer usage in the multisatellite operations control center at Goddard Space Flight Center.

  10. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.

  11. 42 CFR 4.6 - Reference, bibliographic, reproduction, and consultation services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... requests from health-sciences professionals for services not reasonably available through local or regional... bibliographic information. (c) Information retrieval system computer tapes. To the extent Library resources...

  12. 42 CFR 4.6 - Reference, bibliographic, reproduction, and consultation services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... requests from health-sciences professionals for services not reasonably available through local or regional... bibliographic information. (c) Information retrieval system computer tapes. To the extent Library resources...

  13. 42 CFR 4.6 - Reference, bibliographic, reproduction, and consultation services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... requests from health-sciences professionals for services not reasonably available through local or regional... bibliographic information. (c) Information retrieval system computer tapes. To the extent Library resources...

  14. 42 CFR 4.6 - Reference, bibliographic, reproduction, and consultation services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... requests from health-sciences professionals for services not reasonably available through local or regional... bibliographic information. (c) Information retrieval system computer tapes. To the extent Library resources...

  15. 42 CFR 4.6 - Reference, bibliographic, reproduction, and consultation services.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... requests from health-sciences professionals for services not reasonably available through local or regional... bibliographic information. (c) Information retrieval system computer tapes. To the extent Library resources...

  16. Elucidating reaction mechanisms on quantum computers.

    PubMed

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias

    2017-07-18

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  17. Elucidating reaction mechanisms on quantum computers

    PubMed Central

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-01-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources. PMID:28674011

  18. Elucidating reaction mechanisms on quantum computers

    NASA Astrophysics Data System (ADS)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-07-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  19. General Education Courses at the University of Botswana: Application of the Theory of Reasoned Action in Measuring Course Outcomes

    ERIC Educational Resources Information Center

    Garg, Deepti; Garg, Ajay K.

    2007-01-01

    This study applied the Theory of Reasoned Action and the Technology Acceptance Model to measure outcomes of general education courses (GECs) under the University of Botswana Computer and Information Skills (CIS) program. An exploratory model was validated for responses from 298 students. The results suggest that resources currently committed to…

  20. Design Tools for Evaluating Multiprocessor Programs

    DTIC Science & Technology

    1976-07-01

    than large uniprocessing machines, and 2. economies of scale in manufacturing. Perhaps the most compelling reason (possibly a consequence of the...speed, redundancy, (in)efficiency, resource utilization, and economies of the components. [Browne 73, Lehman 66] 6. How can the system be scheduled...measures are interesting about the computation? Some may be: speed, redundancy, (in)efficiency, resource utilization, and economies of the components

  1. Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment

    NASA Astrophysics Data System (ADS)

    Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin

    2017-10-01

    Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place demands on hardware that are not suitable for mobile terminals with limited computing resources. In addition, these public-key encryption algorithms are not resistant to quantum computing. This paper studies the NTRU public-key algorithm, which resists quantum attacks, by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve the probability of generating a reasonable signature value: first, increase the value of the parameter q; second, add an authentication condition during the signature phase that the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of the private-key information in the signature value and increases the probability of generating a reasonable signature value. It also improves the signature rate and avoids the propagation of invalid signatures in the network, but the scheme places certain restrictions on parameter selection.

  2. Access control and privacy in large distributed systems

    NASA Technical Reports Server (NTRS)

    Leiner, B. M.; Bishop, M.

    1986-01-01

    Large-scale distributed systems consist of workstations, mainframe computers, supercomputers and other types of servers, all connected by a computer network. These systems are being used in a variety of applications including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than as the final answer.

  3. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
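
    As a loose illustration of farming out independent model builds, the sketch below parallelizes placeholder "training" jobs over local worker processes with Python's standard library; in the study the same pattern is applied across cloud instances, and nothing here reflects the authors' actual pipeline.

      from concurrent.futures import ProcessPoolExecutor
      import random, statistics, time

      def train_model(n_samples):
          """Stand-in 'model build': fit a trivial statistic to n_samples random points."""
          data = [random.random() for _ in range(n_samples)]
          t0 = time.time()
          model = statistics.mean(data)            # placeholder for a real learner
          return n_samples, model, time.time() - t0

      if __name__ == "__main__":
          sizes = [10_000, 100_000, 1_000_000]
          with ProcessPoolExecutor() as pool:      # one worker per core; a cloud setup
              for n, m, dt in pool.map(train_model, sizes):   # would map over instances
                  print(f"n={n:>9,d}  model={m:.4f}  wall={dt:.3f}s")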

  4. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    NASA Astrophysics Data System (ADS)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  5. Abstract-Reasoning Software for Coordinating Multiple Agents

    NASA Technical Reports Server (NTRS)

    Clement, Bradley; Barrett, Anthony; Rabideau, Gregg; Knight, Russell

    2003-01-01

    A computer program for scheduling the activities of multiple agents that share limited resources has been incorporated into the Automated Scheduling and Planning Environment (ASPEN) software system, aspects of which have been reported in several previous NASA Tech Briefs articles. In the original intended application, the agents would be multiple spacecraft and/or robotic vehicles engaged in scientific exploration of distant planets. The program could also be used on Earth in such diverse settings as production lines and military maneuvers. This program includes a planning/scheduling subprogram of the iterative repair type that reasons about the activities of multiple agents at abstract levels in order to greatly improve the scheduling of their use of shared resources. The program summarizes the information about the constraints on, and resource requirements of, abstract activities on the basis of the constraints and requirements that pertain to their potential refinements (decomposition into less-abstract and ultimately to primitive activities). The advantage of reasoning about summary information is that time needed to find consistent schedules is exponentially smaller than the time that would be needed for reasoning about the same tasks at the primitive level.
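
    To make the summary-information idea concrete, the hedged sketch below shows one way an abstract activity's resource bounds could be derived from its candidate refinements so that a planner can prune at the abstract level; the data structure and numbers are hypothetical and are not ASPEN's actual data model.

      from dataclasses import dataclass

      @dataclass
      class Summary:
          min_use: float   # best case over all refinements
          max_use: float   # worst case over all refinements

      def summarize(refinements):
          """Each refinement is a list of primitive resource uses; summarize the totals."""
          totals = [sum(r) for r in refinements]
          return Summary(min(totals), max(totals))

      def definitely_fits(summary, available):
          return summary.max_use <= available   # every refinement would fit

      def cannot_fit(summary, available):
          return summary.min_use > available    # no refinement could fit

      s = summarize([[2, 3], [1, 1, 1], [4]])   # three candidate decompositions
      print(s, definitely_fits(s, 6), cannot_fit(s, 2))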

  6. Probabilistic Reasoning for Plan Robustness

    NASA Technical Reports Server (NTRS)

    Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.

    2005-01-01

    A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing projected system state can be simplified in some cases. Common approximation and novel methods are compared for over-constrained and lightly constrained domains. The system compares a few common approximation methods for an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed by execution. The improvement is more significant for larger problems and problems with higher resource subscription levels but diminishes as the system is allowed to accept higher risk levels.

  7. Cloud-based crowd sensing: a framework for location-based crowd analyzer and advisor

    NASA Astrophysics Data System (ADS)

    Aishwarya, K. C.; Nambi, A.; Hudson, S.; Nadesh, R. K.

    2017-11-01

    Cloud computing is an emerging field of computer science that integrates large and powerful computing and storage systems for personal as well as enterprise requirements. Mobile Cloud Computing extends this concept to mobile handheld devices. Crowdsensing, or to be precise, Mobile Crowdsensing, is the process of sharing resources such as data, memory and bandwidth from an available group of mobile handheld devices to perform a single task for collective reasons. In this paper, we propose a framework that uses crowdsensing to implement a crowd analyzer and advisor that tells the user whether or not to go to a place. This is ongoing research and a new concept toward which cloud computing is shifting, with room for further expansion in the near future.

  8. LINCS: Livermore's network architecture. [Octopus computing network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.

    1982-01-01

    Octopus, a local computing network that has been evolving at the Lawrence Livermore National Laboratory for over fifteen years, is currently undergoing a major revision. The primary purpose of the revision is to consolidate and redefine the variety of conventions and formats, which have grown up over the years, into a single standard family of protocols, the Livermore Interactive Network Communication Standard (LINCS). This standard treats the entire network as a single distributed operating system such that access to a computing resource is obtained in a single way, whether that resource is local (on the same computer as the accessing process) or remote (on another computer). LINCS encompasses not only communication but also such issues as the relationship of customer to server processes and the structure, naming, and protection of resources. The discussion includes: an overview of the Livermore user community and computing hardware, the functions and structure of each of the seven layers of LINCS protocol, the reasons why we have designed our own protocols and why we are dissatisfied with the directions that current protocol standards are taking.

  9. Computational methods for unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Thomas, J. L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis on prediction. Computational difficulty is discussed with respect to the type of unsteady flow: attached, mixed (attached/separated), and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted, and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed, and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  10. Barriers and benefits associated with nurses information seeking related to patient education needs on clinical nursing units.

    PubMed

    Jones, Josette; Schilling, Katherine; Pesut, Daniel

    2011-01-01

    The purpose of this study was to answer the following two questions: What are clinical nurses' rationales for their approaches to finding patient educational materials on the web? What are the perceived barriers and benefits associated with the use of web-based information resources for patient education in the context of clinical nursing practice? Over 179 individual data units were analyzed to understand clinical nurses' rationales for their approaches to finding patient educational materials on the web. Rationales were defined as the underlying catalysts or activators leading to an information need. Analyses found that the primary reasons why clinical nurses conducted web-based information searches included direct patient requests (9 requests), colleague requests (6), building patient materials collections (4), patients' family requests (3), routine teaching (1), personal development (1), or staff development (1). From these data, four broad themes emerged: professional, personal, technology, and organizational reasons for selecting information resources. Content analysis identified 306 individual data units representing either 'benefits' (178 units) or 'barriers' (128) to the nurses' use of web resources for on-unit patient care. Inter-rater reliability was assessed and found to be excellent (r = 0.943 to 0.961). The primary themes that emerged as barriers to the use of web-based resources included: 1) the time required to perform a search, 2) nurses' experience and knowledge of the resources or required technology, 3) specific characteristics of individual electronic information resources, and 4) organizational procedures and policies. Three primary themes representing the benefits of using web-based resources were also identified: 1) past experience and knowledge of a specific resource or the required technologies, 2) availability and accessibility on the unit, and 3) specific characteristics of individual information tools. In many cases, nurses commented on specific characteristics or features of favorite information resources. Favorite sites included a variety of reputable health care organizations that presented content in text, audio, and/or video. In addition, such sites were described as easy to read and provided patient-focused information or specific content such as toll-free telephone contact numbers. Information searching is the interaction between and among information users and computer-based information systems. Information seeking is becoming an important part of the knowledge work of nurses. Information seeking and searching intersect with the field of human-computer interaction (HCI), which focuses on all aspects of human and computer interactions. Users of an information system are understood as "actors" in situations, with a set of skills and shared practices based on work experiences with others. Designing better tools and developing information searching strategies that support, extend, and transform practices begins by asking: Who are the users? What are the tasks? What is the interplay between the technology and the organization of the task? This study contributes fundamental data and information about the rationales nurses use in information seeking tasks. In addition, it provides empirical evidence regarding the barriers and benefits of information seeking in the context of patient education needs in inpatient clinical settings.

  11. Remembrance of inferences past: Amortization in human hypothesis generation.

    PubMed

    Dasgupta, Ishita; Schulz, Eric; Goodman, Noah D; Gershman, Samuel J

    2018-05-21

    Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain's limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants' responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, only appearing when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost, to make efficient use of its limited cognitive resources to approximate probabilistic inference. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Combining qualitative and quantitative spatial and temporal information in a hierarchical structure: Approximate reasoning for plan execution monitoring

    NASA Technical Reports Server (NTRS)

    Hoebel, Louis J.

    1993-01-01

    The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM involves receiving data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and whether execution should continue. Only spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need to be used and are shown to be useful for the anytime nature of PEM.

  13. Elucidating Reaction Mechanisms on Quantum Computers

    NASA Astrophysics Data System (ADS)

    Wiebe, Nathan; Reiher, Markus; Svore, Krysta; Wecker, Dave; Troyer, Matthias

    We show how a quantum computer can be employed to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical-computer simulations for such problems, to significantly increase their accuracy and enable hitherto intractable simulations. Detailed resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. This demonstrates that quantum computers will realistically be able to tackle important problems in chemistry that are both scientifically and economically significant.

  14. The Computer-based Lecture

    PubMed Central

    Wofford, Marcia M; Spickard, Anderson W; Wofford, James L

    2001-01-01

    Advancing computer technology, cost-containment pressures, and desire to make innovative improvements in medical education argue for moving learning resources to the computer. A reasonable target for such a strategy is the traditional clinical lecture. The purpose of the lecture, the advantages and disadvantages of “live” versus computer-based lectures, and the technical options in computerizing the lecture deserve attention in developing a cost-effective, complementary learning strategy that preserves the teacher-learner relationship. Based on a literature review of the traditional clinical lecture, we build on the strengths of the lecture format and discuss strategies for converting the lecture to a computer-based learning presentation. PMID:11520384

  15. Designing for emotion (among other things)

    PubMed Central

    Gaver, William

    2009-01-01

    Using computational approaches to emotion in design appears problematic for a range of technical, cultural and aesthetic reasons. After introducing some of the reasons as to why I am sceptical of such approaches, I describe a prototype we built that tried to address some of these problems, using sensor-based inferencing to comment upon domestic ‘well-being’ in ways that encouraged users to take authority over the emotional judgements offered by the system. Unfortunately, over two iterations we concluded that the prototype we built was a failure. I discuss the possible reasons for this and conclude that many of the problems we found are relevant more generally for designs based on computational approaches to emotion. As an alternative, I advocate a broader view of interaction design in which open-ended designs serve as resources for individual appropriation, and suggest that emotional experiences become one of several outcomes of engaging with them. PMID:19884154

  16. ATLAS user analysis on private cloud resources at GoeGrid

    NASA Astrophysics Data System (ADS)

    Glaser, F.; Nadal Serrano, J.; Grabowski, J.; Quadt, A.

    2015-12-01

    User analysis job demands can exceed available computing resources, especially before major conferences. ATLAS physics results can potentially be slowed down due to the lack of resources. For these reasons, cloud research and development activities are now included in the skeleton of the ATLAS computing model, which has been extended by using resources from commercial and private cloud providers to satisfy the demands. However, most of these activities are focused on Monte-Carlo production jobs, extending the resources at Tier-2. To evaluate the suitability of the cloud-computing model for user analysis jobs, we developed a framework to launch an ATLAS user analysis cluster in a cloud infrastructure on demand and evaluated two solutions. The first solution is entirely integrated in the Grid infrastructure by using the same mechanism, which is already in use at Tier-2: A designated Panda-Queue is monitored and additional worker nodes are launched in a cloud environment and assigned to a corresponding HTCondor queue according to the demand. Thereby, the use of cloud resources is completely transparent to the user. However, using this approach, submitted user analysis jobs can still suffer from a certain delay introduced by waiting time in the queue and the deployed infrastructure lacks customizability. Therefore, our second solution offers the possibility to easily deploy a totally private, customizable analysis cluster on private cloud resources belonging to the university.

  17. Evaluating High School IT

    ERIC Educational Resources Information Center

    Thompson, Brett A.

    2004-01-01

    Since its inception in 1997, Cisco's curriculum has entered thousands of high schools across the U.S. and around the world for two reasons: (1) Cisco has a large portion of the computer networking market, and thus has the resources for and interest in developing high school academies; and (2) high school curriculum development teams recognize the…

  18. 25 CFR 36.102 - What student resources must be provided by a homeliving program?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... program? 36.102 Section 36.102 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY... equivalent for peripheral dorms; and (c) Reasonable access to a computer with Internet access to facilitate...

  19. Infrastructure Systems for Advanced Computing in E-science applications

    NASA Astrophysics Data System (ADS)

    Terzo, Olivier

    2013-04-01

    In the e-science field there are growing needs for computing infrastructure that is more dynamic and customizable, with an "on demand" model of use that follows the exact request in terms of resources and storage capacities. The integration of grid and cloud infrastructure solutions allows us to offer services that can adapt availability in terms of up-scaling and down-scaling resources. The main challenge for e-science domains will be to implement infrastructure solutions for scientific computing that adapt dynamically to the demand for computing resources, with a strong emphasis on optimizing the use of computing resources to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all increase the complexity of applications that require high processing power and storage for a limited time, often exceeding the computational resources that equip the majority of laboratories and research units in an organization. Very often it is necessary to adapt, tweak, or even rethink tools and algorithms, and to consolidate existing applications through a phase of reverse engineering, in order to adapt them for deployment on a cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics, next-generation sequencing, computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues such as processing time, the scheduling of processing tasks, storage of results, and a multi-user environment. For these reasons, it is necessary to rethink how e-science applications are written so that they are already adapted to exploit the potential of cloud computing services through the use of the IaaS, PaaS and SaaS layers. Another important focus is on creating and using hybrid infrastructures, typically a federation between private and public clouds: when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to follow the needs in terms of computational and storage resources, and to release them when processing is finished. Following the hybrid model, the scheduling approach is important for managing both cloud models. Thanks to this infrastructure model, resources are always available for additional requests for IT capacity that can be used "on demand" for a limited time, without having to purchase additional servers.

  20. Application of computational aero-acoustics to real world problems

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The application of computational aeroacoustics (CAA) to real-world problems is discussed in relation to the analyses performed, with the aim of assessing the applicability of the various techniques. It is considered that the applications are limited by the inability of the computational resources to resolve the large range of scales involved in high Reynolds number flows. Possible simplifications are discussed. It is considered that problems remain to be solved in relation to the efficient use of the power of parallel computers and in the development of turbulence modeling schemes. The goal of CAA is stated as being the implementation of acoustic design studies on a computer terminal with reasonable run times.

  1. ASPEN Version 3.0

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; Knight, Russell; Schaffer, Steven; Tran, Daniel; Cichy, Benjamin; Sherwood, Robert

    2006-01-01

    The Automated Scheduling and Planning Environment (ASPEN) computer program has been updated to version 3.0. ASPEN is a modular, reconfigurable, application software framework for solving batch problems that involve reasoning about time, activities, states, and resources. Applications of ASPEN can include planning spacecraft missions, scheduling of personnel, and managing supply chains, inventories, and production lines. ASPEN 3.0 can be customized for a wide range of applications and for a variety of computing environments that include various central processing units and random access memories.

  2. Trusted computation through biologically inspired processes

    NASA Astrophysics Data System (ADS)

    Anderson, Gustave W.

    2013-05-01

    Due to supply chain threats it is no longer a reasonable assumption that traditional protections alone will provide sufficient security for enterprise systems. The proposed cognitive trust model architecture extends the state-of-the-art in enterprise anti-exploitation technologies by providing collective immunity through backup and cross-checking, proactive health monitoring and adaptive/autonomic threat response, and network resource diversity.

  3. Five reasons not to use numerical models in water resource management (Arne Richter Award Lecture for OYS)

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca

    2015-04-01

    Sustainable water resource management in a quickly changing world poses new challenges to hydrology and decision sciences. Systems analysis can contribute to promoting sustainable practices by providing the theoretical background and the operational tools for an objective and transparent appraisal of policy options for water resource systems (WRS) management. Traditionally, the limited availability of data and computing resources forced the use of oversimplified WRS models, with little consideration of modeling uncertainties or of the non-stationarity of and feedbacks between WRS drivers, and with a priori aggregation of costs and benefits. Nowadays we increasingly recognize the inadequacy of these simplifications, and consider them among the reasons for the limited use of model-generated information in actual decision-making processes. On the other hand, the fast-growing availability of data and computing resources is opening up unprecedented possibilities in the way we build and apply numerical models. In this talk I will discuss my experiences and ideas on how we can exploit this potential to improve model-informed decision-making while facing the challenges of uncertainty, non-stationarity, feedbacks and conflicting objectives. In particular, through practical examples of WRS design and operation problems, my talk will aim at stimulating discussion about the impact of uncertainty on decisions: can inaccurate and imprecise predictions still carry valuable information for decision-making? Does uncertainty in predictions necessarily limit our ability to make 'good' decisions? Or can uncertainty even be of help for decision-making, for instance by reducing the projected conflict between competing water uses? Finally, I will also discuss how the traditionally separate disciplines of numerical modelling, optimization, and uncertainty and sensitivity analysis have in my experience been just different facets of the same 'systems approach'.

  4. An efficient two-stage approach for image-based FSI analysis of atherosclerotic arteries

    PubMed Central

    Rayz, Vitaliy L.; Mofrad, Mohammad R. K.; Saloner, David

    2010-01-01

    Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demand. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford large savings in both time for mesh generation and time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect accuracy of the stress fields predicted with the two-stage approach. PMID:19756798

  5. The Need for Integration of Technology in K-12 School Settings in Kenya, Africa

    ERIC Educational Resources Information Center

    Momanyi, Lilian; Norby, RenaFaye; Strand, Sharon

    2006-01-01

    Many computer users around the world have access to the latest advances in technology and use of the World Wide Web (WWW or Web). However, for a variety of political, economic, and social reasons, some peoples of the world do not have access to these resources. The educational systems of developing countries have not completely missed the…

  6. Theoretical Framework for Interaction Game Design

    DTIC Science & Technology

    2016-05-19

    modeling. We take a data-driven quantitative approach to understand conversational behaviors by measuring conversational behaviors using advanced sensing...current state of the art, human computing is considered to be a reasonable approach to break through the current limitation. To solicit high quality and...proper resources in conversation to enable smooth and effective interaction. The last technique is about conversation measurement, analysis, and

  7. Advancing Science through Mining Libraries, Ontologies, and Communities*

    PubMed Central

    Evans, James A.; Rzhetsky, Andrey

    2011-01-01

    Life scientists today cannot hope to read everything relevant to their research. Emerging text-mining tools can help by identifying topics and distilling statements from books and articles with increased accuracy. Researchers often organize these statements into ontologies, consistent systems of reality claims. Like scientific thinking and interchange, however, text-mined information (even when accurately captured) is complex, redundant, sometimes incoherent, and often contradictory: it is rooted in a mixture of only partially consistent ontologies. We review work that models scientific reason and suggest how computational reasoning across ontologies and the broader distribution of textual statements can assess the certainty of statements and the process by which statements become certain. With the emergence of digitized data regarding networks of scientific authorship, institutions, and resources, we explore the possibility of accounting for social dependences and cultural biases in reasoning models. Computational reasoning is starting to fill out ontologies and flag internal inconsistencies in several areas of bioscience. In the not too distant future, scientists may be able to use statements and rich models of the processes that produced them to identify underexplored areas, resurrect forgotten findings and ideas, deconvolute the spaghetti of underlying ontologies, and synthesize novel knowledge and hypotheses. PMID:21566119

  8. A model to forecast data centre infrastructure costs.

    NASA Astrophysics Data System (ADS)

    Vernet, R.

    2015-12-01

    The computing needs in the HEP community are increasing steadily, but the current funding situation in many countries is tight. As a consequence, experiments, data centres, and funding agencies have to rationalize resource usage and expenditures. CC-IN2P3 (Lyon, France) provides computing resources to many experiments including LHC, and is a major partner for astroparticle projects like LSST, CTA or Euclid. The financial cost to accommodate all these experiments is substantial and has to be planned well in advance for funding and strategic reasons. With that perspective, leveraging the infrastructure expenses, electric power cost and hardware performance observed at our site over the last years, we have built a model that integrates these data and provides estimates of the investments that would be required to cater to the experiments for the mid-term future. We present how our model is built and the expenditure forecast it produces, taking into account the experiment roadmaps. We also examine the resource growth predicted by our model over the next years assuming a flat-budget scenario.
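
    For illustration only, a forecast of this kind can be reduced to a few lines of arithmetic: given a target capacity per year, an assumed price/performance improvement for new hardware, a server lifetime and an electricity price, estimate the yearly spend. All parameter values below are invented and are not CC-IN2P3 figures.

      def forecast(years, demand_hs06, growth, cost_per_hs06, improvement,
                   lifetime_years, watts_per_hs06, eur_per_kwh):
          """Very rough yearly hardware and power spend for a growing compute farm."""
          hardware, power = [], []
          for y in range(years):
              capacity = demand_hs06 * (1 + growth) ** y           # capacity to provide
              price = cost_per_hs06 / (1 + improvement) ** y       # price/perf improves
              hardware.append(capacity / lifetime_years * price)   # rolling replacement
              power.append(capacity * watts_per_hs06 / 1000 * 24 * 365 * eur_per_kwh)
          return hardware, power

      hw, pw = forecast(years=5, demand_hs06=200_000, growth=0.15, cost_per_hs06=10.0,
                        improvement=0.20, lifetime_years=4, watts_per_hs06=1.0,
                        eur_per_kwh=0.10)
      for y, (h, p) in enumerate(zip(hw, pw)):
          print(f"year {y}: hardware ~ {h:,.0f} EUR, power ~ {p:,.0f} EUR")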

  9. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, and visualize, and to run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline around the example of regular sampling. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results. Furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we did, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
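
    The regular-sampling reduction mentioned here can be illustrated with a short sketch: scattered (unstructured) samples are binned onto a coarse regular grid by averaging the values that fall in each cell. This is a generic example with synthetic data, not the paper's pipeline code.

      import numpy as np

      def regular_sample(points, values, bounds, shape):
          """points: (N, 2) coordinates; values: (N,); bounds: ((xmin, xmax), (ymin, ymax))."""
          grid_sum = np.zeros(shape)
          grid_cnt = np.zeros(shape)
          (xmin, xmax), (ymin, ymax) = bounds
          ix = np.clip(((points[:, 0] - xmin) / (xmax - xmin) * shape[0]).astype(int), 0, shape[0] - 1)
          iy = np.clip(((points[:, 1] - ymin) / (ymax - ymin) * shape[1]).astype(int), 0, shape[1] - 1)
          np.add.at(grid_sum, (ix, iy), values)    # accumulate samples per cell
          np.add.at(grid_cnt, (ix, iy), 1)
          with np.errstate(invalid="ignore"):
              return grid_sum / grid_cnt           # NaN where a cell received no samples

      rng = np.random.default_rng(0)
      pts = rng.random((100_000, 2))               # scattered sample locations
      vals = np.sin(10 * pts[:, 0]) * np.cos(10 * pts[:, 1])
      coarse = regular_sample(pts, vals, ((0.0, 1.0), (0.0, 1.0)), (32, 32))
      print(coarse.shape, float(np.nanmin(coarse)), float(np.nanmax(coarse)))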

  10. Exhaustive Versus Randomized Searchers for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; AliShaykhian, Gholam

    2010-01-01

    We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is all the more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops. Processor speed is doubling every 18 months, bandwidth every 12 months, and hard-disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization, given the steady increase of computing power, for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of exhaustive search - a deterministic method - is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms to improving the efficiency of solar cells - a hot topic in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could not only save a large amount of the time needed for experiments but also quickly validate theory against experimental results.
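
    A toy version of the trade-off discussed in this abstract is sketched below: an exhaustive grid scan of a bounded three-variable nonlinear objective is compared against a randomized search given the same budget of function evaluations. The objective, bounds and step size are arbitrary choices, not the authors' solar-cell problem.

      import itertools, random

      def objective(x, y, z):
          return (x - 0.3) ** 2 + (y + 0.7) ** 2 + (z - 0.1) ** 2 + 0.1 * x * y

      def exhaustive(step=0.05, lo=-1.0, hi=1.0):
          n = int(round((hi - lo) / step)) + 1     # 41 grid points per dimension
          grid = [lo + i * step for i in range(n)]
          return min(itertools.product(grid, repeat=3), key=lambda p: objective(*p))

      def randomized(n_evals, lo=-1.0, hi=1.0, seed=1):
          rng = random.Random(seed)
          pts = [tuple(rng.uniform(lo, hi) for _ in range(3)) for _ in range(n_evals)]
          return min(pts, key=lambda p: objective(*p))

      best_grid = exhaustive()            # 41**3 = 68,921 evaluations, coverage guaranteed
      best_rand = randomized(68_921)      # same evaluation budget, no such guarantee
      print("grid  :", best_grid, objective(*best_grid))
      print("random:", best_rand, objective(*best_rand))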

  11. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
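
    The surrogate idea can be sketched in a few lines: a Gaussian-process emulator is trained on a handful of expensive simulation runs and then queried cheaply, with predictive uncertainty, inside calibration or UQ loops. scikit-learn is used here purely for illustration; it is not necessarily the tooling used in the report, and the "simulation" is a stand-in function.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def expensive_simulation(x):
          """Stand-in for a long-running physics code."""
          return np.sin(3 * x) + 0.5 * x

      X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)    # only 8 simulator runs
      y_train = expensive_simulation(X_train).ravel()

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
      gp.fit(X_train, y_train)

      X_query = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
      mean, std = gp.predict(X_query, return_std=True)     # cheap surrogate evaluations
      for x, m, s in zip(X_query.ravel(), mean, std):
          print(f"x={x:.2f}  surrogate={m:+.3f} +/- {s:.3f}  truth={float(expensive_simulation(x)):+.3f}")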

  12. Scientific Services on the Cloud

    NASA Astrophysics Data System (ADS)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications of parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reason as businesses and other professionals: the hardware is provided, maintained, and administrated by a third party; software abstraction and virtualization provide reliability and fault tolerance; and graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  13. A robot sets a table: a case for hybrid reasoning with different types of knowledge

    NASA Astrophysics Data System (ADS)

    Mansouri, Masoumeh; Pecora, Federico

    2016-09-01

    An important contribution of AI to Robotics is the model-centred approach, whereby competent robot behaviour stems from automated reasoning in models of the world which can be changed to suit different environments, physical capabilities and tasks. However models need to capture diverse (and often application-dependent) aspects of the robot's environment and capabilities. They must also have good computational properties, as robots need to reason while they act in response to perceived context. In this article, we investigate the use of a meta-CSP-based technique to interleave reasoning in diverse knowledge types. We reify the approach through a robotic waiter case study, for which a particular selection of spatial, temporal, resource and action KR formalisms is made. Using this case study, we discuss general principles pertaining to the selection of appropriate KR formalisms and jointly reasoning about them. The resulting integration is evaluated both formally and experimentally on real and simulated robotic platforms.

  14. BRYNTRN: A baryon transport model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.

    1989-01-01

    The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.

  15. The engine design engine. A clustered computer platform for the aerodynamic inverse design and analysis of a full engine

    NASA Technical Reports Server (NTRS)

    Sanz, J.; Pischel, K.; Hubler, D.

    1992-01-01

    An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. A Parallel Virtual Machine (PVM) is used as the message-passing layer in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled entirely by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, rendering an efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatile nature, which permits the use of whatever computational resources are experiencing less load at a given point in time.

  16. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
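    The TLPOM step above searches for a blocks/threads configuration under per-node resource limits. A heavily simplified sketch of that kind of selection follows; the device limits and the occupancy estimate are generic illustrations, not the paper's actual model:

```python
from dataclasses import dataclass

@dataclass
class KernelNeeds:
    registers_per_thread: int
    shared_mem_per_block: int  # bytes

@dataclass
class DeviceLimits:
    max_threads_per_sm: int = 2048
    max_blocks_per_sm: int = 16
    registers_per_sm: int = 65536
    shared_mem_per_sm: int = 49152  # bytes

def best_block_size(kernel: KernelNeeds, dev: DeviceLimits) -> tuple[int, float]:
    """Pick the block size (a multiple of 32) that maximizes estimated occupancy."""
    best_threads, best_occ = 32, 0.0
    for threads in range(32, 1025, 32):
        # Resident blocks per SM are limited by threads, registers, and shared memory.
        blocks = min(
            dev.max_blocks_per_sm,
            dev.max_threads_per_sm // threads,
            dev.registers_per_sm // (kernel.registers_per_thread * threads),
            dev.shared_mem_per_sm // max(kernel.shared_mem_per_block, 1),
        )
        occupancy = blocks * threads / dev.max_threads_per_sm
        if occupancy > best_occ:
            best_threads, best_occ = threads, occupancy
    return best_threads, best_occ

print(best_block_size(KernelNeeds(registers_per_thread=40, shared_mem_per_block=8192),
                      DeviceLimits()))
```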

  17. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  18. Use of Multiple GPUs to Speedup the Execution of a Three-Dimensional Computational Model of the Innate Immune System

    NASA Astrophysics Data System (ADS)

    Xavier, M. P.; do Nascimento, T. M.; dos Santos, R. W.; Lobosco, M.

    2014-03-01

    The development of computational systems that mimic the physiological response of organs or even the entire body is a complex task. One of the issues that make this task extremely complex is the huge amount of computational resources needed to execute the simulations. For this reason, the use of parallel computing is mandatory. In this work, we focus on the simulation of the temporal and spatial behaviour of some human innate immune system cells and molecules in a small three-dimensional section of a tissue. To perform this simulation, we use multiple Graphics Processing Units (GPUs) in a shared-memory environment. Despite the high initialization and communication costs imposed by the use of GPUs, the techniques used to implement the HIS simulator have proven very effective for this purpose.

  19. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.

  20. Method and system for data clustering for very large databases

    NASA Technical Reports Server (NTRS)

    Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)

    1998-01-01

    Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
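    A minimal sketch of the clustering-feature bookkeeping described above (the per-cluster count, linear sum, and square sum); the threshold test and tree maintenance are omitted, and the sample points are arbitrary:

```python
import numpy as np

class ClusteringFeature:
    """Cluster summary used in the CF tree: count N, linear sum LS, square sum SS."""

    def __init__(self, point: np.ndarray):
        self.n = 1
        self.ls = point.astype(float).copy()
        self.ss = float(point @ point)

    def add(self, point: np.ndarray) -> None:
        """Absorb a single data point into the summary."""
        self.n += 1
        self.ls += point
        self.ss += float(point @ point)

    def merge(self, other: "ClusteringFeature") -> None:
        """Merge another clustering feature, e.g. when tree nodes are combined."""
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self) -> np.ndarray:
        return self.ls / self.n

    def radius(self) -> float:
        """RMS distance of the points from the centroid, derived from N, LS, SS alone."""
        c = self.centroid()
        return float(np.sqrt(max(self.ss / self.n - float(c @ c), 0.0)))

cf = ClusteringFeature(np.array([1.0, 2.0]))
cf.add(np.array([1.5, 1.8]))
cf.add(np.array([0.9, 2.2]))
print(cf.n, cf.centroid(), round(cf.radius(), 3))
```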

  1. Water resources of Borrego Valley and vicinity, San Diego County, California; Phase 2, Development of a ground-water flow model

    USGS Publications Warehouse

    Mitten, H.T.; Lines, G.C.; Berenbrock, Charles; Durbin, T.J.

    1988-01-01

    Because of the imbalance between recharge and pumpage, groundwater levels declined as much as 100 ft in some areas of Borrego Valley, California, during 1945-80. As an aid to analyzing the effects of pumping on the groundwater system, a three-dimensional finite-element groundwater flow model was developed. The model was calibrated for both steady-state (1945) and transient-state (1946-79) conditions. For the steady-state calibration, hydraulic conductivities of the three aquifers were varied within reasonable limits to obtain an acceptable match between measured and computed hydraulic heads. Recharge from streamflow infiltration (4,800 acre-ft/yr) was balanced by computed evapotranspiration (3,900 acre-ft/yr) and computed subsurface outflow from the model area (930 acre-ft/yr). For the transient-state calibration, the volumes and distribution of net groundwater pumpage were estimated from land-use data and estimates of consumptive use for irrigated crops. The pumpage was assigned to the appropriate nodes in the model for each of seventeen 2-year time steps representing the period 1946-79. The specific yields of the three aquifers were varied within reasonable limits to obtain an acceptable match between measured and computed hydraulic heads. Groundwater pumpage input to the model was compensated by declines in both the computed evapotranspiration and the amount of groundwater in storage. (USGS)

  2. Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hively, Lee M; Sheldon, Frederick T

    The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems requiring a long-term commitment to solve the well-known challenges.

  3. Spontaneous Ad Hoc Mobile Cloud Computing Network

    PubMed Central

    Lacuesta, Raquel; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper, we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. In order to perform it, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal achieves good efficiency and network performance even when using a high number of nodes. PMID:25202715

  4. Spontaneous ad hoc mobile cloud computing network.

    PubMed

    Lacuesta, Raquel; Lloret, Jaime; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper, we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. In order to perform it, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal achieves good efficiency and network performance even when using a high number of nodes.

  5. Developing cloud-based Business Process Management (BPM): a survey

    NASA Astrophysics Data System (ADS)

    Mercia; Gunawan, W.; Fajar, A. N.; Alianto, H.; Inayatulloh

    2018-03-01

    In today’s highly competitive business environment, modern enterprises are struggling to cut unnecessary costs, eliminate waste, and deliver substantial benefits for the organization. Companies are increasingly turning to a more flexible IT environment to help them realize this goal. For this reason, the article applies cloud-based Business Process Management (BPM), which enables a focus on modeling, monitoring, and process management. Cloud-based BPM consists of business processes, business information, and IT resources, which help build real-time intelligence systems based on business management and cloud technology. Cloud computing is a paradigm that involves procuring dynamically measurable resources over the internet as an IT resource service. A cloud-based BPM service addresses common problems faced by traditional BPM, especially in promoting flexible, event-driven business processes that exploit opportunities in the marketplace.

  6. 77 FR 13573 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-07

    ... FR 71537). Reason: The system at Army Human Resource Command (AHRC) has been deactivated and records... (January 6, 2004, 69 FR 790). Reason: The files are no longer collected at Army Human Resource Command... 8183). Reason: The files are no longer collected at Army Human Resource Command, records have met the...

  7. Optimization of a Monte Carlo Model of the Transient Reactor Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kristin; DeHart, Mark; Goluoglu, Sedat

    2017-03-01

    The ultimate goal of modeling and simulation is to obtain reasonable answers to problems that don’t have representations which can be easily evaluated while minimizing the amount of computational resources. With the advances during the last twenty years of large scale computing centers, researchers have had the ability to create a multitude of tools to minimize the number of approximations necessary when modeling a system. The tremendous power of these centers requires the user to possess an immense amount of knowledge to optimize the models for accuracy and efficiency. This paper seeks to evaluate the KENO model of TREAT to optimize calculational efforts.

  8. Dawn Usage, Scheduling, and Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louis, S

    2009-11-02

    This document describes Dawn use, scheduling, and governance concerns. Users started running full-machine science runs in early April 2009 during the initial open shakedown period. Scheduling Dawn while in the Open Computing Facility (OCF) was controlled and coordinated via phone calls, emails, and a small number of controlled banks. With Dawn moving to the Secure Computing Facility (SCF) in fall of 2009, a more detailed scheduling and governance model is required. The three major objectives are: (1) Ensure Dawn resources are allocated on a program priority-driven basis; (2) Utilize Dawn resources on the job mixes for which they were intended; and (3) Minimize idle cycles through use of partitions, banks and proper job mix. The SCF workload for Dawn will be inherently different than Purple or BG/L, and therefore needs a different approach. Dawn's primary function is to permit adequate access for tri-lab code development in preparation for Sequoia, and in particular for weapons multi-physics codes in support of UQ. A second purpose is to provide time allocations for large-scale science runs and for UQ suite calculations to advance SSP program priorities. This proposed governance model will be the basis for initial time allocation of Dawn computing resources for the science and UQ workloads that merit priority on this class of resource, either because they cannot be reasonably attempted on any other resources due to size of problem, or because of the unavailability of sizable allocations on other ASC capability or capacity platforms. This proposed model intends to make the most effective use of Dawn as possible, but without being overly constrained by more formal proposal processes such as those now used for Purple CCCs.

  9. Oak Ridge Institutional Cluster Autotune Test Drive Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jibonananda, Sanyal; New, Joshua Ryan

    2014-02-01

    The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for the ORNL staff to run computation-heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program, and the overall software environment was evaluated against anticipated challenges experienced with resources such as the shared-memory Nautilus (JICS) and Titan (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional desktop-focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable to run ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system, which the literature indicates to be efficient.

  10. From ontology selection and semantic web to an integrated information system for food-borne diseases and food safety.

    PubMed

    Yan, Xianghe; Peng, Yun; Meng, Jianghong; Ruzante, Juliana; Fratamico, Pina M; Huang, Lihan; Juneja, Vijay; Needleman, David S

    2011-01-01

    Several factors have hindered effective use of information and resources related to food safety due to inconsistency among semantically heterogeneous data resources, lack of knowledge on profiling of food-borne pathogens, and knowledge gaps among research communities, government risk assessors/managers, and end-users of the information. This paper discusses technical aspects in the establishment of a comprehensive food safety information system consisting of the following steps: (a) computational collection and compiling publicly available information, including published pathogen genomic, proteomic, and metabolomic data; (b) development of ontology libraries on food-borne pathogens and design automatic algorithms with formal inference and fuzzy and probabilistic reasoning to address the consistency and accuracy of distributed information resources (e.g., PulseNet, FoodNet, OutbreakNet, PubMed, NCBI, EMBL, and other online genetic databases and information); (c) integration of collected pathogen profiling data, Foodrisk.org ( http://www.foodrisk.org ), PMP, Combase, and other relevant information into a user-friendly, searchable, "homogeneous" information system available to scientists in academia, the food industry, and government agencies; and (d) development of a computational model in semantic web for greater adaptability and robustness.

  11. Development of hybrid computer plasma models for different pressure regimes

    NASA Astrophysics Data System (ADS)

    Hromadka, Jakub; Ibehej, Tomas; Hrach, Rudolf

    2016-09-01

    With the increased performance of contemporary computers during the last decades, numerical simulations have become a very powerful tool applicable also in plasma physics research. Plasma is generally an ensemble of mutually interacting particles that is out of thermodynamic equilibrium, and for this reason fluid computer plasma models give results with only limited accuracy. On the other hand, much more precise particle models are often limited to 2D problems because of their huge demands on computer resources. Our contribution is devoted to hybrid modelling techniques that combine the advantages of both approaches mentioned above, particularly to their so-called iterative version. The study is focused on the mutual relations between fluid and particle models, which are demonstrated on calculations of the sheath structures of low-temperature argon plasma near a cylindrical Langmuir probe for medium and higher pressures. Results of a simple iterative hybrid plasma computer model are also given. The authors acknowledge the support of the Grant Agency of Charles University in Prague (project 220215).

  12. Explorationists and dinosaurs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, W.S.

    1993-02-01

    The exploration industry is changing, exploration technology is changing and the explorationist's job is changing. Resource companies are diversifying internationally and their central organizations are providing advisors rather than services. As a result, the relationship between the resource company and the contractor is changing. Resource companies are promoting standards so that all contract services in all parts of the world will look the same to their advisors. Contractors, for competitive reasons, want to look "different" from other contractors. The resource companies must encourage competition between contractors to insure the availability of new technology but must also resist the current trend of burdening the contractor with more and more of the risk involved in exploration. It is becoming more and more obvious that geophysical expenditures represent the best "value added" expenditures in exploration and development budgets. As a result, seismic-related contractors represent the growth component of our industry. The predominant growth is in 3-D seismic technology, and this growth is being further propelled by the computational power of the new generation of massively parallel computers and by recent advances in computer graphic techniques. Interpretation of seismic data involves the analysis of wavelet shapes and amplitudes prior to stacking the data. Thus, modern interpretation involves understanding compressional waves, shear waves, and propagating modes which create noise and interference. Modern interpretation and processing are carried out simultaneously, iteratively, and interactively and involve many physics-related concepts. These concepts are not merely tools for the interpretation, they are the interpretation. Explorationists who do not recognize this fact are going the way of the dinosaurs.

  13. Unified Performance and Power Modeling of Scientific Workloads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shuaiwen; Barker, Kevin J.; Kerbyson, Darren J.

    2013-11-17

    It is expected that scientific applications executing on future large-scale HPC must be optimized not only in terms of performance, but also in terms of power consumption. As power and energy become increasingly constrained resources, researchers and developers must have access to tools that will allow for accurate prediction of both performance and power consumption. Reasoning about performance and power consumption in concert will be critical for achieving maximum utilization of limited resources on future HPC systems. To this end, we present a unified performance and power model for the Nek-Bone mini-application developed as part of the DOE's CESAR Exascale Co-Design Center. Our models consider the impact of computation, point-to-point communication, and collective communication.

  14. Application of linear logic to simulation

    NASA Astrophysics Data System (ADS)

    Clarke, Thomas L.

    1998-08-01

    Linear logic, since its introduction by Girard in 1987, has proven expressive and powerful. Linear logic has provided natural encodings of Turing machines, Petri nets and other computational models. Linear logic is also capable of naturally modeling resource-dependent aspects of reasoning. The distinguishing characteristic of linear logic is that it accounts for resources; two instances of the same variable are considered differently from a single instance. Linear logic thus must obey a form of the linear superposition principle. A proposition can be reasoned with only once, unless a special operator is applied. Informally, linear logic distinguishes two kinds of conjunction, two kinds of disjunction, and also introduces a modal storage operator that explicitly indicates propositions that can be reused. This paper discusses the application of linear logic to simulation. A wide variety of logics have been developed; in addition to classical logic, there are fuzzy logics, affine logics, quantum logics, etc. All of these have found application in simulations of one sort or another. The special characteristics of linear logic and its benefits for simulation will be discussed. Of particular interest is a connection that can be made between linear logic and simulated dynamics by using the concepts of Lie algebras and Lie groups. Lie groups provide the connection between the exponential modal storage operators of linear logic and the eigenfunctions of dynamic differential operators. Particularly suggestive are possible relations between complexity results for linear logic and non-computability results for dynamical systems.
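    For readers unfamiliar with the notation the abstract alludes to, the standard linear-logic connectives (textbook background, not material specific to this paper) can be summarized as:

```latex
% Standard linear-logic connectives (Girard, 1987).
% \multimap comes from amssymb; \parr comes from the cmll package; \text needs amsmath.
\[
  A \otimes B \ (\text{tensor: both, each usable once}), \quad
  A \mathbin{\&} B \ (\text{with: choose one}), \quad
  A \oplus B \ (\text{plus}),
\]
\[
  A \parr B \ (\text{par}), \quad
  A \multimap B \ (\text{linear implication: consume an $A$ to produce a $B$}), \quad
  \mathord{!}A \ (\text{storage: $A$ may be reused}).
\]
```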

  15. Propagating Resource Constraints Using Mutual Exclusion Reasoning

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Sanchez, Romeo; Do, Minh B.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    One of the most recent techniques for propagating resource constraints in Constraint-Based scheduling is the Energy Constraint. This technique focuses on precedence-based scheduling, where precedence relations are taken into account rather than the absolute positions of activities. Although this particular technique proved to be efficient on discrete unary resources, it provides only loose bounds for jobs using discrete multi-capacity resources. In this paper we show how mutual exclusion reasoning can be used to propagate time bounds for activities using discrete resources. We show that our technique, based on critical path analysis and mutex reasoning, is just as effective on unary resources, and show that it is more effective on multi-capacity resources, through both examples and empirical study.
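    A generic sketch of the critical-path component of such propagation (not the authors' algorithm): earliest start times computed as longest paths through a precedence graph with fixed activity durations. The activities and durations below are hypothetical.

```python
from collections import defaultdict

def earliest_start_times(durations, precedences):
    """Longest-path (critical-path) propagation of earliest start times.

    durations:   {activity: duration}
    precedences: iterable of (before, after) pairs, meaning `before` must finish
                 before `after` may start.
    """
    succs = defaultdict(list)
    indegree = {a: 0 for a in durations}
    for before, after in precedences:
        succs[before].append(after)
        indegree[after] += 1

    est = {a: 0 for a in durations}
    ready = [a for a, deg in indegree.items() if deg == 0]
    while ready:
        a = ready.pop()
        for b in succs[a]:
            est[b] = max(est[b], est[a] + durations[a])
            indegree[b] -= 1
            if indegree[b] == 0:
                ready.append(b)
    return est

# Hypothetical activities, durations, and precedence relations.
durations = {"load": 2, "assemble": 4, "test": 3, "ship": 1}
precedences = [("load", "assemble"), ("assemble", "test"),
               ("assemble", "ship"), ("test", "ship")]
print(earliest_start_times(durations, precedences))
```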

  16. Predicting Development of Mathematical Word Problem Solving Across the Intermediate Grades

    PubMed Central

    Tolar, Tammy D.; Fuchs, Lynn; Cirino, Paul T.; Fuchs, Douglas; Hamlett, Carol L.; Fletcher, Jack M.

    2012-01-01

    This study addressed predictors of the development of word problem solving (WPS) across the intermediate grades. At beginning of 3rd grade, 4 cohorts of students (N = 261) were measured on computation, language, nonverbal reasoning skills, and attentive behavior and were assessed 4 times from beginning of 3rd through end of 5th grade on 2 measures of WPS at low and high levels of complexity. Language skills were related to initial performance at both levels of complexity and did not predict growth at either level. Computational skills had an effect on initial performance in low- but not high-complexity problems and did not predict growth at either level of complexity. Attentive behavior did not predict initial performance but did predict growth in low-complexity, whereas it predicted initial performance but not growth for high-complexity problems. Nonverbal reasoning predicted initial performance and growth for low-complexity WPS, but only growth for high-complexity WPS. This evidence suggests that although mathematical structure is fixed, different cognitive resources may act as limiting factors in WPS development when the WPS context is varied. PMID:23325985

  17. Tool for Analysis and Reduction of Scientific Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    The Automated Scheduling and Planning Environment (ASPEN) computer program has been updated to version 3.0. ASPEN as a whole (up to version 2.0) has been summarized, and selected aspects of ASPEN have been discussed in several previous NASA Tech Briefs articles. Restated briefly, ASPEN is a modular, reconfigurable, application software framework for solving batch problems that involve reasoning about time, activities, states, and resources. Applications of ASPEN can include planning spacecraft missions, scheduling of personnel, and managing supply chains, inventories, and production lines. ASPEN 3.0 can be customized for a wide range of applications and for a variety of computing environments that include various central processing units and random-access memories. Domain-specific reasoning modules (e.g., modules for determining orbits for spacecraft) can easily be plugged into ASPEN 3.0. Improvements over other, similar software that have been incorporated into ASPEN 3.0 include a provision for more expressive time-line values, new parsing capabilities afforded by an ASPEN language based on Extensible Markup Language, improved search capabilities, and improved interfaces to other, utility-type software (notably including MATLAB).

  18. The Effect of Functional Hearing and Hearing Aid Usage on Verbal Reasoning in a Large Community-Dwelling Population.

    PubMed

    Keidser, Gitte; Rudner, Mary; Seeto, Mark; Hygge, Staffan; Rönnberg, Jerker

    2016-01-01

    Verbal reasoning performance is an indicator of the ability to think constructively in everyday life and relies on both crystallized and fluid intelligence. This study aimed to determine the effect of functional hearing on verbal reasoning when controlling for age, gender, and education. In addition, the study investigated whether hearing aid usage mitigated the effect and examined different routes from hearing to verbal reasoning. Cross-sectional data on 40- to 70-year-old community-dwelling participants from the UK Biobank resource were accessed. Data consisted of behavioral and subjective measures of functional hearing, assessments of numerical and linguistic verbal reasoning, measures of executive function, and demographic and lifestyle information. Data on 119,093 participants who had completed hearing and verbal reasoning tests were submitted to multiple regression analyses, and data on 61,688 of these participants, who had completed additional cognitive tests and provided relevant lifestyle information, were submitted to structural equation modeling. Poorer performance on the behavioral measure of functional hearing was significantly associated with poorer verbal reasoning in both the numerical and linguistic domains (p < 0.001). There was no association between the subjective measure of functional hearing and verbal reasoning. Functional hearing significantly interacted with education (p < 0.002), showing a trend for functional hearing to have a greater impact on verbal reasoning among those with a higher level of formal education. Among those with poor hearing, hearing aid usage had a significant positive, but not necessarily causal, effect on both numerical and linguistic verbal reasoning (p < 0.005). The estimated effect of hearing aid usage was less than the effect of poor functional hearing. Structural equation modeling analyses confirmed that controlling for education reduced the effect of functional hearing on verbal reasoning and showed that controlling for executive function eliminated the effect. However, when computer usage was controlled for, the eliminating effect of executive function was weakened. Poor functional hearing was associated with poor verbal reasoning in a 40- to 70-year-old community-dwelling population after controlling for age, gender, and education. The effect of functional hearing on verbal reasoning was significantly reduced among hearing aid users and completely overcome by good executive function skills, which may be enhanced by playing computer games.

  19. Experience on HTCondor batch system for HEP and other research fields at KISTI-GSDC

    NASA Astrophysics Data System (ADS)

    Ahn, S. U.; Jaikar, A.; Kong, B.; Yeo, I.; Bae, S.; Kim, J.

    2017-10-01

    The Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI), located in Daejeon, South Korea, is the only datacenter in the country that supports, with its computing resources, fundamental research fields dealing with large-scale data. For historical reasons, it has run the Torque batch system, while it has recently started running HTCondor for new systems. Having different kinds of batch systems implies inefficiency in terms of resource management and utilization. We conducted research on resource management with HTCondor for several user scenarios corresponding to the user environments that GSDC currently supports. A recent study of the resource usage patterns at GSDC is used in this research to build the possible user scenarios. The checkpointing and Super-Collector model of HTCondor give us a more efficient and flexible way to manage resources, and the Grid Gate provided by HTCondor helps to interface with the Grid environment. In this paper, the overview on the essential features of HTCondor exploited in this work is described and the practical examples for HTCondor cluster configuration in our cases are presented.

  20. 30 CFR 721.14 - Failure to give notice and lack of reasonable belief.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Failure to give notice and lack of reasonable belief. 721.14 Section 721.14 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT... and lack of reasonable belief. No notice of violation or cessation order may be vacated by reason of...

  1. 30 CFR 721.14 - Failure to give notice and lack of reasonable belief.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Failure to give notice and lack of reasonable belief. 721.14 Section 721.14 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT... and lack of reasonable belief. No notice of violation or cessation order may be vacated by reason of...

  2. 30 CFR 721.14 - Failure to give notice and lack of reasonable belief.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Failure to give notice and lack of reasonable belief. 721.14 Section 721.14 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT... and lack of reasonable belief. No notice of violation or cessation order may be vacated by reason of...

  3. 30 CFR 721.14 - Failure to give notice and lack of reasonable belief.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Failure to give notice and lack of reasonable belief. 721.14 Section 721.14 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT... and lack of reasonable belief. No notice of violation or cessation order may be vacated by reason of...

  4. 30 CFR 721.14 - Failure to give notice and lack of reasonable belief.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Failure to give notice and lack of reasonable belief. 721.14 Section 721.14 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT... and lack of reasonable belief. No notice of violation or cessation order may be vacated by reason of...

  5. Computer classification of remotely sensed multispectral image data by extraction and classification of homogeneous objects

    NASA Technical Reports Server (NTRS)

    Kettig, R. L.

    1975-01-01

    A method of classification of digitized multispectral images is developed and experimentally evaluated on actual earth resources data collected by aircraft and satellite. The method is designed to exploit the characteristic dependence between adjacent states of nature that is neglected by the more conventional simple-symmetric decision rule. Thus contextual information is incorporated into the classification scheme. The principal reason for doing this is to improve the accuracy of the classification. For general types of dependence this would generally require more computation per resolution element than the simple-symmetric classifier. But when the dependence occurs in the form of redundance, the elements can be classified collectively, in groups, thereby reducing the number of classifications required.

  6. Learning and Reasoning in Unknown Domains

    NASA Astrophysics Data System (ADS)

    Strannegård, Claes; Nizamani, Abdul Rahim; Juel, Jonas; Persson, Ulf

    2016-12-01

    In the story Alice in Wonderland, Alice fell down a rabbit hole and suddenly found herself in a strange world called Wonderland. Alice gradually developed knowledge about Wonderland by observing, learning, and reasoning. In this paper we present the system Alice In Wonderland that operates analogously. As a theoretical basis of the system, we define several basic concepts of logic in a generalized setting, including the notions of domain, proof, consistency, soundness, completeness, decidability, and compositionality. We also prove some basic theorems about those generalized notions. Then we model Wonderland as an arbitrary symbolic domain and Alice as a cognitive architecture that learns autonomously by observing random streams of facts from Wonderland. Alice is able to reason by means of computations that use bounded cognitive resources. Moreover, Alice develops her belief set by continuously forming, testing, and revising hypotheses. The system can learn a wide class of symbolic domains and challenge average human problem solvers in such domains as propositional logic and elementary arithmetic.

  7. Understanding the Performance and Potential of Cloud Computing for Scientific Applications

    DOE PAGES

    Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin; ...

    2015-02-19

    Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in the context of price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.

  8. Understanding the Performance and Potential of Cloud Computing for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin

    Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in the context of price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.

  9. Optimizing R with SparkR on a commodity cluster for biomedical research.

    PubMed

    Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan

    2016-12-01

    Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymous data, which is often not possible for preliminary or occasional tasks. We explored the possibilities to build a private, flexible cluster for processing scripts in R based on commodity, non-dedicated hardware of our department. For this, a GWAS-calculation in R on a single desktop computer, a Message Passing Interface (MPI)-cluster, and a SparkR-cluster were compared with regards to the performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and allows supporting big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regards to data distribution.

  10. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    NASA Astrophysics Data System (ADS)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  11. Study of sensor spectral responses and data processing algorithms and architectures for onboard feature identification

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Davis, R. E.; Fales, C. L.; Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic processes involved in remote sensing is used to study spectral feature identification techniques for real-time onboard processing of data acquired with advanced earth-resources sensors. Preliminary results indicate that: Narrow spectral responses are advantageous; signal normalization improves mean-square distance (MSD) classification accuracy but tends to degrade maximum-likelihood (MLH) classification accuracy; and MSD classification of normalized signals performs better than the computationally more complex MLH classification when imaging conditions change appreciably from those conditions during which reference data were acquired. The results also indicate that autonomous categorization of TM signals into vegetation, bare land, water, snow and clouds can be accomplished with adequate reliability for many applications over a reasonably wide range of imaging conditions. However, further analysis is required to develop computationally efficient boundary approximation algorithms for such categorization.
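    For reference, the two decision rules being compared can be sketched as follows (generic textbook forms with hypothetical class statistics, not the study's sensor model): minimum mean-square distance to class means versus Gaussian maximum likelihood.

```python
import numpy as np

def msd_classify(pixels, class_means):
    """Minimum mean-square-distance rule: assign each pixel to the nearest class mean."""
    d2 = ((pixels[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def mlh_classify(pixels, class_means, class_covs):
    """Gaussian maximum-likelihood rule with equal priors."""
    scores = []
    for mean, cov in zip(class_means, class_covs):
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(-0.5 * (mahal + logdet))  # log-likelihood up to a constant
    return np.stack(scores, axis=1).argmax(axis=1)

# Hypothetical 4-band signatures for three classes (e.g. vegetation, water, bare land).
class_means = np.array([[40.0, 60.0, 30.0, 90.0],
                        [20.0, 25.0, 15.0, 10.0],
                        [70.0, 65.0, 60.0, 55.0]])
class_covs = np.stack([np.eye(4) * s for s in (25.0, 9.0, 36.0)])
pixels = np.random.default_rng(1).normal(class_means[[0, 1, 2, 1]], 3.0)
print(msd_classify(pixels, class_means), mlh_classify(pixels, class_means, class_covs))
```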

  12. Integrated system dynamics toolbox for water resources planning.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reno, Marissa Devan; Passell, Howard David; Malczynski, Leonard A.

    2006-12-01

    Public mediated resource planning is quickly becoming the norm rather than the exception. Unfortunately, supporting tools are lacking that interactively engage the public in the decision-making process and integrate over the myriad values that influence water policy. In the pages of this report we document the first steps toward developing a specialized decision framework to meet this need; specifically, a modular and generic resource-planning "toolbox". The technical challenge lies in the integration of the disparate systems of hydrology, ecology, climate, demographics, economics, policy and law, each of which influences the supply and demand for water. Specifically, these systems, their associated processes, and most importantly the constitutive relations that link them must be identified, abstracted, and quantified. For this reason, the toolbox forms a collection of process modules and constitutive relations that the analyst can "swap" in and out to model the physical and social systems unique to their problem. This toolbox with all of its modules is developed within the common computational platform of system dynamics linked to a Geographical Information System (GIS). Development of this resource-planning toolbox represents an important foundational element of the proposed interagency center for Computer Aided Dispute Resolution (CADRe). The Center's mission is to manage water conflict through the application of computer-aided collaborative decision-making methods. The Center will promote the use of decision-support technologies within collaborative stakeholder processes to help stakeholders find common ground and create mutually beneficial water management solutions. The Center will also serve to develop new methods and technologies to help federal, state and local water managers find innovative and balanced solutions to the nation's most vexing water problems. The toolbox is an important step toward achieving the technology development goals of this center.

  13. [APPLICATION OF COMPUTER-ASSISTED TECHNOLOGY IN ANALYSIS OF REVISION REASON OF UNICOMPARTMENTAL KNEE ARTHROPLASTY].

    PubMed

    Jia, Di; Li, Yanlin; Wang, Guoliang; Gao, Huanyu; Yu, Yang

    2016-01-01

    To summarize the reasons for revision of unicompartmental knee arthroplasty (UKA) identified using computer-assisted technology, so as to provide a reference for reducing revision incidence and improving surgical technique and rehabilitation. The recent literature on analyzing UKA revision reasons using computer-assisted technology was extensively reviewed. The revision reasons identified by computer-assisted technology are fracture of the medial tibial plateau, progressive osteoarthritis of the preserved compartment, dislocation of the mobile bearing, prosthesis loosening, polyethylene wear, and unexplained persistent pain. Computer-assisted technology can be used to analyze the reasons for UKA revision and to guide the best operative method and rehabilitation scheme by simulating the operative process and knee joint motion.

  14. 75 FR 6792 - Proposed Information Collection (Reasonable Accommodation) Activity: Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-10

    ... Collection (Reasonable Accommodation) Activity: Comment Request AGENCY: Office of Human Resources and Administration, Department of Veterans Affairs. ACTION: Notice. SUMMARY: The Office of Human Resources and... ; or to David Walton, Office of Human Resources Management (06), Department of Veterans Affairs, 810...

  15. Visualization Methods for Viability Studies of Inspection Modules for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Mobasher, Amir A.

    2005-01-01

    An effective simulation of an object, process, or task must be similar to that object, process, or task. A simulation could consist of a physical device, a set of mathematical equations, a computer program, a person, or some combination of these. There are many reasons for the use of simulators. Although some of the reasons are unique to a specific situation, there are many general reasons and purposes for using simulators. These include, but are not limited to, (1) safety, (2) scarce resources, (3) teaching/education, (4) additional capabilities, (5) flexibility, and (6) cost. Robot simulators are in use for all of these reasons. Virtual environments such as simulators eliminate physical contact with humans and hence increase the safety of the work environment. Corporations with limited funding and resources may utilize simulators to accomplish their goals while saving manpower and money. A computer simulation is safer than working with a real robot. Robots are typically a scarce resource. Schools typically don't have a large number of robots, if any. Factories don't want the robots taken away from useful work unless absolutely necessary. Robot simulators are useful in teaching robotics. A simulator gives a student hands-on experience, even if only in simulation. The simulator is also more flexible: a user can quickly change the robot configuration, the workcell, or even replace the robot with a different one altogether. In order to be useful, a robot simulator must create a model that accurately performs like the real robot. A powerful simulator is usually thought of as a combination of a CAD package with simulation capabilities. Computer Aided Design (CAD) techniques are used extensively by engineers in virtually all areas of engineering. Parts are designed interactively, aided by the graphical display of both wireframe and more realistic shaded renderings. Once a part's dimensions have been specified to the CAD package, designers can view the part from any direction to examine how it will look and perform in relation to other parts. If changes are deemed necessary, the designer can easily make the changes and view the results graphically. However, a complex process of moving parts intended for operation in a complex environment can only be fully understood through the process of animated graphical simulation. A CAD package with simulation capabilities allows the designer to develop geometrical models of the process being designed, as well as the environment in which the process will be used, and then test the process in graphical animation much as the actual physical system would be run. By being able to operate the system of moving and stationary parts, the designer is able to see in simulation how the system will perform under a wide variety of conditions. If, for example, undesired collisions occur between parts of the system, design changes can be easily made without the expense or potential danger of testing the physical system.

  16. Investigating College and Graduate Students' Multivariable Reasoning in Computational Modeling

    ERIC Educational Resources Information Center

    Wu, Hsin-Kai; Wu, Pai-Hsing; Zhang, Wen-Xin; Hsu, Ying-Shao

    2013-01-01

    Drawing upon the literature in computational modeling, multivariable reasoning, and causal attribution, this study aims at characterizing multivariable reasoning practices in computational modeling and revealing the nature of understanding about multivariable causality. We recruited two freshmen, two sophomores, two juniors, two seniors, four…

  17. Comparison of gross anatomy test scores using traditional specimens vs. QuickTime Virtual Reality animated specimens

    NASA Astrophysics Data System (ADS)

    Maza, Paul Sadiri

    In recent years, technological advances such as computers have been employed in teaching gross anatomy at all levels of education, even in professional schools such as medical and veterinary medical colleges. Benefits of computer-based instructional tools for gross anatomy include the convenience of not having to physically view or dissect a cadaver. Anatomy educators debate over the advantages versus the disadvantages of computer-based resources for gross anatomy instruction. Many studies, case reports, and editorials argue for the increased use of computer-based anatomy educational tools, while others discuss the necessity of dissection for various reasons important in learning anatomy, such as a three-dimensional physical view of the specimen, physical handling of tissues, interactions with fellow students during dissection, and differences between specific specimens. While many articles deal with gross anatomy education using computers, there seems to be a lack of studies investigating the use of computer-based resources as an assessment tool for gross anatomy, specifically using the Apple application QuickTime Virtual Reality (QTVR). This study investigated whether computer-based QTVR movie module assessments were equal in quality to actual physical specimen examinations. A gross anatomy course in the College of Veterinary Medicine at Cornell University was used as a source of anatomy students and gross anatomy examinations. Two groups were compared: one group took gross anatomy examinations in a traditional manner, by viewing actual physical specimens and answering questions based on those specimens; the other group took the same examinations using the same specimens, but the specimens were viewed as simulated three-dimensional objects in a QTVR movie module. Sample group means for the assessments were compared. A survey was also administered asking about students' perceptions of the quality and user-friendliness of the QTVR movie modules. The comparison of the two sample group means of the examinations shows that there was no difference in results between using QTVR movie modules to test gross anatomy knowledge and using physical specimens. The results of this study are discussed to explain the benefits of using such computer-based anatomy resources in gross anatomy assessments.

  18. Computer access and Internet use by urban and suburban emergency department customers.

    PubMed

    Bond, Michael C; Klemt, Ryan; Merlis, Jennifer; Kopinski, Judith E; Hirshon, Jon Mark

    2012-07-01

    Patients are increasingly using the Internet (43% in 2000 vs. 70% in 2006) to obtain health information, but is there a difference in the ability of urban and suburban emergency department (ED) customers to access the Internet? To assess computer and Internet resources available to and used by people waiting to be seen in an urban ED and a suburban ED. Individuals waiting in the ED were asked survey questions covering demographics, type of insurance, access to a primary care provider, reason for their ED visit, computer access, and ability to access the Internet for health-related matters. There were 304 individuals who participated, 185 in the urban ED and 119 in the suburban ED. Urban subjects were more likely than suburban to be women, black, have low household income, and were less likely to have insurance. The groups were similar in regard to average age, education, and having a primary care physician. Suburban respondents were more likely to own a computer, but the majority in both groups had access to computers and the Internet. Their frequency of accessing the Internet was similar, as were their reasons for using it. Individuals from the urban ED were less willing to schedule appointments via the Internet but more willing to contact their health care provider via e-mail. The groups were equally willing to use the Internet to fill prescriptions and view laboratory results. Urban and suburban ED customers had similar access to the Internet. Both groups were willing to use the Internet to access personal health information. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. A development framework for distributed artificial intelligence

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Cottman, Bruce H.

    1989-01-01

    The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.

  20. Centrality-based Selection of Semantic Resources for Geosciences

    NASA Astrophysics Data System (ADS)

    Cerba, Otakar; Jedlicka, Karel

    2017-04-01

    Semantic questions arise in almost all disciplines dealing with geographic data and information, because relevant semantics is crucial for any form of communication and interaction among humans as well as among machines. However, the existence of such a large number of different semantic resources (such as various thesauri, controlled vocabularies, knowledge bases or ontologies) makes implementing semantics much more difficult and limits the benefits it can deliver, because in many cases users are not able to find the most suitable resource for their purposes. The research presented in this paper introduces a methodology, consisting of an analysis of identity relations in the Linked Data space (which covers a majority of semantic resources), for finding a suitable source of semantic information. Identity links interconnect representations of an object or a concept across various semantic resources. This type of relation is therefore crucial from the Linked Data point of view, because such links provide additional information, including different views of one concept based on different cultural or regional aspects (the so-called social role of Linked Data). For these reasons, one reasonable criterion for selecting a semantic resource, in almost all domains including the geosciences, is its position in the network of interconnected semantic resources and its level of linking to other knowledge bases and similar products. The presented methodology searches for mutual connections between the various instances of one concept using a "follow your nose" approach. The extracted data on interconnections between semantic resources are arranged into directed graphs and processed with various metrics patterned on centrality computation (degree, closeness or betweenness centrality). Semantic resources recommended by the research could be used to provide semantically described keywords for metadata records or as names of items in data models. Such an approach enables much more efficient data harmonization, integration, sharing and exploitation. * * * * This publication was supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports. This publication was supported by project Data-Driven Bioeconomy (DataBio) from the ICT-15-2016-2017, Big Data PPP call.
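
    To illustrate the kind of centrality computation the methodology relies on (not the authors' actual pipeline), the sketch below builds a toy directed graph of identity links between semantic resources and ranks the resources by degree, closeness and betweenness centrality using networkx; the resource names and links are invented for the example.

```python
# Sketch: rank semantic resources by centrality in a graph of identity links.
# The graph is a toy example; resource names and links are hypothetical.
import networkx as nx

# Directed edges represent owl:sameAs-style links discovered by a
# "follow your nose" traversal between resources.
links = [
    ("DBpedia", "Wikidata"), ("Wikidata", "DBpedia"),
    ("GeoNames", "DBpedia"), ("AGROVOC", "DBpedia"),
    ("AGROVOC", "Wikidata"), ("GEMET", "AGROVOC"),
]
g = nx.DiGraph(links)

scores = {
    "degree": nx.degree_centrality(g),
    "closeness": nx.closeness_centrality(g),
    "betweenness": nx.betweenness_centrality(g),
}

# Recommend the resource with the highest average score across the metrics.
avg = {r: sum(scores[m][r] for m in scores) / len(scores) for r in g.nodes}
for r in sorted(avg, key=avg.get, reverse=True):
    print(f"{r:10s} degree={scores['degree'][r]:.2f} "
          f"closeness={scores['closeness'][r]:.2f} "
          f"betweenness={scores['betweenness'][r]:.2f}")
```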

  1. An acceptable role for computers in the aircraft design process

    NASA Technical Reports Server (NTRS)

    Gregory, T. J.; Roberts, L.

    1980-01-01

    Some of the reasons why the computerization trend is not wholly accepted are explored for two typical cases: computer use in the technical specialties and computer use in aircraft synthesis. The factors that limit acceptance are traced in part, to the large resources needed to understand the details of computer programs, the inability to include measured data as input to many of the theoretical programs, and the presentation of final results without supporting intermediate answers. Other factors are due solely to technical issues such as limited detail in aircraft synthesis and major simplifying assumptions in the technical specialties. These factors and others can be influenced by the technical specialist and aircraft designer. Some of these factors may become less significant as the computerization process evolves, but some issues, such as understanding large integrated systems, may remain issues in the future. Suggestions for improved acceptance include publishing computer programs so that they may be reviewed, edited, and read. Other mechanisms include extensive modularization of programs and ways to include measured information as part of the input to theoretical approaches.

  2. Parallel Computation of Unsteady Flows on a Network of Workstations

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. Utilizing a network of workstations seems an efficient solution, since large problems can be treated at a reasonable cost. This approach requires the solution of several problems: 1) partitioning and distributing the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. There is also the question of how efficiently any given numerical algorithm maps onto such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code, both two- and three-dimensional problems were studied, covering both steady and unsteady cases. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and communicate efficiently at each node, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.

  3. Reasoning and dyslexia: is visual memory a compensatory resource?

    PubMed

    Bacon, Alison M; Handley, Simon J

    2014-11-01

    Effective reasoning is fundamental to problem solving and achievement in education and employment. Protocol studies have previously suggested that people with dyslexia use reasoning strategies based on visual mental representations, whereas non-dyslexics use abstract verbal strategies. This research presents converging evidence from experimental and individual differences perspectives. In Experiment 1, dyslexic and non-dyslexic participants were similarly accurate on reasoning problems, but scores on a measure of visual memory ability only predicted reasoning accuracy for dyslexics. In Experiment 2, a secondary task loaded visual memory resources during concurrent reasoning. Dyslexics were significantly less accurate when reasoning under conditions of high memory load and showed reduced ability to subsequently recall the visual stimuli, suggesting that the memory and reasoning tasks were competing for the same visual cognitive resource. The results are consistent with an explanation based on limitations in the verbal and executive components of working memory in dyslexia and the use of compensatory visual strategies for reasoning. There are implications for cognitive activities that do not readily support visual thinking, whether in education, employment or less formal everyday settings. Copyright © 2014 John Wiley & Sons, Ltd.

  4. 75 FR 20427 - Agency Information Collection (Reasonable Accommodation) Activities Under OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-19

    ... Collection (Reasonable Accommodation) Activities Under OMB Review AGENCY: Office of Human Resources and... Reduction Act (PRA) of 1995 (44 U.S.C. 3501-21), this notice announces that the Office of Human Resources... VA's OMB Desk Officer, OMB Human Resources and Housing Branch, New Executive Office Building, Room...

  5. Service-oriented Reasoning Architecture for Resource-Task Assignment in Sensor Networks

    DTIC Science & Technology

    2011-04-01

    Excerpt (ontology: www.csd.abdn.ac.uk/research/ita/sam/downloads/ontology/ISTAR.owl; table of sensing resource platforms and sensors: SR4, Nimrod MR2, LDRFCamera, SARCamera, TVCamera; SR5, WASP, ...): "...resources in the theatre. This is because, according to the knowledge available to the ISTAR reasoner service, a 'Nimrod' could perform high altitude..."

  6. Applying Utility Functions to Adaptation Planning for Home Automation Applications

    NASA Astrophysics Data System (ADS)

    Bratskas, Pyrros; Paspallis, Nearchos; Kakousis, Konstantinos; Papadopoulos, George A.

    A pervasive computing environment typically comprises multiple embedded devices that may interact together and with mobile users. These users are part of the environment, and they experience it through a variety of devices embedded in the environment. This perception involves technologies which may be heterogeneous, pervasive, and dynamic. Due to the highly dynamic properties of such environments, the software systems running on them have to face problems such as user mobility, service failures, or resource and goal changes which may happen in an unpredictable manner. To cope with these problems, such systems must be autonomous and self-managed. In this chapter we deal with a special kind of a ubiquitous environment, a smart home environment, and introduce a user-preference-based model for adaptation planning. The model, which dynamically forms a set of configuration plans for resources, reasons automatically and autonomously, based on utility functions, on which plan is likely to best achieve the user's goals with respect to resource availability and user needs.
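
    The plan-selection step described above can be pictured as a small optimization: score every candidate configuration plan with a utility function that combines the user's preference weights with current resource availability, and pick the plan with the highest score. The sketch below is illustrative only; the plan properties, weights and availability figures are assumptions, not the chapter's actual model.

```python
# Sketch: pick the configuration plan with the highest utility given
# user preferences and current resource availability (illustrative only).
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    comfort: float      # 0..1, how well the plan meets the user's goals
    energy_use: float   # 0..1, fraction of the energy budget it needs
    cpu_use: float      # 0..1, fraction of the available CPU it needs

def utility(plan: Plan, prefs: dict, available: dict) -> float:
    """Weighted sum of preference terms, rejecting plans that exceed
    what the environment can currently provide."""
    if plan.energy_use > available["energy"] or plan.cpu_use > available["cpu"]:
        return float("-inf")  # infeasible under current resources
    return (prefs["comfort"] * plan.comfort
            - prefs["energy"] * plan.energy_use
            - prefs["cpu"] * plan.cpu_use)

plans = [
    Plan("all-sensors-on", comfort=0.9, energy_use=0.8, cpu_use=0.7),
    Plan("motion-only",    comfort=0.6, energy_use=0.3, cpu_use=0.2),
    Plan("minimal",        comfort=0.3, energy_use=0.1, cpu_use=0.1),
]
prefs = {"comfort": 1.0, "energy": 0.5, "cpu": 0.2}   # user's goal weights
available = {"energy": 0.5, "cpu": 0.6}               # current availability

best = max(plans, key=lambda p: utility(p, prefs, available))
print("selected plan:", best.name)
```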

  7. Privacy Preserving Association Rule Mining Revisited: Privacy Enhancement and Resources Efficiency

    NASA Astrophysics Data System (ADS)

    Mohaisen, Abedelaziz; Jho, Nam-Su; Hong, Dowon; Nyang, Daehun

    Privacy preserving association rule mining algorithms have been designed for discovering the relations between variables in data while maintaining the data privacy. In this article we revisit one of the recently introduced schemes for association rule mining using fake transactions (FS). In particular, our analysis shows that the FS scheme has excessive storage and high computation requirements for guaranteeing a reasonable level of privacy. We introduce a realistic definition of privacy that builds on average-case privacy and motivates the study of a structural weakness of FS, namely fake-transaction filtering. To overcome this problem, we improve the FS scheme by presenting a hybrid scheme that treats privacy and resources as two concurrent guidelines. Analytical and empirical results show the efficiency and applicability of our proposed scheme.

  8. Automated generation of patient-tailored electronic care pathways by translating computer-interpretable guidelines into hierarchical task networks.

    PubMed

    González-Ferrer, Arturo; ten Teije, Annette; Fdez-Olivares, Juan; Milian, Krystyna

    2013-02-01

    This paper describes a methodology which enables computer-aided support for the planning, visualization and execution of personalized patient treatments in a specific healthcare process, taking into account complex temporal constraints and the allocation of institutional resources. To this end, a translation from a time-annotated computer-interpretable guideline (CIG) model of a clinical protocol into a temporal hierarchical task network (HTN) planning domain is presented. The proposed method uses a knowledge-driven reasoning process to translate knowledge previously described in a CIG into a corresponding HTN Planning and Scheduling domain, taking advantage of HTNs' known ability to (i) dynamically cope with temporal and resource constraints, and (ii) automatically generate customized plans. The proposed method, focusing on the representation of temporal knowledge and based on the identification of workflow and temporal patterns in a CIG, makes it possible to automatically generate time-annotated and resource-based care pathways tailored to the needs of any possible patient profile. The proposed translation is illustrated through a case study based on a 70-page clinical protocol to manage Hodgkin's disease, developed by the Spanish Society of Pediatric Oncology. We show that an HTN planning domain can be generated from the corresponding specification of the protocol in the Asbru language, providing a running example of this translation. Furthermore, the correctness of the translation is checked, as is the handling of ten different types of temporal patterns represented in the protocol. By interpreting the automatically generated domain with a state-of-the-art HTN planner, a time-annotated care pathway is automatically obtained, customized for the patient's and institutional needs. The generated care pathway can then be used by clinicians to plan and manage the patient's long-term care. The described methodology makes it possible to automatically generate patient-tailored care pathways, leveraging an incremental knowledge-driven engineering process that starts from the expert knowledge of medical professionals. The presented approach makes the most of the strengths inherent in both CIG languages and HTN planning and scheduling techniques: for the former, knowledge acquisition and representation of the original clinical protocol, and for the latter, knowledge reasoning capabilities and an ability to deal with complex temporal and resource constraints. Moreover, the proposed approach provides immediate access to technologies such as business process management (BPM) tools, which are increasingly being used to support healthcare processes. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Controlling user access to electronic resources without password

    DOEpatents

    Smith, Fred Hewitt

    2015-06-16

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to the associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes receiving a user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
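
    The comparison at the heart of this process can be pictured as a similarity test between two sets of environmental observations. In the hypothetical sketch below, the indicia are nearby Wi-Fi network names and access is granted when their Jaccard similarity clears a threshold; the indicia type, the threshold and all names are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: grant access when user-proximal environmental
# indicia (here, visible Wi-Fi SSIDs) are sufficiently similar to the
# indicia pre-associated with the restricted resource.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

resource_env = {"lab-wifi", "printer-3f", "guest-net", "hvac-ctrl"}
user_env     = {"lab-wifi", "printer-3f", "guest-net", "phone-hotspot"}

SIMILARITY_THRESHOLD = 0.6   # illustrative policy value

similarity = jaccard(user_env, resource_env)
granted = similarity >= SIMILARITY_THRESHOLD
print(f"similarity={similarity:.2f} -> access {'granted' if granted else 'denied'}")
```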

  10. Laboratory Computing Resource Center

    Science.gov Websites

  11. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, high-parameter dimension, etc. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion, and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. And, a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
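
    The greedy, minimax flavour of the proposed strategy can be sketched as follows: given an estimated uncertainty reduction for each candidate observation location under each posterior parameter sample, repeatedly add the location that most reduces the worst-case (maximum over samples) remaining uncertainty. The numbers and the additive-reduction model below are invented for illustration and are not the paper's actual computation.

```python
# Sketch of a greedy, minimax selection of new observation locations.
# reduction[s][c] = assumed reduction in predictive uncertainty for
# posterior sample s if candidate location c is monitored (toy numbers).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_candidates = 20, 8
base_uncertainty = rng.uniform(1.0, 2.0, size=n_samples)
reduction = rng.uniform(0.0, 0.3, size=(n_samples, n_candidates))

def worst_case(selected):
    """Maximum remaining uncertainty over all posterior samples,
    assuming reductions from selected locations simply add up."""
    gain = reduction[:, selected].sum(axis=1) if selected else 0.0
    return np.max(base_uncertainty - gain)

budget, selected = 3, []
for _ in range(budget):
    remaining = [c for c in range(n_candidates) if c not in selected]
    # Greedy step: pick the candidate that minimizes the worst case.
    best = min(remaining, key=lambda c: worst_case(selected + [c]))
    selected.append(best)
    print(f"picked location {best}, worst-case uncertainty "
          f"{worst_case(selected):.3f}")
```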

  12. Development of a multi-disciplinary ERTS user program in the state of Ohio. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Baldridge, P. E.; Weber, C.; Schaal, G.; Wilhelm, C.; Wurelic, G. E.; Stephan, J. G.; Ebbert, T. F.; Smail, H. E.; Mckeon, J.; Schmidt, N. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. A current uniform land inventory was derived, in part, from LANDSAT data. The State has the ability to convert processed land information from LANDSAT to the Ohio Capability Analysis Program (OCAP). The OCAP is a computer information and mapping system comprised of various programs used to digitally store, analyze, and display land capability information. More accurate processing of LANDSAT data could lead to reasonably accurate, useful land allocation models. It was feasible to use LANDSAT data to investigate minerals, pollution, land use, and resource inventory.

  13. 30 CFR 550.271 - For what reasons will BOEM disapprove the DPP or DOCD?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 2 2012-07-01 2012-07-01 false For what reasons will BOEM disapprove the DPP or DOCD? 550.271 Section 550.271 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF... Information Review and Decision Process for the Dpp Or Docd § 550.271 For what reasons will BOEM disapprove...

  14. 30 CFR 550.271 - For what reasons will BOEM disapprove the DPP or DOCD?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 2 2013-07-01 2013-07-01 false For what reasons will BOEM disapprove the DPP or DOCD? 550.271 Section 550.271 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF... Information Review and Decision Process for the Dpp Or Docd § 550.271 For what reasons will BOEM disapprove...

  15. A FairShare Scheduling Service for OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Vallero, S.; Zaccolo, V.

    2017-10-01

    In the ideal limit of infinite resources, multi-tenant applications are able to scale in/out on a Cloud driven only by their functional requirements. While a large Public Cloud may be a reasonable approximation of this condition, small scientific computing centres usually work in a saturated regime. In this case, an advanced resource allocation policy is needed in order to optimize the use of the data centre. The general topic of advanced resource scheduling is addressed by several components of the EU-funded INDIGO-DataCloud project. In this contribution, we describe the FairShare Scheduler Service (FaSS) for OpenNebula (ONE). The service satisfies resource requests according to an algorithm which prioritizes tasks based on an initial weight and on the historical resource usage of the project. The software was designed to be as unobtrusive as possible in the ONE code. We keep the original ONE scheduler implementation to match requests to available resources, but the queue of pending jobs to be processed is ordered according to the priorities delivered by FaSS. The FaSS implementation is still being finalized; in this contribution we describe the functional and design requirements the module should satisfy, as well as its high-level architecture.
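
    A minimal sketch of the kind of prioritization FaSS performs is given below, assuming a priority that starts from an initial project weight and is lowered by the project's share of historical resource usage; the exact formula used by the service may differ.

```python
# Sketch: order pending jobs by a fair-share priority derived from an
# initial project weight and the project's historical resource usage.
# The formula is illustrative; FaSS's actual algorithm may differ.
from collections import namedtuple

Job = namedtuple("Job", "job_id project")

initial_weight = {"alice-exp": 2.0, "bob-exp": 1.0, "ops": 3.0}
past_usage_hours = {"alice-exp": 120.0, "bob-exp": 10.0, "ops": 400.0}
total_usage = sum(past_usage_hours.values())

def priority(project: str, decay: float = 1.0) -> float:
    """Higher initial weight raises priority; a larger share of the
    historical usage lowers it."""
    usage_share = past_usage_hours[project] / total_usage
    return initial_weight[project] * (1.0 - decay * usage_share)

pending = [Job(1, "ops"), Job(2, "alice-exp"), Job(3, "bob-exp"), Job(4, "ops")]
ordered = sorted(pending, key=lambda j: priority(j.project), reverse=True)
for job in ordered:
    print(job.job_id, job.project, round(priority(job.project), 3))
```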

  16. Benchmarking the SPHINX and CTH shock physics codes for three problems in ballistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, L.T.; Hertel, E.; Schwalbe, L.

    1998-02-01

    The CTH Eulerian hydrocode, and the SPHINX smooth particle hydrodynamics (SPH) code were used to model a shock tube, two long rod penetrations into semi-infinite steel targets, and a long rod penetration into a spaced plate array. The results were then compared to experimental data. Both SPHINX and CTH modeled the one-dimensional shock tube problem well. Both codes did a reasonable job in modeling the outcome of the axisymmetric rod impact problem. Neither code correctly reproduced the depth of penetration in both experiments. In the 3-D problem, both codes reasonably replicated the penetration of the rod through the first plate. After this, however, the predictions of both codes began to diverge from the results seen in the experiment. In terms of computer resources, the run times are problem dependent, and are discussed in the text.

  17. Research on elastic resource management for multi-queue under cloud computing environment

    NASA Astrophysics Data System (ADS)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of the different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for the cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practical runs, virtual computing resources dynamically expanded or shrank as computing requirements changed. Additionally, the CPU utilization ratio of the computing resources was significantly increased compared with traditional resource management. The system also performs well when there are multiple Condor schedulers and multiple job queues.
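
    The dual-threshold idea can be sketched as a control loop per job queue: expand the queue's virtual machine pool when the ratio of queued jobs to running machines exceeds an upper threshold, shrink it when the ratio falls below a lower one, and always respect the queue's quota. The thresholds, quota and step size below are assumptions for illustration, not the values used at IHEPCloud.

```python
# Sketch of dual-threshold elastic scaling for one HTCondor-style job queue.
# Thresholds, quota and step size are illustrative assumptions.
def scaling_decision(queued_jobs: int, running_vms: int,
                     quota: int, upper: float = 2.0, lower: float = 0.5,
                     step: int = 5) -> int:
    """Return the number of VMs to add (positive) or remove (negative)."""
    load = queued_jobs / max(running_vms, 1)
    if load > upper and running_vms < quota:
        return min(step, quota - running_vms)        # expand the pool
    if load < lower and running_vms > 0:
        return -min(step, running_vms)               # shrink the pool
    return 0                                         # stay put

# Example: a burst of queued jobs triggers expansion up to the quota.
print(scaling_decision(queued_jobs=120, running_vms=20, quota=60))   # ->  5
print(scaling_decision(queued_jobs=3,   running_vms=20, quota=60))   # -> -5
print(scaling_decision(queued_jobs=30,  running_vms=20, quota=60))   # ->  0
```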

  18. Biomedical discovery acceleration, with applications to craniofacial development.

    PubMed

    Leach, Sonia M; Tipney, Hannah; Feng, Weiguo; Baumgartner, William A; Kasliwal, Priyanka; Schuyler, Ronald P; Williams, Trevor; Spritz, Richard A; Hunter, Lawrence

    2009-03-01

    The profusion of high-throughput instruments and the explosion of new results in the scientific literature, particularly in molecular biomedicine, is both a blessing and a curse to the bench researcher. Even knowledgeable and experienced scientists can benefit from computational tools that help navigate this vast and rapidly evolving terrain. In this paper, we describe a novel computational approach to this challenge, a knowledge-based system that combines reading, reasoning, and reporting methods to facilitate analysis of experimental data. Reading methods extract information from external resources, either by parsing structured data or using biomedical language processing to extract information from unstructured data, and track knowledge provenance. Reasoning methods enrich the knowledge that results from reading by, for example, noting two genes that are annotated to the same ontology term or database entry. Reasoning is also used to combine all sources into a knowledge network that represents the integration of all sorts of relationships between a pair of genes, and to calculate a combined reliability score. Reporting methods combine the knowledge network with a congruent network constructed from experimental data and visualize the combined network in a tool that facilitates the knowledge-based analysis of that data. An implementation of this approach, called the Hanalyzer, is demonstrated on a large-scale gene expression array dataset relevant to craniofacial development. The use of the tool was critical in the creation of hypotheses regarding the roles of four genes never previously characterized as involved in craniofacial development; each of these hypotheses was validated by further experimental work.

  19. Educating executive function.

    PubMed

    Blair, Clancy

    2017-01-01

    Executive functions are thinking skills that assist with reasoning, planning, problem solving, and managing one's life. The brain areas that underlie these skills are interconnected with and influenced by activity in many different brain areas, some of which are associated with emotion and stress. One consequence of the stress-specific connections is that executive functions, which help us to organize our thinking, tend to be disrupted when stimulation is too high and we are stressed out, or too low when we are bored and lethargic. Given their central role in reasoning and also in managing stress and emotion, scientists have conducted studies, primarily with adults, to determine whether executive functions can be improved by training. By and large, results have shown that they can be, in part through computer-based videogame-like activities. Evidence of wider, more general benefits from such computer-based training, however, is mixed. Accordingly, scientists have reasoned that training will have wider benefits if it is implemented early, with very young children as the neural circuitry of executive functions is developing, and that it will be most effective if embedded in children's everyday activities. Evidence produced by this research, however, is also mixed. In sum, much remains to be learned about executive function training. Without question, however, continued research on this important topic will yield valuable information about cognitive development. WIREs Cogn Sci 2017, 8:e1403. doi: 10.1002/wcs.1403 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  20. Fostering Multilinguality in the UMLS: A Computational Approach to Terminology Expansion for Multiple Languages

    PubMed Central

    Hellrich, Johannes; Hahn, Udo

    2014-01-01

    We here report on efforts to computationally support the maintenance and extension of multilingual biomedical terminology resources. Our main idea is to treat term acquisition as a classification problem guided by term alignment in parallel multilingual corpora, using termhood information coming from a named entity recognition system as a novel feature. We report on experiments for the Spanish, French, German and Dutch parts of a multilingual UMLS-derived biomedical terminology. These efforts yielded 19k, 18k, 23k and 12k new terms and synonyms, respectively, of which about half relate to concepts without a previously available term label for these non-English languages. Based on expert assessment of a novel German terminology sample, 80% of the newly acquired terms were judged as reasonable additions to the terminology. PMID:25954371

  1. Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.

    PubMed

    Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William

    2017-01-01

    Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.

  2. Scheduling multimedia services in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Liu, Yunchang; Li, Chunlin; Luo, Youlong; Shao, Yanling; Zhang, Jing

    2018-02-01

    Currently, security is a critical factor for multimedia services running in the cloud computing environment. As an effective mechanism, trust can improve the security level and mitigate attacks within cloud computing environments. Unfortunately, existing scheduling strategies for multimedia services in the cloud computing environment do not integrate trust mechanisms when making scheduling decisions. In this paper, we propose a scheduling scheme for multimedia services across multiple clouds. First, a novel scheduling architecture is presented. Then, we build a trust model including both subjective trust and objective trust to evaluate the trust degree of multimedia service providers. By employing Bayesian theory, the subjective trust degree between multimedia service providers and users is obtained. According to the QoS attributes, the objective trust degree of multimedia service providers is calculated. Finally, a scheduling algorithm integrating the trust of entities is proposed, considering the deadline, cost and trust requirements of multimedia services. The scheduling algorithm heuristically searches for reasonable resource allocations that satisfy the trust requirements and meet the deadlines of the multimedia services. Detailed simulation experiments demonstrate the effectiveness and feasibility of the proposed trust scheduling scheme.
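
    To make the trust model concrete, a minimal sketch is given below: subjective trust is estimated in Bayesian fashion from counts of positive and negative interactions (the mean of a Beta posterior), objective trust is a weighted score over normalized QoS attributes, and providers that fail a trust threshold are excluded before deadline- and cost-aware selection. The weights, thresholds and attributes are assumptions, not the paper's exact formulation.

```python
# Sketch of trust-aware provider filtering (illustrative parameters).
def subjective_trust(positive: int, negative: int) -> float:
    """Mean of a Beta(positive+1, negative+1) posterior over honesty."""
    return (positive + 1) / (positive + negative + 2)

def objective_trust(qos: dict, weights: dict) -> float:
    """Weighted sum of QoS attributes, each normalized to [0, 1]."""
    return sum(weights[k] * qos[k] for k in weights)

providers = {
    "cloudA": {"pos": 80, "neg": 5,  "qos": {"availability": 0.99, "bandwidth": 0.7}},
    "cloudB": {"pos": 20, "neg": 15, "qos": {"availability": 0.90, "bandwidth": 0.9}},
}
w_qos = {"availability": 0.6, "bandwidth": 0.4}
alpha, trust_threshold = 0.5, 0.7   # blend factor and admission threshold

def total_trust(p):
    s = subjective_trust(p["pos"], p["neg"])
    o = objective_trust(p["qos"], w_qos)
    return alpha * s + (1 - alpha) * o

trusted = {name: round(total_trust(p), 3) for name, p in providers.items()
           if total_trust(p) >= trust_threshold}
# Among trusted providers, later stages would pick the one meeting the
# task's deadline at minimum cost; here we only report the trust scores.
print(trusted)
```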

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solis, John Hector

    In this paper, we present a modular framework for constructing a secure and efficient program obfuscation scheme. Our approach, inspired by the obfuscation with respect to oracle machines model of [4], retains an interactive online protocol with an oracle, but relaxes the original computational and storage restrictions. We argue this is reasonable given the computational resources of modern personal devices. Furthermore, we relax the information-theoretic security requirement for computational security to utilize established cryptographic primitives. With this additional flexibility we are free to explore different cryptographic building blocks. Our approach combines authenticated encryption with private information retrieval to construct a secure program obfuscation framework. We give a formal specification of our framework, based on desired functionality and security properties, and provide an example instantiation. In particular, we implement AES in Galois/Counter Mode for authenticated encryption and the Gentry-Ramzan [13] constant communication-rate private information retrieval scheme. We present our implementation results and show that non-trivial sized programs can be realized, but scalability is quickly limited by computational overhead. Finally, we include a discussion on security considerations when instantiating specific modules.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radtke, M.A.

    This paper will chronicle the activity at Wisconsin Public Service Corporation (WPSC) that resulted in the complete migration of a traditional, late 1970's vintage, Energy Management System (EMS). The new environment includes networked microcomputers, minicomputers, and the corporate mainframe, and provides on-line access to employees outside the energy control center and some WPSC customers. In the late 1980's, WPSC was forecasting an EMS computer upgrade or replacement to address both capacity and technology needs. Reasoning that access to diverse computing resources would best position the company to accommodate the uncertain needs of the energy industry in the '90s, WPSC chose to investigate an in-place migration to a network of computers, able to support heterogeneous hardware and operating systems. The system was developed in a modular fashion, with individual modules being deployed as soon as they were completed. The functional and technical specification was continuously enhanced as operating experience was gained from each operational module. With the migration off the original EMS computers complete, the networked system called DEMAXX (Distributed Energy Management Architecture with eXtensive eXpandability) has exceeded expectations in the areas of: cost, performance, flexibility, and reliability.

  6. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    NASA Astrophysics Data System (ADS)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request in particular requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
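
    One way to picture the hierarchical strategy is as a two-level allocator: a front end dispatches each session request to a cluster with spare capacity, and the chosen cluster then maps the transceiver chain onto its processors. The sketch below shows only the first level with a simple first-fit rule; the cluster capacities and request sizes are invented.

```python
# Sketch: first-fit dispatch of SDR transceiver chains to data-centre clusters.
# Capacities and requests are in abstract "compute units" (invented numbers).
clusters = [
    {"name": "cluster-0", "capacity": 100, "used": 90},
    {"name": "cluster-1", "capacity": 100, "used": 40},
    {"name": "cluster-2", "capacity": 100, "used": 75},
]

def dispatch(request_units: int):
    """Return the first cluster able to host the transceiver chain, or None."""
    for c in clusters:
        if c["capacity"] - c["used"] >= request_units:
            c["used"] += request_units
            return c["name"]
    return None   # request blocked: no cluster has enough free capacity

for units in (15, 30, 30, 30):
    print(units, "->", dispatch(units))
```

    Even this toy version shows the tradeoff the paper points to: smaller clusters are cheaper to manage but block requests sooner than one large pool would.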

  7. Simulation Use in Paramedic Education Research (SUPER): A Descriptive Study

    PubMed Central

    McKenna, Kim D.; Carhart, Elliot; Bercher, Daniel; Spain, Andrew; Todaro, John; Freel, Joann

    2015-01-01

    Objectives. The purpose of this research was to characterize the use of simulation in initial paramedic education programs in order to assist stakeholders' efforts to target educational initiatives and resources. This group sought to provide a snapshot of what simulation resources programs have or have access to and how they are used; faculty perceptions about simulation; whether program characteristics, resources, or faculty training influence simulation use; and if simulation resources are uniform for patients of all ages. Methods. This was a cross-sectional census survey of paramedic programs that were accredited or had a Letter of Review from the Committee on Accreditation of Educational Programs for the EMS Professions at the time of the study. The data were analyzed using descriptive statistics and chi-square analyses. Results. Of the 638 surveys sent, 389 valid responses (61%) were analyzed. Paramedic programs reported they have or have access to a wide range of simulation resources (task trainers [100%], simple manikins [100%], intermediate manikins [99%], advanced/fully programmable manikins [91%], live simulated patients [83%], computer-based [71%], and virtual reality [19%]); however, they do not consistently use them, particularly advanced (71%), live simulated patients (66%), computer-based (games, scenarios) (31%), and virtual reality (4%). Simulation equipment (of any type) reportedly sits idle and unused in 31% of programs. Lack of training was cited as the most common reason. Personnel support specific to simulation was available in 44% of programs. Programs reported using simulation to replace skills more frequently than to replace field or clinical hours. Simulation goals included assessment, critical thinking, and problem-solving most frequently, and patient and crew safety least often. Programs using advanced manikins report manufacturers as their primary means of training (87%) and that 19% of faculty had no training specific to those manikins. Many (78%) respondents felt they should use more simulation. Conclusions. Paramedic programs have and have access to diverse simulation resources; however, faculty training and other program resources appear to influence their use. PMID:25664774

  8. Simulation Use in Paramedic Education Research (SUPER): A Descriptive Study.

    PubMed

    McKenna, Kim D; Carhart, Elliot; Bercher, Daniel; Spain, Andrew; Todaro, John; Freel, Joann

    2015-01-01

    The purpose of this research was to characterize the use of simulation in initial paramedic education programs in order to assist stakeholders' efforts to target educational initiatives and resources. This group sought to provide a snapshot of what simulation resources programs have or have access to and how they are used; faculty perceptions about simulation; whether program characteristics, resources, or faculty training influence simulation use; and if simulation resources are uniform for patients of all ages. This was a cross-sectional census survey of paramedic programs that were accredited or had a Letter of Review from the Committee on Accreditation of Educational Programs for the EMS Professions at the time of the study. The data were analyzed using descriptive statistics and chi-square analyses. Of the 638 surveys sent, 389 valid responses (61%) were analyzed. Paramedic programs reported they have or have access to a wide range of simulation resources (task trainers [100%], simple manikins [100%], intermediate manikins [99%], advanced/fully programmable manikins [91%], live simulated patients [83%], computer-based [71%], and virtual reality [19%]); however, they do not consistently use them, particularly advanced (71%), live simulated patients (66%), computer-based (games, scenarios) (31%), and virtual reality (4%). Simulation equipment (of any type) reportedly sits idle and unused in 31% of programs. Lack of training was cited as the most common reason. Personnel support specific to simulation was available in 44% of programs. Programs reported using simulation to replace skills more frequently than to replace field or clinical hours. Simulation goals included assessment, critical thinking, and problem-solving most frequently, and patient and crew safety least often. Programs using advanced manikins report manufacturers as their primary means of training (87%) and that 19% of faculty had no training specific to those manikins. Many (78%) respondents felt they should use more simulation. Paramedic programs have and have access to diverse simulation resources; however, faculty training and other program resources appear to influence their use.

  9. Graded meshes in bio-thermal problems with transmission-line modeling method.

    PubMed

    Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G

    2014-10-01

    In this study, transmission-line modeling (TLM) applied to bio-thermal problems was improved by incorporating several novel computational techniques, including the application of graded meshes, which made computation 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, and thus results in more realistic modeling of complex problems, is introduced. Also, a new way of calculating an error parameter is introduced. The calculated temperatures between nodes were compared against results from the literature and agreed to within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for heat transfer analysis of biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
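
    A graded mesh of the kind described concentrates nodes where temperature gradients are steep (for example, near a heated boundary) by letting the node spacing grow geometrically away from it. The short sketch below generates such a one-dimensional node distribution; the domain length, initial spacing and growth ratio are chosen arbitrarily for illustration and are not taken from the study.

```python
# Sketch: 1-D graded node spacing that grows geometrically away from x = 0,
# concentrating nodes near the boundary (growth ratio chosen arbitrarily).
def graded_nodes(length: float, first_dx: float, ratio: float) -> list[float]:
    nodes, x, dx = [0.0], 0.0, first_dx
    while x + dx < length:
        x += dx
        nodes.append(round(x, 6))
        dx *= ratio            # each segment is `ratio` times the previous one
    nodes.append(length)       # close the domain exactly
    return nodes

nodes = graded_nodes(length=0.05, first_dx=0.001, ratio=1.3)
print(len(nodes), "nodes:", nodes[:5], "...", nodes[-2:])
```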

  10. Developing Computer Model-Based Assessment of Chemical Reasoning: A Feasibility Study

    ERIC Educational Resources Information Center

    Liu, Xiufeng; Waight, Noemi; Gregorius, Roberto; Smith, Erica; Park, Mihwa

    2012-01-01

    This paper reports a feasibility study on developing computer model-based assessments of chemical reasoning at the high school level. Computer models are flash and NetLogo environments to make simultaneously available three domains in chemistry: macroscopic, submicroscopic, and symbolic. Students interact with computer models to answer assessment…

  11. Solving probability reasoning based on DNA strand displacement and probability modules.

    PubMed

    Zhang, Qiang; Wang, Xiaobiao; Wang, Xiaojun; Zhou, Changjun

    2017-12-01

    In computational biology, DNA strand displacement technology is used to simulate the computation process and has shown strong computing ability. Most researchers use it to solve logic problems, but it is only rarely used for probabilistic reasoning. To support probabilistic reasoning, a conditional probability derivation model and a total probability model based on DNA strand displacement were established in this paper. The models were assessed through the game "read your mind," and they have been shown to enable the application of probabilistic reasoning to genetic diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.
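
    The probability logic that the strand-displacement modules encode is ordinary conditional and total probability. A small numeric example of the two rules, with made-up numbers unrelated to the paper's chemistry, is:

```python
# Worked example of the two rules the strand-displacement modules encode:
# total probability and conditional probability (Bayes). Numbers are invented.
p_d = 0.01                  # prior probability of a condition D
p_pos_given_d = 0.95        # test sensitivity   P(+|D)
p_pos_given_not_d = 0.05    # false positive rate P(+|not D)

# Total probability: P(+) = P(+|D)P(D) + P(+|not D)P(not D)
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Conditional probability derivation (Bayes): P(D|+) = P(+|D)P(D) / P(+)
p_d_given_pos = p_pos_given_d * p_d / p_pos

print(f"P(+)   = {p_pos:.4f}")          # 0.0590
print(f"P(D|+) = {p_d_given_pos:.4f}")  # 0.1610
```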

  12. A study of computer graphics technology in application of communication resource management

    NASA Astrophysics Data System (ADS)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics has come into wide use. In particular, the success of object-oriented and multimedia technologies has promoted the development of graphics within computer software systems. Computer graphics theory and its applications have therefore become an important topic in computing, and graphics technology is applied in an ever wider range of fields. In recent years, with the growth of the economy and especially the rapid development of information technology, the traditional approach to communication resource management can no longer meet actual needs. Communication resource management still relies on the original tools and methods for managing and maintaining equipment, which has caused many problems: it is very difficult for non-specialists to understand the equipment and the overall state of communication resources, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. To address these problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more visual and intuitive, but also reduces the cost of resource management and improves work efficiency.

  13. Fixed-Base Comb with Window-Non-Adjacent Form (NAF) Method for Scalar Multiplication

    PubMed Central

    Seo, Hwajeong; Kim, Hyunjin; Park, Taehwan; Lee, Yeoncheol; Liu, Zhe; Kim, Howon

    2013-01-01

    Elliptic curve cryptography (ECC) is one of the most promising public-key techniques in terms of short key size and the variety of crypto protocols it supports. For this reason, many studies have been conducted on implementing ECC on resource-constrained devices within a practical execution time. To this end, we must focus on scalar multiplication, which is the most expensive operation in ECC. A number of studies have proposed pre-computation and advanced scalar multiplication using a non-adjacent form (NAF) representation, and more sophisticated approaches have employed a width-w NAF representation and a modified pre-computation table. In this paper, we propose a new pre-computation method in which zero occurrences are much more frequent than in previous methods. This method can be applied to ordinary group scalar multiplication, but it requires a large pre-computation table, so we combine the previous method with ours for practical purposes. This novel structure makes it possible to finely adjust the trade-off between speed and table size, so the pre-computation table can be customized for a given purpose. Finally, we can establish a customized look-up table for embedded microprocessors. PMID:23881143
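
    For readers unfamiliar with the underlying representation, the sketch below computes a standard width-w NAF of a scalar (the well-known recoding the paper builds on, not the paper's new pre-computation method): every non-zero digit is odd and lies in (-2^(w-1), 2^(w-1)), and any two non-zero digits are separated by at least w-1 zeros, which is what keeps table look-ups sparse.

```python
# Standard width-w NAF recoding of a scalar (least-significant digit first).
# Illustrates the representation the paper builds on, not its new scheme.
def wnaf(k: int, w: int = 4) -> list[int]:
    digits = []
    while k > 0:
        if k & 1:                      # k is odd: take a signed residue
            d = k % (1 << w)
            if d >= (1 << (w - 1)):    # map into (-2^(w-1), 2^(w-1))
                d -= 1 << w
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

digits = wnaf(1234567, w=4)
# Verify the recoding: sum(d_i * 2^i) must reconstruct the scalar.
assert sum(d << i for i, d in enumerate(digits)) == 1234567
print(digits)
```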

  14. A resource management architecture based on complex network theory in cloud computing federation

    NASA Astrophysics Data System (ADS)

    Zhang, Zehua; Zhang, Xuejie

    2011-10-01

    Cloud Computing Federation is a major trend in Cloud Computing. Resource management has a significant effect on the design, realization, and efficiency of a Cloud Computing Federation. A Cloud Computing Federation has the typical characteristics of a complex system; therefore, we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC) in this paper, with a detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers to evolve the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance and adaptability. The results of the model experiment confirmed the advantage of RMABC in resource discovery performance.

  15. Approaching Gender Parity: Women in Computer Science at Afghanistan's Kabul University

    ERIC Educational Resources Information Center

    Plane, Jandelyn

    2010-01-01

    This study explores the representation of women in computer science at the tertiary level through data collected about undergraduate computer science education at Kabul University in Afghanistan. Previous studies have theorized reasons for underrepresentation of women in computer science, and while many of these reasons are indeed present in…

  16. Teaching Inductive Reasoning with Puzzles

    ERIC Educational Resources Information Center

    Wanko, Jeffrey J.

    2017-01-01

    Working with language-independent logic structures can help students develop both inductive and deductive reasoning skills. The Japanese publisher Nikoli (with resources available both in print and online) produces a treasure trove of language-independent logic puzzles. The Nikoli print resources are mostly in Japanese, creating the extra…

  17. A resource-sharing model based on a repeated game in fog computing.

    PubMed

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.
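
    The incentive logic can be illustrated with the standard repeated-game (grim-trigger) sharing condition sketched below; it is a simplified stand-in, not the paper's crowd-funding algorithm or incentive mechanism, and all payoff values are assumptions for illustration.

        # A minimal sketch of the repeated-game condition under which a resource
        # owner keeps sharing spare capacity with the fog resource pool: the
        # one-shot gain from withholding resources must be outweighed by the
        # discounted loss of future cooperation payoffs.

        def keeps_sharing(coop_payoff, defect_payoff, punish_payoff, discount):
            """True if sharing is sustainable under a grim-trigger strategy."""
            one_shot_gain = defect_payoff - coop_payoff
            future_loss = (discount / (1.0 - discount)) * (coop_payoff - punish_payoff)
            return one_shot_gain <= future_loss

        if __name__ == "__main__":
            # Example: sharing earns 3 per round, withholding once earns 5,
            # but being excluded from the pool afterwards earns only 1 per round.
            print(keeps_sharing(coop_payoff=3, defect_payoff=5,
                                punish_payoff=1, discount=0.6))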

  18. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    PubMed

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment using a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results show that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
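
    The simulated annealing refinement step can be illustrated as follows. The sketch is a minimal stand-in for the local search inside SASOS, assuming a task-to-VM assignment encoding and a makespan objective; it is not the authors' implementation.

        import math
        import random

        # A minimal sketch of the SA acceptance step described in the abstract:
        # a candidate task-to-VM assignment is always accepted if it improves
        # the makespan, and accepted with Boltzmann probability otherwise.

        def makespan(assignment, task_len, vm_speed):
            """Completion time of the busiest VM under a task->VM assignment."""
            load = [0.0] * len(vm_speed)
            for task, vm in enumerate(assignment):
                load[vm] += task_len[task] / vm_speed[vm]
            return max(load)

        def sa_accept(current_cost, candidate_cost, temperature):
            """Accept improvements; accept worse moves with decaying probability."""
            if candidate_cost <= current_cost:
                return True
            return random.random() < math.exp((current_cost - candidate_cost) / temperature)

        if __name__ == "__main__":
            task_len = [4.0, 2.0, 7.0, 3.0]
            vm_speed = [1.0, 2.0]
            current = [0, 0, 1, 1]
            candidate = [0, 1, 1, 0]
            if sa_accept(makespan(current, task_len, vm_speed),
                         makespan(candidate, task_len, vm_speed),
                         temperature=1.0):
                current = candidate
            print(current, makespan(current, task_len, vm_speed))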

  19. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment

    PubMed Central

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment using a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results show that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127

  20. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    NASA Astrophysics Data System (ADS)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

    To support teachers in implementing Computer Science curricula in classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children with the intention of engaging children and increasing interest, rather than of formally teaching concepts and skills. What is the educational quality of existing Computer Science resources, and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study. The findings reveal a predominance of quality resources; however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.

  1. iTools: a framework for classification, categorization and integration of computational biology resources.

    PubMed

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.

  2. The LHCb software and computing upgrade for Run 3: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Bozzi, C.; Roiser, S.; LHCb Collaboration

    2017-10-01

    The LHCb detector will be upgraded for LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi- and many-core architectures and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, testing and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.

  3. Signalling and obfuscation for congestion control

    NASA Astrophysics Data System (ADS)

    Mareček, Jakub; Shorten, Robert; Yu, Jia Yuan

    2015-10-01

    We aim to reduce the social cost of congestion in many smart city applications. In our model of congestion, agents interact over limited resources after receiving signals from a central agent that observes the state of congestion in real time. Under natural models of agent populations, we develop new signalling schemes and show that by introducing a non-trivial amount of uncertainty in the signals, we reduce the social cost of congestion, i.e., improve social welfare. The signalling schemes are efficient in terms of both communication and computation, and are consistent with past observations of the congestion. Moreover, the resulting population dynamics converge under reasonable assumptions.

  4. Provider-Independent Use of the Cloud

    NASA Astrophysics Data System (ADS)

    Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron

    Utility computing offers researchers and businesses the potential of significant cost-savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructures on-demand; when a user requires computing resources a provider will provision a resource for them and charge them only for their period of use of that resource. There has been a significant growth in the number of cloud computing resource providers and each has a different resource usage model, application process and application programming interface (API)-developing generic multi-resource provider applications is thus difficult and time consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers that enables cloud-provider neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
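
    The sketch below illustrates the general shape of such a provider-neutral abstraction layer; the class and method names are hypothetical, not the authors' API, and the fake adapter merely stands in for a real provider SDK call.

        from abc import ABC, abstractmethod

        # A minimal sketch of a provider-neutral abstraction layer: each adapter
        # hides one provider's resource usage model behind a single
        # provisioning interface, so applications stay cloud-provider neutral.

        class ComputeProvider(ABC):
            @abstractmethod
            def provision(self, image: str, instance_type: str) -> str:
                """Start a compute resource and return a provider-neutral handle."""

            @abstractmethod
            def terminate(self, handle: str) -> None:
                """Release the resource so charging stops."""

        class FakeEC2Provider(ComputeProvider):
            """Stand-in adapter; a real one would call the provider's API/SDK."""
            def provision(self, image, instance_type):
                return f"ec2:{image}:{instance_type}"
            def terminate(self, handle):
                print(f"terminated {handle}")

        def run_job(provider: ComputeProvider, image: str, instance_type: str):
            handle = provider.provision(image, instance_type)
            try:
                print(f"running job on {handle}")   # application logic goes here
            finally:
                provider.terminate(handle)

        if __name__ == "__main__":
            run_job(FakeEC2Provider(), image="base-linux", instance_type="small")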

  5. Kaiser Permanente-Sandia National Health Care Model: Phase 1 prototype final report. Part 2 -- Domain analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.; Yoshimura, A.; Butler, D.

    This report describes the results of a Cooperative Research and Development Agreement between Sandia National Laboratories and Kaiser Permanente Southern California to develop a prototype computer model of Kaiser Permanente's health care delivery system. As a discrete event simulation, SimHCO models for each of 100,000 patients the progression of disease, individual resource usage, and patient choices in a competitive environment. SimHCO is implemented in the object-oriented programming language C², stressing reusable knowledge and reusable software components. The versioned implementation of SimHCO showed that the object-oriented framework allows the program to grow in complexity in an incremental way. Furthermore, timing calculations showed that SimHCO runs in a reasonable time on typical workstations, and that a second phase model will scale proportionally and run within the system constraints of contemporary computer technology.

  6. Physically Based Virtual Surgery Planning and Simulation Tools for Personal Health Care Systems

    NASA Astrophysics Data System (ADS)

    Dogan, Firat; Atilgan, Yasemin

    Virtual surgery planning and simulation tools have gained a great deal of importance in the last decade as a consequence of increasing capabilities in information technology. Modern hardware architectures, large-scale database systems, grid-based computer networks, agile development processes, better 3D visualization and the other strengths of information technology bring the necessary instruments to almost every desk. The special software and sophisticated supercomputer environments of the last decade now serve individual needs inside "tiny smart boxes" at reasonable prices. However, resistance to learning new computerized environments, insufficient training and other old habits prevent effective utilization of IT resources by specialists in the health sector. In this paper, former and current developments in surgery planning and simulation tools are presented, and future directions and expectations for better electronic health care systems are investigated.

  7. Calm water resistance prediction of a bulk carrier using Reynolds averaged Navier-Stokes based solver

    NASA Astrophysics Data System (ADS)

    Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan

    2017-12-01

    Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, JBC, in calm water using computational fluid dynamics solvers, namely SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RaNS) method and solve structured grids using the finite volume method (FVM). This paper compares the numerical results of calm water tests on the JBC model with available experimental results. The calm water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimal computational resources.

  8. A new Gaussian MCTDH program: Implementation and validation on the levels of the water and glycine molecules

    NASA Astrophysics Data System (ADS)

    Skouteris, D.; Barone, V.

    2014-06-01

    We report the main features of a new general implementation of the Gaussian Multi-Configuration Time-Dependent Hartree model. The code allows effective computations of time-dependent phenomena, including calculation of vibronic spectra (in one or more electronic states), relative state populations, etc. Moreover, by expressing the Dirac-Frenkel variational principle in terms of an effective Hamiltonian, we are able to provide a new reliable estimate of the representation error. After validating the code on simple one-dimensional systems, we analyze the harmonic and anharmonic vibrational spectra of water and glycine showing that reliable and converged energy levels can be obtained with reasonable computing resources. The data obtained on water and glycine are compared with results of previous calculations using the vibrational second-order perturbation theory method. Additional features and perspectives are also shortly discussed.

  9. 18 CFR 2.15 - Specified reasonable rate of return.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Specified reasonable rate of return. 2.15 Section 2.15 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES GENERAL POLICY AND INTERPRETATIONS Statements of General...

  10. 18 CFR 2.15 - Specified reasonable rate of return.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Specified reasonable rate of return. 2.15 Section 2.15 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES GENERAL POLICY AND INTERPRETATIONS Statements of General...

  11. 18 CFR 2.15 - Specified reasonable rate of return.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Specified reasonable rate of return. 2.15 Section 2.15 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES GENERAL POLICY AND INTERPRETATIONS Statements of General...

  12. 18 CFR 2.15 - Specified reasonable rate of return.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Specified reasonable rate of return. 2.15 Section 2.15 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES GENERAL POLICY AND INTERPRETATIONS Statements of General...

  13. dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    This report introduces publications that present the results of a project that aimed to design a computational framework enabling computational experimentation at scale while supporting the model of "submit locally, compute globally". The project focused on estimating application resource needs, finding the appropriate computing resources, acquiring those resources, deploying the applications and data on the resources, and managing applications and resources during the run.

  14. iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    PubMed Central

    Dinov, Ivo D.; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H. V.; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D. Stott; Toga, Arthur W.

    2008-01-01

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources–data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu. PMID:18509477

  15. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
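
    A minimal sketch of the kind of job dispatch such tools delegate to HTCondor is shown below. It is not the CondorPy API: it simply writes a basic HTCondor submit description and hands it to the condor_submit command, which is assumed to be installed and on the PATH; the model executable and its argument are placeholders.

        import subprocess
        import tempfile

        # Write a basic HTCondor submit description and queue one model run.
        SUBMIT_TEMPLATE = """\
        executable = {executable}
        arguments  = {arguments}
        output     = job_$(Cluster).out
        error      = job_$(Cluster).err
        log        = job_$(Cluster).log
        queue
        """

        def submit_model_run(executable, arguments):
            """Queue one model run on the local HTCondor pool."""
            with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
                f.write(SUBMIT_TEMPLATE.format(executable=executable, arguments=arguments))
                submit_file = f.name
            subprocess.run(["condor_submit", submit_file], check=True)

        if __name__ == "__main__":
            submit_model_run("./run_hydro_model.sh", "--scenario baseline")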

  16. The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt

    2014-05-01

    Cloud computing provides enormous opportunities for the research community. The large public cloud providers provide near-limitless scaling capability. However, adapting Cloud to scientific workloads is not without its problems. The commodity nature of the public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models make additional barriers to more widespread adoption. Alongside the application of public cloud for scientific applications, a number of private cloud initiatives are underway in the research community of which the JASMIN Cloud is one example. Here, cloud service models are being effectively super-imposed over more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed for the science community coupled with the benefits of a Cloud service model. The JASMIN facility based at the Rutherford Appleton Laboratory was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled success: peta-scale fast disk connected via low latency networks to compute resources and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see significant expansion to the resources available with a doubling of disk-based storage to 12PB and an increase of compute capacity by a factor of ten to over 3000 processing cores. This expansion is accompanied by a broadening in the scope for JASMIN, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This enables greater liberty to tenants - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and external interfaces to public cloud providers giving users the flexibility to migrate resources between infrastructures as requirements dictate.

  17. A Computational Account of Children's Analogical Reasoning: Balancing Inhibitory Control in Working Memory and Relational Representation

    ERIC Educational Resources Information Center

    Morrison, Robert G.; Doumas, Leonidas A. A.; Richland, Lindsey E.

    2011-01-01

    Theories accounting for the development of analogical reasoning tend to emphasize either the centrality of relational knowledge accretion or changes in information processing capability. Simulations in LISA (Hummel & Holyoak, 1997, 2003), a neurally inspired computer model of analogical reasoning, allow us to explore how these factors may…

  18. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    NASA Astrophysics Data System (ADS)

    Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.

    2011-12-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and we conclude that it is most cost-effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
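
    The kind of cost comparison described above can be sketched as follows; every price, node count, and lifetime in the example is an illustrative assumption rather than CMS or EC2 pricing, and the point is only that amortized dedicated capacity tends to win for steady baseline load while pay-per-use wins for short bursts.

        # Compare amortized dedicated capacity against on-demand cloud instances
        # for a steady baseline load plus an occasional burst. All figures are
        # illustrative assumptions.

        def dedicated_cost(nodes, capex_per_node, lifetime_hours, opex_per_node_hour):
            """Amortized hourly cost of owning `nodes` worker nodes, run or idle."""
            return nodes * (capex_per_node / lifetime_hours + opex_per_node_hour)

        def cloud_cost(node_hours_used, price_per_node_hour):
            """Pay-as-you-go cost for the node-hours actually consumed."""
            return node_hours_used * price_per_node_hour

        if __name__ == "__main__":
            baseline_nodes = 100
            hours = 24 * 365
            owned = dedicated_cost(baseline_nodes, capex_per_node=3000.0,
                                   lifetime_hours=4 * hours,
                                   opex_per_node_hour=0.05) * hours
            rented = cloud_cost(baseline_nodes * hours, price_per_node_hour=0.20)
            burst = cloud_cost(50 * 200, price_per_node_hour=0.20)  # 50 extra nodes for 200 h
            print(f"baseline owned: ${owned:,.0f}/yr  baseline rented: ${rented:,.0f}/yr"
                  f"  burst on cloud: ${burst:,.0f}")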

  19. Statistics Online Computational Resource for Education

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  20. Temporal and Resource Reasoning for Planning, Scheduling and Execution in Autonomous Agents

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Hunsberger, Luke; Tsamardinos, Ioannis

    2005-01-01

    This viewgraph slide tutorial reviews methods for planning and scheduling events, using several examples of scheduling events for the successful and timely completion of an overall plan. Using constraint-based models, the presentation covers planning with time, time representations in problem solving, and resource reasoning.

  1. 18 CFR 2.15 - Specified reasonable rate of return.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... average cost of long-term debt and preferred stock for the year, and the cost of common equity shall be... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Specified reasonable rate of return. 2.15 Section 2.15 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY...

  2. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    ERIC Educational Resources Information Center

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  3. A blueprint for computational analysis of acoustical scattering from orchestral panel arrays

    NASA Astrophysics Data System (ADS)

    Burns, Thomas

    2005-09-01

    Orchestral panel arrays have been a topic of interest to acousticians, and it is reasonable to expect optimal design criteria to result from a combination of musician surveys, on-stage empirical data, and computational modeling of various configurations. Preparing a musicians survey to identify specific mechanisms of perception and sound quality is best suited for a clinically experienced hearing scientist. Measuring acoustical scattering from a panel array and discerning the effects from various boundaries is best suited for the experienced researcher in engineering acoustics. Analyzing a numerical model of the panel arrays is best suited for the tools typically used in computational engineering analysis. Toward this end, a streamlined process will be described using PROENGINEER to define a panel array geometry in 3-D, a commercial mesher to numerically discretize this geometry, SYSNOISE to solve the associated boundary element integral equations, and MATLAB to visualize the results. The model was run (background priority) on an SGI Altix (Linux) server with 12 CPUs, 24 Gbytes of RAM, and 1 Tbyte of disk space. These computational resources are available to research teams interested in this topic and willing to write and pursue grants.

  4. Reinforcement learning for resource allocation in LEO satellite networks.

    PubMed

    Usaha, Wipawee; Barria, Javier A

    2007-06-01

    In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.
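
    The tabular TD(0) value update of the kind used by both methods above can be sketched as follows; the toy chain environment, step size, and discount factor are illustrative assumptions, not the paper's admission-control formulation.

        import random

        # A minimal tabular TD(0) sketch: the value estimate of each state is
        # nudged toward the observed reward plus the discounted value of the
        # next state (the temporal-difference error).

        def td0(value, state, reward, next_state, alpha=0.1, gamma=0.95):
            """One temporal-difference update of the state-value estimate."""
            td_error = reward + gamma * value[next_state] - value[state]
            value[state] += alpha * td_error
            return value

        if __name__ == "__main__":
            # Toy chain: states 0..4, reward 1 only when reaching the last state.
            value = [0.0] * 5
            for _ in range(1000):
                s = 0
                while s < 4:
                    s_next = min(s + random.choice([1, 2]), 4)
                    r = 1.0 if s_next == 4 else 0.0
                    value = td0(value, s, r, s_next)
                    s = s_next
            print([round(v, 2) for v in value])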

  5. Barriers and decisions when answering clinical questions at the point of care: a grounded theory study.

    PubMed

    Cook, David A; Sorensen, Kristi J; Wilkinson, John M; Berger, Richard A

    2013-11-25

    Answering clinical questions affects patient-care decisions and is important to continuous professional development, yet the process of point-of-care learning is incompletely understood. The objective was to understand what barriers and enabling factors influence physician point-of-care learning and what decisions physicians face during this process. Focus group discussions, held at an academic medical center and outlying community sites with a purposive sample of 50 primary care and subspecialist internal medicine and family medicine physicians in 11 focus groups, were transcribed and then analyzed using a constant comparative (grounded theory) approach to identify barriers, enabling factors, and key decisions related to physician information-seeking activities. Insufficient time was the main barrier to point-of-care learning. Other barriers included patient comorbidities and contexts, the volume of available information, not knowing which resource to search, doubt that the search would yield an answer, difficulty remembering questions for later study, and inconvenient access to computers. Key decisions were whether to search (reasons to search included infrequently seen conditions, practice updates, complex questions, and patient education), when to search (before, during, or after the clinical encounter), where to search (with the patient present or in a separate room), what type of resource to use (colleague or computer), what specific resource to use (influenced first by efficiency and second by credibility), and when to stop. Participants noted that key features of efficiency (completeness, brevity, and searchability) are often in conflict. Physicians perceive that insufficient time is the greatest barrier to point-of-care learning, and efficiency is the most important determinant in selecting an information source. Designing knowledge resources and systems to target key decisions may improve learning and patient care.

  6. Collaborative workbench for cyberinfrastructure to accelerate science algorithm development

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Maskey, M.; Kuo, K.; Lynnes, C.

    2013-12-01

    There are significant untapped resources for information and knowledge creation within the Earth Science community in the form of data, algorithms, services, analysis workflows or scripts, and the related knowledge about these resources. Despite the huge growth in social networking and collaboration platforms, these resources often reside on an investigator's workstation or laboratory and are rarely shared. A major reason for this is that there are very few scientific collaboration platforms, and those that exist typically require the use of a new set of analysis tools and paradigms to leverage the shared infrastructure. As a result, adoption of these collaborative platforms for science research is inhibited by the high cost to an individual scientist of switching from his or her own familiar environment and set of tools to a new environment and tool set. This presentation will describe an ongoing project developing an Earth Science Collaborative Workbench (CWB). The CWB approach will eliminate this barrier by augmenting a scientist's current research environment and tool set to allow him or her to easily share diverse data and algorithms. The CWB will leverage evolving technologies such as commodity computing and social networking to design an architecture for scalable collaboration that will support the emerging vision of an Earth Science Collaboratory. The CWB is being implemented on the robust and open source Eclipse framework and will be compatible with widely used scientific analysis tools such as IDL. The myScience Catalog built into CWB will capture and track metadata and provenance about data and algorithms for the researchers in a non-intrusive manner with minimal overhead. Seamless interfaces to multiple Cloud services will support sharing algorithms, data, and analysis results, as well as access to storage and computer resources. A Community Catalog will track the use of shared science artifacts and manage collaborations among researchers.

  7. Psychological Trauma as a Reason for Computer Game Addiction among Adolescents

    ERIC Educational Resources Information Center

    Oskenbay, Fariza; Tolegenova, Aliya; Kalymbetova, Elmira; Chung, Man Cheung; Faizullina, Aida; Jakupov, Maksat

    2016-01-01

    This study explores psychological trauma as a reason for computer game addiction among adolescents. The findings of this study show that there is a connection between psychological trauma and computer game addiction. Some psychologists note that the main cause of any type of addiction derives from psychological trauma, and that finding such…

  8. Does Computer Use Matter? The Influence of Computer Usage on Eighth-Grade Students' Mathematics Reasoning

    ERIC Educational Resources Information Center

    Ayieko, Rachel A.; Gokbel, Elif N.; Nelson, Bryan

    2017-01-01

    This study uses the 2011 Trends in International Mathematics and Science Study to investigate the relationships among students' and teachers' computer use, and eighth-grade students' mathematical reasoning in three high-achieving nations: Finland, Chinese Taipei, and Singapore. The study found a significant negative relationship in all three…

  9. Computational methods for analysis and inference of kinase/inhibitor relationships

    PubMed Central

    Ferrè, Fabrizio; Palmeri, Antonio; Helmer-Citterich, Manuela

    2014-01-01

    The central role of kinases in virtually all signal transduction networks is the driving motivation for the development of compounds modulating their activity. ATP-mimetic inhibitors are essential tools for elucidating signaling pathways and are emerging as promising therapeutic agents. However, off-target ligand binding and complex and sometimes unexpected kinase/inhibitor relationships can occur for seemingly unrelated kinases, stressing that computational approaches are needed for learning the interaction determinants and for inferring the effect of small compounds on a given kinase. Recently published high-throughput profiling studies assessed the effects of thousands of small compound inhibitors, covering a substantial portion of the kinome. This wealth of data paved the road for computational resources and methods that can offer a major contribution to understanding the reasons for inhibition, helping in the rational design of more specific molecules, in the in silico prediction of inhibition for those neglected kinases for which no systematic analysis has yet been carried out, in the selection of novel inhibitors with desired selectivity, and in offering novel avenues of personalized therapies. PMID:25071826

  10. Health decision making: lynchpin of evidence-based practice.

    PubMed

    Spring, Bonnie

    2008-01-01

    Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. The evidence-based practice process requires integrating the evidence with consideration of practical resources and patient preferences and doing so via a process that is genuinely collaborative. Yet, the literature is largely silent about how to accomplish integrative, shared decision making. Implications for evidence-based practice are discussed for 2 theories of clinician decision making (expected utility and fuzzy trace) and 2 theories of patient health decision making (transtheoretical model and reasoned action). Three suggestions are offered. First, it would be advantageous to have theory-based algorithms that weight and integrate the 3 data strands (evidence, resources, preferences) in different decisional contexts. Second, patients, not providers, make the decisions of greatest impact on public health, and those decisions are behavioral. Consequently, theory explicating how provider-patient collaboration can influence patient lifestyle decisions made miles from the provider's office is greatly needed. Third, although the preponderance of data on complex decisions supports a computational approach, such an approach to evidence-based practice is too impractical to be widely applied at present. More troublesomely, until patients come to trust decisions made computationally more than they trust their providers' intuitions, patient adherence will remain problematic. A good theory of integrative, collaborative health decision making remains needed.

  11. Health Decision Making: Lynchpin of Evidence-Based Practice

    PubMed Central

    Spring, Bonnie

    2008-01-01

    Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. The evidence-based practice process requires integrating the evidence with consideration of practical resources and patient preferences and doing so via a process that is genuinely collaborative. Yet, the literature is largely silent about how to accomplish integrative, shared decision making. Implications for evidence-based practice are discussed for 2 theories of clinician decision making (expected utility and fuzzy trace) and 2 theories of patient health decision making (transtheoretical model and reasoned action). Three suggestions are offered. First, it would be advantageous to have theory-based algorithms that weight and integrate the 3 data strands (evidence, resources, preferences) in different decisional contexts. Second, patients, not providers, make the decisions of greatest impact on public health, and those decisions are behavioral. Consequently, theory explicating how provider-patient collaboration can influence patient lifestyle decisions made miles from the provider's office is greatly needed. Third, although the preponderance of data on complex decisions supports a computational approach, such an approach to evidence-based practice is too impractical to be widely applied at present. More troublesomely, until patients come to trust decisions made computationally more than they trust their providers’ intuitions, patient adherence will remain problematic. A good theory of integrative, collaborative health decision making remains needed. PMID:19015288

  12. Great Computational Intelligence in the Formal Sciences via Analogical Reasoning

    DTIC Science & Technology

    2017-05-08

    Final performance report AFRL-AFOSR-VA-TR-2017-0099, Rensselaer Polytechnic Institute (Selmer Bringsjord), covering 15 Oct 2011 to 31 Dec 2016. Only a fragment of the abstract is recoverable: the computational harnessing of traditional mathematical statistics (as covered, e.g., in Hogg, Craig & McKean 2005) is used to power statistical learning techniques...

  13. 43 CFR 3280.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... OF THE INTERIOR MINERALS MANAGEMENT (3000) GEOTHERMAL RESOURCES UNIT AGREEMENTS Geothermal Resources... resulting in: (1) Diligent development; (2) Efficient exploration, production and utilization of the resource; (3) Conservation of natural resources; and (4) Prevention of waste. Reasonably proven to produce...

  14. 43 CFR 3280.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... OF THE INTERIOR MINERALS MANAGEMENT (3000) GEOTHERMAL RESOURCES UNIT AGREEMENTS Geothermal Resources... resulting in: (1) Diligent development; (2) Efficient exploration, production and utilization of the resource; (3) Conservation of natural resources; and (4) Prevention of waste. Reasonably proven to produce...

  15. 43 CFR 3280.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... OF THE INTERIOR MINERALS MANAGEMENT (3000) GEOTHERMAL RESOURCES UNIT AGREEMENTS Geothermal Resources... resulting in: (1) Diligent development; (2) Efficient exploration, production and utilization of the resource; (3) Conservation of natural resources; and (4) Prevention of waste. Reasonably proven to produce...

  16. 43 CFR 3280.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... OF THE INTERIOR MINERALS MANAGEMENT (3000) GEOTHERMAL RESOURCES UNIT AGREEMENTS Geothermal Resources... resulting in: (1) Diligent development; (2) Efficient exploration, production and utilization of the resource; (3) Conservation of natural resources; and (4) Prevention of waste. Reasonably proven to produce...

  17. Flexible services for the support of research.

    PubMed

    Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John

    2013-01-28

    Cloud computing has been increasingly adopted by users and providers to promote a flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amount of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.

  18. Development and comparison of computational models for estimation of absorbed organ radiation dose in rainbow trout (Oncorhynchus mykiss) from uptake of iodine-131.

    PubMed

    Martinez, N E; Johnson, T E; Capello, K; Pinder, J E

    2014-12-01

    This study develops and compares different, increasingly detailed anatomical phantoms for rainbow trout (Oncorhynchus mykiss) for the purpose of estimating organ absorbed radiation dose and dose rates from (131)I uptake in multiple organs. The models considered are: a simplistic geometry considering a single organ, a more specific geometry employing additional organs with anatomically relevant size and location, and voxel reconstruction of internal anatomy obtained from CT imaging (referred to as CSUTROUT). Dose Conversion Factors (DCFs) for whole body as well as selected organs of O. mykiss were computed using Monte Carlo modeling, and combined with estimated activity concentrations, to approximate dose rates and ultimately determine cumulative radiation dose (μGy) to selected organs after several half-lives of (131)I. The different computational models provided similar results, especially for source organs (less than 30% difference between estimated doses), and whole body DCFs for each model (∼3 × 10(-3) μGy d(-1) per Bq kg(-1)) were comparable to DCFs listed in ICRP 108 for (131)I. The main benefit provided by the computational models developed here is the ability to accurately determine organ dose. A conservative mass-ratio approach may provide reasonable results for sufficiently large organs, but is only applicable to individual source organs. Although CSUTROUT is the more anatomically realistic phantom, it required much more resource dedication to develop and is less flexible than the stylized phantom for similar results. There may be instances where a detailed phantom such as CSUTROUT is appropriate, but generally the stylized phantom appears to be the best choice for an ideal balance between accuracy and resource requirements.
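
    The dose bookkeeping implied by the abstract (dose rate equals DCF times activity concentration, integrated over the decaying activity) can be sketched as follows. The DCF matches the whole-body value quoted above; the initial activity concentration is an assumption, and purely physical decay is a simplification that ignores biological elimination.

        import math

        # Cumulative whole-body dose from 131-I: dose rate = DCF x activity
        # concentration, integrated while the activity decays physically.

        HALF_LIFE_DAYS = 8.02          # physical half-life of 131-I
        DCF = 3e-3                     # uGy per day per (Bq/kg), whole body (from abstract)

        def cumulative_dose(c0_bq_per_kg, days, dt=0.1):
            """Integrate dose (uGy) over `days`, assuming purely physical decay."""
            lam = math.log(2) / HALF_LIFE_DAYS
            dose, t = 0.0, 0.0
            while t < days:
                activity = c0_bq_per_kg * math.exp(-lam * t)
                dose += DCF * activity * dt
                t += dt
            return dose

        if __name__ == "__main__":
            # e.g. 1000 Bq/kg initial whole-body concentration over ~5 half-lives
            print(round(cumulative_dose(1000.0, days=5 * HALF_LIFE_DAYS), 1), "uGy")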

  19. Rich in resources/deficient in dollars! Which titles do reference departments really need?

    PubMed

    Fishman, D L; DelBaglivo, M

    1998-10-01

    Budget pressures, combined with the growing availability of resources, dictate careful examination of reference use. Two studies were conducted at the University of Maryland Health Sciences Library to examine this issue. A twelve-month reshelving study determined use by title and discipline; a simultaneous study analyzed print abstract and index use in an electronic environment. Staff electronically recorded statistics for unshelved reference books, coded the collection by discipline, and tracked use by school. Oral surveys administered to reference room abstract and index users focused on title usage, user demographics, and stated reason for use. Sixty-five and a half percent of reference collection titles were used. Medical titles received the most use, but, in the context of collection size, dentistry and nursing titles used the greatest percentage of their collections. At an individual title level, medical textbooks and drug handbooks were most used. Users of abstracts and indexes were primarily campus nursing and medical students who preferred print resources. The monograph data will guide reference expenditures in canceling little-used standing orders, expanding most-used portions of the collection, and analyzing underused sections. The abstract and index survey identified the following needs: targeting instruction, contacting faculty who assign print resources, increasing the number of computer workstations, and installing signs linking databases to print equivalents.

  20. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, Tom; Yang, Xi

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyber infrastructure resource modeling and computation. The goal of this work was to provide a foundation for intelligent, software defined services which span the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves, from a topology and service availability perspective, within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) the Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) the Resource Computation Engine (RCE), and iii) a Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyber infrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the RAINS Computation Engine (RCE). The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which enables a variety of resource controllers to automatically generate, maintain, and distribute MRML-based resource descriptions. Once all of the resource topologies are absorbed by the RCE, a connected graph of the full distributed system topology is constructed, which forms the basis for computation and workflow processing. The RCE includes a Modular Computation Element (MCE) framework which allows for tailoring of the computation process to the specific set of resources under control, and the services desired. The input and output of an MCE are both model data based on the MRS/MRML ontology and schema. Some of the RAINS project accomplishments include: development of a general and extensible multi-resource modeling framework; design of a Resource Computation Engine (RCE) system whose key capabilities include absorbing a variety of multi-resource model types and building integrated models, a novel architecture which uses model-based communications across the full stack, flexible provision of abstract or intent-based user-facing interfaces, and workflow processing based on model descriptions; release of the RCE as open source software; deployment of the RCE in the University of Maryland/Mid-Atlantic Crossroad ScienceDMZ in prototype mode, with a plan under way to transition to production; deployment at the Argonne National Laboratory DTN Facility in prototype mode; and selection of the RCE by the DOE SENSE (SDN for End-to-end Networked Science at the Exascale) project as the basis for their orchestration service.
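
    A minimal sketch of the three element categories used by the MRS/MRML framework (Resources, Services, and Relationships) is given below; the class and field names are illustrative, not the project's actual MRML schema.

        from dataclasses import dataclass, field
        from typing import List

        # Illustrative stand-ins for the three element categories used to
        # describe infrastructure: Resources, Services, and Relationships.

        @dataclass
        class Service:
            name: str                      # e.g. "bandwidth-on-demand"

        @dataclass
        class Resource:
            name: str                      # e.g. "dtn-01", "100g-link-a"
            kind: str                      # "compute" | "storage" | "network"
            services: List[Service] = field(default_factory=list)

        @dataclass
        class Relationship:
            source: str
            target: str
            kind: str                      # e.g. "connectedTo", "hasService"

        if __name__ == "__main__":
            dtn = Resource("dtn-01", "compute", [Service("data-transfer")])
            link = Resource("100g-link-a", "network")
            topology = [Relationship("dtn-01", "100g-link-a", "connectedTo")]
            print(dtn, link, topology)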

  1. Robust matching for voice recognition

    NASA Astrophysics Data System (ADS)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.
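
    One common way to realize the channel equalization and frame pruning steps mentioned above is cepstral mean subtraction combined with discarding low-energy frames; the sketch below illustrates that generic combination and is not the authors' specific algorithm. The array shapes and the keep fraction are arbitrary assumptions.

    ```python
    # Hedged sketch: cepstral mean subtraction (channel equalization) plus
    # low-energy frame pruning, as a generic stand-in for the steps named above.
    import numpy as np

    def equalize_and_prune(cepstra: np.ndarray, energies: np.ndarray, keep_fraction: float = 0.8):
        """cepstra: (num_frames, num_coeffs); energies: (num_frames,)."""
        # Channel equalization: remove the per-utterance cepstral mean, which
        # absorbs stationary channel effects.
        equalized = cepstra - cepstra.mean(axis=0, keepdims=True)
        # Frame pruning: keep only the highest-energy frames.
        keep = int(len(energies) * keep_fraction)
        idx = np.argsort(energies)[-keep:]
        return equalized[np.sort(idx)]

    frames = equalize_and_prune(np.random.randn(200, 13), np.random.rand(200))
    print(frames.shape)   # (160, 13)
    ```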

  2. Optimized maritime emergency resource allocation under dynamic demand.

    PubMed

    Zhang, Wenfen; Yan, Xinping; Yang, Jiaqi

    2017-01-01

    Emergency resources are important for evacuating people and rescuing property when an accident occurs. Relief efforts can be improved by preparing a reasonable emergency resource allocation schedule in advance. Because the marine environment is complex and changeable, the place, type, and severity of a maritime accident are uncertain and stochastic, giving rise to dynamic demand for emergency resources. Given this dynamic demand, producing a reasonable emergency resource allocation schedule is challenging. The key problem is to determine the optimal stock of emergency resources at supplier centers so as to improve relief efforts. This paper studies the dynamic demand, which is defined as a set. A maritime emergency resource allocation model with uncertain data is then presented. Afterwards, a robust approach is developed and used to ensure that the resource allocation schedule performs well under dynamic demand. Finally, a case study shows that the proposed methodology is feasible for maritime emergency resource allocation. The findings could help emergency managers schedule emergency resource allocation more flexibly in terms of dynamic demand.
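
    The robust idea described above can be illustrated with a minimal min-max sketch: pick the stock level whose worst-case cost over a discrete demand set is smallest. The cost coefficients and demand scenarios below are invented for illustration and are not taken from the paper.

    ```python
    # Illustrative sketch: choose a stock level for a supply centre that performs
    # well against every demand in an uncertainty set (min-max over scenarios).
    def robust_stock(demand_scenarios, holding_cost=1.0, shortage_cost=10.0, max_stock=100):
        best_stock, best_worst_cost = None, float("inf")
        for stock in range(max_stock + 1):
            # Worst-case cost of this stock level over all demand scenarios.
            worst = max(
                holding_cost * max(stock - d, 0) + shortage_cost * max(d - stock, 0)
                for d in demand_scenarios
            )
            if worst < best_worst_cost:
                best_stock, best_worst_cost = stock, worst
        return best_stock, best_worst_cost

    print(robust_stock([20, 35, 60, 80]))   # (stock level, worst-case cost)
    ```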

  3. Optimized maritime emergency resource allocation under dynamic demand

    PubMed Central

    Yan, Xinping; Yang, Jiaqi

    2017-01-01

    Emergency resources are important for evacuating people and rescuing property when an accident occurs. Relief efforts can be improved by preparing a reasonable emergency resource allocation schedule in advance. Because the marine environment is complex and changeable, the place, type, and severity of a maritime accident are uncertain and stochastic, giving rise to dynamic demand for emergency resources. Given this dynamic demand, producing a reasonable emergency resource allocation schedule is challenging. The key problem is to determine the optimal stock of emergency resources at supplier centers so as to improve relief efforts. This paper studies the dynamic demand, which is defined as a set. A maritime emergency resource allocation model with uncertain data is then presented. Afterwards, a robust approach is developed and used to ensure that the resource allocation schedule performs well under dynamic demand. Finally, a case study shows that the proposed methodology is feasible for maritime emergency resource allocation. The findings could help emergency managers schedule emergency resource allocation more flexibly in terms of dynamic demand. PMID:29240792

  4. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
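
    The three-stage performance model outlined above (data transfer, queue wait, reconstruction compute) can be summarized as a simple additive estimate, sketched below. The link speed, queue time, and per-iteration cost are placeholder numbers, not measured values from the paper.

    ```python
    # Minimal sketch of the three-stage model: estimated workflow time =
    # data transfer + queue wait + iterative reconstruction compute.
    def estimate_workflow_time(dataset_gb, link_gbps, expected_queue_s,
                               n_iterations, s_per_iteration):
        transfer_s = dataset_gb * 8 / link_gbps          # storage -> compute transfer
        compute_s = n_iterations * s_per_iteration       # iterative reconstruction
        return transfer_s + expected_queue_s + compute_s

    # Example: 500 GB dataset over a 10 Gbps link, 10 min queue, 100 iterations of 30 s.
    print(estimate_workflow_time(500, 10, 600, 100, 30))  # total seconds
    ```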

  5. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  6. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  7. Attribute based encryption for secure sharing of E-health data

    NASA Astrophysics Data System (ADS)

    Charanya, R.; Nithya, S.; Manikandan, N.

    2017-11-01

    Distributed computing is one of the developing innovations in the IT sector, and information security plays a major part in it. It involves deploying groups of remote servers and software that permit centralized data and online access to computer services. Distributed computing depends on sharing of resources among different clients, and these resources are progressively reallocated on demand. Cloud computing is a revolutionary computing paradigm which enables flexible, on-demand and low-cost usage of computing resources. Security and privacy issues arise because the health information possessed by different clients is stored on cloud servers rather than kept under their own control. To deal with these security problems, various schemes based on Attribute-Based Encryption have been proposed. In this paper, in order to make e-health data more secure, we use multiple parties in the cloud computing system. The health data is encrypted using attributes and a key policy, and only a user with the matching attributes and key policy is able to decrypt the health data after it is verified by the "key distribution centre" and the "secure data distributor". This technique can be used in the medical field for secure storage of patient details and for limiting access to a particular doctor. To make the data scalable and secure, the health data must be encrypted before outsourcing.
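
    The decryption condition described above (a user's key policy must match the attributes attached to the data) can be illustrated without any cryptography by a simple policy-satisfaction check, sketched below. This shows only the policy-evaluation step of a KP-ABE style scheme; the attribute names and policy tree are invented examples, and a real scheme would perform actual encryption and decryption.

    ```python
    # Conceptual sketch only: key-policy satisfaction check, no cryptography.
    def satisfies(policy, attributes):
        """policy: nested tuples ('AND'|'OR', subpolicies...) or an attribute string."""
        if isinstance(policy, str):
            return policy in attributes
        op, *subpolicies = policy
        results = [satisfies(p, attributes) for p in subpolicies]
        return all(results) if op == "AND" else any(results)

    # Hypothetical example: a cardiologist at hospital A or B may decrypt.
    key_policy = ("AND", "cardiologist", ("OR", "hospital:A", "hospital:B"))
    ciphertext_attributes = {"cardiologist", "hospital:B"}
    print(satisfies(key_policy, ciphertext_attributes))   # True -> decryption would be allowed
    ```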

  8. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize opportunistic resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  9. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are usedmore » to enable access and bring the CMS environment into these non CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.« less

  10. Enabling opportunistic resources for CMS Computing Operations

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize opportunistic resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  11. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of problems involved or the lessons learned in using individual Grid resources as it has already been published in our paper on genome analysis research environment (GNARE) and will focus primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.

  12. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  13. Contextuality as a Resource for Models of Quantum Computation with Qubits

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert

    2017-09-01

    A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.

  14. Computing arrival times of firefighting resources for initial attack

    Treesearch

    Romain M. Mees

    1978-01-01

    Dispatching of firefighting resources requires instantaneous or precalculated decisions. A FORTRAN computer program has been developed that can provide a list of resources in order of computed arrival time for initial attack on a fire. The program requires an accurate description of the existing road system and a list of all resources available on a planning unit....
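
    The arrival-time computation described above can be sketched as a shortest-path problem over the road network: compute travel times from the fire to every node, then list resources in order of the time from their station. The graph, travel times, and resource locations below are invented, and this generic formulation is a stand-in for, not a port of, the original FORTRAN program.

    ```python
    # Hedged sketch: Dijkstra over a road graph, then resources listed by arrival time.
    import heapq

    def shortest_times(graph, source):
        """graph: {node: [(neighbor, minutes), ...]}; returns minutes from source."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    road = {"fire": [("A", 12)], "A": [("fire", 12), ("B", 8)], "B": [("A", 8)]}
    resources = {"engine-1": "A", "crew-7": "B"}          # station of each resource
    times = shortest_times(road, "fire")
    for name, node in sorted(resources.items(), key=lambda kv: times.get(kv[1], float("inf"))):
        print(name, times.get(node))                      # engine-1 12, crew-7 20
    ```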

  15. The SAMI2 Open Source Project

    NASA Astrophysics Data System (ADS)

    Huba, J. D.; Joyce, G.

    2001-05-01

    In the past decade, the Open Source Model for software development has gained popularity and has had numerous major achievements: emacs, Linux, the Gimp, and Python, to name a few. The basic idea is to provide the source code of the model or application, a tutorial on its use, and a feedback mechanism with the community so that the model can be tested, improved, and archived. Given the success of the Open Source Model, we believe it may prove valuable in the development of scientific research codes. With this in mind, we are `Open Sourcing' the low to mid-latitude ionospheric model that has recently been developed at the Naval Research Laboratory: SAMI2 (Sami2 is Another Model of the Ionosphere). The model is comprehensive and uses modern numerical techniques. The structure and design of SAMI2 make it relatively easy to understand and modify: the numerical algorithms are simple and direct, and the code is reasonably well-written. Furthermore, SAMI2 is designed to run on personal computers; prohibitive computational resources are not necessary, thereby making the model accessible and usable by virtually all researchers. For these reasons, SAMI2 is an excellent candidate to explore and test the open source modeling paradigm in space physics research. We will discuss various topics associated with this project. Research supported by the Office of Naval Research.

  16. Compound Event Barrier Coverage in Wireless Sensor Networks under Multi-Constraint Conditions.

    PubMed

    Zhuang, Yaoming; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi

    2016-12-24

    It is important to monitor compound events through barrier coverage in wireless sensor networks (WSNs). Compound event barrier coverage (CEBC) is a novel coverage problem. Unlike traditional ones, the data of compound event barrier coverage comes from different types of sensors. It will be subject to multiple constraints under complex conditions in real-world applications. The main objective of this paper is to design an efficient algorithm for complex conditions that can combine the compound event confidence. Moreover, a multiplier method based on an active-set strategy (ASMP) is proposed to optimize the multiple constraints in compound event barrier coverage. The algorithm can calculate the coverage ratio efficiently and allocate the sensor resources reasonably in compound event barrier coverage. The proposed algorithm can simplify complex problems to reduce the computational load of the network and improve the network efficiency. The simulation results demonstrate that the proposed algorithm is more effective and efficient than existing methods, especially in the allocation of sensor resources.

  17. Contingency Planning for Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicolas; Ramakrishnan, Sailesh; Smith, David; Washington, Rich; Clancy, Daniel (Technical Monitor)

    2002-01-01

    There has been considerable work in AI on planning under uncertainty. But this work generally assumes an extremely simple model of action that does not consider continuous time and resources. These assumptions are not reasonable for a Mars rover, which must cope with uncertainty about the duration of tasks, the power required, the data storage necessary, along with its position and orientation. In this paper, we outline an approach to generating contingency plans when the sources of uncertainty involve continuous quantities such as time and resources. The approach involves first constructing a "seed" plan, and then incrementally adding contingent branches to this plan in order to improve utility. The challenge is to figure out the best places to insert contingency branches. This requires an estimate of how much utility could be gained by building a contingent branch at any given place in the seed plan. Computing this utility exactly is intractable, but we outline an approximation method that back propagates utility distributions through a graph structure similar to that of a plan graph.

  18. Incremental Contingency Planning

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicolas; Ramakrishnan, Sailesh; Smith, David E.; Washington, Rich

    2003-01-01

    There has been considerable work in AI on planning under uncertainty. However, this work generally assumes an extremely simple model of action that does not consider continuous time and resources. These assumptions are not reasonable for a Mars rover, which must cope with uncertainty about the duration of tasks, the energy required, the data storage necessary, and its current position and orientation. In this paper, we outline an approach to generating contingency plans when the sources of uncertainty involve continuous quantities such as time and resources. The approach involves first constructing a "seed" plan, and then incrementally adding contingent branches to this plan in order to improve utility. The challenge is to figure out the best places to insert contingency branches. This requires an estimate of how much utility could be gained by building a contingent branch at any given place in the seed plan. Computing this utility exactly is intractable, but we outline an approximation method that back propagates utility distributions through a graph structure similar to that of a plan graph.
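
    The key computation sketched above, estimating how much utility a contingent branch could add, relies on propagating utility backwards through a graph of plan steps. The toy sketch below back-propagates expected utility through a small probabilistic plan graph; the nodes, probabilities, and utilities are invented, and the actual method propagates full utility distributions rather than the scalar expectations used here.

    ```python
    # Illustrative sketch: back-propagate expected utility through a plan graph.
    def backprop_utility(plan_graph, terminal_utility):
        """plan_graph: {node: [(child, probability), ...]}; leaves get terminal_utility."""
        memo = {}
        def value(node):
            if node in memo:
                return memo[node]
            children = plan_graph.get(node, [])
            if not children:
                memo[node] = terminal_utility.get(node, 0.0)
            else:
                # Expected utility over stochastic outcomes of executing this step.
                memo[node] = sum(p * value(child) for child, p in children)
            return memo[node]
        return {n: value(n) for n in plan_graph}

    graph = {"drive": [("arrive", 0.8), ("stuck", 0.2)], "arrive": [], "stuck": []}
    print(backprop_utility(graph, {"arrive": 10.0, "stuck": -2.0}))   # drive -> 7.6
    ```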

  19. Compound Event Barrier Coverage in Wireless Sensor Networks under Multi-Constraint Conditions

    PubMed Central

    Zhuang, Yaoming; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi

    2016-01-01

    It is important to monitor compound events through barrier coverage in wireless sensor networks (WSNs). Compound event barrier coverage (CEBC) is a novel coverage problem. Unlike traditional ones, the data of compound event barrier coverage comes from different types of sensors. It will be subject to multiple constraints under complex conditions in real-world applications. The main objective of this paper is to design an efficient algorithm for complex conditions that can combine the compound event confidence. Moreover, a multiplier method based on an active-set strategy (ASMP) is proposed to optimize the multiple constraints in compound event barrier coverage. The algorithm can calculate the coverage ratio efficiently and allocate the sensor resources reasonably in compound event barrier coverage. The proposed algorithm can simplify complex problems to reduce the computational load of the network and improve the network efficiency. The simulation results demonstrate that the proposed algorithm is more effective and efficient than existing methods, especially in the allocation of sensor resources. PMID:28029118

  20. 77 FR 50675 - Virginia Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... DEPARTMENT OF AGRICULTURE Forest Service Virginia Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Virginia Resource Advisory Committee will meet in... Contact. All reasonable accommodation requests are managed on a case by case basis. Resource Advisory...

  1. Understanding neurophobia: Reasons behind impaired understanding and learning of neuroanatomy in cross-disciplinary healthcare students.

    PubMed

    Javaid, Muhammad Asim; Chakraborty, Shelly; Cryan, John F; Schellekens, Harriët; Toulouse, André

    2018-01-01

    Recent studies have highlighted a fear or difficulty with the study and understanding of neuroanatomy among medical and healthcare students. This has been linked with a diminished confidence of clinical practitioners and students to manage patients with neurological conditions. The underlying reasons for this difficulty have been queried among a broad cohort of medical, dental, occupational therapy, and speech and language sciences students. Direct evidence of the students' perception regarding specific difficulties associated with learning neuroanatomy has been provided and some of the measures required to address these issues have been identified. Neuroanatomy is perceived as a more difficult subject compared to other anatomy topics (e.g., reproductive/pelvic anatomy) and not all components of the neuroanatomy curriculum are viewed as equally challenging. The difficulty in understanding neuroanatomical concepts is linked to intrinsic factors such as the inherent complex nature of the topic rather than outside influences (e.g., lecture duration). Participants reporting high levels of interest in the subject reported higher levels of knowledge, suggesting that teaching tools aimed at increasing interest, such as case-based scenarios, could facilitate acquisition of knowledge. Newer pedagogies, including web-resources and computer assisted learning (CAL) are considered important tools to improve neuroanatomy learning, whereas traditional tools such as lecture slides and notes were considered less important. In conclusion, it is suggested that understanding of neuroanatomy could be enhanced and neurophobia be decreased by purposefully designed CAL resources. This data could help curricular designers to refocus attention and guide educators to develop improved neuroanatomy web-resources in future. Anat Sci Educ 11: 81-93. © 2017 American Association of Anatomists.

  2. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    ERIC Educational Resources Information Center

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  3. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
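
    The modeling approach described above can be illustrated with a much-simplified discrete event simulation in which service requests arrive stochastically and wait for one of a fixed pool of virtual servers. The arrival and service distributions, pool size, and request count below are arbitrary placeholders, not parameters of the model in the paper.

    ```python
    # Hedged sketch of a discrete event simulation: requests queue for a fixed
    # pool of virtual servers; we report the mean wait time.
    import heapq, random

    def simulate(num_servers=4, num_requests=50, mean_interarrival=1.0, mean_service=3.0, seed=0):
        random.seed(seed)
        t = 0.0
        free_at = [0.0] * num_servers                  # next-free time per server
        heapq.heapify(free_at)
        waits = []
        for _ in range(num_requests):
            t += random.expovariate(1.0 / mean_interarrival)   # next arrival time
            earliest = heapq.heappop(free_at)                   # soonest-available server
            start = max(t, earliest)
            waits.append(start - t)
            heapq.heappush(free_at, start + random.expovariate(1.0 / mean_service))
        return sum(waits) / len(waits)

    print("mean wait:", simulate())
    ```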

  4. Domain-general contributions to social reasoning: theory of mind and deontic reasoning re-explored.

    PubMed

    McKinnon, Margaret C; Moscovitch, Morris

    2007-02-01

    Using older adults and dual-task interference, we examined performance on two social reasoning tasks: theory of mind (ToM) tasks and versions of the deontic selection task involving social contracts and hazardous conditions. In line with performance accounts of social reasoning, evidence from both aging and the dual-task method suggested that domain-general resources contribute to performance of these tasks. Specifically, older adults were impaired relative to younger adults on all types of social reasoning tasks tested; performance varied as a function of the demands these tasks placed on domain-general resources. Moreover, in younger adults, simultaneous performance of a working memory task interfered with younger adults' performance on both types of social reasoning tasks; here too, the magnitude of the interference effect varied with the processing demands of each task. Limits placed on social reasoning by executive functions contribute a great deal to performance, even in old age and in healthy younger adults under conditions of divided attention. The role of potentially non-modular and modular contributions to social reasoning is discussed.

  5. Study on Karst Information Identification of Qiandongnan Prefecture Based on RS and GIS Technology

    NASA Astrophysics Data System (ADS)

    Yao, M.; Zhou, G.; Wang, W.; Wu, Z.; Huang, Y.; Huang, X.

    2018-04-01

    Karst areas are a valuable natural resource base; at the same time, due to the special geological environment, they suffer from alternating droughts and floods together with frequent karst collapse, rocky desertification and other resource and environmental problems, which seriously restrict sustainable economic and social development in karst regions. This paper therefore identifies and studies karst features and clarifies their distribution, providing basic data for the rational development of resources in the karst region and the management of desertification. Because of the uniqueness of the karst landscape, it cannot be directly recognized and extracted by computer from remote sensing images alone, so this paper adopts an "RS + DEM" approach to address the problem. Based on Landsat-5 TM imagery from 2010 and DEM data, it proposes a method for identifying karst information that uses slope maps, vegetation distribution maps, karst rocky desertification distribution maps and other auxiliary data, combined with interpretation signs, for human-computer interactive interpretation, identification and extraction of peak forest, peak cluster and isolated peaks, and further extraction of karst depressions. Experiments show that this method achieves the "RS + DEM" mode through the reasonable combination of remote sensing images and DEM data. It not only effectively extracts karst areas covered with vegetation, but also quickly and accurately delineates the karst area and greatly improves the efficiency and precision of visual interpretation. The accurate interpretation rate of karst information in the study area is 86.73 %.

  6. A simple parameterization of aerosol emissions in RAMS

    NASA Astrophysics Data System (ADS)

    Letcher, Theodore

    Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was the computational expense. In fact, the computational expense was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it was fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable assumption was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA), and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical model. Furthermore, SA formation is greatly reduced during the winter months due to the lack of naturally produced organic VOCs. For these reasons, it was felt that neglecting SA within the model was the best course of action. The actual parameterization uses a prescribed source map to add aerosol to the model at two vertical levels that surround an arbitrary height decided by the user. To best represent the real world, the WRF Chemistry model was run using the National Emissions Inventory (NEI2005) to represent anthropogenic emissions and the Model Emissions of Gases and Aerosols from Nature (MEGAN) to represent natural contributions to aerosol. WRF Chemistry was run for one hour, after which the aerosol output along with the hygroscopicity parameter (κ) were saved into a data file that could be interpolated to an arbitrary RAMS grid. The comparison of this parameterization to observations collected at Mesa Verde National Park (MVNP) during the Inhibition of Snowfall from Pollution Aerosol (ISPA-III) field campaign yielded promising results. The model was able to simulate the variability in near-surface aerosol concentration with reasonable accuracy, though with a general low bias. Furthermore, this model compared much better to the observations than did the WRF Chemistry model, at a fraction of the computational expense. This emissions scheme was able to show reasonable solutions regarding the aerosol concentrations and can therefore be used to provide an estimate of the seasonal impact of increased CCN on water resources in Western Colorado with relatively low computational expense.

  7. Operating Dedicated Data Centers - Is It Cost-Effective?

    NASA Astrophysics Data System (ADS)

    Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.

    2014-06-01

    The advent of cloud computing centres such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of likely future cost effectiveness of dedicated computing resources is also presented.

  8. Computing the Envelope for Stepwise-Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Computing tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. A staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. Each stage has the same computational complexity of solving a maximum flow problem on the entire flow network. This makes this method computationally feasible and promising for use in the inner loop of flexible-time scheduling algorithms.
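
    The staged computation described above rests on repeated maximum-flow solves over a network built from resource-producing and resource-consuming events. The sketch below sets up one such toy flow network with networkx and computes a single max flow; it illustrates only the flow step, not the full envelope algorithm, and the events, amounts, and precedence links are invented.

    ```python
    # Sketch of one max-flow step over a toy network of resource-producing
    # (positive) and resource-consuming (negative) events.
    import networkx as nx

    events = {"p1": +2, "p2": +1, "c1": -2, "c2": -1}   # production/consumption amounts
    precedence = [("p1", "c1"), ("p2", "c2")]           # required orderings between events

    G = nx.DiGraph()
    for e, amount in events.items():
        if amount > 0:
            G.add_edge("source", e, capacity=amount)    # producers fed by the source
        else:
            G.add_edge(e, "sink", capacity=-amount)     # consumers drain to the sink
    for a, b in precedence:
        G.add_edge(a, b)                                # no capacity attribute = unbounded

    flow_value, _ = nx.maximum_flow(G, "source", "sink")
    print("max matched production/consumption:", flow_value)   # 3
    ```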

  9. Integrating knowledge representation and quantitative modelling in physiology.

    PubMed

    de Bono, Bernard; Hunter, Peter

    2012-08-01

    A wealth of potentially shareable resources, such as data and models, is being generated through the study of physiology by computational means. Although in principle the resources generated are reusable, in practice, few can currently be shared. A key reason for this disparity stems from the lack of consistent cataloguing and annotation of these resources in a standardised manner. Here, we outline our vision for applying community-based modelling standards in support of an automated integration of models across physiological systems and scales. Two key initiatives, the Physiome Project and the European contribution, the Virtual Physiological Human Project, have emerged to support this multiscale model integration, and we focus on the role played by two key components of these frameworks, model encoding and semantic metadata annotation. We present examples of biomedical modelling scenarios (the endocrine effect of atrial natriuretic peptide, and the implications of alcohol and glucose toxicity) to illustrate the role that encoding standards and knowledge representation approaches, such as ontologies, could play in the management, searching and visualisation of physiology models, and thus in providing a rational basis for healthcare decisions and contributing towards realising the goal of personalized medicine. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Securing While Sampling in Wireless Body Area Networks With Application to Electrocardiography.

    PubMed

    Dautov, Ruslan; Tsouri, Gill R

    2016-01-01

    Stringent resource constraints and broadcast transmission in wireless body area networks raise serious security concerns when employed in biomedical applications. Protecting data transmission where any minor alteration is potentially harmful is of significant importance in healthcare. Traditional security methods based on public or private key infrastructure require considerable memory and computational resources, and present an implementation obstacle in compact sensor nodes. This paper proposes a lightweight encryption framework augmenting compressed sensing with wireless physical layer security. Augmenting compressed sensing to secure information is based on the use of the measurement matrix as an encryption key, and allows for incorporating security in addition to compression at the time of sampling an analog signal. The proposed approach eliminates the need for a separate encryption algorithm, as well as the predeployment of a key, thereby conserving the sensor node's limited resources. The proposed framework is evaluated using analysis, simulation, and experimentation applied to a wireless electrocardiogram setup consisting of a sensor node, an access point, and an eavesdropper performing a proximity attack. Results show that legitimate communication is reliable and secure given that the eavesdropper is located at a reasonable distance from the sensor node and the access point.
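
    The central idea above, using the compressed-sensing measurement matrix itself as the shared secret, can be sketched in a few lines: sender and receiver derive the same random matrix from a shared seed, and sampling and "encryption" happen in one matrix multiply. The sizes, seed, and sparse stand-in signal below are arbitrary choices, and the sparse recovery step on the receiver side is only indicated in a comment.

    ```python
    # Illustrative sketch: the measurement matrix doubles as a shared key.
    import numpy as np

    def measurement_matrix(seed, m, n):
        # Both parties regenerate the same matrix from the shared seed.
        return np.random.default_rng(seed).standard_normal((m, n)) / np.sqrt(m)

    n, m, shared_seed = 256, 64, 1234              # compress 256 samples to 64 measurements
    ecg_window = np.zeros(n)
    ecg_window[[10, 40, 200]] = [1.0, -0.5, 0.8]   # sparse stand-in for an ECG segment

    phi = measurement_matrix(shared_seed, m, n)    # acts as the encryption key
    y = phi @ ecg_window                           # sampling and "encryption" in one step

    # The receiver regenerates phi from the shared seed and runs a sparse recovery
    # solver (e.g. OMP or basis pursuit, omitted here) on y; an eavesdropper sees
    # only y and cannot reconstruct the signal without the matrix.
    print(y.shape)   # (64,)
    ```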

  11. An innovative time-cost-quality tradeoff modeling of building construction project based on resource allocation.

    PubMed

    Hu, Wenfa; He, Xinhua

    2014-01-01

    Time, quality, and cost are three important but conflicting objectives in a building construction project. It is a tough challenge for project managers to optimize them since they are different parameters. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model is based on the project breakdown structure method, in which task resources in a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is finally generated based on correlations between construction activities. A genetic algorithm tool is applied in the model to solve the comprehensive nonlinear time-cost-quality problems. The building of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off between construction time, cost, and quality, and help make winning decisions in construction practice. The computational time-cost-quality curves, presented as visual graphics from the case study, confirm that traditional cost-time assumptions are reasonable and demonstrate the sophistication of this time-cost-quality trade-off model.
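
    A toy version of the genetic-algorithm search described above is sketched below: each activity has a few execution modes with different time, cost, and quality, and the GA evolves a mode assignment that balances the three objectives through a weighted score. The modes, weights, and GA settings are invented for illustration and are unrelated to the paper's actual model.

    ```python
    # Toy GA sketch for a time-cost-quality trade-off over activity modes.
    import random

    MODES = [  # (days, cost, quality) options per activity (invented values)
        [(10, 5000, 0.90), (7, 8000, 0.85), (12, 4000, 0.95)],
        [(20, 12000, 0.92), (15, 16000, 0.88)],
        [(5, 2000, 0.80), (4, 3500, 0.90)],
    ]

    def score(assignment, w_time=1.0, w_cost=0.001, w_quality=50.0):
        time = sum(MODES[i][m][0] for i, m in enumerate(assignment))
        cost = sum(MODES[i][m][1] for i, m in enumerate(assignment))
        quality = sum(MODES[i][m][2] for i, m in enumerate(assignment)) / len(assignment)
        return -(w_time * time + w_cost * cost) + w_quality * quality

    def genetic_search(generations=100, pop_size=30, seed=1):
        random.seed(seed)
        pop = [[random.randrange(len(m)) for m in MODES] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=score, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(MODES))      # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:                  # mutation
                    i = random.randrange(len(MODES))
                    child[i] = random.randrange(len(MODES[i]))
                children.append(child)
            pop = survivors + children
        return max(pop, key=score)

    print(genetic_search())   # best mode index per activity
    ```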

  12. The Comparison of Inductive Reasoning under Risk Conditions between Chinese and Japanese Based on Computational Models: Toward the Application to CAE for Foreign Language

    ERIC Educational Resources Information Center

    Zhang, Yujie; Terai, Asuka; Nakagawa, Masanori

    2013-01-01

    Inductive reasoning under risk conditions is an important thinking process not only for sciences but also in our daily life. From this viewpoint, it is very useful for language learning to construct computational models of inductive reasoning which realize the CAE for foreign languages. This study proposes the comparison of inductive reasoning…

  13. Knowledge Representation and Ontologies

    NASA Astrophysics Data System (ADS)

    Grimm, Stephan

    Knowledge representation and reasoning aims at designing computer systems that reason about a machine-interpretable representation of the world. Knowledge-based systems have a computational model of some domain of interest in which symbols serve as surrogates for real world domain artefacts, such as physical objects, events, relationships, etc. [1]. The domain of interest can cover any part of the real world or any hypothetical system about which one desires to represent knowledge for computational purposes. A knowledge-based system maintains a knowledge base, which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Applications can base their decisions on answers to domain-relevant questions posed to a knowledge base.

  14. 18 CFR 1309.7 - Is the use of reasonable factors other than age an exception to the rules against age...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 2 2013-04-01 2012-04-01 true Is the use of reasonable... Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY NONDISCRIMINATION WITH RESPECT TO AGE... ages. An action may be based on a factor other than age only if the factor bears a direct and...

  15. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

    By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.

  16. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    DTIC Science & Technology

    1991-06-01

    Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh...Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles, for Contracting... Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent

  17. Leveraging the Cloud for Robust and Efficient Lunar Image Processing

    NASA Technical Reports Server (NTRS)

    Chang, George; Malhotra, Shan; Wolgast, Paul

    2011-01-01

    The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using Cloud Computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set. It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use encounters with remote resources. As part of this discussion this paper will outline some of the technologies employed, the reasons for their selection, the resulting performance metrics and the direction the project is headed based upon the demonstrated capabilities thus far.
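
    The tiling step described above can be expressed as a map/reduce pair, which the sketch below mimics in plain Python (the project itself used Hadoop on local and EC2 resources; this is only a conceptual stand-in). The tile size and in-memory image are placeholders.

    ```python
    # Hedged sketch: image tiling written as a map step and a reduce step.
    import numpy as np
    from collections import defaultdict

    TILE = 256

    def map_tiles(image_id, image):
        """Map step: split one large image into fixed-size tiles keyed by position."""
        rows, cols = image.shape[:2]
        for r in range(0, rows, TILE):
            for c in range(0, cols, TILE):
                yield (image_id, r // TILE, c // TILE), image[r:r + TILE, c:c + TILE]

    def reduce_tiles(mapped):
        """Reduce step: group tiles by key (here just collected in memory;
        a Hadoop reducer would write them out to storage)."""
        store = defaultdict(list)
        for key, tile in mapped:
            store[key].append(tile)
        return store

    image = np.zeros((1024, 768), dtype=np.uint8)          # placeholder "large" image
    tiles = reduce_tiles(map_tiles("lro_frame_001", image))
    print(len(tiles))   # 12 tiles of (up to) 256x256
    ```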

  18. The use of computers to teach human anatomy and physiology to allied health and nursing students

    NASA Astrophysics Data System (ADS)

    Bergeron, Valerie J.

    Educational institutions are under tremendous pressure to adopt the newest technologies in order to prepare their students to meet the challenges of the twenty-first century. For the last twenty years huge amounts of money have been spent on computers, printers, software, multimedia projection equipment, and so forth. A reasonable question is, "Has it worked?" Has this infusion of resources, financial as well as human, resulted in improved learning? Are the students meeting the intended learning goals? Any attempt to develop answers to these questions should include examining the intended goals and exploring the effects of the changes on students and faculty. This project investigated the impact of a specific application of a computer program in a community college setting on students' attitudes and understanding of human anatomy and physiology. In this investigation two sites of the same community college with seemingly similar student populations, seven miles apart, used different laboratory activities to teach human anatomy and physiology. At one site nursing students were taught using traditional dissections and laboratory activities; at the other site two of the dissections, specifically cat and sheep pluck, were replaced with the A.D.A.M.® (Animated Dissection of Anatomy for Medicine) computer program. Analysis of the attitude data indicated that students at both sites were extremely positive about their laboratory experiences. Analysis of the content data indicated a statistically significant difference in performance between the two sites in two of the eight content areas that were studied. For both topics the students using the computer program scored higher. A detailed analysis of the surveys, interviews with faculty and students, examination of laboratory materials, observations of laboratory facilities at both sites, and cost-benefit analysis led to the development of seven recommendations. The recommendations call for action at the level of the institution, requiring investment in additional resources, and at the level of the faculty, requiring a commitment to exploration and reflective practice.

  19. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resource provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  20. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resource provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  1. 75 FR 17863 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Reasonable Further...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-08

    ... Inventory, Reasonably Available Control Measures, Contingency Measures, and Transportation Conformity... emissions inventory, contingency measures, and the reasonably available control measure (RACM) analysis... Resources & Environmental Control, 89 Kings Highway, P.O. Box 1401, Dover, Delaware 19903. FOR FURTHER...

  2. Simulating Biomass Fast Pyrolysis at the Single Particle Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciesielski, Peter; Wiggins, Gavin; Daw, C Stuart

    2017-07-01

    Simulating fast pyrolysis at the scale of single particles allows for the investigation of the impacts of feedstock-specific parameters such as particle size, shape, and species of origin. For this reason, particle-scale modeling has emerged as an important tool for understanding how variations in feedstock properties affect the outcomes of pyrolysis processes. The origins of feedstock properties are largely dictated by the composition and hierarchical structure of biomass, from the microstructural porosity to the external morphology of milled particles. These properties may be accounted for in simulations of fast pyrolysis by several different computational approaches depending on the level of structural and chemical complexity included in the model. The predictive utility of particle-scale simulations of fast pyrolysis can still be enhanced substantially by advancements in several areas. Most notably, considerable progress would be facilitated by the development of pyrolysis kinetic schemes that are decoupled from transport phenomena, predict product evolution from whole biomass with increased chemical speciation, and are still tractable with present-day computational resources.

  3. Hey Buddy can you spare a DNA? New surveillance technologies and the growth of mandatory volunteerism in collecting personal information.

    PubMed

    Marx, Gary T

    2007-01-01

    The new social surveillance can be defined as scrutiny through the use of technical means to extract or create personal or group data, whether from individuals or contexts. Examples include: video cameras; computer matching, profiling and data mining; work, computer and electronic location monitoring; biometrics; DNA analysis; drug tests; brain scans for lie detection; various forms of imaging to reveal what is behind walls and enclosures. There are two problems with the new surveillance technologies. One is that they don't work and the other is that they work too well. If the first, they fail to prevent disasters, bring miscarriages of justice, and waste resources. If the second, they can further inequality and invidious social categorization; they chill liberty. These twin threats are part of the enduring paradox of democratic government that must be strong enough to maintain reasonable order, but not so strong as to become undemocratic.

  4. The Influence of Reasons for Attending University on University Experience: A Comparison between Students with and without Disabilities

    ERIC Educational Resources Information Center

    Reed, Maureen J.; Kennett, Deborah J.; Emond, Marc

    2015-01-01

    Students choose to go to university for many reasons. They include those with disabilities and those without. The reasons why students with disabilities go to university and how these reasons impact university experience, including coping (academic resourcefulness), adapting, academic ability beliefs (academic self-efficacy), and grades, are…

  5. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of the cloud computing platform, which in essence replaces and exchanges a class of resource service models and meets users' needs for different resources after adjustments in multiple respects. Cloud computing offers advantages in many respects: it not only reduces the difficulty of operating the system but also makes it easy for users to search, acquire and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in operation. The popularization and promotion of computer technology have driven the creation of digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance inquiry and search platform that allows users to access the necessary information resources at any time. Cloud computing, moreover, distributes computations across a large number of distributed computers and thereby implements a connection service among multiple computers. Digital libraries, as a typical representative of cloud computing applications, can therefore be used to analyze the key technologies of cloud computing.

  6. Objective structured clinical examination "Death Certificate" station - Computer-based versus conventional exam format.

    PubMed

    Biolik, A; Heide, S; Lessig, R; Hachmann, V; Stoevesandt, D; Kellner, J; Jäschke, C; Watzke, S

    2018-04-01

    One option for improving the quality of medical post mortem examinations is through intensified training of medical students, especially in countries where such a requirement exists regardless of the area of specialisation. For this reason, new teaching and learning methods on this topic have recently been introduced. These new approaches include e-learning modules and SkillsLab stations; one way to objectify the resultant learning outcomes is by means of the OSCE process. However, despite offering several advantages, this examination format also requires considerable resources, in particular with regard to medical examiners. For this reason, many clinical disciplines have already implemented computer-based OSCE examination formats. This study investigates whether the conventional exam format for the OSCE forensic "Death Certificate" station could be replaced with a computer-based approach in future. For this study, 123 students completed the OSCE "Death Certificate" station using both a computer-based and a conventional format, half starting with the computer-based and the other half with the conventional approach in their OSCE rotation. Assignment of examination cases was random. The examination results for the two stations were compared, and both overall results and the individual items of the exam checklist were analysed by means of inferential statistics. Following statistical analysis of examination cases of varying difficulty levels and correction for the repeated-measures effect, the results of both examination formats appear to be comparable. Thus, in the descriptive item analysis, while there were some significant differences between the computer-based and conventional OSCE stations, these differences were not reflected in the overall results after a correction factor was applied (e.g. point deductions for assistance from the medical examiner were possible only at the conventional station). We therefore demonstrate that the computer-based OSCE "Death Certificate" station is a cost-efficient and standardised examination format that yields results comparable to those from a conventional format exam. Moreover, the examination results also indicate the need to optimise both the test itself (adjusting the degree of difficulty of the case vignettes) and the corresponding instructional and learning methods (including, for example, the use of computer programmes to complete the death certificate in small-group formats in the SkillsLab). Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  7. Computer-Based Resource Accounting Model for Automobile Technology Impact Assessment

    DOT National Transportation Integrated Search

    1976-10-01

    A computer-implemented resource accounting model has been developed for assessing resource impacts of future automobile technology options. The resources tracked are materials, energy, capital, and labor. The model has been used in support of the Int...

  8. System Resource Allocations | High-Performance Computing | NREL

    Science.gov Websites

    System Resource Allocations. To use NREL's high-performance computing (HPC) resources, users request allocations of compute hours on NREL HPC systems, including Peregrine and Eagle, and of storage space (in terabytes) on Peregrine, Eagle and Gyrfalcon. Allocations are principally made in response to an annual call for allocations.

  9. Computers as learning resources in the health sciences: impact and issues.

    PubMed Central

    Ellis, L B; Hannigan, G G

    1986-01-01

    Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843

  10. An emulator for minimizing finite element analysis implementation resources

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.

    1982-01-01

    A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.

  11. Dynamic virtual machine allocation policy in cloud computing complying with service level agreement using CloudSim

    NASA Astrophysics Data System (ADS)

    Aneri, Parikh; Sumathy, S.

    2017-11-01

    Cloud computing provides services over the internet, delivering application resources and data to users on demand. Cloud computing is based on a consumer-provider model: the cloud provider supplies resources that consumers access through the cloud computing model in order to build their applications according to their demand. A cloud data center is a pool of shared resources for cloud users to access. Virtualization is at the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On the one hand there is a huge number of resources, and on the other hand a huge number of requests must be served effectively. Therefore, the resource allocation policy and the scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load-balancing policy using the Hungarian algorithm. The Hungarian algorithm provides a dynamic load-balancing policy together with a monitor component, which helps increase cloud resource utilization by monitoring the algorithm's state and altering it based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates a cloud computing environment.
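
    A minimal sketch of the kind of Hungarian-algorithm assignment described above, using SciPy's linear_sum_assignment; the cost matrix of estimated task completion times is hypothetical and stands in for the monitor component's CloudSim estimates, not the paper's actual policy.

        # Sketch: assign cloudlets (tasks) to virtual machines with the Hungarian algorithm.
        # The cost matrix is hypothetical; in the paper's setting it would come from the
        # monitor component's per-VM load estimates inside CloudSim.
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # cost[i][j]: estimated completion time of cloudlet i on VM j (illustrative values)
        cost = np.array([
            [14.0,  9.0, 12.0],
            [11.0, 13.0,  8.0],
            [10.0,  7.0, 15.0],
        ])

        rows, cols = linear_sum_assignment(cost)      # optimal one-to-one assignment
        for cloudlet, vm in zip(rows, cols):
            print(f"cloudlet {cloudlet} -> VM {vm} (cost {cost[cloudlet, vm]})")
        print("total cost:", cost[rows, cols].sum())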

  12. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
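
    The optimal-control improvements mentioned above rely on gradient-based optimization with limited-memory BFGS. The sketch below only illustrates that optimizer family with SciPy on a toy objective; it is not a SIMPSON pulse-sequence fit, and the objective function is purely hypothetical.

        # Sketch: limited-memory BFGS (L-BFGS-B) on a toy objective, illustrating the
        # optimizer family the abstract mentions. The objective is hypothetical and is
        # unrelated to any actual NMR pulse-sequence cost function.
        import numpy as np
        from scipy.optimize import minimize

        def objective(x):
            # simple smooth test function with its minimum at (1, 1)
            return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 1.0) ** 2

        result = minimize(objective, x0=np.zeros(2), method="L-BFGS-B")
        print(result.x, result.fun)   # converges to approximately [1, 1], 0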

  13. Comparison of computing capability and information system abilities of state hospitals owned by Ministry of Labor and Social Security and Ministry of Health.

    PubMed

    Tengilimoğlu, Dilaver; Celik, Yusuf; Ulgü, Mahir

    2006-08-01

    The main purpose of this study is to give readers an idea of the scale and importance of the computing and information problems that hospital managers and policy makers will face after the hospitals of the Ministry of Labor and Social Security (MoLSS) and the Ministry of Health (MoH) are brought under a single structure in Turkey, by comparing the current computing capability of the hospitals owned by the two ministries. The data used in this study were obtained from 729 hospitals belonging to both ministries by using a data collection tool. The results indicate that there are considerable differences among the hospitals owned by the two ministries in terms of human resources and information systems. Hospital managers and decision makers who base their decisions on data produced by the current hospital information systems (HIS) are likely to face serious difficulties after the merger of MoH and MoLSS hospitals in Turkey. It is also possible to claim that the level and adequacy of computing abilities and devices do not allow the managers of public hospitals to use computer technology effectively in their information management practices. Lack of technical information, an undeveloped information culture, inappropriate management styles, and inexperience are the main reasons why HIS do not run properly and effectively in Turkish hospitals.

  14. Causal Reasoning in Medicine: Analysis of a Protocol.

    ERIC Educational Resources Information Center

    Kuipers, Benjamin; Kassirer, Jerome P.

    1984-01-01

    Describes the construction of a knowledge representation from the identification of the problem (nephrotic syndrome) to a running computer simulation of causal reasoning to provide a vertical slice of the construction of a cognitive model. Interactions between textbook knowledge, observations of human experts, and computational requirements are…

  15. Intrusive and Non-Intrusive Instruction in Dynamic Skill Training.

    DTIC Science & Technology

    1981-10-01

    ...less sensitive to the processing load imposed by the dynamic task together with instructional feedback processing than were the decision-making and... between computer-based instruction of knowledge systems and computer-based instruction of dynamic skills. There is reason to expect that the findings of research on knowledge systems...

  16. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All of these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment, which integrates computing resources from 16 HPC centers across China. We then present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects, including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer, and job management for creating, submitting and monitoring jobs, and show how SCEAPI can be used in a straightforward way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
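
    A minimal sketch of RESTful job submission over HTTPS in the spirit of the web API described above. The base URL, endpoint paths, token header and JSON fields are hypothetical placeholders, not the actual SCEAPI interface.

        # Sketch: submit and poll a compute job through a RESTful web API.
        # The base URL, paths, auth header and JSON fields are hypothetical placeholders.
        import requests

        BASE = "https://sceapi.example.org/v1"          # hypothetical base URL
        HEADERS = {"Authorization": "Bearer <token>"}   # hypothetical auth header

        job = {"app": "atlas-sim", "cores": 64, "input": "events_001.tar"}
        resp = requests.post(f"{BASE}/jobs", json=job, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        job_id = resp.json()["id"]

        status = requests.get(f"{BASE}/jobs/{job_id}", headers=HEADERS, timeout=30).json()
        print(job_id, status.get("state"))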

  17. CHAMPION: Intelligent Hierarchical Reasoning Agents for Enhanced Decision Support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, Ryan E.; Greitzer, Frank L.; Noonan, Christine F.

    2011-11-15

    We describe the design and development of an advanced reasoning framework employing semantic technologies, organized within a hierarchy of computational reasoning agents that interpret domain specific information. Designed based on an inspirational metaphor of the pattern recognition functions performed by the human neocortex, the CHAMPION reasoning framework represents a new computational modeling approach that derives invariant knowledge representations through memory-prediction belief propagation processes that are driven by formal ontological language specification and semantic technologies. The CHAMPION framework shows promise for enhancing complex decision making in diverse problem domains including cyber security, nonproliferation and energy consumption analysis.

  18. Software and resources for computational medicinal chemistry

    PubMed Central

    Liao, Chenzhong; Sitzmann, Markus; Pugliese, Angelo; Nicklaus, Marc C

    2011-01-01

    Computer-aided drug design plays a vital role in drug discovery and development and has become an indispensable tool in the pharmaceutical industry. Computational medicinal chemists can take advantage of all kinds of software and resources in the computer-aided drug design field for the purposes of discovering and optimizing biologically active compounds. This article reviews software and other resources related to computer-aided drug design approaches, putting particular emphasis on structure-based drug design, ligand-based drug design, chemical databases and chemoinformatics tools. PMID:21707404

  19. Designing computer learning environments for engineering and computer science: The scaffolded knowledge integration framework

    NASA Astrophysics Data System (ADS)

    Linn, Marcia C.

    1995-06-01

    Designing effective curricula for complex topics and incorporating technological tools is an evolving process. One important way to foster effective design is to synthesize successful practices. This paper describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering. One course enhancement, the LISP Knowledge Integration Environment, improved learning and resulted in more gender-equitable outcomes. The second course enhancement, the spatial reasoning environment, addressed spatial reasoning in an introductory engineering course. This enhancement minimized the importance of prior knowledge of spatial reasoning and helped students develop a more comprehensive repertoire of spatial reasoning strategies. Taken together, the instructional research programs reinforce the value of the scaffolded knowledge integration framework and suggest directions for future curriculum reformers.

  20. CASPER Version 2.0

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Rabideau, Gregg; Tran, Daniel; Knight, Russell; Chouinard, Caroline; Estlin, Tara; Gaines, Daniel; Clement, Bradley; Barrett, Anthony

    2007-01-01

    CASPER is designed to perform automated planning of interdependent activities within a system subject to requirements, constraints, and limitations on resources. In contradistinction to the traditional concept of batch planning followed by execution, CASPER implements a concept of continuous planning and replanning in response to unanticipated changes (including failures), integrated with execution. Improvements over other, similar software that have been incorporated into CASPER version 2.0 include an enhanced executable interface to facilitate integration with a wide range of execution software systems and supporting software libraries; features to support execution while reasoning about urgency, importance, and impending deadlines; features that enable accommodation to a wide range of computing environments that include various central processing units and random- access-memory capacities; and improved generic time-server and time-control features.

  1. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    NASA Astrophysics Data System (ADS)

    Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.

  2. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment had already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of the Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  3. Resources within "Reason"

    ERIC Educational Resources Information Center

    Catlett, Camille

    2010-01-01

    Federally funded national centers offer high-quality products and resources for use by teachers, family members, and others. By design, they offer resources that are low cost or no cost. This article presents details about several centers that may have resources to support your work. They include: (1) Center for Early Literacy Learning (CELL); (2)…

  4. 76 FR 76956 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-09

    ... location: Delete entry and replace with ``Human Resources Directorate, Labor and Management Employee...: Reasonable Accommodation Program Records. System location: Human Resources Directorate, Labor and Management..., Labor and Management Employee Relations Division, Human Resources Directorate, Washington Headquarters...

  5. How and when Does Complex Reasoning Occur? Empirically Driven Development of a Learning Progression Focused on Complex Reasoning about Biodiversity

    ERIC Educational Resources Information Center

    Songer, Nancy Butler; Kelcey, Ben; Gotwals, Amelia Wenk

    2009-01-01

    In order to compete in a global economy, students are going to need resources and curricula focusing on critical thinking and reasoning in science. Despite awareness for the need for complex reasoning, American students perform poorly relative to peers on international standardized tests measuring complex thinking in science. Research focusing on…

  6. 48 CFR 952.227-14 - Rights in data-general. (DOE coverage-alternates VI and VII)

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... data regarded as limited rights data or restricted computer software to the Government and third parties at reasonable royalties upon request by the Department of Energy. (k) Contractor licensing. Except... rights data or restricted computer software on terms and conditions reasonable under the circumstances...

  7. 48 CFR 952.227-14 - Rights in data-general. (DOE coverage-alternates VI and VII)

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... data regarded as limited rights data or restricted computer software to the Government and third parties at reasonable royalties upon request by the Department of Energy. (k) Contractor licensing. Except... rights data or restricted computer software on terms and conditions reasonable under the circumstances...

  8. Effects of Computer Algebra System (CAS) with Metacognitive Training on Mathematical Reasoning.

    ERIC Educational Resources Information Center

    Kramarski, Bracha; Hirsch, Chaya

    2003-01-01

    Describes a study that investigated the differential effects of Computer Algebra Systems (CAS) and metacognitive training (META) on mathematical reasoning. Participants were 83 Israeli eighth-grade students. Results showed that CAS embedded within META significantly outperformed the META and CAS alone conditions, which in turn significantly…

  9. Components of Understanding in Proportional Reasoning: A Fuzzy Set Representation of Developmental Progressions.

    ERIC Educational Resources Information Center

    Moore, Colleen F.; And Others

    1991-01-01

    Examined the development of proportional reasoning by means of a temperature mixture task. Results show the importance of distinguishing between intuitive knowledge and formal computational knowledge of proportional concepts. Provides a new perspective on the relation of intuitive and computational knowledge during development. (GLR)

  10. A Web Site that Provides Resources for Assessing Students' Statistical Literacy, Reasoning and Thinking

    ERIC Educational Resources Information Center

    Garfield, Joan; delMas, Robert

    2010-01-01

    The Assessment Resource Tools for Improving Statistical Thinking (ARTIST) Web site was developed to provide high-quality assessment resources for faculty who teach statistics at the tertiary level but resources are also useful to statistics teachers at the secondary level. This article describes some of the numerous ARTIST resources and suggests…

  11. Challenges in Soft Computing: Case Study with Louisville MSD CSO Modeling

    NASA Astrophysics Data System (ADS)

    Ormsbee, L.; Tufail, M.

    2005-12-01

    The principal constituents of soft computing include fuzzy logic, neural computing, evolutionary computation, machine learning, and probabilistic reasoning. There are numerous applications of these constituents (both individually and in combinations of two or more) in the area of water resources and environmental systems, ranging from the development of data-driven models to optimal control strategies that support more informed and intelligent decision making. Availability of data is critical to such applications, and scarce data may lead to models that do not represent the response function over the entire domain. At the same time, too much data has a tendency to lead to over-constraining of the problem. This paper will describe the application of a subset of these soft computing techniques (neural computing and genetic algorithms) to the Beargrass Creek watershed in Louisville, Kentucky. The applications include the development of inductive models as substitutes for more complex process-based models to predict the water quality of key constituents (such as dissolved oxygen) and their use in an optimization framework for optimal load reductions. Such a process will facilitate the development of total maximum daily loads for the impaired water bodies in the watershed. Some of the challenges faced in this application include 1) uncertainty in data sets, 2) model application, and 3) development of cause-and-effect relationships between water quality constituents and watershed parameters through the use of inductive models. The paper will discuss these challenges and how they affect the desired goals of the project.

  12. Parallelisation study of a three-dimensional environmental flow model

    NASA Astrophysics Data System (ADS)

    O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank

    2014-03-01

    There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based, blade computing system we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly, and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The use of conjugate gradient posed particular challenges due to implicit non-local communication posing a hindrance to standard domain partitioning schemes; a number of techniques are discussed to address this in a feasible, computationally efficient manner. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology to rejuvenate legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
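
    A minimal sketch of the load-balancing idea described above: partition the grid into column strips so that each rank receives roughly the same number of active (water) cells rather than the same number of columns. The random mask and the strip-based partitioning rule are illustrative assumptions, not the EFDC decomposition itself.

        # Sketch: split grid columns among ranks so that active (water) cells, not
        # columns, are balanced; mixed land/water masks otherwise skew per-rank work.
        # The mask below is illustrative, not an EFDC domain.
        import numpy as np

        mask = np.random.rand(200, 300) > 0.4          # True = water cell (illustrative)
        nranks = 8

        water_per_col = mask.sum(axis=0)
        cum = np.cumsum(water_per_col)
        targets = np.linspace(0, cum[-1], nranks + 1)[1:-1]
        splits = np.searchsorted(cum, targets)          # column indices where strips end

        strips = np.split(np.arange(mask.shape[1]), splits)
        for rank, cols in enumerate(strips):
            print(f"rank {rank}: columns {cols[0]}-{cols[-1]}, "
                  f"{int(water_per_col[cols].sum())} water cells")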

  13. Genetic Counseling

    MedlinePlus

    ... Testing, Evaluating Genomic Tests, Epidemiology, Pathogen Genomics Resources. Genetic Counseling. In ... informed decisions about testing and treatment. Reasons for Genetic Counseling: There are many reasons that people go ...

  14. A Test of the Validity of Inviscid Wall-Modeled LES

    NASA Astrophysics Data System (ADS)

    Redman, Andrew; Craft, Kyle; Aikens, Kurt

    2015-11-01

    Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  15. Documentation of a computer program to simulate stream-aquifer relations using a modular, finite-difference, ground-water flow model

    USGS Publications Warehouse

    Prudic, David E.

    1989-01-01

    Computer models are widely used to simulate groundwater flow for evaluating and managing the groundwater resource of many aquifers, but few are designed to also account for surface flow in streams. A computer program was written for use in the US Geological Survey modular finite difference groundwater flow model to account for the amount of flow in streams and to simulate the interaction between surface streams and groundwater. The new program is called the Streamflow-Routing Package. The Streamflow-Routing Package is not a true surface water flow model, but rather is an accounting program that tracks the flow in one or more streams which interact with groundwater. The program limits the amount of groundwater recharge to the available streamflow. It permits two or more streams to merge into one with flow in the merged stream equal to the sum of the tributary flows. The program also permits diversions from streams. The groundwater flow model with the Streamflow-Routing Package has an advantage over the analytical solution in simulating the interaction between aquifer and stream because it can be used to simulate complex systems that cannot be readily solved analytically. The Streamflow-Routing Package does not include a time function for streamflow but rather streamflow entering the modeled area is assumed to be instantly available to downstream reaches during each time period. This assumption is generally reasonable because of the relatively slow rate of groundwater flow. Another assumption is that leakage between streams and aquifers is instantaneous. This assumption may not be reasonable if the streams and aquifers are separated by a thick unsaturated zone. Documentation of the Streamflow-Routing Package includes data input instructions; flow charts, narratives, and listings of the computer program for each of four modules; and input data sets and printed results for two test problems, and one example problem. (Lantz-PTT)
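
    A minimal sketch of the accounting idea the package implements: flow is routed reach by reach, and leakage to the aquifer is limited to the streamflow actually available in the reach. This is a schematic illustration under assumed, simplified inputs, not the MODFLOW Streamflow-Routing Package's algorithm or data structures.

        # Sketch: reach-by-reach streamflow accounting in which leakage to the aquifer
        # is limited by the flow available in each reach. Schematic only; not the
        # MODFLOW Streamflow-Routing Package itself.

        def route(reaches, inflow):
            """reaches: list of dicts with 'demand' = potential leakage to the aquifer
            (negative values mean groundwater discharge to the stream)."""
            leakages = []
            flow = inflow
            for reach in reaches:
                demand = reach["demand"]
                # a reach cannot lose more water than is flowing through it
                leakage = min(demand, flow) if demand > 0 else demand
                flow -= leakage
                leakages.append(leakage)
            return leakages, flow

        reaches = [{"demand": 2.0}, {"demand": 5.0}, {"demand": -1.0}]  # illustrative
        leak, outflow = route(reaches, inflow=4.0)
        print(leak, outflow)   # [2.0, 2.0, -1.0] and an outflow of 1.0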

  16. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Current research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in a grid environment, while the importance of spatial computing resources is overlooked. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper systematically investigates the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification of the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work in this environment is completed.
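
    For reference, a WPS server is typically queried over HTTP with key-value-pair requests such as GetCapabilities. The sketch below issues such a request using the standard WPS 1.0.0 parameters against a hypothetical endpoint; the URL is a placeholder, not the prototype node from the paper.

        # Sketch: query a WPS endpoint with a standard GetCapabilities request.
        # The endpoint URL is a hypothetical placeholder; the service/request/version
        # key-value parameters follow the OGC WPS 1.0.0 specification.
        import requests

        endpoint = "http://scn.example.org/wps"        # hypothetical Spatial Computing Node
        params = {
            "service": "WPS",
            "request": "GetCapabilities",
            "version": "1.0.0",
        }
        resp = requests.get(endpoint, params=params, timeout=30)
        print(resp.status_code)
        print(resp.text[:500])   # XML capabilities document listing the offered processes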

  17. Economic models for management of resources in peer-to-peer and grid computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include the commodity market, posted price, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed, which contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
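
    A minimal sketch of the kind of cost/deadline trade-off such a broker makes: given posted per-job prices and throughputs, pick the cheapest set of resources that can still finish the workload before the deadline. The figures and the greedy rule are illustrative assumptions, not the Nimrod/G scheduling algorithm.

        # Sketch: greedy, cost-minimizing selection of resources under a deadline,
        # illustrating the broker-style trade-off discussed above. Prices, throughputs
        # and the greedy rule are illustrative, not the Nimrod/G algorithm.

        resources = [                                   # hypothetical posted-price offers
            {"name": "siteA", "price_per_job": 0.5, "jobs_per_hour": 40},
            {"name": "siteB", "price_per_job": 0.8, "jobs_per_hour": 100},
            {"name": "siteC", "price_per_job": 1.5, "jobs_per_hour": 250},
        ]

        def select(resources, total_jobs, deadline_hours):
            """Pick the cheapest resources (per job) until their combined throughput
            can finish total_jobs within the deadline."""
            needed_rate = total_jobs / deadline_hours
            chosen, rate = [], 0.0
            for r in sorted(resources, key=lambda r: r["price_per_job"]):  # cheapest first
                if rate >= needed_rate:
                    break
                chosen.append(r["name"])
                rate += r["jobs_per_hour"]
            return chosen, rate >= needed_rate

        print(select(resources, total_jobs=1000, deadline_hours=8))
        # -> (['siteA', 'siteB'], True): the two cheapest sites already meet the deadline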

  18. A cross-sectional evaluation of computer literacy among medical students at a tertiary care teaching hospital in Mumbai, Bombay.

    PubMed

    Panchabhai, T S; Dangayach, N S; Mehta, V S; Patankar, C V; Rege, N N

    2011-01-01

    Computer usage capabilities of medical students for introduction of computer-aided learning have not been adequately assessed. Cross-sectional study to evaluate computer literacy among medical students. Tertiary care teaching hospital in Mumbai, India. Participants were administered a 52-question questionnaire, designed to study their background, computer resources, computer usage, activities enhancing computer skills, and attitudes toward computer-aided learning (CAL). The data was classified on the basis of sex, native place, and year of medical school, and the computer resources were compared. The computer usage and attitudes toward computer-based learning were assessed on a five-point Likert scale, to calculate Computer usage score (CUS - maximum 55, minimum 11) and Attitude score (AS - maximum 60, minimum 12). The quartile distribution among the groups with respect to the CUS and AS was compared by chi-squared tests. The correlation between CUS and AS was then tested. Eight hundred and seventy-five students agreed to participate in the study and 832 completed the questionnaire. One hundred and twenty eight questionnaires were excluded and 704 were analyzed. Outstation students had significantly lesser computer resources as compared to local students (P<0.0001). The mean CUS for local students (27.0±9.2, Mean±SD) was significantly higher than outstation students (23.2±9.05). No such difference was observed for the AS. The means of CUS and AS did not differ between males and females. The CUS and AS had positive, but weak correlations for all subgroups. The weak correlation between AS and CUS for all students could be explained by the lack of computer resources or inadequate training to use computers for learning. Providing additional resources would benefit the subset of outstation students with lesser computer resources. This weak correlation between the attitudes and practices of all students needs to be investigated. We believe that this gap can be bridged with a structured computer learning program.

  19. Impact of remote sensing upon the planning, management, and development of water resources

    NASA Technical Reports Server (NTRS)

    Loats, H. L.; Fowler, T. R.; Frech, S. L.

    1974-01-01

    A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.

  20. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation. It has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of services (QoS) and a fee agreed between a customer and an application service provider. It plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of cluster computing resources used by an application service provider for an e-business application that often requires parallel computation for high service performance, availability, and reliability while satisfying a QoS and a fee negotiated between a customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
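
    As a concrete illustration of the percentile-response-time QoS term mentioned above, the sketch below checks whether a set of measured response times satisfies a 95th-percentile bound; the sample distribution and the 2-second threshold are hypothetical, not figures from the paper.

        # Sketch: check a percentile-response-time SLA term against measured samples.
        # The synthetic samples and the 95th-percentile / 2.0 s threshold are hypothetical.
        import numpy as np

        response_times = np.random.gamma(2.0, 0.5, size=10_000)   # seconds (synthetic)
        p95 = np.percentile(response_times, 95)

        sla_limit = 2.0                      # "95% of requests answered within 2 s"
        print(f"95th percentile = {p95:.2f}s, SLA met: {p95 <= sla_limit}")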

  1. Report to the Institutional Computing Executive Group (ICEG) August 14, 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carnes, B

    We have delayed this report from its normal distribution schedule for two reasons. First, due to the coverage provided in the White Paper on Institutional Capability Computing Requirements distributed in August 2005, we felt a separate 2005 ICEG report would not be value added. Second, we wished to provide some specific information about the Peloton procurement and we have just now reached a point in the process where we can make some definitive statements. The Peloton procurement will result in an almost complete replacement of current M&IC systems. We have plans to retire MCR, iLX, and GPS. We will replace them with new parallel and serial capacity systems based on the same node architecture in the new Peloton capability system named ATLAS. We are currently adding the first users to the Green Data Oasis, a large file system on the open network that will provide the institution with external collaboration data sharing. Only Thunder will remain from the current M&IC system list and it will be converted from Capability to Capacity. We are confident that we are entering a challenging yet rewarding new phase for the M&IC program. Institutional computing has been an essential component of our S&T investment strategy and has helped us achieve recognition in many scientific and technical forums. Through consistent institutional investments, M&IC has grown into a powerful unclassified computing resource that is being used across the Lab to push the limits of computing and its application to simulation science. With the addition of Peloton, the Laboratory will significantly increase the broad-based computing resources available to meet the ever-increasing demand for the large scale simulations indispensable to advancing all scientific disciplines. All Lab research efforts are bolstered through the long term development of mission driven scalable applications and platforms. The new systems will soon be fully utilized and will position Livermore to extend the outstanding science and technology breakthroughs the M&IC program has enabled to date.

  2. Acausal measurement-based quantum computing

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2014-07-01

    In measurement-based quantum computing, there is a natural "causal cone" among qubits of the resource state, since the measurement angle on a qubit has to depend on previous measurement results in order to correct the effect of by-product operators. If we respect the no-signaling principle, by-product operators cannot be avoided. Here we study the possibility of acausal measurement-based quantum computing by using the process matrix framework [Oreshkov, Costa, and Brukner, Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076]. We construct a resource process matrix for acausal measurement-based quantum computing restricting local operations to projective measurements. The resource process matrix is an analog of the resource state of the standard causal measurement-based quantum computing. We find that if we restrict local operations to projective measurements the resource process matrix is (up to a normalization factor and trivial ancilla qubits) equivalent to the decorated graph state created from the graph state of the corresponding causal measurement-based quantum computing. We also show that it is possible to consider a causal game whose causal inequality is violated by acausal measurement-based quantum computing.

  3. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    PubMed Central

    Goto, Hayato

    2014-01-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387

  4. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  5. A Review of Resources for Evaluating K-12 Computer Science Education Programs

    ERIC Educational Resources Information Center

    Randolph, Justus J.; Hartikainen, Elina

    2004-01-01

    Since computer science education is a key to preparing students for a technologically-oriented future, it makes sense to have high quality resources for conducting summative and formative evaluation of those programs. This paper describes the results of a critical analysis of the resources for evaluating K-12 computer science education projects.…

  6. Computing the Envelope for Stepwise Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Estimating tight resource levels is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network whose nodes are the events and whose edges are the necessary predecessor links between events. The incremental solution of a staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
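
    The envelope computation above is built on maximum flow over a network of producer and consumer events. The sketch below only illustrates that underlying primitive with networkx on a toy event graph; the graph construction is an illustrative assumption, not the paper's envelope algorithm.

        # Sketch: the maximum-flow primitive underlying the envelope computation, shown
        # with networkx on a toy graph of producer/consumer events. Illustration of the
        # primitive only, not the paper's staged envelope construction.
        import networkx as nx

        G = nx.DiGraph()
        # source -> producer events, consumer events -> sink; capacities = |resource change|
        G.add_edge("s", "p1", capacity=3)      # p1 produces 3 units
        G.add_edge("s", "p2", capacity=2)      # p2 produces 2 units
        G.add_edge("c1", "t", capacity=4)      # c1 consumes 4 units
        G.add_edge("c2", "t", capacity=1)      # c2 consumes 1 unit
        # precedence links between events (illustrative, effectively unbounded capacity)
        G.add_edge("p1", "c1", capacity=10)
        G.add_edge("p2", "c1", capacity=10)
        G.add_edge("p2", "c2", capacity=10)

        flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
        print(flow_value)   # how much production can be matched to consumption (5 here)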

  7. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
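
    A minimal sketch of the offload-or-run-locally decision that such frameworks make, comparing an estimated local execution time with the cost of shipping data to the cloud and running remotely. The cost model and all parameter values are hypothetical, not the decision logic of the framework in the paper.

        # Sketch: decide whether to offload a component to the cloud by comparing a
        # simple local-execution estimate with (transfer time + remote execution time).
        # The cost model and all parameters are hypothetical.

        def should_offload(instructions, data_bytes, local_ips, cloud_ips, bandwidth_bps):
            local_time = instructions / local_ips
            remote_time = data_bytes * 8 / bandwidth_bps + instructions / cloud_ips
            return remote_time < local_time, local_time, remote_time

        offload, t_local, t_remote = should_offload(
            instructions=5e9, data_bytes=2e6,
            local_ips=1e9, cloud_ips=2e10, bandwidth_bps=1e7)
        print(offload, round(t_local, 2), round(t_remote, 2))
        # -> True 5.0 1.85: transfer time is outweighed by the faster remote execution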

  8. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  9. COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...

    EPA Pesticide Factsheets

    This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).

  10. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources has increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  11. Geostatistics as a tool to define various categories of resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabourin, R.

    1983-02-01

    Definitions of 'measured' and 'indicated' resources tend to be vague, yet the calculation of such categories of resources in a mineral deposit calls for specific technical criteria. The author discusses how a geostatistical methodology provides the technical criteria required to classify reasonably assured resources by levels of assurance of their existence.

  12. Reasoning abstractly about resources

    NASA Technical Reports Server (NTRS)

    Clement, B.; Barrett, A.

    2001-01-01

    This paper describes a way to schedule high-level activities before distributing them across multiple rovers in order to coordinate the resultant use of shared resources, regardless of how each rover decides to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.
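
    The idea of summarizing metric resource requirements over an activity's potential refinements can be illustrated with a short Python sketch that bounds usage by its minimum and maximum across refinements; the refinement data and function name are hypothetical, not the authors' algorithm.

      # Hypothetical sketch of summarizing metric resource requirements of an abstract
      # activity from the usages of its potential refinements (not the authors' algorithm).

      def summarize(refinement_usages):
          """Each refinement maps resource name -> units consumed.
          The summary bounds usage over all refinements, so a scheduler can reason
          about the abstract activity without committing to a refinement."""
          resources = set().union(*refinement_usages)
          return {r: (min(u.get(r, 0) for u in refinement_usages),
                      max(u.get(r, 0) for u in refinement_usages))
                  for r in resources}

      drive_to_rock = [{"power_wh": 30, "bandwidth_mb": 2},   # direct route
                       {"power_wh": 45, "bandwidth_mb": 1}]   # safer detour
      print(summarize(drive_to_rock))   # {'power_wh': (30, 45), 'bandwidth_mb': (1, 2)}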

  13. Elaborated Corrective Feedback and the Acquisition of Reasoning Skills: A Study of Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Collins, Maria; And Others

    1987-01-01

    Thirteen learning disabled and 15 remedial high school students were taught reasoning skills using computer-assisted instruction and were given basic or elaborated corrections. Criterion-referenced test scores were significantly higher for the elaborated-corrections treatment on the post- and maintenance tests and on a transfer test assessing…

  14. Proportional Reasoning in the Laboratory: An Intervention Study in Vocational Education

    ERIC Educational Resources Information Center

    Bakker, Arthur; Groenveld, Djonie; Wijers, Monica; Akkerman, Sanne F.; Gravemeijer, Koeno P. E.

    2014-01-01

    Based on insights into the nature of vocational mathematical knowledge, we designed a computer tool with which students in laboratory schools at senior secondary vocational school level could develop a better proficiency in the proportional reasoning involved in dilution. We did so because we had identified computations of concentrations of…

  15. Development and Assessment of CFD Models Including a Supplemental Program Code for Analyzing Buoyancy-Driven Flows Through BWR Fuel Assemblies in SFP Complete LOCA Scenarios

    NASA Astrophysics Data System (ADS)

    Artnak, Edward Joseph, III

    This work seeks to illustrate the potential benefits afforded by implementing aspects of fluid dynamics, especially the latest computational fluid dynamics (CFD) modeling approach, through numerical experimentation and the traditional discipline of physical experimentation to improve the calibration of the severe reactor accident analysis code, MELCOR, in one of several spent fuel pool (SFP) complete loss-of-coolant accident (LOCA) scenarios. While the scope of experimental work performed by Sandia National Laboratories (SNL) extends well beyond that which is reasonably addressed by our allotted resources and computational time in accordance with initial project allocations to complete the report, these simulated case trials produced a significant array of supplementary high-fidelity solutions and hydraulic flow-field data in support of SNL research objectives. Results contained herein show FLUENT CFD model representations of a 9x9 BWR fuel assembly in conditions corresponding to a complete loss-of-coolant accident scenario. In addition to the CFD model developments, a MATLAB-based control-volume model was constructed to independently assess the 9x9 BWR fuel assembly under similar accident scenarios. The data produced from this work show that FLUENT CFD models are capable of resolving complex flow fields within a BWR fuel assembly in the realm of buoyancy-induced mass flow rates and that characteristic hydraulic parameters from such CFD simulations (or physical experiments) are reasonably employed in corresponding constitutive correlations for developing simplified numerical models of comparable solution accuracy.

  16. Mitigating active shooter impact: Analysis for policy options based on agent/computer-based modeling.

    PubMed

    Anklam, Charles; Kirby, Adam; Sharevski, Filipo; Dietz, J Eric

    2015-01-01

    Active shooting violence at confined settings, such as educational institutions, poses serious security concerns to public safety. In studying the effects of active shooter scenarios, the common denominator associated with all events, regardless of the reason or intent behind the shooter's motives or the type of weapons used, was the location chosen and the time elapsed between the beginning of the event and its culmination. This in turn directly correlates with the number of casualties incurred in any given event: the longer the event protracts, the more casualties are incurred until law enforcement or another barrier can react and end the situation. Using AnyLogic technology, we devised modeling scenarios to test multiple hypotheses through free-agent modeling simulation to determine the best method to reduce casualties associated with active shooter scenarios. Four possible scenarios of responding to an active shooter in a public school setting were tested using agent-based computer modeling techniques. Scenario 1 is the basic scenario in which no access control or any other type of security is used within the school; scenario 2 assumes that concealed carry individuals (5-10 percent of the workforce) are present in the school; scenario 3 assumes that the school has an assigned resource officer; scenario 4 assumes that the school has an assigned resource officer and concealed carry individuals (5-10 percent) present in the school. Statistical data from the modeling scenarios indicate which tested hypothesis resulted in fewer casualties and quicker culmination of the event. The use of AnyLogic confirmed the initial hypothesis that a decrease in response time to an active shooter scenario directly reduces victim casualties. Modeling tests show statistically significantly fewer casualties in scenarios where on-scene armed responders such as resource officers and concealed carry personnel were present.

  17. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing, as well as a method for equivalent transformation from a global geospatial query to distributed local queries at the SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources, is presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
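
    The equivalent transformation of a global query into local SQL queries executed at autonomous peers, followed by a merge of the local answers, can be illustrated with a small Python/SQLite sketch; the peer setup, table schema and predicate are hypothetical, not the paper's system.

      # Illustrative sketch of rewriting a global geospatial query into equivalent
      # local SQL queries executed at each peer, then merging the results.
      import sqlite3

      def run_distributed(peers, where_clause):
          """peers: list of sqlite3 connections standing in for autonomous GIS nodes."""
          local_sql = f"SELECT id, name, lon, lat FROM features WHERE {where_clause}"
          merged = []
          for conn in peers:                      # each peer evaluates the same predicate locally
              merged.extend(conn.execute(local_sql).fetchall())
          return merged                           # global answer = union of local answers

      # Two in-memory "peers" with their own feature tables.
      peers = []
      for rows in ([(1, "well A", 116.3, 39.9)], [(2, "well B", 121.4, 31.2)]):
          c = sqlite3.connect(":memory:")
          c.execute("CREATE TABLE features (id, name, lon, lat)")
          c.executemany("INSERT INTO features VALUES (?,?,?,?)", rows)
          peers.append(c)

      print(run_distributed(peers, "lon > 100"))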

  18. A computer-based approach for Material, Manpower and Equipment management in Construction Projects

    NASA Astrophysics Data System (ADS)

    Sasidhar, Jaladanki; Muthu, D.; Venkatasubramanian, C.; Ramakrishnan, K.

    2017-07-01

    The success of any construction project depends on managing resources efficiently so that the project is completed within a reasonable budget and time without compromising quality. Untimely procurement of material, inadequate deployment of labor and late mobilization of machinery all cause delay, reduce quality and ultimately increase the project cost. It is a known fact that project cost can be controlled by taking corrective actions on the mobilization of resources at the right time. This research focuses on integrating management systems with the computer to generate a model that uses an OOM data structure and includes automatic commodity code generation, automatic takeoff execution, intelligent purchase order generation, and components of design and schedule integration to overcome the problem of stock-outs. To overcome problems in equipment management, an inventory management module is suggested, and a data set comprising equipment registration number, equipment number, description, date of purchase, manufacturer, equipment price, market value, life of equipment, and production data of the equipment (equipment number, date, name of the job, hourly rate, insurance, depreciation cost, taxes, storage cost, interest, and oil, grease and fuel consumption) is analyzed to generate decision support systems that overcome problems arising from improper management. The problem of labor is managed using scheduling and strategic management of human resources. With the generated decision support tool, resources are mobilized at the right time, helping the project manager finish the project on time and avoid abnormal project cost; the tool also provides the percentage that can be improved. The research further determines the percentage of delays caused by lack of management of materials, manpower and machinery in different types of projects and how this percentage varies from project to project.
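
    A minimal sketch of the stock-out check such a decision support module might perform is shown below; the field names and thresholds are illustrative assumptions, not the paper's data model.

      # Hypothetical sketch of a stock-out check for construction materials.
      from dataclasses import dataclass

      @dataclass
      class MaterialRecord:
          code: str          # automatically generated commodity code
          on_hand: float     # quantity currently in the store
          daily_demand: float
          lead_time_days: float

          def needs_purchase_order(self, safety_days=2.0):
              """Raise a purchase order before the stock runs out during the lead time."""
              return self.on_hand < self.daily_demand * (self.lead_time_days + safety_days)

      cement = MaterialRecord(code="MAT-CEM-042", on_hand=90, daily_demand=15, lead_time_days=5)
      print(cement.needs_purchase_order())   # True: 90 < 15 * (5 + 2)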

  19. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  20. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.

    PubMed

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.
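
    The provision-compute-retrieve-delete lifecycle described above can be sketched as follows; CloudClient and its methods are hypothetical stand-ins rather than any real provider API.

      # Hypothetical sketch of the pay-as-you-go lifecycle: provision a virtual
      # environment, run the analysis, fetch results, then delete the environment.

      class CloudClient:
          def provision(self, cpus, disk_gb):
              print(f"provisioned VM with {cpus} CPUs, {disk_gb} GB"); return "vm-001"
          def run(self, vm, command):
              print(f"{vm}: running {command}"); return "alignments.out"
          def download(self, vm, remote_path):
              print(f"{vm}: downloading {remote_path}")
          def terminate(self, vm):
              print(f"{vm}: environment deleted")   # billing stops here

      client = CloudClient()
      vm = client.provision(cpus=16, disk_gb=500)
      try:
          result = client.run(vm, "blastn -query reads.fa -db nt")
          client.download(vm, result)
      finally:
          client.terminate(vm)      # always release resources, even on failure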

  1. 30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...

  2. 30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...

  3. 30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...

  4. The interaction of representation and reasoning.

    PubMed

    Bundy, Alan

    2013-09-08

    Automated reasoning is an enabling technology for many applications of informatics. These applications include verifying that a computer program meets its specification; enabling a robot to form a plan to achieve a task and answering questions by combining information from diverse sources, e.g. on the Internet, etc. How is automated reasoning possible? Firstly, knowledge of a domain must be stored in a computer, usually in the form of logical formulae. This knowledge might, for instance, have been entered manually, retrieved from the Internet or perceived in the environment via sensors, such as cameras. Secondly, rules of inference are applied to old knowledge to derive new knowledge. Automated reasoning techniques have been adapted from logic, a branch of mathematics that was originally designed to formalize the reasoning of humans, especially mathematicians. My special interest is in the way that representation and reasoning interact. Successful reasoning is dependent on appropriate representation of both knowledge and successful methods of reasoning. Failures of reasoning can suggest changes of representation. This process of representational change can also be automated. We will illustrate the automation of representational change by drawing on recent work in my research group.
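
    A minimal illustration of the second step, applying rules of inference to stored knowledge to derive new knowledge, is the forward-chaining sketch below; the facts and rules are invented for illustration only.

      # Minimal forward-chaining sketch: rules of inference applied to stored
      # knowledge until no new facts can be derived.

      facts = {"robot_at(door)", "door_open"}
      rules = [                                    # (premises, conclusion) Horn-style rules
          ({"robot_at(door)", "door_open"}, "can_enter(room)"),
          ({"can_enter(room)"}, "can_reach(goal)"),
      ]

      changed = True
      while changed:                               # apply rules until knowledge stops growing
          changed = False
          for premises, conclusion in rules:
              if premises <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True

      print(sorted(facts))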

  5. Cognitive Load Mediates the Effect of Emotion on Analytical Thinking.

    PubMed

    Trémolière, Bastien; Gagnon, Marie-Ève; Blanchette, Isabelle

    2016-11-01

    Although the detrimental effect of emotion on reasoning has been evidenced many times, the cognitive mechanism underlying this effect remains unclear. In the present paper, we explore the cognitive load hypothesis as a potential explanation. In an experiment, participants solved syllogistic reasoning problems with either neutral or emotional contents. Participants were also presented with a secondary task, for which the difficult version requires the mobilization of cognitive resources to be correctly solved. Participants performed overall worse and took longer on emotional problems than on neutral problems. Performance on the secondary task, in the difficult version, was poorer when participants were reasoning about emotional, compared to neutral contents, consistent with the idea that processing emotion requires more cognitive resources. Taken together, the findings afford evidence that the deleterious effect of emotion on reasoning is mediated by cognitive load.

  6. Tools and Techniques for Measuring and Improving Grid Performance

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Frumkin, M.; Smith, W.; VanderWijngaart, R.; Wong, P.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on NASA's geographically dispersed computing resources, and the various methods by which the disparate technologies are integrated within a nationwide computational grid. Many large-scale science and engineering projects are accomplished through the interaction of people, heterogeneous computing resources, information systems and instruments at different locations. The overall goal is to facilitate the routine interactions of these resources to reduce the time spent in design cycles, particularly for NASA's mission critical projects. The IPG (Information Power Grid) seeks to implement NASA's diverse computing resources in a fashion similar to the way in which electric power is made available.

  7. SaaS enabled admission control for MCMC simulation in cloud computing infrastructures

    NASA Astrophysics Data System (ADS)

    Vázquez-Poletti, J. L.; Moreno-Vozmediano, R.; Han, R.; Wang, W.; Llorente, I. M.

    2017-02-01

    Markov Chain Monte Carlo (MCMC) methods are widely used in the field of simulation and modelling of materials, producing applications that require a great amount of computational resources. Cloud computing represents a seamless source for these resources in the form of HPC. However, resource over-consumption can be an important drawback, especially if the cloud provision process is not appropriately optimized. In the present contribution we propose a two-level solution that, on one hand, takes advantage of approximate computing for reducing the resource demand and, on the other, uses admission control policies for guaranteeing an optimal provision to running applications.
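
    A bare-bones sketch of an admission control policy in this spirit, accepting a job only while a core-hour budget remains, is given below; the budget figures and class name are assumptions, not the authors' policies.

      # Illustrative admission-control sketch: accept an MCMC job only if the
      # core-hours it would consume still fit the remaining cloud budget.

      class AdmissionController:
          def __init__(self, core_hour_budget):
              self.remaining = core_hour_budget

          def admit(self, cores, est_hours):
              cost = cores * est_hours
              if cost <= self.remaining:           # guarantee provision for admitted jobs
                  self.remaining -= cost
                  return True
              return False                         # reject: would over-consume resources

      ctrl = AdmissionController(core_hour_budget=1000)
      print(ctrl.admit(cores=64, est_hours=10))    # True  (640 core-hours)
      print(ctrl.admit(cores=64, est_hours=10))    # False (would exceed the budget)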

  8. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    USGS Publications Warehouse

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously. The new approaches and expanded use of computers will require substantial increases in the quantity and sophistication of the Division's computer resources. The requirements presented in this report will be used to develop technical specifications that describe the computer resources needed during the 1990's. (USGS)

  9. Computer-Based Assessment of School Readiness and Early Reasoning

    ERIC Educational Resources Information Center

    Csapó, Beno; Molnár, Gyöngyvér; Nagy, József

    2014-01-01

    This study explores the potential of using online tests for the assessment of school readiness and for monitoring early reasoning. Four tests of a face-to-face-administered school readiness test battery (speech sound discrimination, relational reasoning, counting and basic numeracy, and deductive reasoning) and a paper-and-pencil inductive…

  10. Assessing Clinical Reasoning (ASCLIRE): Instrument Development and Validation

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Hautz, Wolf E.; Knigge, Michel; Spies, Claudia; Ahlers, Olaf

    2015-01-01

    Clinical reasoning is an essential competency in medical education. This study aimed at developing and validating a test to assess diagnostic accuracy, collected information, and diagnostic decision time in clinical reasoning. A norm-referenced computer-based test for the assessment of clinical reasoning (ASCLIRE) was developed, integrating the…

  11. Setting Up a Grid-CERT: Experiences of an Academic CSIRT

    ERIC Educational Resources Information Center

    Moller, Klaus

    2007-01-01

    Purpose: Grid computing has often been heralded as the next logical step after the worldwide web. Users of grids can access dynamic resources such as computer storage and use the computing resources of computers under the umbrella of a virtual organisation. Although grid computing is often compared to the worldwide web, it is vastly more complex…

  12. 38 CFR 200.2 - Background.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...

  13. 38 CFR 200.2 - Background.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...

  14. 38 CFR 200.2 - Background.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...

  15. 38 CFR 200.2 - Background.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...

  16. 38 CFR 200.2 - Background.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... renewable resources and approach the maximum attainable recycling of depletable resources. (b) As an.... Consideration must be given to reasonable alternative means of achieving the purpose and need for the proposed...

  17. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    PubMed Central

    Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable. PMID:24883353

  18. A novel resource management method of providing operating system as a service for mobile transparent computing.

    PubMed

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  19. Networking Micro-Processors for Effective Computer Utilization in Nursing

    PubMed Central

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes one process of networking complementary resources at three institutions: Prairie View A&M University, Texas A&M University, and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in the development of computer resources.

  20. The use of the Internet within a dental school.

    PubMed

    Walmsley, A D; White, D A; Eynon, R; Somerfield, L

    2003-02-01

    The Internet is an increasingly popular medium for delivering educational material. The aim of this study was to determine the attitudes of students and their clinical teachers to the use of the Internet within a dental school in the UK. Questionnaires were distributed to undergraduate dental students in the three clinical years and to all their clinical academic teachers. All students and staff have access to computers and Internet at the university. The majority (72%) of students have access to a computer and 53% also have access to the Internet at home. Of the academic staff, 91% have a computer and 68% have access to the Internet at home. The reasons for use of the Internet differed between staff and students. Whilst equal proportions of students used the Internet for dentistry (38%) and for pleasure (35%), a higher proportion of staff used the Internet more for dentistry (36%) than for pleasure (14%). Students highlighted cost and time as barriers to Internet use, whereas staff lacked confidence in their ability to use the Internet. Less than half (44%) of the students are confident in the accuracy of information from the Internet compared to almost two-thirds (64%) of staff. This study revealed differences in the attitudes of staff and students to the use of Internet as a resource for dentistry. Students are positive to the suggestion that lectures should be presented on the web. Most students (74%) did not see that this would influence attendance at lectures whilst 91% of staff stated that it would decrease lecture attendance. In conclusion, this study revealed differences in the attitudes of staff and students to the use of Internet as a resource for dentistry.

  1. Using computer aided case based reasoning to support clinical reasoning in community occupational therapy.

    PubMed

    Taylor, Bruce; Robertson, David; Wiratunga, Nirmalie; Craw, Susan; Mitchell, Dawn; Stewart, Elaine

    2007-08-01

    Community occupational therapists have long been involved in the provision of environmental control systems. Diverse electronic technologies with the potential to improve the health and quality of life of selected clients have developed rapidly in recent years. Occupational therapists employ clinical reasoning in order to determine the most appropriate technology to meet the needs of individual clients. This paper describes a number of the drivers that may increase the adoption of information and communication technologies in the occupational therapy profession. It outlines case based reasoning as understood in the domains of expert systems and knowledge management and presents the preliminary results of an ongoing investigation into the potential of a prototype computer aided case based reasoning tool to support the clinical reasoning of community occupational therapists in the process of assisting clients to choose home electronic assistive or smart house technology.

  2. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    NASA Astrophysics Data System (ADS)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is twofold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16-week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise, students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent t-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in the experimental group who responded to the use of Internet Resources Survey were positive (mean of 3.4 on the 4-point scale) toward their use of Internet resources, which included the online courseware developed by the researcher. Findings from this study suggest that (1) the digital divide with respect to gender and ethnicity may be narrowing, and (2) students who are exposed to a course that augments computer-driven courseware with traditional teaching methods appear to have less anxiety, have a clearer perception of computer usefulness, and feel that online resources enhance their learning.

  3. An Innovative Time-Cost-Quality Tradeoff Modeling of Building Construction Project Based on Resource Allocation

    PubMed Central

    2014-01-01

    The time, quality, and cost are three important but contradictive objectives in a building construction project. It is a tough challenge for project managers to optimize them since they are different parameters. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model is based on the project breakdown structure method, in which task resources in a construction project are divided into a series of activities and further into construction labors, materials, equipment, and administration. The resources utilized in a construction activity eventually determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is finally generated based on correlations between construction activities. A genetic algorithm tool is applied in the model to solve the comprehensive nonlinear time-cost-quality problems. The building of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off of construction time, cost, and quality, and help make a winning decision in construction practices. The computational time-cost-quality curves in visual graphics from the case study show that traditional cost-time assumptions are reasonable and also demonstrate the sophistication of this time-cost-quality trade-off model. PMID:24672351
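
    A compact sketch of the genetic-algorithm idea, searching over per-activity resource options against a weighted time-cost-quality objective, follows; the activity options, weights and operators are illustrative, not the paper's model.

      # Illustrative GA sketch over per-activity resource options (hypothetical data).
      import random
      random.seed(1)

      # Each activity offers alternative resource allocations: (days, cost, quality 0-1).
      OPTIONS = [
          [(10, 5000, 0.9), (7, 8000, 0.8), (12, 4000, 0.95)],   # foundations
          [(20, 12000, 0.85), (15, 16000, 0.8)],                 # structure
          [(8, 3000, 0.9), (6, 4500, 0.85)],                     # finishing
      ]

      def fitness(plan):                     # lower is better
          days = sum(OPTIONS[i][g][0] for i, g in enumerate(plan))
          cost = sum(OPTIONS[i][g][1] for i, g in enumerate(plan))
          qual = sum(OPTIONS[i][g][2] for i, g in enumerate(plan)) / len(plan)
          return days / 40 + cost / 30000 - qual       # weighted time-cost-quality trade-off

      def random_plan():
          return [random.randrange(len(opts)) for opts in OPTIONS]

      pop = [random_plan() for _ in range(20)]
      for _ in range(50):                              # evolve for a few generations
          pop.sort(key=fitness)
          parents = pop[:10]
          children = []
          for _ in range(10):
              a, b = random.sample(parents, 2)
              cut = random.randrange(1, len(OPTIONS))
              child = a[:cut] + b[cut:]                # one-point crossover
              if random.random() < 0.2:                # mutation
                  i = random.randrange(len(OPTIONS))
                  child[i] = random.randrange(len(OPTIONS[i]))
              children.append(child)
          pop = parents + children

      best = min(pop, key=fitness)
      print(best, round(fitness(best), 3))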

  4. A Feasibility Analysis of Land-Based SINS/GNSS Gravimetry for Groundwater Resource Detection in Taiwan

    PubMed Central

    Chiang, Kai-Wei; Lin, Cheng-An; Kuo, Chung-Yen

    2015-01-01

    The integration of the Strapdown Inertial Navigation System and Global Navigation Satellite System (SINS/GNSS) has been implemented for land-based gravimetry and has been proven to perform well in estimating gravity. Based on the mGal-level gravimetry results, this research aims to construct and develop a land-based SINS/GNSS gravimetry device containing a navigation-grade Inertial Measurement Unit. This research also presents a feasibility analysis for groundwater resource detection. A preliminary comparison of the kinematic velocities and accelerations using multi-combination of GNSS data including Global Positioning System, Global Navigation Satellite System, and BeiDou Navigation Satellite System, indicates that three-system observations performed better than two-system data in the computation. A comparison of gravity derived from SINS/GNSS and measured using a relative gravimeter also shows that both agree reasonably well with a mean difference of 2.30 mGal. The mean difference between repeat measurements of gravity disturbance using SINS/GNSS is 2.46 mGal with a standard deviation of 1.32 mGal. The gravity variation because of the groundwater at Pingtung Plain, Taiwan could reach 2.72 mGal. Hence, the developed land-based SINS/GNSS gravimetry can sufficiently and effectively detect groundwater resources. PMID:26426019

  5. A Feasibility Analysis of Land-Based SINS/GNSS Gravimetry for Groundwater Resource Detection in Taiwan.

    PubMed

    Chiang, Kai-Wei; Lin, Cheng-An; Kuo, Chung-Yen

    2015-09-29

    The integration of the Strapdown Inertial Navigation System and Global Navigation Satellite System (SINS/GNSS) has been implemented for land-based gravimetry and has been proven to perform well in estimating gravity. Based on the mGal-level gravimetry results, this research aims to construct and develop a land-based SINS/GNSS gravimetry device containing a navigation-grade Inertial Measurement Unit. This research also presents a feasibility analysis for groundwater resource detection. A preliminary comparison of the kinematic velocities and accelerations using multi-combination of GNSS data including Global Positioning System, Global Navigation Satellite System, and BeiDou Navigation Satellite System, indicates that three-system observations performed better than two-system data in the computation. A comparison of gravity derived from SINS/GNSS and measured using a relative gravimeter also shows that both agree reasonably well with a mean difference of 2.30 mGal. The mean difference between repeat measurements of gravity disturbance using SINS/GNSS is 2.46 mGal with a standard deviation of 1.32 mGal. The gravity variation because of the groundwater at Pingtung Plain, Taiwan could reach 2.72 mGal. Hence, the developed land-based SINS/GNSS gravimetry can sufficiently and effectively detect groundwater resources.

  6. Formalizing Resources for Planning

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; McGann, Conor; Ramakrishnan, Sailesh

    2003-01-01

    In this paper we present a classification scheme which circumscribes a large class of resources found in the real world. Building on the work of others we also define key properties of resources that allow formal expression of the proposed classification. Furthermore, operations that change the state of a resource are formalized. Together, properties and operations go a long way in formalizing the representation and reasoning aspects of resources for planning.
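
    One way such a formalization might look in code is sketched below: a resource with declared properties and the operations that change its state; the reservoir-style semantics and names are assumptions, not the authors' formalism.

      # Illustrative sketch of a formalized resource for planning: key properties
      # plus the operations that change its state.

      class Resource:
          def __init__(self, name, capacity, level=0, replenishable=True):
              self.name, self.capacity, self.level = name, capacity, level
              self.replenishable = replenishable     # property from the classification

          def consume(self, amount):                 # operation: decreases the level
              if amount > self.level:
                  raise ValueError(f"{self.name}: insufficient resource")
              self.level -= amount

          def produce(self, amount):                 # operation: increases the level
              if not self.replenishable:
                  raise ValueError(f"{self.name}: cannot be replenished")
              self.level = min(self.capacity, self.level + amount)

      battery = Resource("rover_battery_wh", capacity=1000, level=800)
      battery.consume(150)        # drive
      battery.produce(300)        # solar charging, capped at capacity
      print(battery.level)        # 950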

  7. Optimization of PV/WIND/DIESEL Hybrid Power System in HOMER for Rural Electrification

    NASA Astrophysics Data System (ADS)

    Hassan, Q.; Jaszczur, M.; Abdulateef, J.

    2016-09-01

    A large proportion of the world's population lives in remote rural areas that are geographically isolated and sparsely populated. The present study is based on modeling, computer simulation and optimization of a hybrid power generation system in a rural area in the Muqdadiyah district of Diyala state, Iraq. Two renewable resources, namely solar photovoltaic (PV) and wind turbine (WT), are considered. The HOMER software is used to study and design the proposed hybrid energy system model. Based on the simulation results, it has been found that renewable energy sources could replace conventional energy sources and would be a feasible solution for the generation of electric power at remote locations with a reasonable investment. The hybrid power system solution to electrify the selected area resulted in a least-cost combination of the hybrid power system that can meet the demand in a dependable manner at a cost of about 0.321/kWh. If the wind resources in the study area are at a lower level, it is not economically viable for a wind turbine to generate electricity.

  8. ROSA: Resource-Oriented Service Management Schemes for Web of Things in a Smart Home.

    PubMed

    Liao, Chun-Feng; Chen, Peng-Yu

    2017-09-21

    A pervasive-computing-enriched smart home environment, which contains many embedded and tiny intelligent devices and sensors coordinated by service management mechanisms, is capable of anticipating the intentions of occupants and providing appropriate services accordingly. Although there is a wealth of research achievements in recent years, the degree of market acceptance is still low. The main reason is that most of the devices and services in such environments depend on a particular platform or technology, making it hard to develop an application by composing the devices or services. Meanwhile, the concept of the Web of Things (WoT) has become popular recently. Based on WoT, developers can build applications using popular web tools or technologies. Consequently, the objective of this paper is to propose a set of novel WoT-driven plug-and-play service management schemes for a smart home called Resource-Oriented Service Administration (ROSA). We have implemented an application prototype, and experiments are performed to show the effectiveness of the proposed approach. The results of this research can be a foundation for realizing the vision of "end user programmable smart environments".
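
    A toy sketch of the resource-oriented idea, devices registering services under URI-like paths that applications compose without platform knowledge, is shown below; the paths and registry API are hypothetical, not the ROSA implementation.

      # Hypothetical sketch of resource-oriented service administration: devices
      # register themselves under URI-like paths and applications compose them by path.

      class ResourceRegistry:
          def __init__(self):
              self._services = {}

          def register(self, path, handler):          # plug-and-play: device announces a resource
              self._services[path] = handler

          def invoke(self, path, **kwargs):
              return self._services[path](**kwargs)

      registry = ResourceRegistry()
      registry.register("/home/livingroom/lamp", lambda state: f"lamp turned {state}")
      registry.register("/home/livingroom/thermostat", lambda target_c: f"set to {target_c} C")

      # An application composes services without knowing the underlying platform.
      print(registry.invoke("/home/livingroom/lamp", state="on"))
      print(registry.invoke("/home/livingroom/thermostat", target_c=21))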

  9. High-School Students' Reasoning while Constructing Plant Growth Models in a Computer-Supported Educational Environment. Research Report

    ERIC Educational Resources Information Center

    Ergazaki, Marida; Komis, Vassilis; Zogza, Vassiliki

    2005-01-01

    This paper highlights specific aspects of high-school students' reasoning while coping with a modeling task of plant growth in a computer-supported educational environment. It is particularly concerned with the modeling levels ('macro-phenomenological' and 'micro-conceptual' level) activated by peers while exploring plant growth and with their…

  10. Visual Reasoning in Computational Environment: A Case of Graph Sketching

    ERIC Educational Resources Information Center

    Leung, Allen; Chan, King Wah

    2004-01-01

    This paper reports the case of a form six (grade 12) Hong Kong student's exploration of graph sketching in a computational environment. In particular, the student summarized his discovery in the form of two empirical laws. The student was interviewed and the interviewed data were used to map out a possible path of his visual reasoning. Critical…

  11. Cultural Commonalities and Differences in Spatial Problem-Solving: A Computational Analysis

    ERIC Educational Resources Information Center

    Lovett, Andrew; Forbus, Kenneth

    2011-01-01

    A fundamental question in human cognition is how people reason about space. We use a computational model to explore cross-cultural commonalities and differences in spatial cognition. Our model is based upon two hypotheses: (1) the structure-mapping model of analogy can explain the visual comparisons used in spatial reasoning; and (2) qualitative,…

  12. The Effects of Learning a Computer Programming Language on the Logical Reasoning of School Children.

    ERIC Educational Resources Information Center

    Seidman, Robert H.

    The research reported in this paper explores the syntactical and semantic link between computer programming statements and logical principles, and addresses the effects of learning a programming language on logical reasoning ability. Fifth grade students in a public school in Syracuse, New York, were randomly selected as subjects, and then…

  13. The Difficult Process of Scientific Modelling: An Analysis Of Novices' Reasoning During Computer-Based Modelling

    ERIC Educational Resources Information Center

    Sins, Patrick H. M.; Savelsbergh, Elwin R.; van Joolingen, Wouter R.

    2005-01-01

    Although computer modelling is widely advocated as a way to offer students a deeper understanding of complex phenomena, the process of modelling is rather complex itself and needs scaffolding. In order to offer adequate support, a thorough understanding of the reasoning processes students employ and of difficulties they encounter during a…

  14. Developing Strategic and Reasoning Abilities with Computer Games at Primary School Level

    ERIC Educational Resources Information Center

    Bottino, R. M.; Ferlino, L.; Ott, M.; Tavella, M.

    2007-01-01

    The paper reports a small-scale, long-term pilot project designed to foster strategic and reasoning abilities in young primary school pupils by engaging them in a number of computer games, mainly those usually called mind games (brainteasers, puzzlers, etc.). In this paper, the objectives, work methodology, experimental setting, and tools used in…

  15. Desktop Computing Integration Project

    NASA Technical Reports Server (NTRS)

    Tureman, Robert L., Jr.

    1992-01-01

    The Desktop Computing Integration Project for the Human Resources Management Division (HRMD) of LaRC was designed to help division personnel use personal computing resources to perform job tasks. The three goals of the project were to involve HRMD personnel in desktop computing, link mainframe data to desktop capabilities, and to estimate training needs for the division. The project resulted in increased usage of personal computers by Awards specialists, an increased awareness of LaRC resources to help perform tasks, and personal computer output that was used in presentation of information to center personnel. In addition, the necessary skills for HRMD personal computer users were identified. The Awards Office was chosen for the project because of the consistency of their data requests and the desire of employees in that area to use the personal computer.

  16. A Computational Model of Reasoning from the Clinical Literature

    PubMed Central

    Rennels, Glenn D.

    1986-01-01

    This paper explores the premise that a formalized representation of empirical studies can play a central role in computer-based decision support. The specific motivations underlying this research include the following propositions: 1. Reasoning from experimental evidence contained in the clinical literature is central to the decisions physicians make in patient care. 2. A computational model, based upon a declarative representation for published reports of clinical studies, can drive a computer program that selectively tailors knowledge of the clinical literature as it is applied to a particular case. 3. The development of such a computational model is an important first step toward filling a void in computer-based decision support systems. Furthermore, the model may help us better understand the general principles of reasoning from experimental evidence both in medicine and other domains. Roundsman is a developmental computer system which draws upon structured representations of the clinical literature in order to critique plans for the management of primary breast cancer. Roundsman is able to produce patient-specific analyses of breast cancer management options based on the 24 clinical studies currently encoded in its knowledge base. The Roundsman system is a first step in exploring how the computer can help to bring a critical analysis of the relevant literature to the physician, structured around a particular patient and treatment decision.

  17. High Performance Computing Based Parallel HIearchical Modal Association Clustering (HPAR HMAC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patlolla, Dilip R; Surendran Nair, Sujithkumar; Graves, Daniel A.

    For many applications, clustering is a crucial step in order to gain insight into the makeup of a dataset. The best approach to a given problem often depends on a variety of factors, such as the size of the dataset, time restrictions, and soft clustering requirements. The HMAC algorithm seeks to combine the strengths of two particular clustering approaches: model-based and linkage-based clustering. One particular weakness of HMAC is its computational complexity. HMAC is not practical for mega-scale data clustering. For high-definition imagery, a user would have to wait months or years for a result; for a 16-megapixel image, the estimated runtime skyrockets to over a decade! To improve the execution time of HMAC, it is reasonable to consider a multi-core implementation that utilizes available system resources. An existing implementation (Ray and Cheng 2014) divides the dataset into N partitions, one for each thread, prior to executing the HMAC algorithm. This implementation benefits from two types of optimization: parallelization and divide-and-conquer. By running each partition in parallel, the program is able to accelerate computation by utilizing more system resources. Although the parallel implementation provides considerable improvement over the serial HMAC, it still suffers from poor computational complexity, O(N^2). Once the maximum number of cores on a system is exhausted, the program exhibits slower behavior. We now consider a modification to HMAC that involves a recursive partitioning scheme. Our modification aims to exploit the divide-and-conquer benefits seen in the parallel HMAC implementation. At each level in the recursion tree, partitions are divided into two sub-partitions until a threshold size is reached. When a partition can no longer be divided without falling below the threshold size, the base HMAC algorithm is applied. This results in a significant speedup over the parallel HMAC.
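
    The recursive partitioning scheme can be sketched as follows: split until a threshold is reached, run the base step on each leaf, and process leaves in parallel. The threshold, the stand-in base step and the data are illustrative, not the HPAR HMAC code.

      # Sketch of recursive partitioning with a parallel base step (illustrative only).
      from concurrent.futures import ProcessPoolExecutor

      THRESHOLD = 1000          # max points handled by the base (quadratic) algorithm

      def base_hmac(points):
          # Stand-in for the O(n^2) HMAC step; here it just returns one "cluster".
          return [points]

      def partition(points):
          if len(points) <= THRESHOLD:
              return [points]
          mid = len(points) // 2
          return partition(points[:mid]) + partition(points[mid:])

      def cluster(points, workers=4):
          leaves = partition(points)
          with ProcessPoolExecutor(max_workers=workers) as pool:
              results = pool.map(base_hmac, leaves)      # leaves run concurrently
          return [c for leaf in results for c in leaf]   # merge leaf-level clusters

      if __name__ == "__main__":
          data = list(range(10_000))                     # placeholder for image pixels
          print(len(cluster(data)), "leaf clusters")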

  18. Incorporating time and spatial-temporal reasoning into situation management

    NASA Astrophysics Data System (ADS)

    Jakobson, Gabriel

    2010-04-01

    Spatio-temporal reasoning plays a significant role in situation management performed by intelligent agents (human or machine), affecting how situations are recognized, interpreted, acted upon or predicted. Many definitions and formalisms for the notion of spatio-temporal reasoning have emerged in various research fields including psychology, economics and computer science (computational linguistics, data management, control theory, artificial intelligence and others). In this paper we examine the role of spatio-temporal reasoning in situation management, particularly how to resolve situations that are described using spatio-temporal relations among events and situations. We discuss a model for describing context-sensitive temporal relations and show how the model can be extended to spatial relations.
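
    A minimal sketch of qualitative temporal relations between two events or situations, in the spirit of the relations discussed above, is given below; the relation names and example intervals are illustrative.

      # Minimal sketch of a few Allen-style temporal relations between intervals.

      def temporal_relation(a_start, a_end, b_start, b_end):
          """Classify a few qualitative relations between intervals A and B."""
          if a_end < b_start:
              return "A before B"
          if a_end == b_start:
              return "A meets B"
          if a_start >= b_start and a_end <= b_end:
              return "A during-or-equals B"
          if a_start < b_end and b_start < a_end:
              return "A overlaps B"
          return "other"

      # Alarm burst A happens while maintenance window B is open.
      print(temporal_relation(10, 20, 5, 30))   # A during-or-equals B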

  19. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  20. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  1. The Relative Effectiveness of Computer-Based and Traditional Resources for Education in Anatomy

    ERIC Educational Resources Information Center

    Khot, Zaid; Quinlan, Kaitlyn; Norman, Geoffrey R.; Wainman, Bruce

    2013-01-01

    There is increasing use of computer-based resources to teach anatomy, although no study has compared computer-based learning to traditional. In this study, we examine the effectiveness of three formats of anatomy learning: (1) a virtual reality (VR) computer-based module, (2) a static computer-based module providing Key Views (KV), (3) a plastic…

  2. Outside CT imaging among emergency department transfer patients.

    PubMed

    Sung, Jeffrey C; Sodickson, Aaron; Ledbetter, Stephen

    2009-09-01

    The aim of this study was to characterize the quantity and types of outside computed tomographic (CT) examinations submitted for reinterpretation among emergency department (ED) transfers to a tertiary care, level I trauma, academic medical center and the frequency of and reasons for repeat imaging. Reinterpretation requests for outside CT studies accompanying ED transfer patients over a 4-month period were prospectively audited. Clinicians completed forms specifying type of CT study, outside report availability, interpretational discrepancies, repeat imaging requests, and reasons for repeat imaging. A total of 425 CT studies were reviewed among 255 transfer patients, with a mean of 2.8 examinations (range, 0-16) on 1.7 patients (range, 0-8) per day. The patients' mean age was 59 years, and 57% were male. The clinicians reported no outside verbal or written reports for 16% of patients. Interpretational discrepancies were noted in 12% of those with outside reports. Repeat scans might have been avoided in as many as 25% of rescanned patients (35% of repeat examinations) because they were performed solely for imaging or information technology reasons (inadequate imaging, compact disc inoperability, or unavailable images within the hospital's picture archiving and communication system). Rescanned trauma patients in particular had a high per patient rate (32%) of potentially avoidable reasons, with a lower rate (11%) in nontrauma patients. Outside CT imaging in ED transfers adds workload and resource requirements for receiving institutions. A communication gap exists between transferring and receiving institutions, and interpretational discrepancies are common. Process improvement measures are suggested that might reduce the substantial rates of potentially avoidable reimaging.

  3. Infrastructures for Distributed Computing: the case of BESIII

    NASA Astrophysics Data System (ADS)

    Pellegrino, J.

    2018-05-01

    BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating Tau-Charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud or volunteer computing. About 15 sites from the BESIII Collaboration all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources from commercial clouds, the computing capacity could scale accordingly in order to deal with any burst demands. General computing models have been discussed in the talk and are addressed here, with particular focus on the BESIII infrastructure. Moreover, new computing tools and upcoming infrastructures are also addressed.
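
    As an illustration of the job-submission workflow on a DIRAC-based infrastructure such as the one described above, the following minimal sketch uses DIRAC's documented Python Job API; the executable, arguments, job name and CPU-time request are placeholder assumptions, and method names can vary slightly between DIRAC releases.

      # Minimal DIRAC job-submission sketch (assumes a configured DIRAC client and a valid proxy;
      # the payload script and job name are illustrative placeholders).
      from DIRAC.Core.Base import Script
      Script.parseCommandLine(ignoreErrors=True)      # initialise the DIRAC client environment

      from DIRAC.Interfaces.API.Job import Job
      from DIRAC.Interfaces.API.Dirac import Dirac

      job = Job()
      job.setName("bes3_mc_example")                  # hypothetical job name
      job.setExecutable("simulate.sh", arguments="run_0001.cfg")   # placeholder payload
      job.setCPUTime(86400)                           # requested CPU time in seconds

      dirac = Dirac()
      result = dirac.submitJob(job)                   # "submit" in older DIRAC releases
      print(result)                                   # S_OK/S_ERROR style dictionary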

  4. Now and Next-Generation Sequencing Techniques: Future of Sequence Analysis Using Cloud Computing

    PubMed Central

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed “cloud computing”) has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows. PMID:23248640
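
    As a concrete, hedged illustration of the pay-as-you-go model sketched above, the snippet below rents a single virtual machine on Amazon EC2 with boto3, passes a placeholder analysis script as user data, and releases the machine afterwards; the AMI identifier, instance type and script are assumptions, not values taken from the text.

      # Sketch: rent a VM, run a placeholder analysis step, release the resources.
      # Assumes AWS credentials are configured; AMI and instance type are illustrative.
      import boto3

      ec2 = boto3.resource("ec2", region_name="us-east-1")

      user_data = """#!/bin/bash
      # placeholder step; a real workflow would fetch sequences and run aligners here
      echo "running sequence analysis" > /tmp/analysis.log
      """

      instances = ec2.create_instances(
          ImageId="ami-0123456789abcdef0",     # hypothetical AMI
          InstanceType="t3.large",
          MinCount=1,
          MaxCount=1,
          UserData=user_data,
      )
      instance = instances[0]
      instance.wait_until_running()
      print("analysis VM started:", instance.id)

      # ...retrieve results (e.g. from object storage) before deleting the environment...
      instance.terminate()                     # pay only for the time actually used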

  5. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE PAGES

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; ...

    2017-10-01

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources and new information sources, and should allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  6. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources and new information sources, and should allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  7. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    NASA Astrophysics Data System (ADS)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea

    2017-10-01

    The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources and new information sources, and should allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.
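
    To make the service-discovery idea concrete, the hedged sketch below queries a REST endpoint of the kind CRIC exposes and filters compute resources by type; the URL, query parameters and JSON field names are assumptions for illustration and do not reproduce the actual CRIC API.

      # Hypothetical service-discovery query against a CRIC-like REST catalogue.
      # Endpoint path and JSON schema are assumed for illustration only.
      import requests

      BASE_URL = "https://cric.example.org/api/core/computeunits/query/"   # placeholder URL

      def discover_compute_resources(resource_type="CE"):
          """Return descriptions of compute resources of the given (assumed) type."""
          response = requests.get(BASE_URL, params={"json": "", "type": resource_type}, timeout=30)
          response.raise_for_status()
          catalogue = response.json()     # assumed shape: {name: {"type": ..., "site": ...}, ...}
          return {name: info for name, info in catalogue.items()
                  if info.get("type") == resource_type}

      if __name__ == "__main__":
          for name, info in discover_compute_resources().items():
              print(name, "->", info.get("site"))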

  8. Classical multiparty computation using quantum resources

    NASA Astrophysics Data System (ADS)

    Clementi, Marco; Pappa, Anna; Eckstein, Andreas; Walmsley, Ian A.; Kashefi, Elham; Barz, Stefanie

    2017-12-01

    In this work, we demonstrate a way to perform classical multiparty computing among parties with limited computational resources. Our method harnesses quantum resources to increase the computational power of the individual parties. We show how a set of clients restricted to linear classical processing are able to jointly compute a nonlinear multivariable function that lies beyond their individual capabilities. The clients are only allowed to perform classical XOR gates and single-qubit gates on quantum states. We also examine the type of security that can be achieved in this limited setting. Finally, we provide a proof-of-concept implementation using photonic qubits that allows four clients to compute a specific example of a multiparty function, the pairwise AND.
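
    For readers unfamiliar with the target function, the short sketch below evaluates one common reading of the N-party "pairwise AND", namely the XOR over the ANDs of all input pairs, which is nonlinear and therefore out of reach for clients restricted to XOR gates alone; the exact function used in the experiment should be taken from the paper, so treat this as an assumed illustration.

      # Assumed reading of the multiparty "pairwise AND": XOR over all pairwise ANDs.
      # Each client i contributes one input bit; the function is nonlinear in the inputs.
      from itertools import combinations

      def pairwise_and(bits):
          """XOR of (x_i AND x_j) over all pairs i < j."""
          result = 0
          for a, b in combinations(bits, 2):
              result ^= (a & b)
          return result

      # Four clients, as in the proof-of-concept implementation.
      print(pairwise_and([1, 0, 1, 1]))    # -> 1 for this particular input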

  9. Extending the Fermi-LAT data processing pipeline to the grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmer, S.; Arrabito, L.; Glanzman, T.

    2015-05-12

    The Data Handling Pipeline ("Pipeline") has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT) which launched in June 2008. Since then it has been in use to completely automate the production of data quality monitoring quantities, reconstruction and routine analysis of all data received from the satellite and to deliver science products to the collaboration and the Fermi Science Support Center. Aside from the reconstruction of raw data from the satellite (Level 1), data reprocessing and various event-level analyses are also reasonably heavy loads on the pipeline and computing resources. These other loads, unlike Level 1, can run continuously for weeks or months at a time. Additionally, it receives heavy use in performing production Monte Carlo tasks.

  10. Modeling of Melt-Infiltrated SiC/SiC Composite Properties

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Bednarcyk, Brett A.; Arnold, Steven M.; Lang, Jerry

    2009-01-01

    The elastic properties of a two-dimensional five-harness melt-infiltrated silicon carbide fiber reinforced silicon carbide matrix (MI SiC/SiC) ceramic matrix composite (CMC) were predicted using several methods. Methods used in this analysis are multiscale laminate analysis, micromechanics-based woven composite analysis, a hybrid woven composite analysis, and two- and three-dimensional finite element analyses. The elastic properties predicted are in good agreement with each other as well as with the available measured data. However, the various methods differ from each other in three key areas: (1) the fidelity provided, (2) the effort required for input data preparation, and (3) the computational resources required. Results also indicate that the efficient methods are able to provide a reasonable estimate of local stress fields.
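
    As a hedged, back-of-the-envelope counterpart to the laminate and micromechanics analyses mentioned above, the sketch below computes the classical Voigt and Reuss bounds on an effective Young's modulus from assumed constituent properties; the numerical values are purely illustrative and are not taken from the MI SiC/SiC study.

      # Voigt (iso-strain) and Reuss (iso-stress) bounds on an effective modulus.
      # Constituent moduli and fibre volume fraction are illustrative assumptions.
      def voigt_modulus(E_f, E_m, V_f):
          """Upper bound: rule of mixtures, E = V_f*E_f + (1 - V_f)*E_m."""
          return V_f * E_f + (1.0 - V_f) * E_m

      def reuss_modulus(E_f, E_m, V_f):
          """Lower bound: inverse rule of mixtures, 1/E = V_f/E_f + (1 - V_f)/E_m."""
          return 1.0 / (V_f / E_f + (1.0 - V_f) / E_m)

      E_fibre, E_matrix, V_fibre = 270.0, 350.0, 0.35    # GPa, GPa, volume fraction (assumed)
      print("Voigt bound:", voigt_modulus(E_fibre, E_matrix, V_fibre), "GPa")
      print("Reuss bound:", reuss_modulus(E_fibre, E_matrix, V_fibre), "GPa")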

  11. Computer Network Resources for Physical Geography Instruction.

    ERIC Educational Resources Information Center

    Bishop, Michael P.; And Others

    1993-01-01

    Asserts that the use of computer networks provides an important and effective resource for geography instruction. Describes the use of the Internet network in physical geography instruction. Provides an example of the use of Internet resources in a climatology/meteorology course. (CFR)

  12. Self managing experiment resources

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Ubeda, M.; Tsaregorodtsev, A.; Romanovskiy, V.; Roiser, S.; Charpentier, P.; Graciani, R.

    2014-06-01

    Within this paper we present an autonomic Computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organizations' Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has led to the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the Computing systems of HEP experiments as well as the sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques to all available information sources and assesses the status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We will demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.
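
    The minimal sketch below illustrates the general pattern of such a status system: heterogeneous monitoring inputs are combined by simple policies into a per-resource status, which can then be propagated to the topology description. The policy names, states and thresholds are assumptions and do not reproduce the DIRAC Resource Status System code.

      # Illustrative resource-status aggregation; policies, states and thresholds are assumed.
      from dataclasses import dataclass
      from typing import Callable, Dict, List

      @dataclass
      class ResourceStatus:
          name: str
          status: str = "Active"        # assumed states: Active / Degraded / Banned

      def downtime_policy(info: Dict) -> str:
          return "Banned" if info.get("in_downtime") else "Active"

      def efficiency_policy(info: Dict) -> str:
          return "Degraded" if info.get("job_efficiency", 1.0) < 0.5 else "Active"

      def assess(resource: ResourceStatus, info: Dict,
                 policies: List[Callable[[Dict], str]]) -> ResourceStatus:
          """Combine policy verdicts; the most severe verdict wins (assumed ordering)."""
          severity = {"Active": 0, "Degraded": 1, "Banned": 2}
          verdicts = [policy(info) for policy in policies]
          resource.status = max(verdicts, key=lambda v: severity[v])
          return resource

      site = ResourceStatus("LCG.Example.org")
      monitoring = {"in_downtime": False, "job_efficiency": 0.42}   # mock monitoring input
      print(assess(site, monitoring, [downtime_policy, efficiency_policy]).status)   # Degraded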

  13. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project significantly enhance the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.

  14. Construction and application of Red5 cluster based on OpenStack

    NASA Astrophysics Data System (ADS)

    Wang, Jiaqing; Song, Jianxin

    2017-08-01

    With the application and development of cloud computing technology in various fields, the resource utilization rate of data centers has improved markedly, and systems based on cloud computing platforms have also improved in scalability and stability. In the traditional approach, Red5 cluster resource utilization is low and system stability is poor. This paper uses the efficient resource allocation capability of cloud computing to build a Red5 server cluster based on OpenStack. Multimedia applications can be published to the Red5 cloud server cluster. The system achieves flexible provisioning of computing resources and also greatly improves the stability of the cluster and the efficiency of the service.

  15. Representational and Executive Selection Resources in "Theory of Mind": Evidence from Compromised Belief-Desire Reasoning in Old Age

    ERIC Educational Resources Information Center

    German, Tim P.; Hehman, Jessica A.

    2006-01-01

    Effective belief-desire reasoning requires both specialized representational capacities--the capacity to represent the mental states as such--as well as executive selection processes for accurate performance on tasks requiring the prediction and explanation of the actions of social agents. Compromised belief-desire reasoning in a given population…

  16. Recognition as Support for Reasoning about Horizontal Motion: A Further Resource for School Science?

    ERIC Educational Resources Information Center

    Howe, Christine; Taylor Tavares, Joana; Devine, Amy

    2016-01-01

    Background: Even infants can recognize whether patterns of motion are or are not natural, yet an acknowledged challenge for science education is to promote adequate reasoning about such patterns. Since research indicates linkage between the conceptual bases of recognition and reasoning, it seems possible that recognition can be engaged to support…

  17. Identifying Student Resources in Reasoning about Entropy and the Approach to Thermal Equilibrium

    ERIC Educational Resources Information Center

    Loverude, Michael

    2015-01-01

    As part of an ongoing project to examine student learning in upper-division courses in thermal and statistical physics, we have examined student reasoning about entropy and the second law of thermodynamics. We have examined reasoning in terms of heat transfer, entropy maximization, and statistical treatments of multiplicity and probability. In…

  18. Using Personal Computers To Acquire Special Education Information. Revised. ERIC Digest #429.

    ERIC Educational Resources Information Center

    ERIC Clearinghouse on Handicapped and Gifted Children, Reston, VA.

    This digest offers basic information about resources, available to users of personal computers, in the area of professional development in special education. Two types of resources are described: those that can be purchased on computer diskettes and those made available by linking personal computers through electronic telephone networks. Resources…

  19. If You're a Rawlsian, How Come You're So Close to Utilitarianism and Intuitionism? A Critique of Daniels's Accountability for Reasonableness.

    PubMed

    Badano, Gabriele

    2018-03-01

    Norman Daniels's theory of 'accountability for reasonableness' is an influential conception of fairness in healthcare resource allocation. Although it is widely thought that this theory provides a consistent extension of John Rawls's general conception of justice, this paper shows that accountability for reasonableness has important points of contact with both utilitarianism and intuitionism, the main targets of Rawls's argument. My aim is to demonstrate that its overlap with utilitarianism and intuitionism leaves accountability for reasonableness open to damaging critiques. The important role that utilitarian-like cost-effectiveness calculations are allowed to play in resource allocation processes disregards the separateness of persons and is seriously unfair towards individuals whose interests are sacrificed for the sake of groups. Furthermore, the function played by intuitions in settling frequent value conflicts opens the door for sheer custom and vested interests to steer decision-making.

  20. Concepts and Relations in Neurally Inspired In Situ Concept-Based Computing

    PubMed Central

    van der Velde, Frank

    2016-01-01

    In situ concept-based computing is based on the notion that conceptual representations in the human brain are “in situ.” In this way, they are grounded in perception and action. Examples are neuronal assemblies, whose connection structures develop over time and are distributed over different brain areas. In situ concepts representations cannot be copied or duplicated because that will disrupt their connection structure, and thus the meaning of these concepts. Higher-level cognitive processes, as found in language and reasoning, can be performed with in situ concepts by embedding them in specialized neurally inspired “blackboards.” The interactions between the in situ concepts and the blackboards form the basis for in situ concept computing architectures. In these architectures, memory (concepts) and processing are interwoven, in contrast with the separation between memory and processing found in Von Neumann architectures. Because the further development of Von Neumann computing (more, faster, yet power limited) is questionable, in situ concept computing might be an alternative for concept-based computing. In situ concept computing will be illustrated with a recently developed BABI reasoning task. Neurorobotics can play an important role in the development of in situ concept computing because of the development of in situ concept representations derived in scenarios as needed for reasoning tasks. Neurorobotics would also benefit from power limited and in situ concept computing. PMID:27242504

  1. Concepts and Relations in Neurally Inspired In Situ Concept-Based Computing.

    PubMed

    van der Velde, Frank

    2016-01-01

    In situ concept-based computing is based on the notion that conceptual representations in the human brain are "in situ." In this way, they are grounded in perception and action. Examples are neuronal assemblies, whose connection structures develop over time and are distributed over different brain areas. In situ concepts representations cannot be copied or duplicated because that will disrupt their connection structure, and thus the meaning of these concepts. Higher-level cognitive processes, as found in language and reasoning, can be performed with in situ concepts by embedding them in specialized neurally inspired "blackboards." The interactions between the in situ concepts and the blackboards form the basis for in situ concept computing architectures. In these architectures, memory (concepts) and processing are interwoven, in contrast with the separation between memory and processing found in Von Neumann architectures. Because the further development of Von Neumann computing (more, faster, yet power limited) is questionable, in situ concept computing might be an alternative for concept-based computing. In situ concept computing will be illustrated with a recently developed BABI reasoning task. Neurorobotics can play an important role in the development of in situ concept computing because of the development of in situ concept representations derived in scenarios as needed for reasoning tasks. Neurorobotics would also benefit from power limited and in situ concept computing.

  2. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    NASA Astrophysics Data System (ADS)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States; based on Internet data centers, it provides a standard and open approach to shared network services. With the rapid development of higher education in China, the educational resources provided by colleges and universities have fallen far short of actual teaching needs. Cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital education in current higher education. Based on the cloud computing environment, the paper analyzes the existing problems in the sharing of digital educational resources among the independent colleges of Jiangxi Province. Drawing on the mass storage, efficient operation and low cost that characterize cloud computing, the author explores the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the sharing model is put into practical application.

  3. Computationally efficient characterization of potential energy surfaces based on fingerprint distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch

    2016-07-21

    An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether it is worthwhile to invest computational resources for an exact computation of the transition states and the reaction pathways. Furthermore, it is demonstrated that the method presented here can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.
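
    A minimal sketch of the kind of approximate minima network described above follows: minima are nodes, pairs whose fingerprint distance falls below a cutoff are connected, and the edge weight uses the stated heuristic that uphill barriers tend to grow with structural distance. The distance metric, cutoff and barrier proxy are assumptions for illustration.

      # Approximate minima network: connect minima with small fingerprint distance and use
      # that distance as a rough proxy for the interconversion barrier (assumed heuristic).
      import itertools
      import networkx as nx
      import numpy as np

      def fingerprint_distance(fp_a, fp_b):
          """Euclidean distance between two configuration fingerprints (placeholder metric)."""
          return float(np.linalg.norm(np.asarray(fp_a) - np.asarray(fp_b)))

      def build_minima_network(minima, cutoff=0.8):
          """minima: dict name -> (energy, fingerprint); cutoff is an assumed threshold."""
          graph = nx.Graph()
          for name, (energy, _) in minima.items():
              graph.add_node(name, energy=energy)
          for (a, (_, fa)), (b, (_, fb)) in itertools.combinations(minima.items(), 2):
              d = fingerprint_distance(fa, fb)
              if d < cutoff:
                  graph.add_edge(a, b, barrier_proxy=d)   # larger distance ~ higher uphill barrier
          return graph

      minima = {"A": (-1.20, [0.1, 0.4]), "B": (-1.05, [0.2, 0.5]), "C": (-0.90, [1.5, 1.9])}
      print(build_minima_network(minima).edges(data=True))   # only A-B are close enough here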

  4. The 4th R: Reasoning.

    ERIC Educational Resources Information Center

    Miles, Curtis

    1983-01-01

    Reviews sources of information on materials for teaching reasoning with a microcomputer. Suggests microcomputer magazines, catalogs of commercial materials, CONDUIT (a nonprofit organization devoted to educational computer use), and local microcomputer users groups. Lists Apple II software for strategy games with reasoning applications. (DMM)

  5. Sexual Resourcefulness and the Impact of Family, Sex Education, Media and Peers

    ERIC Educational Resources Information Center

    Kennett, Deborah J.; Humphreys, Terry P.; Schultz, Kristen E.

    2012-01-01

    Building on a recently developed theoretical model of sexual self-control, 178 undergraduate women completed measures of learned resourcefulness, reasons for consenting to unwanted advances, and sexual self-efficacy--variables consistently shown to be unique predictors of sexual resourcefulness. Additional measures assessed in this investigation…

  6. Linking Career Development and Human Resource Planning.

    ERIC Educational Resources Information Center

    Gutteridge, Thomas G.

    When organizations integrate their career development and human resources planning activities into a comprehensive whole, it is the exception rather than the rule. One reason for the frequent dichotomy between career development and human resource planning is the failure to recognize that they are complements rather than synonyms or substitutes.…

  7. Influences on Employee Perceptions of Organizational Work-Life Support: Signals and Resources

    ERIC Educational Resources Information Center

    Valcour, Monique; Ollier-Malaterre, Ariane; Matz-Costa, Christina; Pitt-Catsouphes, Marcie; Brown, Melissa

    2011-01-01

    This study examined predictors of employee perceptions of organizational work-life support. Using organizational support theory and conservation of resources theory, we reasoned that workplace demands and resources shape employees' perceptions of work-life support through two mechanisms: signaling that the organization cares about their work-life…

  8. University Student Conceptual Resources for Understanding Energy

    ERIC Educational Resources Information Center

    Sabo, Hannah C.; Goodhew, Lisa M.; Robertson, Amy D.

    2016-01-01

    We report some of the common, prevalent conceptual resources that students used to reason about energy, based on our analysis of written responses to questions given to 807 introductory physics students. These resources include, for example, associating forms of energy with indicators, relating forces and energy, and representing energy…

  9. 77 FR 13571 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-07

    ... Army Human Resource Command, have met the approved NARA retention schedule; therefore the notice can be... Army Human Resource Command, records have met the approved NARA retention schedule and are no longer... (January 6, 2004, 69 FR 790). Reason: The program has been discontinued at Army Human Resource Command...

  10. Why Don't All Professors Use Computers?

    ERIC Educational Resources Information Center

    Drew, David Eli

    1989-01-01

    Discusses the adoption of computer technology at universities and examines reasons why some professors don't use computers. Topics discussed include computer applications, including artificial intelligence, social science research, statistical analysis, and cooperative research; appropriateness of the technology for the task; the Computer Aptitude…

  11. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    NASA Astrophysics Data System (ADS)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. In recent HEP experiments, grid middleware has been used to organize the services and the resources; however, it relies heavily on X.509 authentication, which is at odds with the untrusted nature of volunteer computing resources. One big challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, which is commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge to this paradox, as its pilot is more closely coupled with operations requiring X.509 authentication than the pilot implementations of its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach that detaches the payload from the Belle II DIRAC pilot (a customized pilot that pulls and processes jobs from the Belle II distributed computing platform), so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service running on a trusted server which handles all the operations requiring X.509 authentication. So far, we have developed and deployed the prototype of BelleII@home and tested its full workflow, which proves the feasibility of this approach. The approach can also be applied to HPC systems whose worker nodes do not have outbound connectivity to interact with the DIRAC system.

  12. Children Can Solve Bayesian Problems: The Role of Representation in Mental Computation

    ERIC Educational Resources Information Center

    Zhu, Liqi; Gigerenzer, Gerd

    2006-01-01

    Can children reason the Bayesian way? We argue that the answer to this question depends on how numbers are represented, because a representation can do part of the computation. We test, for the first time, whether Bayesian reasoning can be elicited in children by means of natural frequencies. We show that when information was presented to fourth,…
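
    To make the representational point concrete, the hedged sketch below solves one generic Bayesian textbook problem twice, once with conditional probabilities and once with natural frequencies, showing how the frequency format reduces the computation to simple counting; the numbers are an illustrative example, not stimuli from the study.

      # The same Bayesian problem in two representations (illustrative numbers).
      # Probability format: base rate 1%, hit rate 80%, false-alarm rate 10%.
      p_d, p_pos_given_d, p_pos_given_not_d = 0.01, 0.80, 0.10
      posterior = (p_pos_given_d * p_d) / (p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d))
      print(f"Bayes' rule with probabilities:     {posterior:.3f}")

      # Natural-frequency format: think of 1000 people instead.
      population = 1000
      sick = round(population * p_d)                                     # 10 people have the condition
      sick_positive = round(sick * p_pos_given_d)                        # 8 of them test positive
      healthy_positive = round((population - sick) * p_pos_given_not_d)  # 99 false positives
      print(f"Counting with natural frequencies:  {sick_positive / (sick_positive + healthy_positive):.3f}")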

  13. The Application of Multiobjective Evolutionary Algorithms to an Educational Computational Model of Science Information Processing: A Computational Experiment in Science Education

    ERIC Educational Resources Information Center

    Lamb, Richard L.; Firestone, Jonah B.

    2017-01-01

    Conflicting explanations and unrelated information in science classrooms increase cognitive load and decrease efficiency in learning. This reduced efficiency ultimately limits one's ability to solve reasoning problems in the science. In reasoning, it is the ability of students to sift through and identify critical pieces of information that is of…

  14. The Effects of Computer Programming on High School Students' Reasoning Skills and Mathematical Self-Efficacy and Problem Solving

    ERIC Educational Resources Information Center

    Psycharis, Sarantos; Kallia, Maria

    2017-01-01

    In this paper we investigate whether computer programming has an impact on high school student's reasoning skills, problem solving and self-efficacy in Mathematics. The quasi-experimental design was adopted to implement the study. The sample of the research comprised 66 high school students separated into two groups, the experimental and the…

  15. Reasoning Abilities in Primary School: A Pilot Study on Poor Achievers vs. Normal Achievers in Computer Game Tasks

    ERIC Educational Resources Information Center

    Dagnino, Francesca Maria; Ballauri, Margherita; Benigno, Vincenza; Caponetto, Ilaria; Pesenti, Elia

    2013-01-01

    This paper presents the results of preliminary research on the assessment of reasoning abilities in primary school poor achievers vs. normal achievers using computer game tasks. Subjects were evaluated by means of cognitive assessment on logical abilities and academic skills. The aim of this study is to better understand the relationship between…

  16. Dynamic VM Provisioning for TORQUE in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.

    2014-06-01

    Cloud computing, also known as an Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
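
    The sketch below illustrates the general shape of such dynamic provisioning: it inspects the TORQUE queue and boots an OpenStack instance when jobs are waiting. The queue-parsing heuristic, image and flavor names, and the cap on workers are assumptions and do not reproduce the actual tools developed by the authors.

      # Illustrative dynamic-provisioning loop (assumed heuristics, not the authors' tools).
      # Requires the TORQUE `qstat` and OpenStack `openstack` command-line clients.
      import subprocess
      import time

      def queued_job_count():
          """Count queued jobs by crudely parsing `qstat` output (assumed format)."""
          out = subprocess.run(["qstat"], capture_output=True, text=True, check=False).stdout
          return sum(1 for line in out.splitlines() if " Q " in f" {line} ")

      def boot_worker(index):
          """Boot one cloud worker node; image and flavor names are placeholders."""
          subprocess.run(["openstack", "server", "create",
                          "--image", "torque-worker-image",
                          "--flavor", "m1.large",
                          f"dynamic-worker-{index}"], check=True)

      if __name__ == "__main__":
          booted = 0
          while True:
              if queued_job_count() > 0 and booted < 10:   # assumed cap on cloud workers
                  boot_worker(booted)
                  booted += 1
              time.sleep(60)                               # re-evaluate every minute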

  17. To Feel or Not to Feel When My Group Harms Others? The Regulation of Collective Guilt as Motivated Reasoning.

    PubMed

    Sharvit, Keren; Brambilla, Marco; Babush, Maxim; Colucci, Francesco Paolo

    2015-09-01

    Four studies tested the proposition that regulation of collective guilt in the face of harmful ingroup behavior involves motivated reasoning. Cognitive energetics theory suggests that motivated reasoning is a function of goal importance, mental resource availability, and task demands. Accordingly, three studies conducted in the United States and Israel demonstrated that high importance of avoiding collective guilt, represented by group identification (Studies 1 and 3) and conservative ideological orientation (Study 2), is negatively related to collective guilt, but only when mental resources are not depleted by cognitive load. The fourth study, conducted in Italy, demonstrated that when justifications for the ingroup's harmful behavior are immediately available, the task of regulating collective guilt and shame becomes less demanding and less susceptible to resource depletion. By combining knowledge from the domains of motivated cognition, emotion regulation, and intergroup relations, these cross-cultural studies offer novel insights regarding factors underlying the regulation of collective guilt. © 2015 by the Society for Personality and Social Psychology, Inc.

  18. Pilots 2.0: DIRAC pilots for all the skies

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Tsaregorodtsev, A.; McNab, A.; Luzzi, C.

    2015-12-01

    In the last few years, new types of computing infrastructures, such as IAAS (Infrastructure as a Service) and IAAC (Infrastructure as a Client), gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most of these new infrastructures are based on virtualization techniques. Meanwhile, some concepts, such as distributed queues, lost appeal, while still supporting a vast amount of resources. Virtual Organizations are therefore facing heterogeneity of the available resources and the use of an Interware software like DIRAC to hide the diversity of underlying resources has become essential. The DIRAC WMS is based on the concept of pilot jobs that was introduced back in 2004. A pilot is what creates the possibility to run jobs on a worker node. Within DIRAC, we developed a new generation of pilot jobs, that we dubbed Pilots 2.0. Pilots 2.0 are not tied to a specific infrastructure; rather they are generic, fully configurable and extendible pilots. A Pilot 2.0 can be sent, as a script to be run, or it can be fetched from a remote location. A pilot 2.0 can run on every computing resource, e.g.: on CREAM Computing elements, on DIRAC Computing elements, on Virtual Machines as part of the contextualization script, or IAAC resources, provided that these machines are properly configured, hiding all the details of the Worker Nodes (WNs) infrastructure. Pilots 2.0 can be generated server and client side. Pilots 2.0 are the “pilots to fly in all the skies”, aiming at easy use of computing power, in whatever form it is presented. Another aim is the unification and simplification of the monitoring infrastructure for all kinds of computing resources, by using pilots as a network of distributed sensors coordinated by a central resource monitoring system. Pilots 2.0 have been developed using the command pattern. VOs using DIRAC can tune pilots 2.0 as they need, and extend or replace each and every pilot command in an easy way. In this paper we describe how Pilots 2.0 work with distributed and heterogeneous resources providing the necessary abstraction to deal with different kind of computing resources.
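
    Since the abstract highlights the command pattern, the following minimal sketch shows how a pilot built from replaceable commands might be structured; the class and command names are invented for illustration and are not the DIRAC pilot code.

      # Command-pattern sketch of an extensible pilot; names are illustrative only.
      class PilotCommand:
          """Base class: each pilot step is an independently replaceable command."""
          def execute(self, context: dict) -> None:
              raise NotImplementedError

      class CheckEnvironment(PilotCommand):
          def execute(self, context: dict) -> None:
              context["cpu_count"] = 4      # a real command would probe the worker node

      class ConfigureSite(PilotCommand):
          def execute(self, context: dict) -> None:
              context.setdefault("site", "ANY")

      class RunJobAgent(PilotCommand):
          def execute(self, context: dict) -> None:
              print(f"matching jobs for site {context['site']} on {context['cpu_count']} cores")

      def run_pilot(commands, context=None):
          """Execute the configured command list in order, sharing one context dict."""
          context = context if context is not None else {}
          for command in commands:
              command.execute(context)
          return context

      # A VO could reorder, extend, or replace any command in this list.
      run_pilot([CheckEnvironment(), ConfigureSite(), RunJobAgent()], {"site": "LCG.Example.org"})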

  19. An Overview of Cloud Computing in Distributed Systems

    NASA Astrophysics Data System (ADS)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. The cloud plays an important role in large organizations for maintaining huge amounts of data with limited resources. The cloud also helps in resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and some of the basic security issues pertaining to the cloud.

  20. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified declaration of storage protocols required for PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  1. Ethical evaluation of decision-making for distribution of health resources in China.

    PubMed

    Guo-Ping, Wang

    2007-06-01

    Since distribution of health resources involves various aspects of ethics, the evaluation of ethical problems should be emphasised in health decisions using criteria of fairness and fundamental principles of ethics correctly understood and chosen in order to solve the real conflicts evident in the distribution of health resources and to enable fair and reasonable distribution of health resources.

  2. The Impact of Reasons for Attending University on Academic Resourcefulness and Adjustment

    ERIC Educational Resources Information Center

    Kennett, Deborah J.; Reed, Maureen J.; Stuart, Amanda S.

    2013-01-01

    It is a well-known phenomenon that generally resourceful students are more likely to employ specific self-control skills, such as academic resourcefulness, to overcome stressors in their life, and as a result, are more likely to be better adjusted, to receive higher grades, and to remain in university than their less resourceful counterparts. To…

  3. Patient grouping for dose surveys and establishment of diagnostic reference levels in paediatric computed tomography.

    PubMed

    Vassileva, J; Rehani, M

    2015-07-01

    There has been confusion in the literature on whether paediatric patients should be grouped according to age, weight or other parameters when dealing with dose surveys. The present work aims to suggest a pragmatic approach to achieve reasonable accuracy for performing patient dose surveys in countries with limited resources. The analysis is based on a subset of data collected within the IAEA survey of paediatric computed tomography (CT) doses, involving 82 CT facilities from 32 countries in Asia, Europe, Africa and Latin America. Data for 6115 patients were collected, for 34.5 % of whom weight data were available. The present study suggests that using four age groups, <1, >1-5, >5-10 and >10-15 y, is realistic and pragmatic for dose surveys in less resourced countries and for the establishment of DRLs. To ensure relevant accuracy of results, data for >30 patients in a particular age group should be collected if patient weight is not known. If a smaller sample is used, patient weight should be recorded and the median weight in the sample should be within 5-10 % of the median weight of the sample for which the DRLs were established. Comparison of results from different surveys should always be performed with caution, taking into consideration the way paediatric patients are grouped. Dose results can be corrected for differences in patient weight/age group. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
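
    A small worked example of the grouping rule suggested above is sketched below: patients are binned into the four pragmatic age groups, and a median-weight check is applied to small samples. The helper names, tolerance and example data are assumptions for illustration.

      # Illustrative implementation of the suggested age grouping and small-sample weight check.
      from statistics import median

      def age_group(age_years):
          """Return the pragmatic age-group label used for the dose survey."""
          if age_years < 1:
              return "<1 y"
          if age_years <= 5:
              return ">1-5 y"
          if age_years <= 10:
              return ">5-10 y"
          if age_years <= 15:
              return ">10-15 y"
          return None                      # adult, outside the paediatric groups

      def weight_check(sample_weights, reference_median, tolerance=0.10):
          """For small samples (fewer than ~30 patients), the sample median weight should lie
          within roughly 5-10 % of the median weight used when the DRLs were established
          (a 10 % tolerance is assumed here)."""
          sample_median = median(sample_weights)
          return abs(sample_median - reference_median) / reference_median <= tolerance

      patients = [(0.5, 6.0), (3.0, 14.5), (7.0, 24.0), (12.0, 41.0)]   # (age y, weight kg), assumed
      for age, weight in patients:
          print(age, "->", age_group(age))
      print("small-sample weight check:", weight_check([14.0, 15.5, 13.8], reference_median=14.5))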

  4. The interaction of representation and reasoning

    PubMed Central

    Bundy, Alan

    2013-01-01

    Automated reasoning is an enabling technology for many applications of informatics. These applications include verifying that a computer program meets its specification; enabling a robot to form a plan to achieve a task and answering questions by combining information from diverse sources, e.g. on the Internet, etc. How is automated reasoning possible? Firstly, knowledge of a domain must be stored in a computer, usually in the form of logical formulae. This knowledge might, for instance, have been entered manually, retrieved from the Internet or perceived in the environment via sensors, such as cameras. Secondly, rules of inference are applied to old knowledge to derive new knowledge. Automated reasoning techniques have been adapted from logic, a branch of mathematics that was originally designed to formalize the reasoning of humans, especially mathematicians. My special interest is in the way that representation and reasoning interact. Successful reasoning is dependent on appropriate representation of both knowledge and successful methods of reasoning. Failures of reasoning can suggest changes of representation. This process of representational change can also be automated. We will illustrate the automation of representational change by drawing on recent work in my research group. PMID:24062623

  5. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    PubMed Central

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since the resources in these private clouds are limited, maximizing the utilization of resources and giving guaranteed service to the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like the V-MCT and priority scheduling algorithms. PMID:26473166

  6. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter.

    PubMed

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since the resources in these private clouds are limited, maximizing the utilization of resources and giving guaranteed service to the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like the V-MCT and priority scheduling algorithms.

  7. Parametrics on 2D Navier-Stokes analysis of a Mach 2.68 bifurcated rectangular mixed-compression inlet

    NASA Technical Reports Server (NTRS)

    Mizukami, M.; Saunders, J. D.

    1995-01-01

    The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a two-dimensional (2D) Navier-Stokes flow solver. Parametric studies were performed on turbulence models, computational grids and bleed models. The computed flowfield was substantially different from the original inviscid design, due to interactions of shocks, boundary layers, and bleed. Good agreement with experimental data was obtained in many aspects. Many of the discrepancies were thought to originate primarily from 3D effects. Therefore, a balance should be struck between expending resources on a high-fidelity 2D simulation and the inherent limitations of 2D analysis. The solutions were fairly insensitive to turbulence models, grids and bleed models. Overall, the k-ε turbulence model, and the bleed models based on unchoked bleed hole discharge coefficients or uniform velocity, are recommended. The 2D Navier-Stokes methods appear to be a useful tool for the design and analysis of supersonic inlets, providing a higher fidelity simulation of the inlet flowfield than inviscid methods in a reasonable turnaround time.

  8. A diagnosis system using object-oriented fault tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, F. A.

    1990-01-01

    Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in common LISP using Flavors.
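
    To illustrate the data structure involved, the sketch below models an object-oriented fault tree with AND/OR gates and propagates known event outcomes upward; the event names, gate layout and inference rules are simplified assumptions, not the diagnosis system described above.

      # Simplified object-oriented fault tree with upward propagation of known outcomes.
      # Gate layout and event names are assumed for illustration.
      class FaultEvent:
          def __init__(self, name, gate=None, children=None):
              self.name = name
              self.gate = gate              # "AND", "OR", or None for a basic event
              self.children = children or []
              self.observed = None          # True = failed, False = ruled out, None = unknown

          def evaluate(self):
              """Forward chaining: derive this event's status from its children where possible."""
              if self.observed is not None or not self.children:
                  return self.observed
              states = [child.evaluate() for child in self.children]
              if self.gate == "OR":
                  if any(s is True for s in states):
                      self.observed = True
                  elif all(s is False for s in states):
                      self.observed = False
              elif self.gate == "AND":
                  if all(s is True for s in states):
                      self.observed = True
                  elif any(s is False for s in states):
                      self.observed = False
              return self.observed

      power_loss = FaultEvent("power supply failure")
      cpu_fault = FaultEvent("processor fault")
      no_heartbeat = FaultEvent("no heartbeat", gate="OR", children=[power_loss, cpu_fault])

      power_loss.observed = False           # telemetry rules out the power supply
      cpu_fault.observed = True             # self-test reports a processor fault
      print(no_heartbeat.evaluate())        # -> True: the top event is explained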

  9. Kaiser Permanente/Sandia National health care model. Phase I prototype final report. Part 1 - model overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.; Yoshimura, A.; Butler, D.

    1996-11-01

    This report describes the results of a Cooperative Research and Development Agreement between Sandia National Laboratories and Kaiser Permanente Southern California to develop a prototype computer model of Kaiser Permanente's health care delivery system. As a discrete event simulation, SimHCO models for each of 100,000 patients the progression of disease, individual resource usage, and patient choices in a competitive environment. SimHCO is implemented in the object-oriented programming language C++, stressing reusable knowledge and reusable software components. The versioned implementation of SimHCO showed that the object-oriented framework allows the program to grow in complexity in an incremental way. Furthermore, timing calculations showed that SimHCO runs in a reasonable time on typical workstations, and that a second phase model will scale proportionally and run within the system constraints of contemporary computer technology. This report is published as two documents: Model Overview and Domain Analysis. A separate Kaiser-proprietary report contains the Disease and Health Care Organization Selection Models.

  10. Force Field Accelerated Density Functional Theory Molecular Dynamics for Simulation of Reactive Systems at Extreme Conditions

    NASA Astrophysics Data System (ADS)

    Lindsey, Rebecca; Goldman, Nir; Fried, Laurence

    2017-06-01

    Atomistic modeling of chemistry at extreme conditions remains a challenge, despite continuing advances in computing resources and simulation tools. While first-principles methods provide a powerful predictive tool, the time and length scales associated with chemistry at extreme conditions (ns and μm, respectively) largely preclude extension of such models to molecular dynamics. In this work, we develop a simulation approach that retains the accuracy of density functional theory (DFT) while decreasing computational effort by several orders of magnitude. We generate n-body descriptions for atomic interactions by mapping forces arising from short DFT trajectories onto simple Chebyshev polynomial series. We examine the importance of including greater than 2-body interactions and model transferability to different state points, and discuss approaches to ensure a smooth and reasonable model shape outside of the distance domain sampled by the DFT training set. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
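
    The core fitting step described above can be illustrated with a hedged one-dimensional sketch: pair distances are mapped onto [-1, 1] and reference forces are regressed onto a Chebyshev series with NumPy. The synthetic data, polynomial order and distance range are assumptions standing in for a real two-body term of such a model.

      # 1-D illustration of mapping forces onto a Chebyshev polynomial series.
      # Synthetic forces stand in for DFT training data; order and cutoffs are assumed.
      import numpy as np
      from numpy.polynomial import chebyshev as C

      rmin, rmax, order = 1.0, 4.0, 8            # assumed pair-distance range and polynomial order

      rng = np.random.default_rng(0)
      r = rng.uniform(rmin, rmax, 500)           # sampled pair distances
      forces = 24.0 * (2.0 / r**13 - 1.0 / r**7) + rng.normal(0.0, 0.05, r.size)   # mock pair forces

      # Map distances onto the Chebyshev domain [-1, 1], then fit the series to the forces.
      s = 2.0 * (r - rmin) / (rmax - rmin) - 1.0
      coeffs = C.chebfit(s, forces, deg=order)

      # Evaluate the fitted two-body force model at a few distances.
      r_grid = np.linspace(rmin, rmax, 5)
      s_grid = 2.0 * (r_grid - rmin) / (rmax - rmin) - 1.0
      print(np.round(C.chebval(s_grid, coeffs), 3))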

  11. 43 CFR 11.40 - What are type A procedures?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 11.40 Public Lands: Interior Office of the Secretary of the Interior NATURAL RESOURCE DAMAGE... marine environments incorporates a computer model called the Natural Resource Damage Assessment Model for... environments incorporates a computer model called the Natural Resource Damage Assessment Model for Great Lakes...

  12. 43 CFR 11.40 - What are type A procedures?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 11.40 Public Lands: Interior Office of the Secretary of the Interior NATURAL RESOURCE DAMAGE... marine environments incorporates a computer model called the Natural Resource Damage Assessment Model for... environments incorporates a computer model called the Natural Resource Damage Assessment Model for Great Lakes...

  13. Competent Reasoning with Rational Numbers.

    ERIC Educational Resources Information Center

    Smith, John P. III

    1995-01-01

    Analyzed students' reasoning with fractions. Found that skilled students applied strategies specifically tailored to restricted classes of fractions and produced reliable solutions with a minimum of computation effort. Results suggest that competent reasoning depends on a knowledge base that includes numerically specific and invented strategies,…

  14. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).

  15. Experience in Implementing Resource-Based Learning in Agrarian College of Management and Law Poltava State Agrarian Academy

    ERIC Educational Resources Information Center

    Kononets, Natalia

    2015-01-01

    The article focuses on the implementation of resource-based learning in the computer-cycle disciplines at the Agrarian College of Management and Law. It tests an approach to creating e-learning resources through free hosting and their further use in the classroom. Noted that the…

  16. EIAGRID: In-field optimization of seismic data acquisition by real-time subsurface imaging using a remote GRID computing environment.

    NASA Astrophysics Data System (ADS)

    Heilmann, B. Z.; Vallenilla Ferrara, A. M.

    2009-04-01

    The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets a few meters deep to targets at depths of several kilometers, it is primarily used by the hydrocarbon industry and hardly ever for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands considerable experience from the user. In-field processing equipment needed for real-time data quality control (QC) and immediate optimization of the acquisition parameters is often not available for this kind of study. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated. The EIAGRID portal provides an innovative solution to this problem, combining state-of-the-art data processing methods and modern remote grid computing technology. In-field processing equipment is replaced by remote access to high-performance grid computing facilities. The latter can be ubiquitously controlled by a user-friendly web-browser interface accessed from the field by any mobile computer using wireless data transmission technology such as UMTS (Universal Mobile Telecommunications System) or HSUPA/HSDPA (High-Speed Uplink/Downlink Packet Access). The complexity of data manipulation and processing, and thus also the time-demanding user interaction, is minimized by a data-driven and highly automated velocity analysis and imaging approach based on the Common-Reflection-Surface (CRS) stack. Furthermore, the huge computing power provided by the grid deployment allows parallel testing of alternative processing sequences and parameter settings, a feature which considerably reduces the turn-around times. A shared data storage using georeferencing tools and data grid technology is under current development. It will allow already accomplished projects to be published, making results, processing workflows and parameter settings available in a transparent and reproducible way. Creating a unified database shared by all users will facilitate complex studies and enable the use of data-crossing techniques to incorporate results of other environmental applications hosted on the GRIDA3 portal.

  17. ACToR - Aggregated Computational Toxicology Resource ...

    EPA Pesticide Factsheets

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  18. ACToR - Aggregated Computational Toxicology Resource (S) ...

    EPA Pesticide Factsheets

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  19. Visualizing complex processes using a cognitive-mapping tool to support the learning of clinical reasoning.

    PubMed

    Wu, Bian; Wang, Minhong; Grotzer, Tina A; Liu, Jun; Johnson, Janice M

    2016-08-22

    Practical experience with clinical cases has played an important role in supporting the learning of clinical reasoning. However, learning through practical experience involves complex processes that are difficult for students to capture. This study aimed to examine the effects of a computer-based cognitive-mapping approach that helps students to externalize the reasoning process and the knowledge underlying the reasoning process when they work with clinical cases. A comparison between the cognitive-mapping approach and the verbal-text approach was made by analyzing their effects on learning outcomes. Fifty-two third-year or higher students from two medical schools participated in the study. Students in the experimental group used the computer-based cognitive-mapping approach, while the control group used the verbal-text approach, to make sense of their thinking and actions when they worked with four simulated cases over 4 weeks. For each case, students in both groups reported their reasoning process (involving data capture, hypotheses formulation, and reasoning with justifications) and the underlying knowledge (involving identified concepts and the relationships between the concepts) using the given approach. The learning products (cognitive maps or verbal text) revealed that students in the cognitive-mapping group outperformed those in the verbal-text group in the reasoning process, but not in making sense of the knowledge underlying the reasoning process. No significant differences were found in a knowledge posttest between the two groups. The computer-based cognitive-mapping approach has shown a promising advantage over the verbal-text approach in improving students' reasoning performance. Further studies are needed to examine the effects of the cognitive-mapping approach in improving the construction of subject-matter knowledge on the basis of practical experience.

  20. Aviation & Space Education: A Teacher's Resource Guide.

    ERIC Educational Resources Information Center

    Texas State Dept. of Aviation, Austin.

    This resource guide contains information on curriculum guides, resources for teachers, computer software and computer related programs, audio/visual presentations, model aircraft and demonstration aids, training seminars and career education, and an aerospace bibliography for primary grades. Each entry includes all or some of the following items:…

  1. Campus Computing Environment: University of Kentucky.

    ERIC Educational Resources Information Center

    CAUSE/EFFECT, 1989

    1989-01-01

    A dramatic growth in computing and communications was precipitated largely by the leadership of President David Roselle at the University of Kentucky. A new operational structure of information resource management includes not only computing (academic and administrative) and communications, instructional resources, and printing/mailing services,…

  2. Teaching Computer Literacy with Freeware and Shareware.

    ERIC Educational Resources Information Center

    Hobart, R. Dale; And Others

    1988-01-01

    Describes workshops given at Ferris State University for faculty and staff who want to acquire computer skills. Considered are a computer literacy and a software toolkit distributed to participants made from public domain/shareware resources. Stresses the benefits of shareware as an educational resource. (CW)

  3. Challenges in Securing the Interface Between the Cloud and Pervasive Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagesse, Brent J

    2011-01-01

    Cloud computing presents an opportunity for pervasive systems to leverage computational and storage resources to accomplish tasks that would not normally be possible on such resource-constrained devices. Cloud computing can enable hardware designers to build lighter systems that last longer and are more mobile. Despite the advantages cloud computing offers to the designers of pervasive systems, there are some limitations of leveraging cloud computing that must be addressed. We take the position that cloud-based pervasive systems must be secured holistically and discuss ways this might be accomplished. In this paper, we discuss a pervasive system utilizing cloud computing resources and issues that must be addressed in such a system. In this system, the user's mobile device cannot always have network access to leverage resources from the cloud, so it must make intelligent decisions about what data should be stored locally and what processes should be run locally. As a result of these decisions, the user becomes vulnerable to attacks while interfacing with the pervasive system.

  4. Research prioritization through prediction of future impact on biomedical science: a position paper on inference-analytics.

    PubMed

    Ganapathiraju, Madhavi K; Orii, Naoki

    2013-08-30

    Advances in biotechnology have created "big-data" situations in molecular and cellular biology. Several sophisticated algorithms have been developed that process big data to generate hundreds of biomedical hypotheses (or predictions). The bottleneck to translating this large number of biological hypotheses is that each of them needs to be studied by experimentation for interpreting its functional significance. Even when the predictions are estimated to be very accurate, from a biologist's perspective, the choice of which of these predictions is to be studied further is made based on factors like availability of reagents and resources and the possibility of formulating some reasonable hypothesis about its biological relevance. When viewed from a global perspective, say from that of a federal funding agency, ideally the choice of which prediction should be studied would be made based on which of them can make the most translational impact. We propose that algorithms be developed to identify which of the computationally generated hypotheses have potential for high translational impact; this way, funding agencies and scientific community can invest resources and drive the research based on a global view of biomedical impact without being deterred by local view of feasibility. In short, data-analytic algorithms analyze big-data and generate hypotheses; in contrast, the proposed inference-analytic algorithms analyze these hypotheses and rank them by predicted biological impact. We demonstrate this through the development of an algorithm to predict biomedical impact of protein-protein interactions (PPIs) which is estimated by the number of future publications that cite the paper which originally reported the PPI. This position paper describes a new computational problem that is relevant in the era of big-data and discusses the challenges that exist in studying this problem, highlighting the need for the scientific community to engage in this line of research. The proposed class of algorithms, namely inference-analytic algorithms, is necessary to ensure that resources are invested in translating those computational outcomes that promise maximum biological impact. Application of this concept to predict biomedical impact of PPIs illustrates not only the concept, but also the challenges in designing these algorithms.

  5. Progress Towards an LES Wall Model Including Unresolved Roughness

    NASA Astrophysics Data System (ADS)

    Craft, Kyle; Redman, Andrew; Aikens, Kurt

    2015-11-01

    Wall models used in large eddy simulations (LES) are often based on theories for hydraulically smooth walls. While this is reasonable for many applications, there are also many where the impact of surface roughness is important. A previously developed wall model has been used primarily for jet engine aeroacoustics. However, jet simulations have not accurately captured thick initial shear layers found in some experimental data. This may partly be due to nozzle wall roughness used in the experiments to promote turbulent boundary layers. As a result, the wall model is extended to include the effects of unresolved wall roughness through appropriate alterations to the log-law. The methodology is tested for incompressible flat plate boundary layers with different surface roughness. Correct trends are noted for the impact of surface roughness on the velocity profile. However, velocity deficit profiles and the Reynolds stresses do not collapse as well as expected. Possible reasons for the discrepancies as well as future work will be presented. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  6. Some People Should Be Afraid of Computers.

    ERIC Educational Resources Information Center

    Rubin, Charles

    1983-01-01

    Discusses the "computerphobia" phenomenon, separating the valid reasons for some individuals' anxiety about computers from their irrational fears. Among the factors examined are fear of breaking the computer, use of unclear documentation, lack of time for learning how to use the computer, and lack of computer knowledge. (JN)

  7. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.

  8. Polyphony: A Workflow Orchestration Framework for Cloud Computing

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom

    2010-01-01

    Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We will conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
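
    The queue-based sketch below illustrates, in miniature, the resilience property described above: independent tasks are pulled from a shared queue and re-queued if a worker fails mid-task, so the overall computation completes despite failures. It uses local threads and a simulated image-processing payload; it is not Polyphony's implementation, which coordinates cloud, local, and supercomputing resources.

    ```python
    import random
    from queue import Queue
    from threading import Thread

    def process_image(image_id: int) -> str:
        # Hypothetical payload; occasionally "dies" to simulate a worker failure.
        if random.random() < 0.1:
            raise RuntimeError(f"worker died on image {image_id}")
        return f"image {image_id} processed"

    def worker(tasks: Queue, results: list) -> None:
        while True:
            image_id = tasks.get()
            if image_id is None:            # poison pill: shut down this worker
                tasks.task_done()
                return
            try:
                results.append(process_image(image_id))
            except RuntimeError:
                tasks.put(image_id)         # re-queue the failed task
            finally:
                tasks.task_done()

    tasks: Queue = Queue()
    results: list = []
    for i in range(20):
        tasks.put(i)
    threads = [Thread(target=worker, args=(tasks, results)) for _ in range(4)]
    for t in threads:
        t.start()
    tasks.join()                            # blocks until every task has succeeded
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()
    print(f"{len(results)} images processed")
    ```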

  9. Impact of Knowledge Resources Linked to an Electronic Health Record on Frequency of Unnecessary Tests and Treatments

    ERIC Educational Resources Information Center

    Goodman, Kenneth; Grad, Roland; Pluye, Pierre; Nowacki, Amy; Hickner, John

    2012-01-01

    Introduction: Electronic knowledge resources have the potential to rapidly provide answers to clinicians' questions. We sought to determine clinicians' reasons for searching these resources, the rate of finding relevant information, and the perceived clinical impact of the information they retrieved. Methods: We asked general internists, family…

  10. World Wide Web Resources for Teaching and Learning Economics. ERIC Digest.

    ERIC Educational Resources Information Center

    VanFossen, Phillip J.

    Technological resources abound for teachers of all subject areas, but for many reasons, such instructional technology seems to lend itself well to the social studies including economics. To help teachers efficiently use the latest economics resources available on the World Wide Web, this Digest identifies four sites that offer knowledge of…

  11. Rectifying Social Inequalities in a Resource Allocation Task

    PubMed Central

    Elenbaas, Laura; Rizzo, Michael T.; Cooley, Shelby; Killen, Melanie

    2016-01-01

    To investigate whether children rectify social inequalities in a resource allocation task, participants (N = 185 African-American and European-American 5–6 year-olds and 10–11 year-olds) witnessed an inequality of school supplies between peers of different racial backgrounds. Assessments were conducted on how children judged the wrongfulness of the inequality, allocated new resources to racial ingroup and outgroup recipients, evaluated alternative allocation strategies, and reasoned about their decisions. Younger children showed ingroup favorability; their responses differed depending on whether they had witnessed their ingroup or an outgroup at a disadvantage. With age, children increasingly reasoned about the importance of equal access to school supplies and correcting past disparities. Older children judged the resource inequality negatively, allocated more resources to the disadvantaged group, and positively evaluated the actions of others who did the same, regardless of whether they had seen their racial ingroup or an outgroup at a disadvantage. Thus, balancing moral and social group concerns enabled individuals to rectify inequalities and ensure fair access to important resources regardless of racial group membership. PMID:27423813

  12. Diversity in computing technologies and strategies for dynamic resource allocation

    DOE PAGES

    Garzoglio, G.; Gutsche, O.

    2015-12-23

    High Energy Physics (HEP) is a very data-intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer details, requiring ever increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP has provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers like commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.

  13. Interesting viewpoints to those who will put Ada into practice

    NASA Technical Reports Server (NTRS)

    Carlsson, Arne

    1986-01-01

    Ada will most probably be used as the programming language for computers in the NASA Space Station. It is reasonable to suppose that Ada will be used for at least embedded computers, because the high software costs for these embedded computers were the reason why Ada activities were initiated about ten years ago. The on-board computers are designed for use in space applications, where maintenance by man is impossible. All manipulation of such computers has to be performed in an autonomous way or remotely with commands from the ground. In a manned Space Station some maintenance work can be performed by service people on board, but there are still a lot of applications which require autonomous computers, for example, vital Space Station functions and unmanned orbital transfer vehicles. Those aspects which have come out of the analysis of Ada characteristics together with the experience of requirements for embedded on-board computers in space applications are examined.

  14. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  15. Computer-Game Construction: A Gender-Neutral Attractor to Computing Science

    ERIC Educational Resources Information Center

    Carbonaro, Mike; Szafron, Duane; Cutumisu, Maria; Schaeffer, Jonathan

    2010-01-01

    Enrollment in Computing Science university programs is at a dangerously low level. A major reason for this is the general lack of interest in Computing Science by females. In this paper, we discuss our experience with using a computer game construction environment as a vehicle to encourage female participation in Computing Science. Experiments…

  16. C++ Planning and Resource Reasoning (PARR) shell

    NASA Technical Reports Server (NTRS)

    Mcintyre, James; Tuchman, Alan; Mclean, David; Littlefield, Ronald

    1994-01-01

    This paper describes a generic, C++ version of the Planning and Resource Reasoning (PARR) shell which has been developed to supersede the C-based versions of PARR that are currently used to support AI planning and scheduling applications in flight operations centers at Goddard Space Flight Center. This new object-oriented version of PARR can be more easily customized to build a variety of planning and scheduling applications, and C++ PARR applications can be more easily ported to different environments. Generic classes, constraints, strategies, and paradigms are described along with two types of PARR interfaces.

  17. Computer Technology Resources for Literacy Projects.

    ERIC Educational Resources Information Center

    Florida State Council on Aging, Tallahassee.

    This resource booklet was prepared to assist literacy projects and community adult education programs in determining the technology they need to serve more older persons. Section 1 contains the following reprinted articles: "The Human Touch in the Computer Age: Seniors Learn Computer Skills from Schoolkids" (Suzanne Kashuba);…

  18. The Computer Explosion: Implications for Educational Equity. Resource Notebook.

    ERIC Educational Resources Information Center

    Denbo, Sheryl, Comp.

    This notebook was prepared to provide resources for educators interested in using computers to increase opportunities for all students. The notebook contains specially prepared materials and selected newspaper and journal articles. The first section reviews the issues related to computer equity (equal access, tracking through different…

  19. Development of Computer-Based Resources for Textile Education.

    ERIC Educational Resources Information Center

    Hopkins, Teresa; Thomas, Andrew; Bailey, Mike

    1998-01-01

    Describes the production of computer-based resources for students of textiles and engineering in the United Kingdom. Highlights include funding by the Teaching and Learning Technology Programme (TLTP), courseware author/subject expert interaction, usage test and evaluation, authoring software, graphics, computer-aided design simulation, self-test…

  20. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the best known global environmental problems. It has altered watershed hydrological processes in their distribution over time and space, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can produce better results than lumped models. However, such simulation involves a large amount of calculation, especially for large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. To solve this problem, current parallel methods mostly parallelize the computation in the space and time dimensions: based on the distributed hydrological model, they calculate the natural features in order, by grid (unit or sub-basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the runoff characteristics in time and space of the distributed hydrological model with methods adopting distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, which means it can make full use of the available computing and storage resources even when computing resources are limited, and its computing efficiency improves linearly with the increase of computing resources. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.

  1. Workflow Management Systems for Molecular Dynamics on Leadership Computers

    NASA Astrophysics Data System (ADS)

    Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu

    Molecular Dynamics (MD) simulations play an important role in a range of disciplines from Material Science to Biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.

  2. A new taxonomy for distributed computer systems based upon operating system structure

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1985-01-01

    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources, themselves, are too diversified to provide a consistent classification, the structure upon which resources are built and shared are examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers) and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.

  3. Discussing sexual and relationship health with young people in a children's hospital: evaluation of a computer-based resource.

    PubMed

    Bray, Lucy; Sanders, Caroline; McKenna, Jacqueline

    2013-12-01

    To investigate health professionals' evaluation of a computer-based resource designed to improve discussions about sexual and relationship health with young people. Evidence suggests that some health professionals can experience discomfort discussing sexual health and relationship issues with young people. Professionals within hospital settings should have the knowledge, competencies and skills to be able to ask young people sexual health questions and provide accurate sexual health education. Despite some educational material being available for community and adult services, there are no resources available, which are directly relevant to holding opportunistic discussions with young people within an acute children's hospital. A descriptive survey design. One hundred and fourteen health professionals from a children's hospital in the UK were involved in evaluating a computer-based resource. All completed an online questionnaire survey comprising of closed and open questions. The health professionals reported that the computer-based resource had a positive influence on their knowledge and clinical practice. The videos as well as the concise nature of the resource were evaluated highly. Learning was facilitated by professionals being able to control their learning through rerunning and accessing the resource on numerous occasions. An engaging, accessible computer-based resource has the capability to positively impact on health professionals' knowledge of, and skills in, starting and holding sexual health conversations with young people accessing a children's hospital. Health professionals working with children and young people value accessible, relevant and short computer-based training. This can facilitate knowledge and skill acquisition despite variation in working patterns. Improving the knowledge and skills of professionals working with young people to facilitate appropriate yet opportunistic sexual health discussions is important within the public health agenda. © 2013 John Wiley & Sons Ltd.

  4. Safety model assessment and two-lane urban crash model

    DOT National Transportation Integrated Search

    2008-10-01

    There are many reasons to be concerned with estimating the frequency and social costs of highway accidents, but most reasons are motivated by a desire to minimize these costs to the extent feasible. Competition for scarce resources is a practical nec...

  5. The Object Coordination Class Applied to Wave Pulses: Analyzing Student Reasoning in Wave Physics.

    ERIC Educational Resources Information Center

    Wittmann, Michael C.

    2002-01-01

    Analyzes student responses to interview and written questions on wave physics using diSessa and Sherin's coordination class model which suggests that student use of specific reasoning resources is guided by possibly unconscious cues. (Author/MM)

  6. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged and pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing.

  7. Learning with Computers. AECA Resource Book Series, Volume 3, Number 2.

    ERIC Educational Resources Information Center

    Elliott, Alison

    1996-01-01

    Research has supported the idea that the use of computers in the education of young children promotes social interaction and academic achievement. This resource booklet provides an introduction to computers in early childhood settings to enrich learning opportunities and provides guidance to teachers to find developmentally appropriate software…

  8. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    PubMed

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
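
    A small, self-contained example of the revenue-sharing idea: exact Shapley values computed over all orderings of a few resource providers. The coalition revenue function below is invented for illustration and is not the valuation model used in the paper.

    ```python
    from itertools import permutations

    providers = ["A", "B", "C"]

    def revenue(coalition: frozenset) -> float:
        # Invented example: revenue grows superlinearly once enough resources pool.
        pooled = {"A": 2.0, "B": 3.0, "C": 5.0}
        total = sum(pooled[p] for p in coalition)
        return total**1.2 if len(coalition) >= 2 else 0.0

    def shapley_values(players, value):
        """Average each player's marginal contribution over all join orders."""
        shares = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            coalition = frozenset()
            for p in order:
                with_p = coalition | {p}
                shares[p] += value(with_p) - value(coalition)
                coalition = with_p
        return {p: s / len(orders) for p, s in shares.items()}

    print(shapley_values(providers, revenue))
    ```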

  9. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing

    PubMed Central

    Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network. PMID:28030553

  10. Common sense reasoning about petroleum flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, S.

    1981-02-01

    This paper describes an expert system for understanding and reasoning in a petroleum resources domain. A basic model is implemented in FRL (Frame Representation Language). Expertise is encoded as rule frames. The model consists of a set of episodic contexts which are sequentially generated over time. Reasoning occurs in separate reasoning contexts consisting of a buffer frame and packets of rules; these function similarly to small production systems. Reasoning is linked to the model through an interface of sentinels (instance-driven demons) which notice anomalous conditions. Heuristics and metaknowledge are used through the creation of further reasoning contexts which overlay the simpler ones.

  11. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of a massive number of computers, so that application systems can obtain computing power, storage space and software services according to demand. It concentrates all the computing resources and manages them automatically through software, without intervention. This frees application providers from tedious details and lets them concentrate on their business, which is advantageous for innovation and reduces cost. The ultimate goal of cloud computing is to provide calculation, services and applications as a public facility, so that people can use computer resources just like water, electricity, gas and telephone. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes three main service forms of cloud computing: SaaS, PaaS and IaaS; compares the definitions of cloud computing given by Google, Amazon, IBM and other companies; summarizes the basic characteristics of cloud computing; and emphasizes key technologies such as data storage, data management, virtualization and the programming model.

  12. Hard-real-time resource management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Gat, E.

    2000-01-01

    This paper describes tickets, a computational mechanism for hard-real-time autonomous resource management. Autonomous spacecraft control can be considered abstractly as a computational process whose outputs are spacecraft commands.

  13. Synchronization of Finite State Shared Resources

    DTIC Science & Technology

    1976-03-01

    Synchronization of Finite State Shared Resources. Edward A. Schneider, Department of Computer Science. (AFOSR technical report; portions of the scanned source do not reproduce legibly.) Abstract: The problem of synchronizing a set of operations defined on a shared resource

  14. Radar Control Optimal Resource Allocation

    DTIC Science & Technology

    2015-07-13

    other tunable parameters of radars [17, 18]. Such radar resource scheduling usually demands massive computation. Even myopic ... reduced validity of the optimal choice of radar resource. In the non-myopic context, the computational problem becomes exponentially more difficult ... the optimal threshold t* is computed in closed form (Eq. (19) of the report; the expression involves α, σ, q and r). We are only interested in t* > 1 and, solving the inequality, we obtain the

  15. MOLECULAR GENETIC TOOLS FOR ASSESSING THE STATUS AND VULNERABILITY OF AQUATIC RESOURCES

    EPA Science Inventory

    Development of ecological indicators that efficiently capture the present condition and project future vulnerabilities of biological resources is critical to sound environmental management. For this reason, the ORD's Ecological Research Program is developing genetic methodologies...

  16. Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Fox, Mark; Tate, Austin; Zweben, Monte

    1992-01-01

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar and can be contrasted with those systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms, systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.

  17. Water resources planning for rivers draining into mobile bay

    NASA Technical Reports Server (NTRS)

    Ng, S.; April, G. C.

    1976-01-01

    A hydrodynamic model describing water movement and tidal elevation is formulated, computed, and used to provide basic data about water quality in natural systems. The hydrodynamic model is based on two-dimensional, unsteady flow equations. The water mass is considered to be reasonably mixed such that integration (averaging) in the depth direction is a valid restriction. Convective acceleration, the Coriolis force, wind and bottom interactions are included as contributing terms in the momentum equations. The solution of the equations is applied to Mobile Bay, and used to investigate the influence that river discharge rate, wind direction and speed, and tidal condition have on water circulation and holdup within the bay. Storm surge conditions, oil spill transport, artificial island construction, dredging, and areas subject to flooding are other topics which could be investigated using the mathematical modeling approach.

  18. Accessing Wind Tunnels From NASA's Information Power Grid

    NASA Technical Reports Server (NTRS)

    Becker, Jeff; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The NASA Ames wind tunnel customers are one of the first users of the Information Power Grid (IPG) storage system at the NASA Advanced Supercomputing Division. We wanted to be able to store their data on the IPG so that it could be accessed remotely in a secure but timely fashion. In addition, incorporation into the IPG allows future use of grid computational resources, e.g., for post-processing of data, or to do side-by-side CFD validation. In this paper, we describe the integration of grid data access mechanisms with the existing DARWIN web-based system that is used to access wind tunnel test data. We also show that the combined system has reasonable performance: wind tunnel data may be retrieved at 50 Mbits/s over a 100BaseT network connected to the IPG storage server.

  19. Performance Evaluation of Resource Management in Cloud Computing Environments.

    PubMed

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  20. Performance Evaluation of Resource Management in Cloud Computing Environments

    PubMed Central

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  1. Eurogrid: a new glideinWMS based portal for CDF data analysis

    NASA Astrophysics Data System (ADS)

    Amerio, S.; Benjamin, D.; Dost, J.; Compostella, G.; Lucchesi, D.; Sfiligoi, I.

    2012-12-01

    The CDF experiment at Fermilab ended its Run-II phase on September 2011 after 11 years of operations and 10 fb-1 of collected data. CDF computing model is based on a Central Analysis Farm (CAF) consisting of local computing and storage resources, supported by OSG and LCG resources accessed through dedicated portals. At the beginning of 2011 a new portal, Eurogrid, has been developed to effectively exploit computing and disk resources in Europe: a dedicated farm and storage area at the TIER-1 CNAF computing center in Italy, and additional LCG computing resources at different TIER-2 sites in Italy, Spain, Germany and France, are accessed through a common interface. The goal of this project is to develop a portal easy to integrate in the existing CDF computing model, completely transparent to the user and requiring a minimum amount of maintenance support by the CDF collaboration. In this paper we will review the implementation of this new portal, and its performance in the first months of usage. Eurogrid is based on the glideinWMS software, a glidein based Workload Management System (WMS) that works on top of Condor. As CDF CAF is based on Condor, the choice of the glideinWMS software was natural and the implementation seamless. Thanks to the pilot jobs, user-specific requirements and site resources are matched in a very efficient way, completely transparent to the users. Official since June 2011, Eurogrid effectively complements and supports CDF computing resources offering an optimal solution for the future in terms of required manpower for administration, support and development.

  2. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
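
    The sketch below shows the structure that makes such analyses easy to parallelize: each permutation iteration is independent, so iterations can be farmed out to cluster nodes or rented cloud instances. Here local processes stand in for either backend, and the two-group data are simulated rather than real microarray measurements.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    # Simulated two-group data (stand-in for per-gene expression values).
    rng = np.random.default_rng(42)
    group_a = rng.normal(0.0, 1.0, 50)
    group_b = rng.normal(0.3, 1.0, 50)
    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])

    def permuted_stat(seed: int) -> float:
        """One permutation iteration: shuffle labels, recompute the statistic."""
        local = np.random.default_rng(seed)
        shuffled = local.permutation(pooled)
        return shuffled[:50].mean() - shuffled[50:].mean()

    if __name__ == "__main__":
        n_perm = 10_000
        with ProcessPoolExecutor() as pool:
            null_stats = np.fromiter(pool.map(permuted_stat, range(n_perm)), float)
        p_value = np.mean(np.abs(null_stats) >= abs(observed))
        print(f"permutation p-value: {p_value:.4f}")
    ```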

  3. Collaboration Promotes Proportional Reasoning about Resource Distribution in Young Children

    ERIC Educational Resources Information Center

    Ng, Rowena; Heyman, Gail D.; Barner, David

    2011-01-01

    The authors investigated how children and adults evaluate the "niceness" of individuals who engage in resource distribution, with a focus on their sensitivity to the proportion of resources given. Across 3 experiments, subjects evaluated the niceness of a child who gave a quantity of pennies to another child. In Study 1 (N = 30), adults showed…

  4. Underutilization of Student-Centered Resources: Understanding Student Response to Supports for Improved Institutional Practice

    ERIC Educational Resources Information Center

    Craft, Rebecca W.

    2013-01-01

    Nationwide, students enrolled in community colleges respond to national surveys indicating that academic resources are very important, while at the same time failing to utilize those same resources. This study focused on identifying the reasons for this incongruity on the Terry Campus of Delaware Technical Community College. Using a sequential…

  5. A Case against Computer Symbolic Manipulation in School Mathematics Today.

    ERIC Educational Resources Information Center

    Waits, Bert K.; Demana, Franklin

    1992-01-01

    Presented are two reasons discouraging the use of computer symbolic manipulation systems in school mathematics at present: the cost of computer laboratories or expensive pocket computers, and the impracticality of exact solution representations. Although development with this technology in mathematics education advances, graphing calculators are recommended to…

  6. 77 FR 58576 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-21

    ... Devices, Portable Music and Data Processing Devices, Computers, and Components Thereof; Institution of... communication devices, portable music and data processing devices, computers, and components thereof by reason... certain wireless communication devices, portable music and data processing devices, computers, and...

  7. Use of information and communication technology among dental students at the University of Jordan.

    PubMed

    Rajab, Lamis D; Baqain, Zaid H

    2005-03-01

    The aim of this study was to investigate the current knowledge, skills, and opinions of undergraduate dental students at the University of Jordan with respect to information communication technology (ICT). Dental students from the second, third, fourth, and fifth years were asked to complete a questionnaire presented in a lecture at the end of the second semester in the 2002-03 academic year. The response rate was 81 percent. Besides free and unlimited access to computers at the school of dentistry, 74 percent of the students had access to computers at home. However, 44 percent did not use a computer regularly. Male students were more regular and longer users of computers than females (p<0.001). A significant number of students (70 percent) judged themselves competent in information technology (IT) skills. More males felt competent in basic IT skills than did females (p<0.05). More than two-thirds acquired their computer skills through sources other than at the university. The main educational use of computers was accessing the Internet, word processing, multimedia, presentations, Medline search, and data management. More clinical students felt competent in word-processing skills (p<0.05) and many more used word processing for their studies (p<0.001) than did preclinical students. More males used word processing for their studies than females (p<0.001). Students used computers for personal activities more frequently than for academic reasons. More males used computers for both academic (p<0.01) and personal activities (p<0.001) than did females. All students had access to the Internet at the university, and 54 percent had access at home. A high percentage of students (94 percent) indicated they were comfortable using the Internet, 75 percent said they were confident in the accuracy, and 80 percent said they were confident in the relevance of information obtained from the Internet. Most students (90 percent) used email. Most students (83 percent) supported the idea of placing lectures on the web, and 61.2 percent indicated that this would not influence lecture attendance. Students used the Internet more for personal reasons than for the study of dentistry. More clinical students used the Internet for dentistry than preclinical students (p<0.001). More males than females used the Internet for dentistry (p<0.01) as well as for pleasure (p<0.01). Time and availability were the main obstacles to Internet use. Dental students at the University of Jordan have access to substantial IT resources and demonstrated attitudes toward the computer and Internet technology and use that were similar to other students in other nations. However, the educational use of ICT among Jordanian students remains low.

  8. 30 CFR 1202.351 - Royalties on geothermal resources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... reasonably necessary to generate plant parasitic electricity or electricity for Federal lease operations; and (B) A reasonable amount of commercially demineralized water necessary for power plant operations or... generate plant parasitic electricity or electricity for Federal lease operations, as approved by BLM; or (C...

  9. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    NASA Astrophysics Data System (ADS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which has proved successful and still meets its goals. However, Grid technology has not spread much beyond these communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fairshare-based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack-based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static split of the computing resources in the Tier-1 farm while still permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved into or out of the partition according to suitable policies for the request and release of computing resources. Nodes requested into the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as worker nodes in the batch system farm to cloud compute nodes made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation, and its integration with our current batch system, LSF.
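
    The following Python sketch illustrates the dynamic-partitioning idea described above: a periodic policy loop drains a host and re-enrolls it as either a batch worker node or a cloud compute node, depending on where demand is. It is illustrative only; the thresholds, host names, and helper functions are hypothetical, and the production system at CNAF is integrated with LSF and OpenStack rather than implemented this way.

      # Minimal sketch of a dynamic-partitioning policy loop (illustrative only;
      # the helper functions and thresholds below are hypothetical stand-ins).

      import time

      BATCH, CLOUD = "batch", "cloud"

      def pending_batch_jobs():
          # Hypothetical stand-in for querying the batch system queue depth.
          return 120

      def pending_cloud_requests():
          # Hypothetical stand-in for querying pending VM scheduling requests.
          return 3

      def drain_and_switch(host, target_role):
          # Hypothetical: drain running work from the host, then re-enroll it
          # either as a batch worker node or as a cloud compute node.
          print(f"switching {host} -> {target_role}")

      def rebalance(hosts_by_role, batch_backlog=50, cloud_backlog=5):
          """Move one host per cycle toward whichever partition is under pressure."""
          if pending_cloud_requests() > cloud_backlog and hosts_by_role[BATCH]:
              host = hosts_by_role[BATCH].pop()
              drain_and_switch(host, CLOUD)
              hosts_by_role[CLOUD].append(host)
          elif pending_batch_jobs() > batch_backlog and hosts_by_role[CLOUD]:
              host = hosts_by_role[CLOUD].pop()
              drain_and_switch(host, BATCH)
              hosts_by_role[BATCH].append(host)

      if __name__ == "__main__":
          hosts = {BATCH: ["wn-01", "wn-02", "wn-03"], CLOUD: ["wn-04"]}
          for _ in range(3):          # one rebalancing decision per cycle
              rebalance(hosts)
              time.sleep(1)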

  10. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    USGS Publications Warehouse

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated an approximate 96.6% decrease in computing time. With a single, multicore compute node (bottom result), the computing time indicated an 81.8% decrease relative to using serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
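
    The workload described above is a textbook case of embarrassingly parallel computing: each Monte Carlo replicate is independent, so replicates can be farmed out to as many cores or nodes as are available. A minimal Python sketch of that pattern on a single multicore machine follows; the simulate() payload is a toy stand-in, not the SyncroSim models used in the study.

      # Illustrative sketch of the "embarrassingly parallel" pattern: independent
      # Monte Carlo replicates distributed across worker processes on one node.

      import random
      from multiprocessing import Pool

      def simulate(seed, n_steps=1000):
          """One Monte Carlo replicate: a toy state-transition walk."""
          rng = random.Random(seed)
          state = 0
          for _ in range(n_steps):
              if rng.random() < 0.01:      # toy transition probability
                  state += 1
          return state

      if __name__ == "__main__":
          n_replicates = 100
          with Pool() as pool:             # one worker per available core
              results = pool.map(simulate, range(n_replicates))
          print(sum(results) / n_replicates)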

  11. Assessment of Undiscovered Deposits of Gold, Silver, Copper, Lead, and Zinc in the United States: A Portable Document (PDF) Recompilation of USGS Open-File Report 96-96 and Circular 1178

    USGS Publications Warehouse

    U.S. Geological Survey National Mineral Resource Assessment Team Recompiled by Schruben, Paul G.

    2002-01-01

    This publication contains the results of a national mineral resource assessment study. The study (1) identifies regional tracts of ground believed to contain most of the nation's undiscovered resources of gold, silver, copper, lead, and zinc in conventional types of deposits; and (2) includes probabilistic estimates of the amounts of these undiscovered resources in most of the tracts. It also contains a table of the significant known deposits in the tracts and includes descriptions of the mineral deposit models used for the assessment. The assessment was previously released in two major publications. The conterminous United States assessment was published in 1996 as USGS Open-File Report 96-96. Subsequently, the Alaska assessment was combined with the conterminous assessment in 1998 and released as USGS Circular 1178. This new recompilation was undertaken for several reasons. First, the graphical browser software used in Circular 1178 was compatible only with the Microsoft Windows operating system; it was incompatible with the Macintosh operating system, Linux, and other types of Unix computers. Second, the browser on Circular 1178 was much less intuitive to operate, requiring most users to follow a tutorial to understand how to navigate the information on the CD. Third, this release corrects several errors and numbering inconsistencies in Circular 1178.

  12. Are human beings humean robots?

    NASA Astrophysics Data System (ADS)

    Génova, Gonzalo; Quintanilla Navarro, Ignacio

    2018-01-01

    David Hume, the Scottish philosopher, conceives reason as the slave of the passions, which implies that human reason has predetermined objectives it cannot question. An essential element of an algorithm running on a computational machine (or Logical Computing Machine, as Alan Turing calls it) is its having a predetermined purpose: an algorithm cannot question its purpose, because it would cease to be an algorithm. Therefore, if self-determination is essential to human intelligence, then human beings are neither Humean beings nor computational machines. We also examine some objections to the Turing Test as a model for understanding human intelligence.

  13. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters.

    PubMed

    Dahlö, Martin; Scofield, Douglas G; Schaal, Wesley; Spjuth, Ola

    2018-05-01

    Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases.
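
    The usage and efficiency metrics mentioned above boil down to comparing the resources a job actually consumed with the resources it booked. A minimal sketch of such a per-job metric is given below; the field names and numbers are invented and do not reflect the UPPMAX database schema.

      # Minimal sketch of a per-job efficiency metric: CPU time actually consumed
      # divided by the core-hours booked for the job.

      def core_hour_efficiency(cpu_seconds_used, cores_booked, wall_seconds):
          """Fraction of booked core-hours that did useful work (0.0-1.0)."""
          booked = cores_booked * wall_seconds
          return cpu_seconds_used / booked if booked else 0.0

      jobs = [
          {"id": "ngs-1", "cpu_s": 3.1e5, "cores": 16, "wall_s": 86400},
          {"id": "ngs-2", "cpu_s": 7.9e4, "cores": 8,  "wall_s": 43200},
      ]
      for job in jobs:
          eff = core_hour_efficiency(job["cpu_s"], job["cores"], job["wall_s"])
          print(f"{job['id']}: {eff:.1%} of booked core-hours used")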

  14. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters

    PubMed Central

    2018-01-01

    Abstract Background Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. Results The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Conclusions Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases. PMID:29659792

  15. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan (see figure). Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, it could be substituted for the looser bounds currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm whose measure of complexity is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x, and maxflow(N) is the measure of complexity (and thus of cost) of a maximum-flow algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable- and value-ordering heuristics that exploit the properties of resource envelopes more directly.
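
    The maximum-flow computation is the workhorse inside the envelope algorithm. The sketch below shows only that building block on a toy auxiliary network, using the networkx library (an assumption; the paper does not prescribe an implementation), and it does not reproduce the construction of the auxiliary network from the plan's temporal and resource constraints.

      # Sketch of the maximum-flow building block on a toy auxiliary network.
      # Capacities stand in for resource production/consumption attached to
      # plan events; the real network is derived from temporal constraints.

      import networkx as nx

      G = nx.DiGraph()
      G.add_edge("source", "e1", capacity=3)
      G.add_edge("source", "e2", capacity=2)
      G.add_edge("e1", "sink", capacity=2)
      G.add_edge("e2", "sink", capacity=2)
      G.add_edge("e1", "e2", capacity=1)

      flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
      print("max flow:", flow_value)   # one bound evaluation in the envelope computation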

  16. Cloudbus Toolkit for Market-Oriented Cloud Computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.

  17. System-Level Virtualization Research at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J

    2010-01-01

    System-level virtualization, originally a technique to effectively share what were then considered large computing resources, faded from the spotlight as individual workstations gained in popularity with a one machine - one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival that of anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing that single computing box via server consolidation. However, industry is concentrating only on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine-sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.

  18. Cloud based emergency health care information service in India.

    PubMed

    Karthikeyan, N; Sukanesh, R

    2012-12-01

    A hospital is a health care organization providing patient treatment by expert physicians, surgeons, and equipment. A report from a health care accreditation group says that miscommunication between patients and health care providers is the reason for the gap in providing emergency medical care to people in need. In developing countries, illiteracy is a major root cause of deaths resulting from uncertain diseases, constituting a serious public health problem. Mentally affected, differently abled, and unconscious patients cannot communicate their medical history to medical practitioners, and medical practitioners cannot edit or view DICOM images instantly. Our aim is to provide a palm vein pattern recognition based medical record retrieval system, using cloud computing, for the above mentioned people. Distributed computing technology is emerging in new forms such as Grid computing and Cloud computing, which promise to deliver Information Technology (IT) as a service. In this paper, we describe how these new forms of distributed computing can help modern health care industries. Cloud computing is extending its benefits to industrial sectors, especially in medical scenarios: IT-related capabilities and resources are provided as services, via distributed computing, on demand. This paper is concerned with developing software as a service (SaaS) by means of cloud computing, with the aim of bringing the emergency health care sector under one umbrella with physically secured patient records. In framing emergency healthcare treatment, the crucial information needed for decisions about patients is their previous health records, so ubiquitous access to appropriate records is essential. Palm vein pattern recognition promises secured patient record access. Our paper likewise presents an efficient means to view, edit, or transfer DICOM images instantly, which has been a challenging task for medical practitioners in past years. We have developed two services for health care: 1. a cloud based palm vein recognition system, and 2. distributed medical image processing tools for medical practitioners.

  19. Smoking cessation and the Internet: a qualitative method examining online consumer behavior.

    PubMed

    Frisby, Genevieve; Bessell, Tracey L; Borland, Ron; Anderson, Jeremy N

    2002-01-01

    Smoking is a major preventable cause of disease and disability around the world. Smoking cessation support-including information, discussion groups, cognitive behavioral treatment, and self-help materials-can be delivered via the Internet. There is limited information about the reasons and methods consumers access smoking cessation information on the Internet. This study aims to determine the feasibility of a method to examine the online behavior of consumers seeking smoking cessation resources. In particular, we sought to identify the reasons and methods consumers use to access and assess the quality of these resources. Thirteen participants were recruited via the state-based Quit smoking cessation campaign, operated by the Victorian Cancer Council, in December 2001. Online behavior was evaluated using semi-structured interviews and Internet simulations where participants sought smoking cessation information and addressed set-case scenarios. Online interaction was tracked through pervasive logging with specialist software. Thirteen semi-structured interviews and 4 Internet simulations were conducted in January 2002. Participants sought online smoking cessation resources for reasons of convenience, timeliness, and anonymity-and because their current information needs were unmet. They employed simple search strategies and could not always find information in an efficient manner. Participants employed several different strategies to assess the quality of online health resources. Consumer online behavior can be studied using a combination of survey, observation, and online surveillance. However, further qualitative and observational research is required to harness the full potential of the Internet to deliver public health resources.

  20. Focus issue: series on computational and systems biology.

    PubMed

    Gough, Nancy R

    2011-09-06

    The application of computational biology and systems biology is yielding quantitative insight into cellular regulatory phenomena. For the month of September, Science Signaling highlights research featuring computational approaches to understanding cell signaling and investigation of signaling networks, a series of Teaching Resources from a course in systems biology, and various other articles and resources relevant to the application of computational biology and systems biology to the study of signal transduction.

  1. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new, numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible, and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels, and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows customized workflows to be composed in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented, and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
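
    Because the ASKI steps interact via file output/input only, any driver that writes and reads the agreed files can chain them. The Python sketch below illustrates that file-based coupling with hypothetical file names and trivial payloads; it is not ASKI code, where the steps are separate Fortran/Python programs.

      # Toy illustration of file-based coupling between independent workflow
      # steps (forward solve, kernel computation, model update). File names
      # and payloads are hypothetical placeholders.

      import json
      from pathlib import Path

      def run_forward(model_file, wavefield_file):
          model = json.loads(Path(model_file).read_text())
          Path(wavefield_file).write_text(json.dumps({"synthetics_for": model["name"]}))

      def compute_kernels(wavefield_file, kernel_file):
          wf = json.loads(Path(wavefield_file).read_text())
          Path(kernel_file).write_text(json.dumps({"kernels_from": wf["synthetics_for"]}))

      def update_model(kernel_file, model_file):
          _ = json.loads(Path(kernel_file).read_text())
          model = json.loads(Path(model_file).read_text())
          model["iteration"] += 1
          Path(model_file).write_text(json.dumps(model))

      if __name__ == "__main__":
          Path("model.json").write_text(json.dumps({"name": "m0", "iteration": 0}))
          run_forward("model.json", "wavefield.json")     # could be any forward solver
          compute_kernels("wavefield.json", "kernels.json")
          update_model("kernels.json", "model.json")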

  2. Intellectual performance and ego depletion: role of the self in logical reasoning and other information processing.

    PubMed

    Schmeichel, Brandon J; Vohs, Kathleen D; Baumeister, Roy F

    2003-07-01

    Some complex thinking requires active guidance by the self, but simpler mental activities do not. Depletion of the self's regulatory resources should therefore impair the former and not the latter. Resource depletion was manipulated by having some participants initially regulate attention (Studies 1 and 3) or emotion (Study 2). As compared with no-regulation participants who did not perform such exercises, depleted participants performed worse at logic and reasoning (Study 1), cognitive extrapolation (Study 2), and a test of thoughtful reading comprehension (Study 3). The same manipulations failed to cause decrements on a test of general knowledge (Study 2) or on memorization and recall of nonsense syllables (Study 3). Successful performance at complex thinking may therefore rely on limited regulatory resources.

  3. Logic as Marr's Computational Level: Four Case Studies.

    PubMed

    Baggio, Giosuè; van Lambalgen, Michiel; Hagoort, Peter

    2015-04-01

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition. Copyright © 2014 Cognitive Science Society, Inc.

  4. Applications of computer-aided text analysis in natural resources.

    Treesearch

    David N. Bengston

    2000-01-01

    Ten contributed papers describe the use of a variety of approaches to computer-aided text analysis and their application to a wide range of research questions related to natural resources and the environment. Taken together, these papers paint a picture of a growing and vital area of research on the human dimensions of natural resource management.

  5. SCANIT: centralized digitizing of forest resource maps or photographs

    Treesearch

    Elliot L. Amidon; E. Joyce Dye

    1981-01-01

    Spatial data on wildland resource maps and aerial photographs can be analyzed by computer after digitizing. SCANIT is a computerized system for encoding such data in digital form. The system, consisting of a collection of computer programs and subroutines, provides a powerful and versatile tool for a variety of resource analyses. SCANIT also may be converted easily to...

  6. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology, Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  7. ROSA: Resource-Oriented Service Management Schemes for Web of Things in a Smart Home

    PubMed Central

    Chen, Peng-Yu

    2017-01-01

    A pervasive-computing-enriched smart home environment, which contains many embedded and tiny intelligent devices and sensors coordinated by service management mechanisms, is capable of anticipating the intentions of occupants and providing appropriate services accordingly. Although there is a wealth of research achievements in recent years, the degree of market acceptance is still low. The main reason is that most of the devices and services in such environments depend on a particular platform or technology, making it hard to develop an application by composing the devices or services. Meanwhile, the concept of the Web of Things (WoT) has become popular recently. Based on WoT, developers can build applications using popular web tools or technologies. Consequently, the objective of this paper is to propose a set of novel WoT-driven plug-and-play service management schemes for a smart home called Resource-Oriented Service Administration (ROSA). We have implemented an application prototype, and experiments are performed to show the effectiveness of the proposed approach. The results of this research can be a foundation for realizing the vision of “end user programmable smart environments”. PMID:28934159
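
    The Web-of-Things premise behind ROSA is that a household device is just another web resource that clients read and update over HTTP. The standard-library sketch below illustrates that idea with a single hypothetical lamp resource; the URL layout and device model are not ROSA's actual interface.

      # Minimal Web-of-Things-style sketch: a device exposed as an HTTP resource.
      # Uses only the Python standard library; the device model is hypothetical.

      import json
      from http.server import BaseHTTPRequestHandler, HTTPServer

      LAMP = {"name": "living-room-lamp", "state": "off"}

      class DeviceResource(BaseHTTPRequestHandler):
          def do_GET(self):
              if self.path == "/devices/lamp":
                  body = json.dumps(LAMP).encode()
                  self.send_response(200)
                  self.send_header("Content-Type", "application/json")
                  self.end_headers()
                  self.wfile.write(body)
              else:
                  self.send_response(404)
                  self.end_headers()

          def do_PUT(self):
              if self.path == "/devices/lamp":
                  length = int(self.headers.get("Content-Length", 0))
                  LAMP.update(json.loads(self.rfile.read(length)))  # e.g. {"state": "on"}
                  self.send_response(204)
                  self.end_headers()
              else:
                  self.send_response(404)
                  self.end_headers()

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), DeviceResource).serve_forever()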

  8. Two-MILP models for scheduling elective surgeries within a private healthcare facility.

    PubMed

    Khlif Hachicha, Hejer; Zeghal Mansour, Farah

    2016-11-05

    This paper deals with an Integrated Elective Surgery-Scheduling Problem (IESSP) that arises in a privately operated healthcare facility. It aims to optimize the resource utilization of the entire surgery process including pre-operative, per-operative and post-operative activities. Moreover, it addresses a specific feature of private facilities where surgeons are independent service providers and may conduct their surgeries in different private healthcare facilities. Thus, the problem requires the assignment of surgery patients to hospital beds, operating rooms and recovery beds as well as their sequencing over a 1-day period while taking into account surgeons' availability constraints. We present two Mixed Integer Linear Programs (MILP) that model the IESSP as a three-stage hybrid flow-shop scheduling problem with recirculation, resource synchronization, dedicated machines, and blocking constraints. To assess the empirical performance of the proposed models, we conducted experiments on real-world data of a Tunisian private clinic: Clinique Ennasr and on randomly generated instances. Two criteria were minimised: the patients' average length of stay and the number of patients' overnight stays. The computational results show that the proposed models can solve instances with up to 44 surgical cases in a reasonable CPU time using a general-purpose MILP solver.
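
    To make the MILP flavour of such models concrete, the toy model below assigns surgical cases to operating-room blocks without exceeding block capacity while opening as few blocks as possible. It is built with the PuLP modelling library (an assumption; the paper only states that a general-purpose MILP solver was used) and is far simpler than the paper's three-stage hybrid flow-shop formulations.

      # Toy MILP: assign surgical cases to OR blocks within capacity,
      # minimizing the number of blocks opened. Data are invented.

      from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

      cases = {"c1": 120, "c2": 90, "c3": 60, "c4": 150}   # durations (min)
      blocks = {"or1": 240, "or2": 240}                    # capacities (min)

      prob = LpProblem("or_block_assignment", LpMinimize)
      assign = LpVariable.dicts("assign", [(c, b) for c in cases for b in blocks], cat=LpBinary)
      used = LpVariable.dicts("used", blocks, cat=LpBinary)

      prob += lpSum(used[b] for b in blocks)               # open as few blocks as possible
      for c in cases:                                      # each case scheduled exactly once
          prob += lpSum(assign[c, b] for b in blocks) == 1
      for b in blocks:                                     # block capacity, only if opened
          prob += lpSum(cases[c] * assign[c, b] for c in cases) <= blocks[b] * used[b]

      prob.solve()
      for (c, b), var in assign.items():
          if var.value() == 1:
              print(f"{c} -> {b}")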

  9. Save medical personnel's time by improved user interfaces.

    PubMed

    Kindler, H

    1997-01-01

    Common objectives in the industrial countries are the improvement of quality of care, clinical effectiveness, and cost control. Cost control, in particular, has been addressed through the introduction of case mix systems for reimbursement by social-security institutions. More data is required to enable quality improvement, increases in clinical effectiveness and for juridical reasons. At first glance, this documentation effort is contradictory to cost reduction. However, integrated services for resource management based on better documentation should help to reduce costs. The clerical effort for documentation should be decreased by providing a co-operative working environment for healthcare professionals applying sophisticated human-computer interface technology. Additional services, e.g., automatic report generation, increase the efficiency of healthcare personnel. Modelling the medical work flow forms an essential prerequisite for integrated resource management services and for co-operative user interfaces. A user interface aware of the work flow provides intelligent assistance by offering the appropriate tools at the right moment. Nowadays there is a trend to client/server systems with relational databases or object-oriented databases as repository. The work flows used for controlling purposes and to steer the user interfaces must be represented in the repository.

  10. CAMPAIGN: an open-source library of GPU-accelerated data clustering algorithms.

    PubMed

    Kohlhoff, Kai J; Sosnick, Marc H; Hsu, William T; Pande, Vijay S; Altman, Russ B

    2011-08-15

    Data clustering techniques are an essential component of a good data analysis toolbox. Many current bioinformatics applications are inherently compute-intense and work with very large datasets. Sequential algorithms are inadequate for providing the necessary performance. For this reason, we have created Clustering Algorithms for Massively Parallel Architectures, Including GPU Nodes (CAMPAIGN), a central resource for data clustering algorithms and tools that are implemented specifically for execution on massively parallel processing architectures. CAMPAIGN is a library of data clustering algorithms and tools, written in 'C for CUDA' for Nvidia GPUs. The library provides up to two orders of magnitude speed-up over respective CPU-based clustering algorithms and is intended as an open-source resource. New modules from the community will be accepted into the library and the layout of it is such that it can easily be extended to promising future platforms such as OpenCL. Releases of the CAMPAIGN library are freely available for download under the LGPL from https://simtk.org/home/campaign. Source code can also be obtained through anonymous subversion access as described on https://simtk.org/scm/?group_id=453. kjk33@cantab.net.
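
    As a point of reference for what the library parallelizes, the short NumPy sketch below implements plain k-means clustering on the CPU. It is illustrative only and makes no claim about how CAMPAIGN's GPU kernels are organized internally.

      # CPU reference sketch of k-means, the kind of clustering kernel that
      # GPU libraries accelerate. Illustrative only.

      import numpy as np

      def kmeans(points, k, n_iter=50, seed=0):
          rng = np.random.default_rng(seed)
          centers = points[rng.choice(len(points), k, replace=False)]
          for _ in range(n_iter):
              # Assign each point to its nearest center, then recompute centers.
              dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
              labels = dists.argmin(axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = points[labels == j].mean(axis=0)
          return labels, centers

      if __name__ == "__main__":
          data = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])
          labels, centers = kmeans(data, k=2)
          print(centers)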

  11. Using forest inventory data to assess use restrictions on private timberland in Illinois. Forest Service resource bulletin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leatherberry, E.C.

    1993-01-01

    About half of the Nation's 731 million acres of forest land is privately owned. Traditionally, most private forest land was open for public uses, especially hunting. Today, however, 'keep out' or 'no trespassing' signs are seen increasingly throughout the countryside. The situation concerns policymakers and administrators because private lands are important recreational and aesthetic resources. Private landowners close their land to public use for many reasons. Generally, these include liability concerns, property damage, reasons for owning land, landowner attitudes about hunting or other consumptive uses, and landowners' intent to lease or charge a fee for access.

  12. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

    Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome, and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources, and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105

  13. Operation ARA: A Computerized Learning Game that Teaches Critical Thinking and Scientific Reasoning

    ERIC Educational Resources Information Center

    Halpern, Diane F.; Millis, Keith; Graesser, Arthur C.; Butler, Heather; Forsyth, Carol; Cai, Zhiqiang

    2012-01-01

    Operation ARA (Acquiring Research Acumen) is a computerized learning game that teaches critical thinking and scientific reasoning. It is a valuable learning tool that utilizes principles from the science of learning and serious computer games. Students learn the skills of scientific reasoning by engaging in interactive dialogs with avatars. They…

  14. Using Computer Simulations for Promoting Model-Based Reasoning: Epistemological and Educational Dimensions

    ERIC Educational Resources Information Center

    Develaki, Maria

    2017-01-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and…

  15. Justification of Filter Selection for Robot Balancing in Conditions of Limited Computational Resources

    NASA Astrophysics Data System (ADS)

    Momot, M. V.; Politsinskaia, E. V.; Sushko, A. V.; Semerenko, I. A.

    2016-08-01

    The paper considers the problem of selecting a mathematical filter for balancing a wheeled robot under conditions of limited computational resources. A solution based on a complementary filter is proposed.
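
    For reference, the complementary filter amounts to a single line of arithmetic per sample, which is what makes it attractive under tight computational budgets: the gyro rate is integrated for short-term accuracy and blended with the accelerometer tilt estimate for long-term stability. The sketch below uses invented sensor values and an assumed blending coefficient of 0.98.

      # Worked sketch of a complementary filter for tilt estimation.
      # Sensor readings and the blending coefficient are illustrative.

      def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
          """Blend integrated gyro rate with the accelerometer angle (degrees)."""
          return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

      angle = 0.0
      samples = [(10.0, 0.4), (12.0, 0.9), (8.0, 1.2)]   # (gyro deg/s, accel deg)
      for gyro_rate, accel_angle in samples:
          angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)
          print(f"tilt estimate: {angle:.3f} deg")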

  16. The Cognitive Predictors of Computational Skill with Whole versus Rational Numbers: An Exploratory Study.

    PubMed

    Seethaler, Pamela M; Fuchs, Lynn S; Star, Jon R; Bryant, Joan

    2011-10-01

    The purpose of the present study was to explore the 3rd-grade cognitive predictors of 5th-grade computational skill with rational numbers and how those are similar to and different from the cognitive predictors of whole-number computational skill. Students (n = 688) were assessed on incoming whole-number calculation skill, language, nonverbal reasoning, concept formation, processing speed, and working memory in the fall of 3rd grade. Students were followed longitudinally and assessed on calculation skill with whole numbers and with rational numbers in the spring of 5th grade. The unique predictors of skill with whole-number computation were incoming whole-number calculation skill, nonverbal reasoning, concept formation, and working memory (numerical executive control). In addition to these cognitive abilities, language emerged as a unique predictor of rational-number computational skill.

  17. The Cognitive Predictors of Computational Skill with Whole versus Rational Numbers: An Exploratory Study

    PubMed Central

    Seethaler, Pamela M.; Fuchs, Lynn S.; Star, Jon R.; Bryant, Joan

    2011-01-01

    The purpose of the present study was to explore the 3rd-grade cognitive predictors of 5th-grade computational skill with rational numbers and how those are similar to and different from the cognitive predictors of whole-number computational skill. Students (n = 688) were assessed on incoming whole-number calculation skill, language, nonverbal reasoning, concept formation, processing speed, and working memory in the fall of 3rd grade. Students were followed longitudinally and assessed on calculation skill with whole numbers and with rational numbers in the spring of 5th grade. The unique predictors of skill with whole-number computation were incoming whole-number calculation skill, nonverbal reasoning, concept formation, and working memory (numerical executive control). In addition to these cognitive abilities, language emerged as a unique predictor of rational-number computational skill. PMID:21966180

  18. Colovesical fistula causing an uncommon reason for failure of computed tomography colonography: a case report.

    PubMed

    Neroladaki, Angeliki; Breguet, Romain; Botsikas, Diomidis; Terraz, Sylvain; Becker, Christoph D; Montet, Xavier

    2012-07-23

    Computed tomography colonography, or virtual colonoscopy, is a good alternative to optical colonoscopy. However, suboptimal patient preparation or colon distension may reduce the diagnostic accuracy of this imaging technique. We report the case of an 83-year-old Caucasian woman who presented with a five-month history of pneumaturia and fecaluria and an acute episode of macrohematuria, leading to a high clinical suspicion of a colovesical fistula. The fistula was confirmed by standard contrast-enhanced computed tomography. Optical colonoscopy was performed to exclude the presence of an underlying colonic neoplasm. Since optical colonoscopy was incomplete, computed tomography colonography was performed, but also failed due to inadequate colon distension. The insufflated air directly accumulated within the bladder via the large fistula. Clinicians should consider colovesical fistula as a potential reason for computed tomography colonography failure.

  19. Microprogramming Handbook. Second Edition.

    ERIC Educational Resources Information Center

    Microdata Corp., Santa Ana, CA.

    Instead of instructions residing in the main memory as in a fixed instruction computer, a microprogramable computer has a separate read-only memory which is alterable so that the system can be efficiently adapted to the application at hand. Microprogramable computers are faster than fixed instruction computers for several reasons: instruction…

  20. Computers and Instruction: Implications of the Rising Tide of Criticism for Reading Education.

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    1988-01-01

    Examines two major reasons that schools have adopted computers without careful prior examination and planning. Surveys a variety of criticisms targeted toward some aspects of computer-based instruction in reading in an effort to direct attention to the beneficial implications of computers in the classroom. (MS)

  1. An Educational Approach to Computationally Modeling Dynamical Systems

    ERIC Educational Resources Information Center

    Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl

    2009-01-01

    Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…

  2. Establishing a Computer Literacy Requirement for All Students.

    ERIC Educational Resources Information Center

    Kieffer, Linda M.

    Several factors have indicated the necessity of formally requiring computer literacy at the university level. This paper discusses the reasoning for, the development of, and content of two computer literacy courses required of all freshmen. The first course contains computer awareness and knowledge that students should have upon entering the…

  3. Planning Civilian Reuse of Former Military Base. Revision

    DTIC Science & Technology

    1990-08-01

    potential development... private sector... into one comprehensive policy group... For these reasons, the leadership of the impacted community should focus... services, its tourism and recreational resources, and its educational and health resources, among others... highly attractive industrial structures for processing the region's wood resources... industrial heat... Historic Development and Tourism: Burlington, New...

  4. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-06-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud, and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes, and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
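
    Schematically, the bursting decision reduces to comparing idle demand against what the grid can absorb and requesting the difference from a cloud provider, up to a cap. The sketch below captures only that arithmetic; the thresholds and parameters are hypothetical, and the real GlideinWMS factory logic works through HTCondor and the providers' own APIs.

      # Schematic cloud-bursting decision: how many extra pilot VMs to request
      # when idle jobs exceed the free grid slots. All parameters are hypothetical.

      def plan_burst(idle_jobs, free_grid_slots, running_cloud_vms,
                     jobs_per_vm=8, max_cloud_vms=50):
          """Return how many cloud VMs to request (0 if the grid can cope)."""
          unabsorbed = max(0, idle_jobs - free_grid_slots)
          wanted = -(-unabsorbed // jobs_per_vm)          # ceiling division
          return max(0, min(wanted, max_cloud_vms - running_cloud_vms))

      if __name__ == "__main__":
          print(plan_burst(idle_jobs=500, free_grid_slots=200, running_cloud_vms=10))
          # -> 38 additional pilot VMs requested from the cloud provider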

  5. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud, and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes, and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  6. 36 CFR 251.56 - Terms and conditions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to be reasonable in light of all circumstances concerning the use, including (i) Resource management... 40-year authorization would be inconsistent with the approved forest land and resource management... Section 251.56 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LAND USES...

  7. 43 CFR 3200.1 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... benefit and not selling energy to another entity. Commercial production means production of geothermal..., including the electricity or energy that is reasonably required to produce the resource used in production of electricity for sale or to convert the resource into electrical energy for sale. Commercial...

  8. 43 CFR 3200.1 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... benefit and not selling energy to another entity. Commercial production means production of geothermal..., including the electricity or energy that is reasonably required to produce the resource used in production of electricity for sale or to convert the resource into electrical energy for sale. Commercial...

  9. 43 CFR 3200.1 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... benefit and not selling energy to another entity. Commercial production means production of geothermal..., including the electricity or energy that is reasonably required to produce the resource used in production of electricity for sale or to convert the resource into electrical energy for sale. Commercial...

  10. Shared-resource computing for small research labs.

    PubMed

    Ackerman, M J

    1982-04-01

    A real time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off the shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11-M multi-user real-time operating system. The cost effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  11. Student resources for learning physics

    NASA Astrophysics Data System (ADS)

    Hammer, David

    2015-04-01

    Careful observations of learners' reasoning belie simple characterizations of their knowledge or abilities: Students who appear to lack understanding or abilities at one moment show evidence of them at another. Detecting this variability generally requires close examination of what and how students are thinking, moment-to-moment, which makes research difficult. But the findings challenge unitary accounts of intelligence, stages of development, and misconceptions. Joe Redish and others have been working from a more complex theoretical framework of innumerable, fine-grained cognitive structures we call "resources." They are, roughly, ways of thinking people have that may apply or not in any particular moment. (Thinking about energy, for example, may involve resources for understanding location or conservation, or oscillations in time, or differential symmetry.) The variability we observe in student reasoning reflects variability in resource activation. Resources are to models of mind what partons used to be to models of hadrons: We know we should be thinking of entities and dynamics at a smaller scale than we've been considering, even if we don't know their particular properties. Understanding minds in this way has profound implications for research and for teaching.

  12. Examining Effects of Virtual Machine Settings on Voice over Internet Protocol in a Private Cloud Environment

    ERIC Educational Resources Information Center

    Liao, Yuan

    2011-01-01

    The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…

  13. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events and for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rising complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the use of virtualization technologies. The OpenStack project has become a widely adopted solution for virtualizing hardware and offering additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
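
    As an illustration of how such worker virtual machines can be provisioned programmatically, the sketch below boots one server through the OpenStack SDK. The cloud name, image, flavor, and network are placeholders, and this is not the institute's actual tooling.

      # Hedged sketch: boot one virtualized worker node via the OpenStack SDK.
      # Cloud, image, flavor, and network names are placeholders.

      import openstack

      conn = openstack.connect(cloud="ekp-private-cloud")     # entry from clouds.yaml

      image = conn.compute.find_image("hep-worker-image")     # pre-built HEP software stack
      flavor = conn.compute.find_flavor("m1.large")
      network = conn.network.find_network("worker-net")

      server = conn.compute.create_server(
          name="vm-worker-001",
          image_id=image.id,
          flavor_id=flavor.id,
          networks=[{"uuid": network.id}],
      )
      server = conn.compute.wait_for_server(server)
      print(server.status)   # ACTIVE once the worker VM is ready to join the pool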

  14. Evaluation of mobile learning: students' experiences in a new rural-based medical school.

    PubMed

    Nestel, Debra; Ng, Andre; Gray, Katherine; Hill, Robyn; Villanueva, Elmer; Kotsanas, George; Oaten, Andrew; Browne, Chris

    2010-08-11

    Mobile learning (ML) is an emerging educational method with success dependent on many factors including the ML device, physical infrastructure and user characteristics. At Gippsland Medical School (GMS), students are given a laptop at the commencement of their four-year degree. We evaluated the educational impact of the ML program from students' perspectives. Questionnaires and individual interviews explored students' experiences of ML. All students were invited to complete questionnaires. Convenience sampling was used for interviews. Quantitative data were entered into SPSS 17.0 and descriptive statistics were computed. Free text comments from questionnaires and transcriptions of interviews were thematically analysed. Fifty students completed the questionnaire (response rate 88%). Six students participated in interviews. More than half the students owned a laptop prior to commencing studies, would recommend the laptop and took the laptop to GMS daily. Modal daily use of laptops was four hours. Most frequent use was for access to the internet and email, while the most frequently used applications were Microsoft Word and PowerPoint. Students appreciated the laptops for several reasons; the reduced financial burden was particularly valued. Students were largely satisfied with the laptop specifications, although design elements of teaching spaces limited functionality. Although students valued aspects of the virtual learning environment (VLE), they also made many suggestions for improvement. Students reported many educational benefits from school provision of laptops, in particular the quick and easy access to electronic educational resources as and when they were needed. Improved design of physical facilities would enhance laptop use, as would a more logical layout of the VLE, new computer-based resources and activities promoting interaction.

  15. Dynamic resource allocation in a hierarchical multiprocessor system: A preliminary study

    NASA Technical Reports Server (NTRS)

    Ngai, Tin-Fook

    1986-01-01

    An integrated system approach to dynamic resource allocation is proposed. Some of the problems in dynamic resource allocation and the relationship of these problems to system structures are examined. A general dynamic resource allocation scheme is presented. A hierarchical system architecture which dynamically maps between processor structure and programs at multiple levels of instantiation is described. Simulation experiments were conducted to study dynamic resource allocation on the proposed system. Preliminary evaluation based on simple dynamic resource allocation algorithms indicates that with the proposed system approach, the complexity of dynamic resource management could be significantly reduced while achieving reasonably effective dynamic resource allocation.

  16. Computers for the Faculty: How on a Limited Budget.

    ERIC Educational Resources Information Center

    Arman, Hal; Kostoff, John

    An informal investigation of the use of computers at Delta College (DC) in Michigan revealed reasonable use of computers by faculty in disciplines such as mathematics, business, and technology, but very limited use in the humanities and social sciences. In an effort to increase faculty computer usage, DC decided to make computers available to any…

  17. Computer Anxiety: Relationship to Math Anxiety and Holland Types.

    ERIC Educational Resources Information Center

    Bellando, Jayne; Winer, Jane L.

    Although the number of computers in the school system is increasing, many schools are not using computers to their capacity. One reason for this may be computer anxiety on the part of the teacher. A review of the computer anxiety literature reveals little information on the subject, and findings from previous studies suggest that basic controlled…

  18. Applications of Out-of-Domain Knowledge in Students' Reasoning about Computer Program State

    ERIC Educational Resources Information Center

    Lewis, Colleen Marie

    2012-01-01

    To meet a growing demand and a projected deficit in the supply of computer professionals (NCWIT, 2009), it is of vital importance to expand students' access to computer science. However, many researchers in the computer science education community unproductively assume that some students lack an innate ability for computer science and…

  19. The Reasoning behind the Scene: Why Do Early Childhood Educators Use Computers in Their Classrooms?

    ERIC Educational Resources Information Center

    Edwards, Suzy

    2005-01-01

    In recent times discussion surrounding the use of computers in early childhood education has emphasised the role computers play in children's everyday lives. This realisation has replaced early debate regarding the appropriateness or otherwise of computer use for young children in early childhood education. An important component of computer use…

  20. Clinical Computer Applications in Mental Health

    PubMed Central

    Greist, John H.; Klein, Marjorie H.; Erdman, Harold P.; Jefferson, James W.

    1982-01-01

    Direct patient-computer interviews were among the earliest applications of computing in medicine. Yet patient interviewing and other clinical applications have lagged behind fiscal/administrative uses. Several reasons for delays in the development and implementation of clinical computing programs and their resolution are discussed. Patient interviewing, clinician consultation and other applications of clinical computing in mental health are reviewed.

  1. Using Cloud Computing infrastructure with CloudBioLinux, CloudMan and Galaxy

    PubMed Central

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-01-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this protocol, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatics analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command line interface, and the web-based Galaxy interface. PMID:22700313

  2. Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.

    PubMed

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-06-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.
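
    As a minimal, hedged sketch of the underlying provisioning step, the snippet below launches a single cloud instance from a CloudBioLinux-style machine image using boto3; the AMI ID, key pair, and security group are placeholders, and in practice CloudMan is normally started through its own web launcher rather than raw EC2 calls.

        import boto3

        ec2 = boto3.resource("ec2", region_name="us-east-1")

        # Launch one instance from a CloudBioLinux-style AMI (placeholder ID).
        instances = ec2.create_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder AMI
            InstanceType="m5.xlarge",
            MinCount=1,
            MaxCount=1,
            KeyName="my-keypair",              # placeholder key pair
            SecurityGroups=["cloudman-web"],   # placeholder security group
        )

        instance = instances[0]
        instance.wait_until_running()
        instance.reload()
        print("Head node reachable at", instance.public_dns_name)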

  3. WE-B-BRD-01: Innovation in Radiation Therapy Planning II: Cloud Computing in RT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, K; Kagadis, G; Xing, L

    As defined by the National Institute of Standards and Technology, cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Despite the omnipresent role of computers in radiotherapy, cloud computing has yet to achieve widespread adoption in clinical or research applications, though the transition to such “on-demand” access is underway. As this transition proceeds, new opportunities for aggregate studies and efficient use of computational resources are set against new challenges in patient privacy protection, data integrity, and management of clinical informatics systems. In this Session, current and future applications of cloud computing and distributed computational resources will be discussed in the context of medical imaging, radiotherapy research, and clinical radiation oncology applications. Learning Objectives: 1. Understand basic concepts of cloud computing. 2. Understand how cloud computing could be used for medical imaging applications. 3. Understand how cloud computing could be employed for radiotherapy research. 4. Understand how clinical radiotherapy software applications would function in the cloud.

  4. 7 CFR 3570.61 - Eligibility for grant assistance

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... be below the higher of the poverty line or the eligible percentage (60, 70, 80, or 90) of the State... from its own resources, or through commercial credit at reasonable rates and terms, or other funding... facility and providing for its continued availability and use at reasonable rates and terms. This...

  5. 7 CFR 3570.61 - Eligibility for grant assistance

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... be below the higher of the poverty line or the eligible percentage (60, 70, 80, or 90) of the State... from its own resources, or through commercial credit at reasonable rates and terms, or other funding... facility and providing for its continued availability and use at reasonable rates and terms. This...

  6. 7 CFR 3570.61 - Eligibility for grant assistance

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... be below the higher of the poverty line or the eligible percentage (60, 70, 80, or 90) of the State... from its own resources, or through commercial credit at reasonable rates and terms, or other funding... facility and providing for its continued availability and use at reasonable rates and terms. This...

  7. 7 CFR 3570.61 - Eligibility for grant assistance

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... be below the higher of the poverty line or the eligible percentage (60, 70, 80, or 90) of the State... from its own resources, or through commercial credit at reasonable rates and terms, or other funding... facility and providing for its continued availability and use at reasonable rates and terms. This...

  8. Hoop Hoop Hooray!

    ERIC Educational Resources Information Center

    Tomsett, Ruth

    2008-01-01

    The author believes that Venn diagrams are a useful yet hugely underused resource to encourage purposeful talk, reasoning and logical thinking both within mathematics and across the curriculum. Here, she describes ways in which Venn diagrams can be used to add challenge and develop reasoning, discussion and mathematical thinking at Key Stage 2.…

  9. 75 FR 16504 - Meeting Notice for the Medford District Resource Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-01

    ... Medford, Oregon. FOR FURTHER INFORMATION CONTACT: Jim Whittington, Medford District Public Affairs Officer... minutes. If reasonable accommodation is required, please contact the BLM's Medford District Public Affairs... submissions and other matters as may reasonably come before the council. The public is welcome to attend all...

  10. Mixing HTC and HPC Workloads with HTCondor and Slurm

    NASA Astrophysics Data System (ADS)

    Hollowell, C.; Barnett, J.; Caramarcu, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.

    2017-10-01

    Traditionally, the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has only maintained High Throughput Computing (HTC) resources for our HEP/NP user community. We’ve been using HTCondor as our batch system for many years, as this software is particularly well suited for managing HTC processor farm resources. Recently, the RACF has also begun to design/administrate some High Performance Computing (HPC) systems for a multidisciplinary user community at BNL. In this paper, we’ll discuss our experiences using HTCondor and Slurm in an HPC context, and our facility’s attempts to allow our HTC and HPC processing farms/clusters to make opportunistic use of each other’s computing resources.
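
    To make the HTC side concrete, here is a hedged sketch of submitting a single job through the HTCondor Python bindings; the executable, file names, and resource requests are illustrative, and any site-specific ClassAd attributes used to steer jobs toward otherwise idle HPC nodes would be additions beyond what is shown here.

        import htcondor

        # Describe a simple single-core job (all values are illustrative).
        job = htcondor.Submit({
            "executable": "/usr/bin/python3",
            "arguments": "analysis.py",
            "request_cpus": "1",
            "request_memory": "2GB",
            "output": "job.out",
            "error": "job.err",
            "log": "job.log",
        })

        schedd = htcondor.Schedd()            # local scheduler daemon
        result = schedd.submit(job, count=1)  # queue one copy of the job
        print("Submitted as cluster", result.cluster())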

  11. Virtuous and vicious virtual water trade with application to Italy.

    PubMed

    Winter, Julia Anna; Allamano, Paola; Claps, Pierluigi

    2014-01-01

    The current trade of agricultural goods, with connections involving all continents, entails global exchanges of "virtual" water, i.e. water used in the production process of food products but not contained within them. Each trade link translates into a corresponding virtual water trade, allowing quantification of import and export fluxes of virtual water. The assessment of the virtual water import for a given nation, compared to the national consumption, could give an approximate idea of the country's reliance on external resources from both the food and the water resources point of view. A descriptive approach to understanding a nation's degree of dependency on overseas food and water resources is first proposed, and indices of water trade virtuosity, as opposed to inefficiency, are devised. Such indices are based on the concepts of self-sufficiency and relative export, computed systematically on all products from the FAOSTAT database, taking Italy as the first case study. Analysis of time series of self-sufficiency and relative export can demonstrate effects of market tendencies and influence water-related policies at the international level. The goal of this approach is to highlight incongruent terms in the virtual water balances from the viewpoint of single products. Specific products, here referred to as "swap products", are identified as those that lead to inefficiencies in the virtual water balance due to their simultaneously high import and export. The inefficiencies due to the exchanges of the same products between two nations are calculated in terms of virtual water volumes. Furthermore, the cases of swap products are investigated by computing two further indices: the ratio of virtual water exchanged in the swap and the ratio of the economic values of the swapped products. The analysis of these figures can help examine the reasons behind the swap phenomenon in trade.
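
    The abstract does not give the exact index definitions, so the sketch below uses common conventions as stated assumptions: self-sufficiency as production over apparent supply (production + imports - exports) and relative export as exports over production, computed per product from FAOSTAT-style rows with invented numbers.

        import pandas as pd

        # Toy FAOSTAT-style rows (tonnes); the figures are invented.
        data = pd.DataFrame({
            "product":    ["wheat", "tomatoes", "olive oil"],
            "production": [7000.0, 6000.0, 450.0],
            "imports":    [7500.0,  120.0, 550.0],
            "exports":    [ 200.0,  110.0, 400.0],
        })

        # Apparent domestic supply = production + imports - exports.
        data["supply"] = data["production"] + data["imports"] - data["exports"]

        # Self-sufficiency: share of domestic supply covered by own production.
        data["self_sufficiency"] = data["production"] / data["supply"]

        # Relative export: share of production that is exported.
        data["relative_export"] = data["exports"] / data["production"]

        # Crude flag for "swap" candidates: products imported and exported in
        # comparable, non-negligible amounts at the same time.
        ratio = data[["imports", "exports"]].min(axis=1) / data[["imports", "exports"]].max(axis=1)
        data["swap_candidate"] = ratio > 0.5

        print(data[["product", "self_sufficiency", "relative_export", "swap_candidate"]])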

  12. ACToR - Aggregated Computational Toxicology Resource

    EPA Science Inventory

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  13. ACToR - Aggregated Computational Toxicology Resource (S)

    EPA Science Inventory

    We are developing the ACToR system (Aggregated Computational Toxicology Resource) to serve as a repository for a variety of types of chemical, biological and toxicological data that can be used for predictive modeling of chemical toxicology.

  14. Computational resources for ribosome profiling: from database to Web server and software.

    PubMed

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This benefits not only from the power of ribosome profiling itself but also from the extensive range of computational resources available for ribosome profiling. At present, however, a comprehensive review of these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand.

  15. INDIVIDUALIZING UNIVERSITY INSTRUCTION, EXPLORING COMPUTER POTENTIAL TO AID COLLEGE TEACHERS BY DIRECTING THE LEARNING PROCESS. INTER-UNIVERSITY PROJECT ONE, PUBLICATIONS SERIES.

    ERIC Educational Resources Information Center

    FALL, CHARLES R.

    This document concludes that instruction by computer-based resource units can facilitate learning and provide the instructor with valuable assistance. By pre-planning the teaching-learning situation, resource units can free the instructor for decision-making tasks. Resource units can also provide appropriate learning goals and study guides to each…

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, A.; Sengupta, M.; Wilcox, S.

    This report was part of a multiyear collaboration with the University of Wisconsin and the National Oceanic and Atmospheric Administration (NOAA) to produce high-quality, satellite-based solar resource datasets for the United States. High-quality solar resource assessment accelerates technology deployment by making a positive impact on decision making and reducing uncertainty in investment decisions. Satellite-based solar resource datasets are used as a primary source in solar resource assessment, mainly because satellites provide larger areal coverage and longer periods of record than ground-based measurements. With the advent of newer satellites with increased information content and faster computers that can process increasingly high data volumes, methods that were once considered too computationally intensive are now feasible. One class of sophisticated methods for retrieving solar resource information from satellites is a two-step, physics-based method that computes cloud properties and uses that information in a radiative transfer model to compute solar radiation. This method has the advantage of incorporating additional information as satellites with newer channels come on board. This report evaluates the two-step method developed at NOAA and adapted for solar resource assessment for renewable energy, with the goal of identifying areas that can be improved in the future.

  17. A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems.

    PubMed

    Wang, Lujia; Liu, Ming; Meng, Max Q-H

    2017-02-01

    Cloud computing enables users to share computing resources on-demand. The cloud computing framework cannot be directly mapped to cloud robotic systems with ad hoc networks since cloud robotic systems have additional constraints such as limited bandwidth and dynamic structure. However, most multirobotic applications with cooperative control adopt this decentralized approach to avoid a single point of failure. Robots need to continuously update intensive data to execute tasks in a coordinated manner, which implies real-time requirements. Thus, a resource allocation strategy is required, especially in such resource-constrained environments. This paper proposes a hierarchical auction-based mechanism, namely link quality matrix (LQM) auction, which is suitable for ad hoc networks by introducing a link quality indicator. The proposed algorithm produces a fast and robust method that is accurate and scalable. It reduces both global communication and unnecessary repeated computation. The proposed method is designed for firm real-time resource retrieval for physical multirobot systems. A joint surveillance scenario empirically validates the proposed mechanism by assessing several practical metrics. The results show that the proposed LQM auction outperforms state-of-the-art algorithms for resource allocation.
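
    The paper's hierarchical LQM mechanism is not spelled out in the abstract, so the following is only a toy, single-round illustration of the underlying idea of quality-weighted auctions: each robot's bid for a task is discounted by its link quality, and the task goes to the highest effective bidder.

        from typing import Dict, List, Tuple

        def allocate_tasks(bids: Dict[str, Dict[str, float]],
                           link_quality: Dict[str, float]) -> List[Tuple[str, str]]:
            """Assign each task to the robot with the highest link-quality-weighted bid.
            This is an illustrative simplification, not the paper's LQM auction."""
            tasks = {t for robot_bids in bids.values() for t in robot_bids}
            assignments = []
            for task in sorted(tasks):
                best_robot, best_value = None, float("-inf")
                for robot, robot_bids in bids.items():
                    if task not in robot_bids:
                        continue
                    effective = robot_bids[task] * link_quality.get(robot, 0.0)
                    if effective > best_value:
                        best_robot, best_value = robot, effective
                assignments.append((task, best_robot))
            return assignments

        # robot_b bids more for t1 but has the weaker link, so robot_a wins t1.
        bids = {"robot_a": {"t1": 0.8, "t2": 0.4}, "robot_b": {"t1": 0.9, "t2": 0.7}}
        link_quality = {"robot_a": 0.9, "robot_b": 0.6}
        print(allocate_tasks(bids, link_quality))  # [('t1', 'robot_a'), ('t2', 'robot_b')]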

  18. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, the granting of idle resources can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups, we show the possible gain of this approach and analyze the dynamics of workload-adaptive reconfiguration behavior.
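
    The two exchange policies evaluated in the paper are not detailed in the abstract; as a hedged sketch of the general idea, the function below applies a simple threshold rule for deciding, per scheduling interval, whether a site should lease remote nodes, grant idle ones, or keep its current configuration.

        def delegation_decision(queued_jobs: int, local_nodes: int, busy_nodes: int,
                                lease_threshold: float = 1.5,
                                grant_threshold: float = 0.3) -> str:
            """Threshold policy (an assumption, not the paper's exact rules):
            lease extra nodes when the backlog per local node is high, offer idle
            nodes when utilization is low, otherwise keep the configuration."""
            utilization = busy_nodes / local_nodes
            backlog_per_node = queued_jobs / local_nodes
            if backlog_per_node > lease_threshold:
                return "lease-from-partner"
            if utilization < grant_threshold:
                return "grant-to-partner"
            return "keep"

        print(delegation_decision(queued_jobs=400, local_nodes=100, busy_nodes=95))  # lease-from-partner
        print(delegation_decision(queued_jobs=5, local_nodes=100, busy_nodes=20))    # grant-to-partner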

  19. Fuzzy inductive reasoning: a consolidated approach to data-driven construction of complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Nebot, Àngela; Mugica, Francisco

    2012-10-01

    Fuzzy inductive reasoning (FIR) is a modelling and simulation methodology derived from the General Systems Problem Solver. It compares favourably with other soft computing methodologies, such as neural networks, genetic or neuro-fuzzy systems, and with hard computing methodologies, such as AR, ARIMA, or NARMAX, when it is used to predict future behaviour of different kinds of systems. This paper contains an overview of the FIR methodology, its historical background, and its evolution.

  20. Retention in a Computer-based Outreach Intervention For Chronically Ill Rural Women

    PubMed Central

    Weinert, Clarann; Cudney, Shirley; Hill, Wade G.

    2009-01-01

    The study's purpose was to examine retention factors in a computer intervention with 158 chronically ill rural women. After a 22-week intervention, 18.9 percent of the women had dropped out. A Cox regression survival analysis was performed to assess the effects of selected covariates on retention. Reasons for dropping out were tallied and categorized. Major reasons for dropping out were lack of time, decline in health status, and non-participation in study activities. Four covariates predicted survival time: level of computer skills, marital status, work outside the home, and impact of social events on participants' lives. Retention-enhancing strategies are suggested for implementation. PMID:18226760
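
    For readers unfamiliar with the method, the sketch below fits a Cox proportional-hazards model with the lifelines Python library on invented data whose column names merely mirror the covariates named in the abstract; it is not the study's dataset or code.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Invented records: weeks retained, dropout indicator, and covariates.
        df = pd.DataFrame({
            "weeks_retained":      [22, 10, 22, 6, 18, 22, 14, 22, 8, 22],
            "dropped_out":         [0, 1, 0, 1, 1, 0, 1, 0, 1, 0],
            "computer_skill":      [3, 1, 4, 2, 2, 5, 3, 1, 4, 2],   # self-rated 1-5
            "married":             [1, 0, 1, 0, 1, 0, 1, 1, 0, 0],
            "works_outside_home":  [0, 1, 0, 1, 0, 0, 1, 1, 1, 0],
            "social_event_impact": [2, 4, 1, 5, 3, 1, 2, 4, 3, 2],   # 1 = low, 5 = high
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="weeks_retained", event_col="dropped_out")
        cph.print_summary()   # hazard ratios indicate each covariate's effect on dropout risk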
