Science.gov

Sample records for based grid job

  1. A grid job monitoring system

    SciTech Connect

    Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir (INFN, Pisa; Scuola Normale Superiore, Pisa)

    2010-01-01

    This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow in detail their jobs in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax powered web interface having a look-and-feel and control similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509 authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VO) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.
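
    A rough sketch of the sensor idea described above: a CE/worker-node sensor periodically polls the batch system and pushes job states into the site database that the web-services layer reads. The batch command output format and the database schema below are invented for illustration; the framework's actual sensors support LSF, Condor and PBS-like systems.

      import sqlite3
      import subprocess
      import time

      def poll_batch_system():
          """Yield (job_id, user, state) tuples from the batch system.
          Parses a hypothetical 'qstat -a'-style plain-text listing."""
          out = subprocess.run(["qstat", "-a"], capture_output=True, text=True).stdout
          for line in out.splitlines()[2:]:          # skip the header lines
              fields = line.split()
              if len(fields) >= 3:
                  yield fields[0], fields[1], fields[2]

      def sensor_loop(db_path, period_s=60):
          """Update the monitoring database at a fixed polling interval."""
          conn = sqlite3.connect(db_path)
          conn.execute("CREATE TABLE IF NOT EXISTS jobs "
                       "(job_id TEXT PRIMARY KEY, user TEXT, state TEXT, updated REAL)")
          while True:
              for job_id, user, state in poll_batch_system():
                  conn.execute("INSERT OR REPLACE INTO jobs VALUES (?,?,?,?)",
                               (job_id, user, state, time.time()))
              conn.commit()
              time.sleep(period_s)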

  2. A Grid job monitoring system

    NASA Astrophysics Data System (ADS)

    Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir

    2010-04-01

    This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow in detail their jobs in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax powered web interface having a look-and-feel and control similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509 authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VO) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.

  3. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
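
    As a back-of-the-envelope illustration of the migration policy inputs listed above (system availability and performance, inter-site bandwidth, and per-job data volume), the sketch below picks the site with the lowest estimated completion time. All numbers and field names are invented; the paper's actual algorithms are richer than this.

      def transfer_time(data_bytes, bandwidth_bps):
          """Seconds needed to move the job's input and output data."""
          return data_bytes * 8 / bandwidth_bps

      def best_site(job, sites):
          """Pick the site minimizing estimated completion time."""
          def completion(site):
              move = 0.0 if site["local"] else transfer_time(job["data_bytes"], site["bandwidth_bps"])
              return site["queue_wait_s"] + move + job["work_s"] / site["speed_factor"]
          return min(sites, key=completion)

      job = {"data_bytes": 50e9, "work_s": 3600.0}
      sites = [
          {"name": "local",  "local": True,  "queue_wait_s": 7200.0, "bandwidth_bps": 0.0, "speed_factor": 1.0},
          {"name": "remote", "local": False, "queue_wait_s": 600.0,  "bandwidth_bps": 1e9, "speed_factor": 1.5},
      ]
      print(best_site(job, sites)["name"])  # "remote": 600 s wait + 400 s transfer beats a 7200 s local wait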

  4. Job scheduling in a heterogeneous grid environment

    SciTech Connect

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Smith, Warren

    2004-02-11

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.

  5. Mediated definite delegation - Certified Grid jobs in ALICE and beyond

    NASA Astrophysics Data System (ADS)

    Schreiner, Steffen; Grigoras, Costin; Litmaath, Maarten; Betev, Latchezar; Buchmann, Johannes

    2012-12-01

    Grid computing infrastructures need to provide traceability and accounting of their users’ activity and protection against misuse and privilege escalation, where the delegation of privileges in the course of a job submission is a key concern. This work describes an improved handling of Multi-user Grid Jobs in the ALICE Grid Services. A security analysis of the ALICE Grid job model is presented with derived security objectives, followed by a discussion of existing approaches to unrestricted delegation based on X.509 proxy certificates and the Grid middleware gLExec. Unrestricted delegation has severe security consequences and limitations, most importantly allowing for identity theft and forgery of jobs and data. These limitations are discussed and formulated, both in general and with respect to an adoption in line with Multi-user Grid Jobs. A new general model of mediated definite delegation is developed, allowing a broker to dynamically process and assign Grid jobs to agents while providing strong accountability and long-term traceability. A prototype implementation allowing for fully certified Grid jobs is presented as well as a potential interaction with gLExec. The achieved improvements regarding system security, malicious job exploitation, identity protection, and accountability are emphasized, including a discussion of non-repudiation in the face of malicious Grid jobs.

  6. Real Time Monitor of Grid job executions

    NASA Astrophysics Data System (ADS)

    Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.

    2010-04-01

    In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally on a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job related information and storing it in a local database. Job related data includes not only job state (i.e. Scheduled, Waiting, Running or Done) along with timing information but also other attributes such as Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the clients, removing the bottleneck caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java based client, with live job data either overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
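
    The enquirer step described above, reading job records from the RTM database once a minute and publishing them as XML for the web server, might look roughly like the following sketch (the table layout and field names are guesses, not the actual RTM schema):

      import sqlite3
      import xml.etree.ElementTree as ET

      def export_jobs(db_path, xml_path):
          """Dump current job states to an XML file for the 2D/3D clients."""
          conn = sqlite3.connect(db_path)
          rows = conn.execute("SELECT job_id, state, vo, ce_queue, updated FROM jobs")
          root = ET.Element("jobs")
          for job_id, state, vo, ce_queue, updated in rows:
              job = ET.SubElement(root, "job", id=str(job_id))
              ET.SubElement(job, "state").text = state            # Scheduled/Waiting/Running/Done
              ET.SubElement(job, "vo").text = vo
              ET.SubElement(job, "ce_queue").text = ce_queue or "unknown"
              ET.SubElement(job, "updated").text = str(updated)
          ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)
          conn.close()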

  7. Grid workflow job execution service 'Pilot'

    NASA Astrophysics Data System (ADS)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-12-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal of the service is to automate computations with multiple stages, since these can be expressed as simple workflows. Each job is a directed acyclic graph of tasks, and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for task execution are selected by the Pilot service from the set of available resources which match the specific requirements from the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on REST concepts and provides a simple API through authenticated HTTPS. This service is deployed and used in production in the Russian national grid project GridNNN.

  8. Pilot job accounting and auditing in Open Science Grid

    SciTech Connect

    Sfiligoi, Igor; Green, Chris; Quinn, Greg; Thain, Greg (Wisconsin U., Madison)

    2008-06-01

    The Grid accounting and auditing mechanisms were designed under the assumption that users would submit their jobs directly to the Grid gatekeepers. However, many groups are starting to use pilot-based systems, where users submit jobs to a centralized queue, from which they are subsequently transferred to Grid resources by the pilot infrastructure. While this approach greatly improves the user experience, it does disrupt the established accounting and auditing procedures. Open Science Grid deploys gLExec on the worker nodes to keep the pilot-related accounting and auditing information and centralizes the accounting collection with GRATIA.

  9. Grid Service for User-Centric Job

    SciTech Connect

    Lauret, Jerome

    2009-07-31

    The User Centric Monitoring (UCM) project was aimed at developing a toolkit that provides the Virtual Organization (VO) with tools to build systems that serve a rich set of intuitive job and application monitoring information to the VO's scientists so that they can be more productive. The tools help collect and serve the status and error information through a Web interface. The proposed UCM toolkit is composed of a set of library functions, a database schema, and a Web portal that will collect and filter available job monitoring information from various resources and present it to users in a user-centric view rather than an administrative-centric point of view. The goal is to create a set of tools that can be used to augment grid job scheduling systems, meta-schedulers, applications, and script sets in order to provide the UCM information. The system provides various levels of an application programming interface that is useful throughout the Grid environment and at the application level for logging messages, which are combined with the other user-centric monitoring information in an abstracted “data store”. A planned monitoring portal will also dynamically present the information to users in their web browser in a secure manner, and is easily integrated into any JSR-compliant portal deployment that a VO might employ. The UCM is meant to be flexible and modular in the ways it can be adopted, giving the VO many choices to build a solution that works for them, with special attention to the smaller VOs that do not have the resources to implement home-grown solutions.
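
    As a rough illustration of the logging idea, application-level messages merged into an abstracted data store keyed by job and user, consider this sketch (the function name and schema are invented for the example; they are not the UCM API):

      import sqlite3
      import time

      def ucm_log(store, job_id, user, level, message):
          """Record one application message in the shared monitoring store."""
          store.execute(
              "INSERT INTO messages (ts, job_id, user, level, message) VALUES (?,?,?,?,?)",
              (time.time(), job_id, user, level, message),
          )
          store.commit()

      store = sqlite3.connect("ucm.db")
      store.execute("CREATE TABLE IF NOT EXISTS messages "
                    "(ts REAL, job_id TEXT, user TEXT, level TEXT, message TEXT)")
      ucm_log(store, "job-42", "alice", "INFO", "stage-in complete")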

  10. Minimizing draining waste through extending the lifetime of pilot jobs in Grid environments

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Martin, T.; Bockelman, B. P.; Bradley, D. C.; Würthwein, F.

    2014-06-01

    The computing landscape is moving at an accelerated pace to many-core computing. Nowadays, it is not unusual to get 32 cores on a single physical node. As a consequence, there is increased pressure in the pilot systems domain to move away from purely single-core scheduling and allow multi-core jobs as well. In order to allow for a gradual transition from single-core to multi-core user jobs, it is envisioned that pilot jobs will have to handle both kinds of user jobs at the same time, by requesting several cores at a time from Grid providers and then partitioning them between the user jobs at runtime. Unfortunately, the current Grid ecosystem only allows for relatively short pilot job lifetimes, requiring frequent draining, with the attendant waste of compute resources due to the varying lifetimes of the user jobs. Significantly extending the lifetime of pilot jobs is thus highly desirable, but must come without any adverse effects for the Grid resource providers. In this paper we present a mechanism, based on communication between the pilot jobs and the Grid provider, that allows pilot jobs to run for extended periods of time when there are available resources, but also allows the Grid provider to reclaim the resources in a short amount of time when needed. We also present the experience of running a prototype system using the above mechanism on a few US-based Grid sites.
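
    Stripped to its core, the mechanism amounts to a pilot that keeps accepting user jobs until the provider signals that it wants the slot back. In the sketch below the signal is just a file on disk; the actual mechanism in the paper is communication between the pilot and the Grid provider.

      import os
      import time

      RETIRE_FLAG = "/tmp/retire_pilot"   # stand-in for the provider's reclaim signal

      def run_pilot(next_user_job):
          """Run user jobs back to back until asked to drain."""
          while not os.path.exists(RETIRE_FLAG):
              job = next_user_job()
              if job is None:
                  time.sleep(30)          # nothing queued; idle briefly and re-check
                  continue
              job()                       # run one user job to completion
          print("retire requested: draining and releasing the slot")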

  11. Jobs masonry in LHCb with elastic Grid Jobs

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, Ph.

    2015-12-01

    In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit implemented by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the jobs’ execution. In many modern interwares, the jobs are actually executed by pilot jobs, which can use the whole available time to run multiple consecutive jobs. If at some point the time remaining in a pilot is too short for the execution of any job, the pilot has to be released, even though that time could have been used efficiently by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit the computing capabilities available to a pilot, even on resources with limited time capabilities, by adding elasticity to production Monte Carlo (MC) simulation jobs. With our approach, independently of the time available, LHCbDIRAC will always have the possibility to execute an MC job, whose length will be adapted to the available amount of time: therefore the same job, running on different computing resources with different time limits, will produce different amounts of events. The decision on the number of events to be produced is made just in time at the start of the job, when the capabilities of the resource are known. In order to know how many events an MC job will be instructed to produce, LHCbDIRAC simply requires three values: the CPU-work per event for that type of job, the power of the machine it is running on, and the time left for the job before being killed. Knowing these values, we can estimate the number of events the job will be able to simulate with the available CPU time. This paper will demonstrate that, using this simple but effective solution, LHCb manages to make a more efficient use of
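
    The just-in-time sizing rule reduces to one line of arithmetic: the events to produce are the available CPU work (machine power times remaining time, minus a safety margin) divided by the CPU work per event. A sketch, with invented names and units (LHCbDIRAC's actual bookkeeping of these quantities is more involved):

      def events_to_produce(cpu_work_per_event, machine_power, seconds_left, safety=0.9):
          """Number of MC events that fit in the remaining queue time.

          cpu_work_per_event: normalized CPU work per event (e.g. HS06.s)
          machine_power:      normalized speed of this worker (e.g. HS06)
          seconds_left:       wall-clock seconds before the job would be killed
          safety:             margin so the job finishes before the limit
          """
          available_work = machine_power * seconds_left * safety
          return max(0, int(available_work // cpu_work_per_event))

      # The same job lands on slots with different time limits and
      # is instructed to produce different numbers of events:
      print(events_to_produce(250.0, machine_power=10.0, seconds_left=36000))  # 1296
      print(events_to_produce(250.0, machine_power=10.0, seconds_left=7200))   # 259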

  12. Smart Grid Cybersecurity: Job Performance Model Report

    SciTech Connect

    O'Neil, Lori Ross; Assante, Michael; Tobey, David

    2012-08-01

    This is the project report to DOE OE-30 for the completion of Phase 1 of a 3-phase project. This report outlines the work done to develop a smart grid cybersecurity certification. This work is being done with the subcontractor NBISE.

  13. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.

  14. Data location-aware job scheduling in the grid. Application to the GridWay metascheduler

    NASA Astrophysics Data System (ADS)

    Delgado Peris, Antonio; Hernandez, Jose; Huedo, Eduardo; Llorente, Ignacio M.

    2010-04-01

    Grid infrastructures constitute nowadays the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have been proposed for the scheduling of grid jobs in the context of very data-intensive applications. We indicate some of the practical problems that such models will face and describe what we consider essential characteristics of an optimum scheduling system: the aim to minimise not only job turnaround time but also data replication, flexibility to support different virtual organisation requirements and the capability to coordinate the tasks of data placement and job allocation while keeping their execution decoupled. These ideas have guided the development of an enhanced prototype for GridWay, a general purpose metascheduler, part of the Globus Toolkit and a member of EGEE's RESPECT program. GridWay's current scheduling algorithm is unaware of data location. Our prototype makes it possible for job requests to express data needs not only as absolute requirements but also as functions for resource ranking. As our tests show, this makes it more flexible than currently used resource brokers in implementing different data-aware scheduling algorithms.
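
    The distinction drawn above, data needs as hard requirements versus as a ranking function, can be made concrete with a toy broker: ineligible resources are filtered out first, and the survivors are ordered by a score that mixes data locality with other terms. Attribute names and the weighting are invented for the example.

      def rank_resources(job, resources, locality_weight=10.0):
          """Filter by hard data requirements, then rank by locality and free slots."""
          def eligible(r):
              return r["free_slots"] > 0 and job["required_files"] <= r["files"]
          def score(r):
              local = len(job["preferred_files"] & r["files"]) / max(1, len(job["preferred_files"]))
              return locality_weight * local + r["free_slots"]
          return sorted(filter(eligible, resources), key=score, reverse=True)

      resources = [
          {"name": "ce1", "free_slots": 5,  "files": {"a", "b"}},
          {"name": "ce2", "free_slots": 50, "files": {"a"}},
      ]
      job = {"required_files": {"a"}, "preferred_files": {"a", "b"}}
      print([r["name"] for r in rank_resources(job, resources)])  # ['ce2', 'ce1']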

  15. Grid-based Visualization Framework

    NASA Astrophysics Data System (ADS)

    Thiebaux, M.; Tangmunarunkit, H.; Kesselman, C.

    2003-12-01

    Advances in science and engineering have put high demands on tools for high-performance large-scale visual data exploration and analysis. For example, earthquake scientists can now study earthquake phenomena from first principle physics-based simulations. These simulations can generate large amounts of data, possibly with high spatial resolution and long time series. Single-system visualization software running on commodity machines cannot scale up to the large amounts of data generated by these simulations. To address this problem, we propose a flexible and extensible Grid-based visualization framework for time-critical, interactively controlled visual browsing of spatially and temporally large datasets in a Grid environment. Our framework leverages Grid resources for scalable computation and data storage to maintain performance and interactivity with large visualization jobs. Our framework utilizes Globus Toolkit 2.4 components for security (i.e., GSI), resource allocation and management (i.e., DUROC, GRAM) and communication (i.e., Globus-IO) to couple commodity desktops with remote, scalable storage and computational resources in a Grid for interactive data exploration. There are two major components in this framework---Grid Data Transport (GDT) and the Grid Visualization Utility (GVU). GDT provides libraries for performing parallel data filtering and parallel data exchange among Grid resources. GDT allows arbitrary data filtering to be integrated into the system. It also facilitates multi-tiered pipeline topology construction of compute resources and displays. In addition to scientific visualization applications, GDT can be used to support other applications that require parallel processing and parallel transfer of partially ordered independent files, such as file-set transfer. On top of GDT, we have developed the Grid Visualization Utility (GVU), which is designed to assist visualization dataset management, including file formatting, data transport and automatic

  16. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

    The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computations from scratch after a resource failure has occurred, to satisfy the user’s Quality of Service (QoS) requirement. Job Scheduling with Fault Tolerance in Grid Computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of the resource failure rate, as well as a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by periodically saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show that there is an improvement in the user’s QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated in it. The performance evaluation of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075
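
    One way to read the proposed selection rule: the usual pheromone-times-heuristic ACO probability is discounted by each resource's observed failure rate, so reliable resources attract more jobs, while checkpointing bounds the work lost when a failure does occur. The sketch below shows only the discounted roulette-wheel selection, with an invented weighting (the paper's exact formula is not reproduced here).

      import random

      def pick_resource(resources, alpha=1.0, beta=2.0):
          """ACO-style roulette selection, discounted by observed failure rate."""
          weights = [(r["pheromone"] ** alpha)
                     * ((1.0 / r["expected_time"]) ** beta)
                     * (1.0 - r["failure_rate"])          # failure-rate discount
                     for r in resources]
          pick, acc = random.uniform(0.0, sum(weights)), 0.0
          for r, w in zip(resources, weights):
              acc += w
              if pick <= acc:
                  return r
          return resources[-1]

      pools = [
          {"name": "r1", "pheromone": 1.0, "expected_time": 100.0, "failure_rate": 0.60},
          {"name": "r2", "pheromone": 1.0, "expected_time": 100.0, "failure_rate": 0.05},
      ]
      print(pick_resource(pools)["name"])  # "r2" about 70% of the time: same trail, far fewer failures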

  17. Multicore job scheduling in the Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.

    2015-12-01

    After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originating from the scheduling mechanisms, and without imposing unnecessary complexities in the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments and the knowledge gained from scale tests of the different proposed job submission strategies.

  18. Exploring virtualisation tools with a new virtualisation provisioning method to test dynamic grid environments for ALICE grid jobs over ARC grid middleware

    NASA Astrophysics Data System (ADS)

    Wagner, B.; Kileng, B.; Alice Collaboration

    2014-06-01

    The Nordic Tier-1 centre for the LHC is distributed over several computing centres. It uses ARC as the internal computing grid middleware. ALICE uses its own grid middleware, AliEn, to distribute jobs and the necessary software application stack. To make use of most of the AliEn infrastructure and software deployment methods for running ALICE grid jobs on ARC, we are investigating different possible virtualisation technologies. For this, a testbed and a possible framework for bridging different middleware systems are under development. They allow us to test a variety of virtualisation methods and software deployment technologies in the form of different virtual machines.

  19. Grid-based HPC astrophysical applications at INAF Catania.

    NASA Astrophysics Data System (ADS)

    Costa, A.; Calanducci, A.; Becciani, U.; Capuzzo Dolcetta, R.

    The research activity on the grid area at INAF Catania has been devoted to two main goals: the integration of a multiprocessor supercomputer (IBM SP4) within the INFN-GRID middleware and the development of a web portal, Astrocomp-G, for the submission of astrophysical jobs into the grid infrastructure. Most of the current grid infrastructure is based on commodity hardware, i.e. i386-architecture machines (Intel Celeron, Pentium III, IV, AMD Duron, Athlon) using the Linux RedHat OS. We were the first institute to integrate a totally different machine, an IBM SP with RISC architecture and AIX OS, as a powerful Worker Node inside a grid infrastructure. We identified and ported to AIX OS the grid components dealing with job monitoring and execution and properly tuned the Computing Element to deliver jobs into this special Worker Node. For testing purposes we used MARA, an astrophysical application for the analysis of light curve sequences. Astrocomp-G is a user-friendly front end to our grid site. Users who want to submit the astrophysical applications already available in the portal need to own a valid personal X509 certificate in addition to a username and password released by the grid portal webmaster. The personal X509 certificate is a prerequisite for the creation of a short or long-term proxy certificate that allows the grid infrastructure services to identify clearly whether the owner of the job has the permissions to use resources and data. X509 and proxy certificates are part of GSI (Grid Security Infrastructure), a standard security tool adopted by all major grid sites around the world.

  20. Remote Job Testing for the Neutron Science TeraGrid Gateway

    SciTech Connect

    Lynch, Vickie E; Cobb, John W; Miller, Stephen D; Reuter, Michael A; Smith, Bradford C

    2009-01-01

    Remote job execution gives neutron science facilities access to high performance computing such as the TeraGrid. A scientific community can use community software with a community certificate and account through a common interface of a portal. Results show this approach is successful, but with more testing and problem solving, we expect remote job executions to become more reliable.

  1. The Grid[Way] Job Template Manager, a tool for parameter sweeping

    NASA Astrophysics Data System (ADS)

    Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.

    2011-04-01

    Parameter sweeping is a widely used algorithmic technique in computational science. It is specially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and job template automatic indexation. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid reducing the required amount of time and effort.
    Program summary
    Program title: Grid[Way] Job Template Manager (version 1.0)
    Catalogue identifier: AEIE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Apache license 2.0
    No. of lines in distributed program, including test data, etc.: 3545
    No. of bytes in distributed program, including test data, etc.: 126 879
    Distribution format: tar.gz
    Programming language: Perl 5.8.5 and above
    Computer: Any (tested on PC x86 and x86_64)
    Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, centOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
    RAM: 10 MB
    Classification: 6.5
    External routines: The GridWay Metascheduler [1]
    Nature of problem: To parameterize and manage an application running on a grid or cluster.
    Solution method: Generation of job templates as a cross product of

  2. Impact of admission and cache replacement policies on response times of jobs on data grids

    SciTech Connect

    Otoo, Ekow J.; Rotem, Doron; Shoshani, Arie

    2003-04-21

    Caching techniques have been used widely to improve the performance gaps of storage hierarchies in computing systems. Little is known about the impact of policies on the response times of jobs that access and process very large files in data grids, particularly when data and computations on the data have to be co-located on the same host. In data-intensive applications that access large data files over a wide-area network environment, such as data grids, the combination of policies for job servicing (or scheduling), caching and cache replacement can significantly impact the performance of grid jobs. We present some preliminary results of a simulation study that combines an admission policy with a cache replacement policy when servicing jobs submitted to a storage resource manager. The results show that, in comparison to a first-come-first-served policy, the response times of jobs are significantly improved, for practical limits of disk cache sizes, when the jobs that are backlogged to access the same files are taken into consideration in scheduling the next file to be retrieved into the disk cache. Not only are the response times of jobs improved, but also the metric measures for caching policies, such as the hit ratio and the average cost per retrieval, are improved irrespective of the cache replacement policy.
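
    A toy version of the admission idea, for intuition: instead of staging files strictly first-come-first-served, stage next the uncached file with the largest backlog of waiting jobs. Field names are invented; the study's simulation model is considerably more detailed.

      from collections import Counter

      def next_file_to_stage(pending_jobs, cached_files):
          """Pick the uncached file that unblocks the most waiting jobs."""
          demand = Counter(j["file"] for j in pending_jobs if j["file"] not in cached_files)
          if not demand:
              return None
          best_file, _ = demand.most_common(1)[0]
          return best_file

      jobs = [{"file": "f1"}, {"file": "f2"}, {"file": "f2"}, {"file": "f3"}]
      print(next_file_to_stage(jobs, cached_files={"f3"}))  # "f2": two jobs are waiting on it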

  3. A modified ant colony optimization for the grid jobs scheduling problem with QoS requirements

    NASA Astrophysics Data System (ADS)

    Pu, Xun; Lu, XianLiang

    2011-10-01

    Job scheduling with customers' quality of service (QoS) requirements is challenging in a grid environment. In this paper, we present a modified ant colony optimization (MACO) for the job scheduling problem in the grid. Instead of using the conventional construction approach to build feasible schedules, the proposed algorithm employs a decomposition method to satisfy the customer's deadline and cost requirements. In addition, a new mechanism for updating the state of service instances is embedded to improve the convergence of MACO. Experiments demonstrate the effectiveness of the proposed algorithm.

  4. An ACO Approach to Job Scheduling in Grid Environment

    NASA Astrophysics Data System (ADS)

    Kant, Ajay; Sharma, Arnesh; Agarwal, Sanchit; Chandra, Satish

    Due to recent advances in wide-area network technologies and the low cost of computing resources, grid computing has become an active research area. The efficiency of a grid environment largely depends on the scheduling method it follows. This paper proposes a framework for grid scheduling using dynamic information and an ant colony optimization algorithm to improve the scheduling decision. A notion of two types of ants, 'Red Ants' and 'Black Ants', is introduced, their purpose is explained, and algorithms are developed for optimizing resource utilization. The proposed method performs optimization at two levels and is found to be more efficient than existing methods.

  5. Using ssh and sshfs to virtualize Grid job submission with RCondor

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Dost, J. M.

    2014-06-01

    The HTCondor-based glideinWMS has become the product of choice for exploiting Grid resources for many communities. Unfortunately, its default operational model expects users to log into a machine running an HTCondor schedd before being able to submit their jobs. Many users would instead prefer to use their local workstation for everything. A product that addresses this problem is RCondor, a module delivered with the HTCondor package. RCondor provides command line tools that simulate the behavior of a local HTCondor installation, while using ssh under the hood to execute commands on the remote node instead. RCondor also interfaces with sshfs, virtualizing access to remote files, thus giving the user the impression of a truly local HTCondor installation. This paper presents a detailed description of RCondor, as well as a comparison with the other methods currently available for accessing remote HTCondor schedds.
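
    The pattern RCondor builds on can be imitated in a few lines: run each HTCondor command-line tool on the submit host over ssh, and mount the remote working directory locally with sshfs. The sketch below is not RCondor itself, just the underlying idea; the host name and mount point are placeholders.

      import subprocess

      SUBMIT_HOST = "schedd.example.org"   # placeholder for the remote HTCondor submit node

      def remote_condor(cmd, *args):
          """Run an HTCondor command (e.g. condor_q, condor_submit) on the submit host."""
          result = subprocess.run(["ssh", SUBMIT_HOST, cmd, *args],
                                  capture_output=True, text=True, check=True)
          return result.stdout

      def mount_remote_home(local_mountpoint):
          """Make the remote job files appear local, as RCondor does via sshfs."""
          subprocess.run(["sshfs", f"{SUBMIT_HOST}:.", local_mountpoint], check=True)

      # print(remote_condor("condor_q"))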

  6. Grid-based Meteorological and Crisis Applications

    NASA Astrophysics Data System (ADS)

    Hluchy, Ladislav; Bartok, Juraj; Tran, Viet; Lucny, Andrej; Gazak, Martin

    2010-05-01

    forecast model is a subject of the parameterization and parameter optimization before its real deployment. The parameter optimization requires tens of evaluations of the parameterized model accuracy, and each evaluation of the model parameters requires re-running the hundreds of meteorological situations collected over the years and comparing the model output with the observed data. The architecture and inherent heterogeneity of both examples, their computational complexity and their interfaces to other systems and services make them well suited for decomposition into a set of web and grid services. Such decomposition has been performed within several projects in which we participated or participate, in cooperation with academia, namely int.eu.grid (dispersion model deployed as a pilot application to an interactive grid), SEMCO-WS (semantic composition of web and grid services), DMM (development of a significant meteorological phenomena prediction system based on data mining), VEGA 2009-2011 and EGEE III. We present useful and practical applications of high performance computing technologies. The use of grid technology provides access to much higher computation power, not only for modeling and simulation but also for model parameterization and validation. This results in optimized model parameters and more accurate simulation outputs. Taking into account that the simulations are used for aviation, road traffic and crisis management, even a small improvement in prediction accuracy may result in a significant improvement of safety as well as a cost reduction. We found grid computing useful for our applications. We are satisfied with this technology, and our experience encourages us to extend its use. Within an ongoing project (DMM) we plan to include the processing of satellite images, which will increase our computational requirements very rapidly. We believe that thanks to grid computing we are able to handle the job almost in real time.

  7. Development of Job-Based Reading Tests

    DTIC Science & Technology

    1982-11-01

    representing the four types of Army job reading tasks identified in prior research (Locating Job Information in an Index, in Tables and Graphs, ...) ... categories of Army job reading tasks established in prior research: Locating Job Information in an Index, in Tables and Graphs, and in Narrative Descriptions ... as the index of general reading ability. This decision was based on a known correlation of approximately 0.80 between FA and the Metropolitan Reading

  8. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it will be to accomplish its task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like the expected runtime, is defined beforehand by the Production Manager in the best case, and by fixed arbitrary values by default. In the case of LHCb's Workload Management System, no mechanisms are provided which automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. In the context of multicore jobs in particular, this presents a major problem, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the reduction of the overall memory footprint. Therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Following the features, a supervised learning algorithm based on history-based prediction is developed. The aim is to learn over time how jobs’ runtime and memory evolve under the influence of changes in experiment conditions and software versions. It will be shown that the estimation can be notably improved if experiment conditions are taken into account.
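
    A deliberately simple stand-in for the history-based predictor: group finished jobs by the features the abstract mentions (for example, application version and experiment conditions) and predict the mean runtime and memory of each group, updating as new jobs finish. The real study trains a proper supervised learner; this only shows the shape of the approach, with invented feature values.

      from collections import defaultdict

      class HistoryEstimator:
          """Predict job runtime/memory from the history of similar past jobs."""
          def __init__(self):
              self.history = defaultdict(list)   # feature tuple -> [(runtime_s, memory_mb), ...]

          def record(self, features, runtime_s, memory_mb):
              self.history[features].append((runtime_s, memory_mb))

          def predict(self, features, default=(3600.0, 2000.0)):
              past = self.history.get(features)
              if not past:
                  return default                 # no history yet: fall back to a fixed request
              n = len(past)
              return (sum(r for r, _ in past) / n, sum(m for _, m in past) / n)

      est = HistoryEstimator()
      est.record(("sim09", "2012-conditions"), runtime_s=5400.0, memory_mb=1800.0)
      est.record(("sim09", "2012-conditions"), runtime_s=6000.0, memory_mb=1900.0)
      print(est.predict(("sim09", "2012-conditions")))  # (5700.0, 1850.0)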

  9. Ganga: User-friendly Grid job submission and management tool for LHC and beyond

    NASA Astrophysics Data System (ADS)

    Vanderster, D. C.; Brochu, F.; Cowan, G.; Egede, U.; Elmsheuser, J.; Gaidoz, B.; Harrison, K.; Lee, H. C.; Liko, D.; Maier, A.; Mościcki, J. T.; Muraru, A.; Pajchel, K.; Reece, W.; Samset, B.; Slater, M.; Soroko, A.; Tan, C. L.; Williams, M.

    2010-04-01

    Ganga has been widely used for several years in ATLAS, LHCb and a handful of other communities. Ganga provides a simple yet powerful interface for submitting and managing jobs on a variety of computing backends. The tool helps users configure applications and keep track of their work. With the major release of version 5 in summer 2008, Ganga's main user-friendly features have been strengthened. Examples include a new configuration interface, enhanced support for job collections, bulk operations and easier access to subjobs. In addition to the traditional batch and Grid backends such as Condor, LSF, PBS and gLite/EDG, point-to-point job execution via ssh on remote machines is now supported. Ganga is used as an interactive job submission interface for end-users, and also as a job submission component for higher-level tools. For example, GangaRobot is used to perform automated, end-to-end testing of distributed data analysis. Ganga comes with an extensive test suite covering more than 350 test cases. The development model involves all active developers in the release management shifts, which is an important and novel approach for distributed software collaborations. Ganga 5 is a mature, stable and widely used tool with long-term support from the HEP community.

  10. MrGrid: A Portable Grid Based Molecular Replacement Pipeline

    PubMed Central

    Reboul, Cyril F.; Androulakis, Steve G.; Phan, Jennifer M. N.; Whisstock, James C.; Goscinski, Wojtek J.; Abramson, David; Buckle, Ashley M.

    2010-01-01

    Background The crystallographic determination of protein structures can be computationally demanding and for difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. But the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. However, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. In order to address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. Methodology/Principal Findings MrGrid is a portable web-based application written in Java/JSP and Ruby that takes advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed-up factor of 5.69. Conclusions MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success. PMID:20386612

  11. On the Optimization of GLite-Based Job Submission

    NASA Astrophysics Data System (ADS)

    Misurelli, Giuseppe; Palmieri, Francesco; Pardi, Silvio; Veronesi, Paolo

    2011-12-01

    A Grid is a very dynamic, complex and heterogeneous system, whose reliability can be adversely affected by several different factors, such as communication and hardware faults, middleware bugs or wrong configurations due to human errors. As the infrastructure scales, spanning a large number of sites, each hosting hundreds or thousands of hosts/resources, the occurrence of runtime faults following job submission becomes a very frequent phenomenon. Therefore, fault avoidance becomes a fundamental aim in modern Grids, since the dependability of individual resources, spread over widely distributed computing infrastructures and often used outside of their native organizational boundaries, cannot be guaranteed in any systematic way. Accordingly, we propose a simple job optimization solution based on a user-driven fault avoidance strategy. Such a strategy starts from the introduction within the grid information system of several on-line service-monitoring metrics that can be used as specific hints to the workload management system for driving resource discovery operations according to a fault-free resource-scheduling plan. This solution, whose main goal is to minimize the execution time by avoiding execution failures, proved to be very effective in increasing both the user-perceivable quality and the overall grid performance.

  12. Arc Length Based Grid Distribution For Surface and Volume Grids

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1996-01-01

    Techniques are presented for distributing grid points on parametric surfaces and in volumes according to a specified distribution of arc length. Interpolation techniques are introduced which permit a given distribution of grid points on the edges of a three-dimensional grid block to be propagated through the surface and volume grids. Examples demonstrate how these methods can be used to improve the quality of grids generated by transfinite interpolation.
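
    The basic operation, placing points along a curve according to a prescribed arc-length distribution, can be sketched numerically: accumulate chord lengths as an approximation of arc length, then invert that arc-length function by interpolation. The version below uses a uniform target distribution; the paper's methods also propagate edge distributions into surface and volume grids.

      import numpy as np

      def redistribute_by_arc_length(points, n_out):
          """Resample a polyline so output points are equally spaced in arc length."""
          seg = np.linalg.norm(np.diff(points, axis=0), axis=1)   # chord lengths
          s = np.concatenate([[0.0], np.cumsum(seg)])             # cumulative arc length
          s_new = np.linspace(0.0, s[-1], n_out)                  # uniform target spacing
          return np.column_stack([np.interp(s_new, s, points[:, k])
                                  for k in range(points.shape[1])])

      # Points bunched toward one end of a quarter circle, redistributed uniformly:
      t = np.linspace(0.0, np.pi / 2, 50) ** 2 / (np.pi / 2)
      curve = np.column_stack([np.cos(t), np.sin(t)])
      uniform = redistribute_by_arc_length(curve, 17)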

  13. Space-based Science Operations Grid Prototype

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Welch, Clara L.; Redman, Sandra

    2004-01-01

    Grid technology is an up-and-coming technology that enables widely disparate services to be offered to users economically, in an easy-to-use form, and on a scale not previously available. Under the Grid concept, disparate organizations, generally defined as "virtual organizations", can share services, i.e. discipline-specific computer applications required to accomplish their specific scientific and engineering goals and objectives. Grids are emerging as the new technology of the future. Grid technology has been enabled by the evolution of increasingly high speed networking; without it, Grid technology would not have emerged. NASA/Marshall Space Flight Center's (MSFC) Flight Projects Directorate, Ground Systems Department is developing a Space-based Science Operations Grid prototype to provide scientists and engineers with the tools necessary to operate space-based science payloads/experiments and for scientists to conduct public and educational outreach. In addition, Grid technology can provide new services not currently available to users. These services include mission voice and video, application sharing, telemetry management and display, payload and experiment commanding, data mining, high order data processing, discipline specific application sharing and data storage, all from a single grid portal. The Prototype will provide most of these services in a first-step demonstration of integrated Grid and space-based science operations technologies. It will initially be based on the International Space Station science operational services located at the Payload Operations Integration Center at MSFC, but can be applied to many NASA projects, including free-flying satellites and future projects. The Prototype will use the Internet2 Abilene Research and Education Network, currently a 10 Gb backbone network, to reach the University of Alabama in Huntsville and several other, as yet unidentified, Space Station based

  15. Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering

    SciTech Connect

    Kim, Dong Sik; Lee, Sanggyun

    2013-06-15

    Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted, especially for direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to the filters, which are used to suppress the grid artifacts, grids rotated with respect to the sampling direction are employed, and min-max optimization problems for finding optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for grid artifact reduction based on band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested on digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.
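
    In outline, homomorphic suppression works on the multiplicative image model the authors describe: take the logarithm so the grid pattern becomes additive, remove the grid's spatial frequency, and exponentiate back. A bare-bones numpy sketch with an invented notch position, assuming even image dimensions (the paper derives optimal grid frequencies and angles and uses band-stop as well as low-pass designs):

      import numpy as np

      def suppress_grid(image, notch_offset, notch_radius):
          """Homomorphic notch filtering: log -> zero the grid frequency -> exp."""
          log_img = np.log(np.maximum(image, 1e-6))        # multiplicative model -> additive
          spec = np.fft.fftshift(np.fft.fft2(log_img))
          h, w = image.shape
          yy, xx = np.mgrid[0:h, 0:w]
          cy, cx = h // 2 + notch_offset[0], w // 2 + notch_offset[1]
          for sy, sx in ((cy, cx), (h - cy, w - cx)):      # notch plus its conjugate mirror
              mask = (yy - sy) ** 2 + (xx - sx) ** 2 <= notch_radius ** 2
              spec[mask] = 0.0
          filtered = np.fft.ifft2(np.fft.ifftshift(spec)).real
          return np.exp(filtered)

      # img = a plain x-ray image as a 2-D float array, e.g. np.random.rand(256, 256) + 1.0
      # clean = suppress_grid(img, notch_offset=(0, 64), notch_radius=4)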

  16. Final Report for 'An Abstract Job Handling Grid Service for Dataset Analysis'

    SciTech Connect

    Alexander, David A.

    2005-07-11

    For Phase I of the Job Handling project, Tech-X has built a Grid service for processing analysis requests, as well as a Graphical User Interface (GUI) client that uses the service. The service is designed to generically support High-Energy Physics (HEP) experimental analysis tasks. It has an extensible, flexible, open architecture and language. The service uses the Solenoidal Tracker At RHIC (STAR) experiment as a working example. STAR is an experiment at the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory (BNL). STAR and other experiments at BNL generate multiple Petabytes of HEP data. The raw data is captured as millions of input files stored in a distributed data catalog. Potentially using thousands of files as input, analysis requests are submitted to a processing environment containing thousands of nodes. The Grid service provides a standard interface to the processing farm. It enables researchers to run large-scale, massively parallel analysis tasks, regardless of the computational resources available in their location.

  17. KARDIONET: telecardiology based on GRID technology.

    PubMed

    Sierdzinski, Janusz; Bala, Piotr; Rudowski, Robert; Grabowski, Marcin; Karpinski, Grzegorz; Kaczynski, Bartosz

    2009-01-01

    The telecardiological system Kardionet is being developed to support interventional cardiology. The main aim of the system is to collect specific, systematized patient data from distant medical centers and to organize it in the best possible way for quick diagnosis and choice of medical treatment. It is a distributed GRID-type system operating in the shortest achievable time. Computational GRID solutions, together with a distributed archival data GRID, support the creation, implementation and operation of software requiring considerable computational power. The Kardionet system, devoted to cardiology purposes, includes specially developed databases for multimodal data and metadata, including information on a patient and his/her medical examination results. As Kardionet uses modern technology and methods, we expect it to have a considerable impact on telemedicine development in Poland. The presented telecardiological system can provide a number of important gains for the national health care system if it is implemented nationwide.

  18. Space-based Operations Grid Prototype

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Welch, Clara L.

    2003-01-01

    The Space-based Operations Grid is intended to integrate the "high end" network services and compute resources that a remote payload investigator needs. This includes integrating and enhancing existing services, such as access to telemetry, payload commanding, payload planning and internet voice distribution, as well as adding services such as video conferencing, collaborative design, modeling or visualization, text messaging, application sharing, and access to existing compute or data grids. Grid technology addresses some of the greatest challenges and opportunities presented by current trends in technology, i.e. how to take advantage of ever-increasing bandwidth, how to manage virtual organizations and how to deal with the increasing threats to information technology security. We will discuss the pros and cons of using grid technology in space-based operations and share current plans for the prototype. It is hoped that early on the prototype can offer many of the existing as well as future services discussed above to cooperating International Space Station Principal Investigators, both nationally and internationally.

  20. Technology for a NASA Space-Based Science Operations Grid

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.

    2003-01-01

    This viewgraph representation presents an overview of a proposal to develop a space-based operations grid in support of space-based science experiments. The development of such a grid would provide a dynamic, secure and scalable architecture based on standards and next-generation reusable software and would enable greater science collaboration and productivity through the use of shared resources and distributed computing. The authors propose developing this concept for use on payload experiments carried aboard the International Space Station. Topics covered include: grid definitions, portals, grid development and coordination, grid technology and potential uses of such a grid.

  2. Grid based calibration of SWAT hydrological models

    NASA Astrophysics Data System (ADS)

    Gorgan, D.; Bacu, V.; Mihon, D.; Rodila, D.; Abbaspour, K.; Rouholahnejad, E.

    2012-07-01

    The calibration and execution of large hydrological models such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution, and huge input data, require not only long execution times but also substantial computational resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, hiding the complex control of processes and heterogeneous resources across the grid-based high-performance computing infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation covers the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models, and the results obtained, demonstrate the benefits brought by the grid's parallel and distributed environment as a processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  3. Cartesian-cell based grid generation and adaptive mesh refinement

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1993-01-01

    Viewgraphs on Cartesian-cell based grid generation and adaptive mesh refinement are presented. Topics covered include: grid generation; cell cutting; data structures; flow solver formulation; adaptive mesh refinement; and viscous flow.

  4. Grid-based platform for training in Earth Observation

    NASA Astrophysics Data System (ADS)

    Petcu, Dana; Zaharie, Daniela; Panica, Silviu; Frincu, Marc; Neagul, Marian; Gorgan, Dorian; Stefanut, Teodor

    2010-05-01

    found in [4]. The Workload Management System (WMS) provides two types of resource managers. The first one will be based on Condor HTC and use Condor as a job manager for task dispatching and working nodes (for development purposes), while the second one will use GT4 GRAM (for production purposes). The WMS main component, the Grid Task Dispatcher (GTD), is responsible for the interaction with other internal services such as the composition engine, in order to facilitate access to the processing platform. Its main responsibilities are to receive tasks from the workflow engine or directly from the user interface, to use a task description language (the ClassAd meta language in the case of Condor HTC) for job units, to submit and check the status of jobs inside the workload management system, and to retrieve job logs for debugging purposes. More details can be found in [4]. A particular component of the platform is eGLE, the eLearning environment. It provides the functionalities necessary to create the visual appearance of the lessons through the usage of visual containers like tools, patterns and templates. The teacher uses the platform for testing the already created lessons, as well as for developing new lesson resources, such as new images and workflows describing graph-based processing. The students execute the lessons or describe and experiment with new workflows or different data. The eGLE database includes several workflow-based lesson descriptions, teaching materials and lesson resources, selected satellite and spatial data. More details can be found in [5]. A first training event using the platform was organized in September 2009 during the 11th SYNASC symposium (links to the demos, testing interface, and exercises are available on the project site [1]). The eGLE component was presented at the 4th GPC conference in May 2009. Moreover, the functionality of the platform will be presented as a demo in April 2010 at the 5th EGEE User Forum. References: [1] GiSHEO consortium, Project site, http

  5. Design of a Grid Service-based Platform for In Silico Protein-Ligand Screenings

    PubMed Central

    Levesque, Marshall J.; Ichikawa, Kohei; Date, Susumu; Haga, Jason H.

    2009-01-01

    Grid computing offers the powerful alternative of sharing resources on a worldwide scale, across different institutions to run computationally intensive, scientific applications without the need for a centralized supercomputer. Much effort has been put into development of software that deploys legacy applications on a grid-based infrastructure and efficiently uses available resources. One field that can benefit greatly from the use of grid resources is that of drug discovery since molecular docking simulations are an integral part of the discovery process. In this paper, we present a scalable, reusable platform to choreograph large virtual screening experiments over a computational grid using the molecular docking simulation software DOCK. Software components are applied on multiple levels to create automated workflows consisting of input data delivery, job scheduling, status query, and collection of output to be displayed in a manageable fashion for further analysis. This was achieved using Opal OP to wrap the DOCK application as a grid service and PERL for data manipulation purposes, alleviating the requirement for extensive knowledge of grid infrastructure. With the platform in place, a screening of the ZINC 2,066,906 compound “druglike” subset database against an enzyme's catalytic site was successfully performed using the MPI version of DOCK 5.4 on the PRAGMA grid testbed. The screening required 11.56 days laboratory time and utilized 200 processors over 7 clusters. PMID:18771812

  6. Expected-Credibility-Based Job Scheduling for Reliable Volunteer Computing

    NASA Astrophysics Data System (ADS)

    Watanabe, Kan; Fukushi, Masaru; Horiguchi, Susumu

    This paper proposes an expected-credibility-based job scheduling method for volunteer computing (VC) systems with malicious participants who return erroneous results. Credibility-based voting is a promising approach to guaranteeing the computational correctness of VC systems. However, it relies on a simple round-robin job scheduling method that does not consider the jobs' order of execution, resulting in numerous unnecessary job allocations and performance degradation of VC systems. To improve the performance of VC systems, the proposed job scheduling method dynamically selects the job to execute next based on two novel metrics: the expected credibility and the expected number of results for each job. Simulations of VC systems show that the proposed method can improve performance by up to 11%; it always outperforms the original round-robin method irrespective of the value of unknown parameters such as the population and behavior of saboteurs.
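
    A minimal Python sketch of the scheduling idea (the job structure and the deliberately crude credibility estimate are illustrative assumptions, not the paper's exact voting formula): jobs closest to meeting the correctness threshold are dispatched first, instead of cycling round-robin.

        def expected_credibility(creds):
            """Crude stand-in for a voting-based estimate: probability that
            all collected results are correct, assuming independent workers
            with the given individual credibilities."""
            p = 1.0
            for c in creds:
                p *= c
            return p if creds else 0.0

        def pick_next_job(jobs, threshold=0.999):
            """Prefer the unfinished job with the highest expected
            credibility, i.e. the one needing the fewest extra results."""
            pending = [j for j in jobs if expected_credibility(j["creds"]) < threshold]
            if not pending:
                return None
            return max(pending, key=lambda j: expected_credibility(j["creds"]))

        jobs = [{"id": 0, "creds": [0.9, 0.95]},   # nearly verified: runs first
                {"id": 1, "creds": [0.6]},
                {"id": 2, "creds": []}]
        print(pick_next_job(jobs)["id"])           # -> 0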

  7. Spatial data grid based on CDN

    NASA Astrophysics Data System (ADS)

    Hu, XiaoGuang; Zhu, Xinyan; Li, Deren

    2008-12-01

    This paper first introduces the spatial data grid and CDN (Content Delivery Network) technology, and then explains the significance of integrating the grid with a CDN. On this basis, the paper proposes a method of constructing a spatial data grid system that uses a CDN to support massive spatial data online services. Finally, simulation results obtained with OPNET show that the proposed scheme can indeed improve system performance and reduce response time to a considerable extent.

  8. Experiences of engineering Grid-based medical software.

    PubMed

    Estrella, F; Hauer, T; McClatchey, R; Odeh, M; Rogulin, D; Solomonides, T

    2007-08-01

    Grid-based technologies are emerging as potential solutions for managing and collaborating distributed resources in the biomedical domain. Few examples exist, however, of successful implementations of Grid-enabled medical systems and even fewer have been deployed for evaluation in practice. The objective of this paper is to evaluate the use in clinical practice of a Grid-based imaging prototype and to establish directions for engineering future medical Grid developments and their subsequent deployment. The MammoGrid project has deployed a prototype system for clinicians using the Grid as its information infrastructure. To assist in the specification of the system requirements (and for the first time in healthgrid applications), use-case modelling has been carried out in close collaboration with clinicians and radiologists who had no prior experience of this modelling technique. A critical qualitative and, where possible, quantitative analysis of the MammoGrid prototype is presented, leading to a set of recommendations from the delivery of the first deployed Grid-based medical imaging application. We report critically on the application of software engineering techniques in the specification and implementation of the MammoGrid project and show that use-case modelling is a suitable vehicle for representing medical requirements and for communicating effectively with the clinical community. This paper also discusses the practical advantages and limitations of applying the Grid to real-life clinical applications and presents the consequent lessons learned. The work presented in this paper demonstrates that, given suitable commitment from collaborating radiologists, it is practical to deploy medical imaging analysis applications on the Grid in clinical practice, but that standardization in and stability of the Grid software is a necessary pre-requisite for successful healthgrids. The MammoGrid prototype has therefore paved the way for further advanced Grid-based deployments in the

  9. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    DOE PAGES

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-01-01

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  10. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    SciTech Connect

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-06-23

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  11. A Grid-based solution for management and analysis of microarrays in distributed experiments

    PubMed Central

    Porro, Ivan; Torterolo, Livia; Corradi, Luca; Fato, Marco; Papadimitropoulos, Adam; Scaglione, Silvia; Schenone, Andrea; Viti, Federica

    2007-01-01

    Several systems have been presented in the last years in order to manage the complexity of large microarray experiments. Although good results have been achieved, most systems tend to lack in one or more fields. A Grid-based approach may provide a shared, standardized and reliable solution for storage and analysis of biological data, in order to maximize the results of experimental efforts. A Grid framework has therefore been adopted due to the necessity of remotely accessing large amounts of distributed data as well as to scale computational performances for terabyte datasets. Two different biological studies have been planned in order to highlight the benefits that can emerge from our Grid-based platform. The described environment relies on storage services and computational services provided by the gLite Grid middleware. The Grid environment is also able to exploit the added value of metadata in order to let users better classify and search experiments. A state-of-the-art Grid portal has been implemented in order to hide the complexity of the framework from end users and to make them able to easily access available services and data. The functional architecture of the portal is described. As a first test of the system performances, a gene expression analysis has been performed on a dataset of Affymetrix GeneChip® Rat Expression Array RAE230A, from the ArrayExpress database. The sequence of analysis includes three steps: (i) group opening and image set uploading, (ii) normalization, and (iii) model based gene expression (based on PM/MM difference model). Two different Linux versions (sequential and parallel) of the dChip software have been developed to implement the analysis and have been tested on a cluster. From the results, it emerges that the parallelization of the analysis process and the execution of parallel jobs on distributed computational resources actually improve the performances. Moreover, the Grid environment has been tested both against the possibility of

  12. Cartesian based grid generation/adaptive mesh refinement

    NASA Technical Reports Server (NTRS)

    Coirier, William J.

    1992-01-01

    Grid adaptation has recently received attention in the computational fluid dynamics (CFD) community as a means to capture the salient features of a flowfield, either by moving the grid points of a structured mesh or by adding cells in an unstructured manner. An approach based on a background Cartesian mesh is investigated, from which the geometry is 'cut' out of the mesh. Once the mesh is obtained, a solution on this coarse grid is found that indicates which cells need to be refined. This process of refining/solving continues until the flow is grid-refined with respect to a user-specified global parameter (such as the drag coefficient). The advantages of this approach are twofold: the generation of the base grid is independent of the topology of the bodies or surfaces around/through which the flow is to be computed, and the resulting grid (in uncut regions) is highly isotropic, so that the truncation error is low. The flow solver (which, along with the grid generation, is still under development) uses a completely unstructured database and is a finite-volume, upwinding scheme. Current and future work will address generating Navier-Stokes-suitable grids by using locally aligned and normal face/cell refining. The attached plot shows a simple grid about two turbine blades.

  13. Feature combination analysis in smart grid based using SOM for Sudan national grid

    NASA Astrophysics Data System (ADS)

    Bohari, Z. H.; Yusof, M. A. M.; Jali, M. H.; Sulaima, M. F.; Nasir, M. N. M.

    2015-12-01

    In the investigation of power grid security, cascading failure under multi-contingency situations has been a challenge because of its topological complexity and computational expense. Both system analyses and load-ranking methods have their limits. In this project, an integrated methodology based on Self-Organizing Maps (SOM) combines spatial-feature (distance)-based clustering with electrical attributes (load) to evaluate the vulnerability and cascading impact of various component sets in the power grid. Using the clustering result from SOM, sets of heavily loaded initial victims are chosen to perform attack schemes and assess the subsequent cascading effect of their failures; this SOM-based approach identifies vulnerable sets of substations more effectively than conventional load ranking and other clustering strategies. The robustness of power grids is a central topic in the design of the so-called "smart grid". In this paper, we analyze measures of the importance of the nodes in a power grid under cascading failure. With these efforts, we can identify the most vulnerable nodes and protect them, improving the safety of the power grid, and we can also assess whether a structure is suitable for power grids.
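
    The clustering step can be illustrated with a small self-organizing map. The Python/NumPy sketch below uses synthetic data; the feature layout (x, y, load) and all parameters are assumptions, not the authors' configuration.

        import numpy as np

        def train_som(data, grid=(4, 4), iters=2000, lr0=0.5, sigma0=1.5, seed=0):
            """Tiny SOM: each unit holds a weight vector in feature space
            (here: substation x, y, load); units compete for each sample."""
            rng = np.random.default_rng(seed)
            w = rng.random((grid[0], grid[1], data.shape[1]))
            ii, jj = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
            for t in range(iters):
                x = data[rng.integers(len(data))]
                d = np.linalg.norm(w - x, axis=2)
                bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
                lr = lr0 * np.exp(-t / iters)                     # decaying learning rate
                sigma = sigma0 * np.exp(-t / iters)               # shrinking neighborhood
                h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
                w += lr * h[..., None] * (x - w)                  # pull neighbors toward x
            return w

        # synthetic substations: normalized (x, y, load) triples
        subs = np.random.default_rng(1).random((200, 3))
        w = train_som(subs)
        bmu = np.unravel_index(np.argmin(np.linalg.norm(w - subs[0], axis=2)), w.shape[:2])
        print("first substation maps to SOM unit", bmu)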

  14. CaGrid Workflow Toolkit: A taverna based workflow tool for cancer grid

    PubMed Central

    2010-01-01

    Background In the biological and medical domains, the use of web services has made data and computation functionality accessible in a unified manner, which has helped automate data pipelines that were previously operated manually. Workflow technology is widely used in the orchestration of multiple services to facilitate in-silico research. The Cancer Biomedical Informatics Grid (caBIG) is an information network enabling the sharing of cancer research related resources, and caGrid is its underlying service-based computation infrastructure. CaBIG requires that services are composed and orchestrated in a given sequence to realize data pipelines, which are often called scientific workflows. Results CaGrid selected Taverna as its workflow execution system of choice due to its integration with web service technology, its support for a wide range of web services, and its plug-in architecture catering for easy integration of third-party extensions. The caGrid Workflow Toolkit (or the toolkit for short), an extension to the Taverna workflow system, is designed and implemented to ease building and running caGrid workflows. It provides users with support for various phases in using workflows: service discovery, composition and orchestration, data access, and secure service invocation, which have been identified by the caGrid community as challenging in a multi-institutional and cross-discipline domain. Conclusions By extending the Taverna Workbench, the caGrid Workflow Toolkit provides a comprehensive solution to compose and coordinate services in caGrid, which would otherwise remain isolated and disconnected from each other. Using it, users can access more than 140 services and are offered a rich set of features including discovery of data and analytical services, query and transfer of data, security protections for service invocations, state management in service interactions, and sharing of workflows, experiences and best practices. The proposed solution is general enough to be

  15. CaGrid Workflow Toolkit: a Taverna based workflow tool for cancer grid.

    PubMed

    Tan, Wei; Madduri, Ravi; Nenadic, Alexandra; Soiland-Reyes, Stian; Sulakhe, Dinanath; Foster, Ian; Goble, Carole A

    2010-11-02

    In the biological and medical domains, the use of web services has made data and computation functionality accessible in a unified manner, which has helped automate data pipelines that were previously operated manually. Workflow technology is widely used in the orchestration of multiple services to facilitate in-silico research. The Cancer Biomedical Informatics Grid (caBIG) is an information network enabling the sharing of cancer research related resources, and caGrid is its underlying service-based computation infrastructure. CaBIG requires that services are composed and orchestrated in a given sequence to realize data pipelines, which are often called scientific workflows. CaGrid selected Taverna as its workflow execution system of choice due to its integration with web service technology, its support for a wide range of web services, and its plug-in architecture catering for easy integration of third-party extensions. The caGrid Workflow Toolkit (or the toolkit for short), an extension to the Taverna workflow system, is designed and implemented to ease building and running caGrid workflows. It provides users with support for various phases in using workflows: service discovery, composition and orchestration, data access, and secure service invocation, which have been identified by the caGrid community as challenging in a multi-institutional and cross-discipline domain. By extending the Taverna Workbench, the caGrid Workflow Toolkit provides a comprehensive solution to compose and coordinate services in caGrid, which would otherwise remain isolated and disconnected from each other. Using it, users can access more than 140 services and are offered a rich set of features including discovery of data and analytical services, query and transfer of data, security protections for service invocations, state management in service interactions, and sharing of workflows, experiences and best practices. The proposed solution is general enough to be applicable and reusable within other

  16. A Judgement-Based Framework for Analysing Adult Job Choices

    ERIC Educational Resources Information Center

    Athanasou, James A.

    2004-01-01

    The purpose of this paper is to introduce a judgement-based framework for adult job and career choices. This approach is set out as a perceptual-judgemental-reinforcement approach. Job choice is viewed as cognitive acquisition over time and is epitomised by a learning process. Seven testable assumptions are derived from the model. (Contains 1…

  17. Job optimization in ATLAS TAG-based distributed analysis

    NASA Astrophysics Data System (ADS)

    Mambelli, M.; Cranshaw, J.; Gardner, R.; Maeno, T.; Malon, D.; Novak, M.

    2010-04-01

    The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure leverages the Grid to tackle the analysis across large samples by organizing data into a hierarchical structure and exploiting distributed computing to churn through the computations. This includes events at different stages of processing: RAW, ESD (Event Summary Data), AOD (Analysis Object Data), DPD (Derived Physics Data). Event Level Metadata Tags (TAGs) contain information about each event stored using multiple technologies accessible by POOL and various web services. This allows users to apply selection cuts on quantities of interest across the entire sample to compile a subset of events that are appropriate for their analysis. This paper describes new methods for organizing jobs using the TAGs criteria to analyze ATLAS data. It further compares different access patterns to the event data and explores ways to partition the workload for event selection and analysis. Here analysis is defined as a broader set of event processing tasks including event selection and reduction operations ("skimming", "slimming" and "thinning") as well as DPD making. Specifically it compares analysis with direct access to the events (AOD and ESD data) to access mediated by different TAG-based event selections. We then compare different ways of splitting the processing to maximize performance.

  18. A BASE(ic) Course on Job Analysis.

    ERIC Educational Resources Information Center

    Denis, Joe; Austin, Bruce

    1992-01-01

    Behavioral Analysis and Standards for Employees (BASE) is a job analysis process that focuses on employee behavior and the standards and conditions for it. BASE is cost effective and enables participation of stakeholders. (SK)

  19. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  20. ISS Space-Based Science Operations Grid for the Ground Systems Architecture Workshop (GSAW)

    NASA Technical Reports Server (NTRS)

    Welch, Clara; Bradford, Bob

    2003-01-01

    Contents include the following: What is grid? Benefits of a grid to space-based science operations. Our approach. Scope of prototype grid. The security question. Short term objectives. Long term objectives. Space-based services required for operations. The prototype. Scope of prototype grid. Prototype service layout. Space-based science grid service components.

  1. Bioinfogrid:. Bioinformatics Simulation and Modeling Based on Grid

    NASA Astrophysics Data System (ADS)

    Milanesi, Luciano

    2007-12-01

    Genomic sequencing projects and new technologies applied to molecular genetics analysis are producing huge amounts of raw data. In the future, biomedical scientific research will increasingly rely on computing Grids for data-crunching applications and on data Grids for the distributed storage of large amounts of accessible data, with the provision of tools to all users. Biomedical research laboratories are moving towards an environment, created through the sharing of resources, in which heterogeneous and dispersed health data, such as molecular data (e.g. genomics, proteomics), cellular data (e.g. pathways), tissue data, and population data (e.g. genotyping, SNP, epidemiology), as well as data generated by large-scale analyses (e.g. simulation and modelling data), can be shared and analysed. In this paper some applications developed in the framework of the European project "Bioinformatics Grid Application for life science - BioinfoGRID" are described in order to show the potential of the GRID to carry out large-scale analysis and research worldwide.

  2. Grid-based electronic structure calculations: The tensor decomposition approach

    NASA Astrophysics Data System (ADS)

    Rakhuba, M. V.; Oseledets, I. V.

    2016-05-01

    We present a fully grid-based approach for solving Hartree-Fock and all-electron Kohn-Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the usage of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.
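
    The storage saving behind such methods can be illustrated with a truncated higher-order SVD (a Tucker-type decomposition). This NumPy sketch is only an illustration of why low-rank structure makes fine grids affordable; the paper's actual algorithm uses different low-rank formats and solvers.

        import numpy as np

        def hosvd_compress(f, rank):
            """Truncated HOSVD of an n*n*n array: storage drops from n**3
            to roughly 3*n*rank + rank**3 values."""
            factors = []
            for mode in range(3):
                m = np.moveaxis(f, mode, 0).reshape(f.shape[mode], -1)
                u, _, _ = np.linalg.svd(m, full_matrices=False)
                factors.append(u[:, :rank])
            core = np.einsum("ijk,ia,jb,kc->abc", f, *factors, optimize=True)
            return core, factors

        def reconstruct(core, factors):
            return np.einsum("abc,ia,jb,kc->ijk", core, *factors, optimize=True)

        # smooth orbital-like test function on a 64^3 grid
        n = 64
        x = np.linspace(-5, 5, n)
        X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
        f = np.exp(-np.sqrt(X**2 + Y**2 + Z**2))        # 1s-like orbital

        core, factors = hosvd_compress(f, rank=8)
        err = np.linalg.norm(f - reconstruct(core, factors)) / np.linalg.norm(f)
        print(f"relative error at rank 8: {err:.1e}")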

  3. Grid-based electronic structure calculations: The tensor decomposition approach

    SciTech Connect

    Rakhuba, M.V.; Oseledets, I.V.

    2016-05-01

    We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the usage of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.

  4. Fine-grained authorization for job execution in the Grid : design and implementation.

    SciTech Connect

    Keahey, K.; Welch, V.; Lang, S.; Liu, B.; Meder, S.; Mathematics and Computer Science; Univ. of Chicago; Univ. of Houston

    2004-04-25

    In this paper, we describe our work on enabling fine-grained authorization for resource usage and management. We address the need of virtual organizations to enforce their own policies in addition to those of the resource owners, with regard to both resource consumption and job management. To implement this design, we propose changes and extensions to the Globus Toolkit's version 2 resource management mechanism. We describe the prototype and the policy language that we have designed to express fine-grained policies, and present an analysis of our solution.

  5. A CUDA-based reverse gridding algorithm for MR reconstruction.

    PubMed

    Yang, Jingzhu; Feng, Chaolu; Zhao, Dazhe

    2013-02-01

    MR raw data collected using non-Cartesian methods can be transformed onto a Cartesian grid by the traditional gridding algorithm (GA) and reconstructed by Fourier transform. However, its runtime complexity is O(K×N²), where the resolution of the raw data is N×N and the size of the convolution window (CW) is K, and it involves a large number of matrix calculations including modulus, addition, multiplication and convolution. Therefore, a Compute Unified Device Architecture (CUDA)-based algorithm is proposed to improve the reconstruction efficiency of PROPELLER (a globally recognized non-Cartesian sampling method). Experiments show a write-write conflict among multiple CUDA threads, which induces inconsistent results when multiple k-space data are synchronously convolved onto the same grid. To overcome this problem, a reverse gridding algorithm (RGA) was developed. Unlike the traditional GA, which generates a grid window for each trajectory, RGA calculates a trajectory window for each grid point; this is what "reverse" means. The contribution of each k-space point in the CW is accumulated onto that grid point. Although this algorithm can easily be extended to reconstruct other non-Cartesian sampled raw data, we only implement it for PROPELLER. Experiments illustrate that this CUDA-based RGA has successfully solved the write-write conflict and that its reconstruction speed is 7.5 times higher than that of the traditional GA.
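
    A NumPy sketch of the "reverse" idea (the kernel and window size are illustrative assumptions; the paper uses PROPELLER trajectories and CUDA threads): each Cartesian grid point gathers the non-Cartesian samples inside its own trajectory window, so no two threads ever write to the same output cell.

        import numpy as np

        def reverse_grid(kx, ky, data, n, width=2.0):
            """Reverse gridding: one output cell per loop iteration (in
            CUDA, one thread per (i, j)) gathers nearby k-space samples,
            avoiding the write-write conflict of traditional gridding."""
            grid = np.zeros((n, n), dtype=complex)
            gx = np.arange(n) - n // 2
            for i in range(n):
                for j in range(n):
                    d2 = (kx - gx[i]) ** 2 + (ky - gx[j]) ** 2
                    m = d2 < (width / 2) ** 2
                    if m.any():
                        w = np.exp(-d2[m])   # stand-in for a Kaiser-Bessel kernel
                        grid[i, j] = np.sum(w * data[m])
            return grid

        rng = np.random.default_rng(0)
        kx, ky = rng.uniform(-16, 16, 500), rng.uniform(-16, 16, 500)
        samples = rng.standard_normal(500) + 1j * rng.standard_normal(500)
        print(reverse_grid(kx, ky, samples, n=32).shape)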

  6. Clustering algorithm based on grid and density for data stream

    NASA Astrophysics Data System (ADS)

    Wang, Lang; Li, Haiqing

    2017-05-01

    Data stream clustering analysis can extract useful information in real time from massive data and has been widely applied in many fields. The traditional grid-based data stream clustering algorithm is not precise, and its processing of grid-cell boundary points is crude. On the other hand, the density-based clustering algorithm is inefficient and has difficulty discovering clusters of arbitrary shape. Thus, this paper proposes a clustering algorithm for data streams based on both grid and density. The method handles boundary points by segmenting the data space and using the data points to weight the influence coefficients of adjacent grid cells, in order to improve the efficiency and accuracy of the algorithm. The experimental results show this method to be an accurate, fast, and feasible way to identify clusters.
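
    The core of such a hybrid can be sketched in a few lines of Python (cell size and density threshold are assumptions, and the paper's influence-coefficient treatment of boundary points is omitted): bin points into grid cells, keep the dense cells, and flood-fill adjacent dense cells into clusters.

        import numpy as np
        from collections import defaultdict

        def grid_density_cluster(points, cell=0.05, min_density=8):
            """Grid+density clustering: dense cells become cluster seeds,
            and touching dense cells are merged by flood fill."""
            cells = defaultdict(list)
            for idx, p in enumerate(points):
                cells[tuple((p // cell).astype(int))].append(idx)
            dense = {c for c, m in cells.items() if len(m) >= min_density}
            labels, cluster = {}, 0
            for seed in dense:
                if seed in labels:
                    continue
                stack = [seed]
                while stack:
                    cur = stack.pop()
                    if cur in labels:
                        continue
                    labels[cur] = cluster
                    x, y = cur
                    stack += [(x + dx, y + dy)
                              for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                              if (x + dx, y + dy) in dense]
                cluster += 1
            return labels, cells

        pts = np.random.default_rng(2).random((1000, 2))
        labels, _ = grid_density_cluster(pts)
        print(len(set(labels.values())), "clusters of dense cells")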

  7. Optimizing Resource Utilization in Grid Batch Systems

    NASA Astrophysics Data System (ADS)

    Gellrich, Andreas

    2012-12-01

    On Grid sites, the requirements of computing tasks (jobs) for computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to POSIX UID/GID according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then makes it possible to distinguish jobs, provided users are using VOMS proxies as planned by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the scheduler MAUI. In tests these limitations could be overcome with a home-made scheduler.

  8. SoilGrids250m: Global gridded soil information based on machine learning.

    PubMed

    Hengl, Tomislav; Mendes de Jesus, Jorge; Heuvelink, Gerard B M; Ruiperez Gonzalez, Maria; Kilibarda, Milan; Blagotić, Aleksandar; Shangguan, Wei; Wright, Marvin N; Geng, Xiaoyuan; Bauer-Marschallinger, Bernhard; Guevara, Mario Antonio; Vargas, Rodrigo; MacMillan, Robert A; Batjes, Niels H; Leenaars, Johan G B; Ribeiro, Eloi; Wheeler, Ichsani; Mantel, Stephan; Kempen, Bas

    2017-01-01

    This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods-random forest and gradient boosting and/or multinomial logistic regression-as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) to considerable investments in preparing finer resolution covariate layers and (3) to insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods
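
    A toy version of the modeling step in Python (scikit-learn, synthetic covariates; SoilGrids itself uses the R packages ranger, xgboost, nnet and caret on ~150,000 real profiles): fit random forest and gradient boosting, score them with 10-fold cross-validation as in the paper, and average their predictions as a simple ensemble.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        # synthetic stand-ins: rows = soil profiles, columns = RS covariates
        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 20))
        y = 2 * X[:, 0] + np.sin(X[:, 1]) + 0.3 * rng.standard_normal(500)  # e.g. pH

        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        gb = GradientBoostingRegressor(random_state=0)
        for name, model in (("random forest", rf), ("gradient boosting", gb)):
            r2 = cross_val_score(model, X, y, cv=10).mean()   # 10-fold CV
            print(f"{name}: mean R^2 = {r2:.2f}")

        rf.fit(X, y)
        gb.fit(X, y)
        ensemble = (rf.predict(X) + gb.predict(X)) / 2        # simple model averaging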

  9. SoilGrids250m: Global gridded soil information based on machine learning

    PubMed Central

    Mendes de Jesus, Jorge; Heuvelink, Gerard B. M.; Ruiperez Gonzalez, Maria; Kilibarda, Milan; Blagotić, Aleksandar; Shangguan, Wei; Wright, Marvin N.; Geng, Xiaoyuan; Bauer-Marschallinger, Bernhard; Guevara, Mario Antonio; Vargas, Rodrigo; MacMillan, Robert A.; Batjes, Niels H.; Leenaars, Johan G. B.; Ribeiro, Eloi; Wheeler, Ichsani; Mantel, Stephan; Kempen, Bas

    2017-01-01

    This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods—random forest and gradient boosting and/or multinomial logistic regression—as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10–fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) to considerable investments in preparing finer resolution covariate layers and (3) to insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of

  10. Deploying web-based visual exploration tools on the grid

    SciTech Connect

    Jankun-Kelly, T.J.; Kreylos, Oliver; Shalf, John; Ma, Kwan-Liu; Hamann, Bernd; Joy, Kenneth; Bethel, E. Wes

    2002-02-01

    We discuss a web-based portal for the exploration, encapsulation, and dissemination of visualization results over the Grid. This portal integrates three components: an interface client for structured visualization exploration, a visualization web application to manage the generation and capture of the visualization results, and a centralized portal application server to access and manage grid resources. We demonstrate the usefulness of the developed system using an example for Adaptive Mesh Refinement (AMR) data visualization.

  11. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    SciTech Connect

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K.

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
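
    The cell-by-cell mapping strategy is easy to sketch in Python (the toy 2-D Gaussian likelihood and the pruning threshold are assumptions; the real code evaluates a cosmological likelihood per cell): always expand the highest-likelihood frontier cell and discard cells far below the peak.

        import heapq

        def snake(loglike, start, step, threshold=8.0):
            """Map the likelihood cell by cell in order of decreasing
            likelihood; cells more than `threshold` log-units below the
            running peak are pruned, sidestepping the curse of dimensionality."""
            best = loglike([s * step for s in start])
            seen, mapped = {tuple(start)}, {}
            heap = [(-best, tuple(start))]          # max-heap via negation
            while heap:
                neg_ll, cell = heapq.heappop(heap)
                ll = -neg_ll
                best = max(best, ll)
                if ll < best - threshold:
                    continue                        # negligible likelihood: prune
                mapped[cell] = ll
                for dim in range(len(cell)):
                    for d in (-1, 1):
                        nb = list(cell); nb[dim] += d; nb = tuple(nb)
                        if nb not in seen:
                            seen.add(nb)
                            heapq.heappush(heap, (-loglike([c * step for c in nb]), nb))
            return mapped

        # toy 2-D Gaussian log-likelihood
        ll = lambda p: -0.5 * ((p[0] - 1) ** 2 + (p[1] + 2) ** 2 / 4)
        print(len(snake(ll, start=(0, 0), step=0.25)), "cells mapped")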

  12. Advances in Distance-Based Hole Cuts on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Pandya, Shishir A.

    2015-01-01

    An automatic and efficient method to determine appropriate hole cuts based on distances to the wall and donor stencil maps for overset grids is presented. A new robust procedure is developed to create a closed surface triangulation representation of each geometric component for accurate determination of the minimum hole. Hole boundaries are then displaced away from the tight grid-spacing regions near solid walls to allow grid overlap to occur away from the walls where cell sizes from neighboring grids are more comparable. The placement of hole boundaries is efficiently determined using a mid-distance rule and Cartesian maps of potential valid donor stencils with minimal user input. Application of this procedure typically results in a spatially-variable offset of the hole boundaries from the minimum hole with only a small number of orphan points remaining. Test cases on complex configurations are presented to demonstrate the new scheme.

  13. Grist : grid-based data mining for astronomy

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden; Nichol, Robert

    2004-01-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

  15. Market-Based Indian Grid Integration Study Options: Preprint

    SciTech Connect

    Stoltenberg, B.; Clark, K.; Negi, S. K.

    2012-03-01

    The Indian state of Gujarat is forecasting solar and wind generation expansion from 16% to 32% of installed generation capacity by 2015. Some states in India are already experiencing heavy wind power curtailment. Understanding how to integrate variable generation (VG) into the grid is of great interest to local transmission companies and India's Ministry of New and Renewable Energy. This paper describes the nature of a market-based integration study and how this approach, while new to Indian grid operation and planning, is necessary to understand how to operate and expand the grid to best accommodate the expansion of VG. Second, it discusses options in defining a study's scope, such as data granularity, generation modeling, and geographic scope. The paper also explores how Gujarat's method of grid operation and current system reliability will affect how an integration study can be performed.

  16. Pilot factory - a Condor-based system for scalable Pilot Job generation in the Panda WMS framework

    NASA Astrophysics Data System (ADS)

    Chiu, Po-Hsiang; Potekhin, Maxim

    2010-04-01

    The Panda Workload Management System is designed around the concept of the Pilot Job - a "smart wrapper" for the payload executable that can probe the environment on the remote worker node before pulling down the payload from the server and executing it. Such a design allows for improved logging and monitoring capabilities as well as flexibility in workload management. In the Grid environment (such as the Open Science Grid), Panda Pilot Jobs are submitted to remote sites via mechanisms that ultimately rely on Condor-G. As our experience has shown, in cases where a large number of Panda jobs are simultaneously routed to a particular remote site, the increased load on the head node of the cluster caused by the Pilot Job submission may lead to an overall lack of scalability. We have developed a Condor-inspired solution to this problem, which uses the schedd-based glidein, whose mission is to redirect pilots to the native batch system. Once a glidein schedd is installed and running, it can be utilized exactly the same way as local schedds and therefore, from the user's perspective, pilots thus submitted are quite similar to jobs submitted to the local Condor pool.
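
    The pilot concept itself fits in a page of Python. The sketch below is not the Panda protocol: the server URL, endpoints and JSON job format are all hypothetical. It only shows the pattern of probing the worker node first and then pulling and running a payload.

        import json, shutil, subprocess, urllib.request

        SERVER = "https://example.org/wms"          # hypothetical job server

        def probe_environment():
            """Check the worker node before asking for a payload."""
            return {"disk_free_gb": shutil.disk_usage("/").free / 1e9,
                    "has_python3": shutil.which("python3") is not None}

        def run_pilot():
            req = urllib.request.Request(
                SERVER + "/getjob", data=json.dumps(probe_environment()).encode(),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                job = json.load(resp)               # e.g. {"id": 42, "cmd": [...]}
            result = subprocess.run(job["cmd"], capture_output=True, text=True)
            report = {"id": job["id"], "rc": result.returncode,
                      "log": result.stdout[-4096:]}  # send logs back for monitoring
            urllib.request.urlopen(urllib.request.Request(
                SERVER + "/report", data=json.dumps(report).encode(),
                headers={"Content-Type": "application/json"}))

        if __name__ == "__main__":
            run_pilot()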

  17. Research on the comparison of extension mechanism of cellular automaton based on hexagon grid and rectangular grid

    NASA Astrophysics Data System (ADS)

    Zhai, Xiaofang; Zhu, Xinyan; Xiao, Zhifeng; Weng, Jie

    2009-10-01

    Historically, a cellular automaton (CA) is a discrete dynamical mathematical structure defined on a spatial grid. Research on cellular automata systems (CAS) has focused on rule sets and initial conditions and has not discussed adjacency. Thus, the main focus of our study is the effect of adjacency on CA behavior. This paper compares rectangular grids with hexagonal grids in terms of their characteristics, strengths and weaknesses. The choice of grid has a great influence on modeling results and other applications, including the role of the nearest neighborhood in experimental design. Our research shows that rectangular and hexagonal grids have different characteristics and are suited to distinct applications; the regular rectangular or square grid is used more often than the hexagonal grid, but their relative merits have not been widely discussed. The rectangular grid is generally preferred because of its symmetry, especially in orthogonal coordinate systems, and because of the frequent use of raster data from Geographic Information Systems (GIS). However, for complex terrain and uncertain, multidirectional regions, we prefer hexagonal grids and methods, which facilitate and simplify the problem. Hexagonal grids can overcome directional warp and have some unique characteristics. For example, a hexagonal grid has a simpler and more symmetric nearest neighborhood, which avoids the ambiguities of the rectangular grid. Movement paths or connectivity and the most compact arrangement of pixels give hexagonal grids a clear advantage in modeling and analysis. The selection of an appropriate grid should be based on the requirements and objectives of the application. We use rectangular and hexagonal grids respectively for developing a city model, making use of remote sensing images of the 2002 and 2005 land state of Wuhan. On the basis of the 2002 city land state, we use CA to simulate a plausible form of the city in 2005. Hereby, these results provide a proof of
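
    The neighborhood difference at the heart of this comparison is compact enough to show directly in Python (axial coordinates are one common hexagonal-grid convention):

        def moore_neighbors(x, y):
            """Rectangular grid, 8-connected: two neighbor distances
            (1 and sqrt(2)), which is the source of directional bias."""
            return [(x + dx, y + dy)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0)]

        def hex_neighbors(q, r):
            """Hexagonal grid in axial coordinates, 6-connected: all six
            neighbors are equidistant, avoiding the rectangular ambiguity."""
            return [(q + dq, r + dr)
                    for dq, dr in ((1, 0), (-1, 0), (0, 1),
                                   (0, -1), (1, -1), (-1, 1))]

        print(len(moore_neighbors(0, 0)), len(hex_neighbors(0, 0)))  # 8 vs 6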

  18. Software-Based Challenges of Developing the Future Distribution Grid

    SciTech Connect

    Stewart, Emma; Kiliccote, Sila; McParland, Charles

    2014-06-01

    distribution grid modeling, and measured data sources are a key missing element. Modeling tools need to be calibrated based on measured grid data to validate their output in varied conditions such as high renewables penetration and rapidly changing topology. In addition, establishing a standardized data modeling format would enable users to transfer data among tools to take advantage of different analysis features.

  19. Team Primacy Concept (TPC) Based Employee Evaluation and Job Performance

    ERIC Educational Resources Information Center

    Muniute, Eivina I.; Alfred, Mary V.

    2007-01-01

    This qualitative study explored how employees learn from Team Primacy Concept (TPC) based employee evaluation and how they use the feedback in performing their jobs. TPC based evaluation is a form of multirater evaluation, during which the employee's performance is discussed by one's peers in a face-to-face team setting. The study used Kolb's…

  20. A Cartesian grid-based unified gas kinetic scheme

    NASA Astrophysics Data System (ADS)

    Chen, Songze; Xu, Kun

    2014-12-01

    A Cartesian grid-based unified gas kinetic scheme is developed. In this approach, any oriented boundary in a Cartesian grid is represented by many directional boundary points. The numerical flux is evaluated on each boundary point. Then, a boundary flux interpolation method (BFIM) is constructed to distribute the boundary effect to the flow evolution on regular Cartesian grid points. The BFIM provides a general strategy to implement any kind of boundary condition on Cartesian grid. The newly developed technique is implemented in the unified gas kinetic scheme, where the scheme is reformulated into a finite difference format. Several typical test cases are simulated with different geometries. For example, the thermophoresis phenomenon for a plate with infinitesimal thickness immersed in a rarefied flow environment is calculated under different orientations on the same Cartesian grid. These computational results validate the BFIM in the unified scheme for the capturing of different thermal boundary conditions. The BFIM can be extended to the moving boundary problems as well.

  1. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide a large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  2. Constructing the ASCI computational grid

    SciTech Connect

    BEIRIGER,JUDY I.; BIVENS,HUGH P.; HUMPHREYS,STEVEN L.; JOHNSON,WILBUR R.; RHEA,RONALD E.

    2000-06-01

    The Accelerated Strategic Computing Initiative (ASCI) computational grid is being constructed to interconnect the high performance computing resources of the nuclear weapons complex. The grid will simplify access to the diverse computing, storage, network, and visualization resources, and will enable the coordinated use of shared resources regardless of location. To match existing hardware platforms, required security services, and current simulation practices, the Globus MetaComputing Toolkit was selected to provide core grid services. The ASCI grid extends Globus functionality by operating as an independent grid, incorporating Kerberos-based security, interfacing to Sandia's Cplant™, and extending job monitoring services. To fully meet ASCI's needs, the architecture layers distributed work management and criteria-driven resource selection services on top of Globus. These services simplify the grid interface by allowing users to simply request "run code X anywhere". This paper describes the initial design and prototype of the ASCI grid.

  3. A grid service-based active thermochemical table framework.

    SciTech Connect

    von Laszewski, G.; Ruscic, B.; Wagstrom, P.; Krishnan, S.; Amin, K.; Nijsure, S.; Bittner, S.; Pinzon, R.; Hewson, J. C.; Morton, M. L.; Minkoff, M.; Wagner, A.; SNL

    2002-01-01

    In this paper we report our work on the integration of existing scientific applications using Grid Services. We describe a general architecture that provides access to these applications via Web services-based application factories. Furthermore, we demonstrate how such services can interact with each other.

  4. [Analysis of burnout and job satisfaction among nurses based on the Job Demand-Resource Model].

    PubMed

    Yom, Young-Hee

    2013-02-01

    The purpose of this study was to examine burnout and job satisfaction among nurses based on the Job Demand-Resource Model. A survey using a structured questionnaire was conducted with 464 hospital nurses. Data were analyzed with SPSS Win 17.0 for descriptive statistics and AMOS 18.0 for the structural equation model. The hypothetical model yielded Chi-square=34.13 (p < .001), df=6, GFI=.98, AGFI=.92, CFI=.94, RMSR=.02, NFI=.93, IFI=.94, showing good fit indices. Workload had a direct effect on emotional exhaustion (β = 0.39), whereas supervisor support had direct effects on emotional exhaustion (β = -0.24), depersonalization (β = -0.11), and low personal accomplishment (β = -0.22). Emotional exhaustion (β = -0.42), depersonalization (β = -0.11) and low personal accomplishment (β = -0.36) had significant direct effects on job satisfaction. The results suggest that nurses' workload should be decreased and supervisor support increased in order to retain nurses. Further study with a longitudinal design is necessary.

  5. A geometry-based adaptive unstructured grid generation algorithm for complex geological media

    NASA Astrophysics Data System (ADS)

    Bahrainian, Seyed Saied; Dezfuli, Alireza Daneh

    2014-07-01

    In this paper a novel unstructured grid generation algorithm is presented that considers the effect of geological features and well locations on grid resolution. The proposed algorithm provides a strategy for the definition and construction of an initial grid based on the geological model, geometry adaptation of geological features, and grid resolution control. The algorithm is applied to the seismotectonic map of the Masjed-i-Soleiman reservoir. Comparison of the grid results with the "Triangle" program shows a more suitable permeability contrast. Immiscible two-phase flow solutions are presented for a fractured porous media test case using different grid resolutions. The grid adapted to the fracture geometry gave results identical to those of a fine grid. The adapted grid required 88.2% less CPU time than the solutions obtained on the fine grid.

  6. Grid-Based Fourier Transform Phase Contrast Imaging

    NASA Astrophysics Data System (ADS)

    Tahir, Sajjad

    Low contrast in x-ray attenuation imaging between different materials of low electron density is a limitation of traditional x-ray radiography. Phase contrast imaging offers the potential to improve the contrast between such materials, but due to the requirements on the spatial coherence of the x-ray beam, practical implementation of such systems with tabletop (i.e. non-synchrotron) sources has been limited. One recently developed phase imaging technique employs multiple fine-pitched gratings. However, the strict manufacturing tolerances and precise alignment requirements have limited the widespread adoption of grating-based techniques. In this work, we have investigated a technique recently demonstrated by Bennett et al. that utilizes a single grid of much coarser pitch. Our system consisted of a low power 100 µm spot Mo source, a CCD with 22 µm pixel pitch, and either a focused mammography linear grid or a stainless steel woven mesh. Phase is extracted from a single image by windowing and comparing data localized about harmonics of the grid in the Fourier domain. Matlab code was written to perform the image processing. For the first time, the effects of varying the grid type and period, the type of window function used to separate the harmonics, and the window widths on the diffraction phase contrast and scattering amplitude images were investigated. Using the wire mesh, derivatives of the phase along two orthogonal directions were obtained, and new methods were investigated to form improved phase contrast images.
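
    The Fourier-domain phase extraction described above, windowing one harmonic of the grid and inverting it, can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the general technique, not the authors' Matlab code; the Hann window choice and parameter names are assumptions.

```python
import numpy as np

def extract_phase(image, grid_period_px, window_halfwidth):
    """Recover a differential-phase map from a single grid image by
    windowing one harmonic of the grid in the Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    cy, cx = ny // 2, nx // 2
    # The first harmonic of the grid sits at a known offset from DC.
    shift = int(round(nx / grid_period_px))
    hx = cx + shift
    w = window_halfwidth
    harmonic = np.zeros_like(F)
    # A Hann window suppresses leakage from neighboring harmonics.
    win = np.outer(np.hanning(2 * w), np.hanning(2 * w))
    harmonic[cy - w:cy + w, hx - w:hx + w] = F[cy - w:cy + w, hx - w:hx + w] * win
    # Demodulate: move the harmonic back to the origin, then invert.
    harmonic = np.roll(harmonic, -shift, axis=1)
    complex_map = np.fft.ifft2(np.fft.ifftshift(harmonic))
    return np.angle(complex_map)  # phase derivative along the grid direction
```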

  7. Fast Outlier Detection Using a Grid-Based Algorithm.

    PubMed

    Lee, Jihwan; Cho, Nam-Wook

    2016-01-01

    As one of the data mining techniques, outlier detection aims to discover outlying observations that deviate substantially from the remainder of the data. Recently, the Local Outlier Factor (LOF) algorithm has been successfully applied to outlier detection. However, due to the computational complexity of the LOF algorithm, its application to large, high-dimensional data has been limited. The aim of this paper is to propose a grid-based algorithm that reduces the computation time required by the LOF algorithm to determine the k-nearest neighbors. The algorithm divides the data space into a smaller number of regions, called "grids," and calculates the LOF value of each grid. To examine the effectiveness of the proposed method, several experiments incorporating different parameters were conducted. The proposed method demonstrated a significant reduction in computation time with predictable and acceptable trade-off errors. The proposed methodology was then successfully applied to real database transaction logs of the Korea Atomic Energy Research Institute. As a result, we show that for a very large dataset, the grid-LOF can be considered an acceptable approximation of the original LOF. Moreover, it can also be used effectively for real-time outlier detection.
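
    A rough Python sketch of the grid-then-LOF idea follows: points are bucketed into coarse cells and scored locally, avoiding a global k-nearest-neighbor search. It illustrates the flavor of the approach rather than the paper's exact algorithm (which assigns an LOF value per grid cell); scikit-learn's LocalOutlierFactor stands in for a hand-rolled LOF.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def grid_lof(X, n_cells=10, k=20):
    """Approximate LOF scores by bucketing points into a coarse grid and
    scoring each occupied cell's points only against their own bucket."""
    mins, maxs = X.min(axis=0), X.max(axis=0)
    # Map each point to an integer cell index per dimension.
    cells = np.floor((X - mins) / (maxs - mins + 1e-12) * n_cells).astype(int)
    buckets = {}
    for i, key in enumerate(map(tuple, cells)):
        buckets.setdefault(key, []).append(i)
    scores = np.empty(len(X))
    for idx in buckets.values():
        if len(idx) <= k:
            scores[idx] = np.nan  # too few points; left unscored in this sketch
            continue
        lof = LocalOutlierFactor(n_neighbors=k)
        lof.fit(X[idx])
        scores[idx] = -lof.negative_outlier_factor_  # larger = more outlying
    return scores
```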

  8. GRID based Thermal Images Processing for volcanic activity monitoring

    NASA Astrophysics Data System (ADS)

    Mangiagli, S.; Coco, S.; Drago, L.; Laudani, A.,; Lodato, L.; Pollicino, G.; Torrisi, O.

    2009-04-01

    evolution. Clearly, the analysis of this amount of data requires considerable CPU and storage resources, and this represents a serious limitation that can overwhelm the performance of a single workstation. Fortunately, the INGV and the University of Catania are involved in a project for the development of a GRID infrastructure (a virtual supercomputer created from a network of independent, geographically dispersed computing clusters that act as a grid) and of software for this GRID. The performance of the VTA can be improved on the GRID because its kernel analyzes each thermal image independently of the others; the computation can therefore be parallelized so that different parts of the same job run on many machines. In particular, the VTA grid version treats the application as a Directed Acyclic Graph (DAG): the analysis task is first subdivided across the available machines, and another part of the program then aggregates the results. Porting this software to the GRID environment greatly enhanced the VTA's capabilities, allowing faster and multiple analyses on huge data sets and proving it a genuinely useful instrument for scientific research.
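
    The fan-out/fan-in (DAG) structure described above can be illustrated with a generic Python sketch: independent per-image analyses followed by a single aggregation step. The function bodies are placeholders, not the VTA code, and `multiprocessing` merely stands in for the GRID middleware.

```python
from multiprocessing import Pool

def analyze(path):
    # Per-image analysis stands in for the VTA kernel; each thermal
    # image is processed independently of the others.
    return {"path": path, "max_temp": 0.0}  # placeholder result

def aggregate(results):
    # Fan-in node of the DAG: merge per-image results into one report.
    return max(results, key=lambda r: r["max_temp"])

if __name__ == "__main__":
    paths = [f"frame_{i:04d}.img" for i in range(100)]  # hypothetical inputs
    with Pool() as pool:
        results = pool.map(analyze, paths)  # fan-out stage of the DAG
    report = aggregate(results)             # fan-in stage
```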

  9. The biometric-based module of smart grid system

    NASA Astrophysics Data System (ADS)

    Engel, E.; Kovalev, I. V.; Ermoshkina, A.

    2015-10-01

    Within the Smart Grid concept, a flexible biometric-based module based on Principal Component Analysis (PCA) and a selective Neural Network is developed. To form the selective Neural Network, the biometric-based module uses a method that includes three main stages: preliminary processing of the image, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective Neural Network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance compared to some existing subspace-based methods.

  10. Computer-Based Job Aiding: Problem Solving at Work.

    DTIC Science & Technology

    1984-01-01

    Keywords: technical literacy, problem solving, computer-based job aiding, computer-based instruction, discourse processes. Information-seeking strategies employed during an assembly task were analyzed in terms of overall group frequencies.

  11. An APEL Tool Based CPU Usage Accounting Infrastructure for Large Scale Computing Grids

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Novales, Cristina Del Cano; Mathieu, Gilles; Casson, John; Rogers, William; Gordon, John

    APEL (Accounting Processor for Event Logs) is the fundamental tool of the CPU usage accounting infrastructure deployed within the WLCG and EGEE Grids. In these Grids, jobs are submitted by users to computing resources via a Grid Resource Broker (e.g. the gLite Workload Management System). As a log processing tool, APEL interprets Grid gatekeeper logs (e.g. Globus) and batch system logs (e.g. PBS, LSF, SGE and Condor) to produce CPU job accounting records identified with Grid identities. These records provide a complete description of the usage of computing resources by users' jobs. APEL publishes accounting records into an accounting record repository at a Grid Operations Centre (GOC) for access from a GUI web tool. The functions of log-file parsing, record generation, and publication are implemented by the APEL Parser, APEL Core, and APEL Publisher components, respectively. Within the distributed accounting infrastructure, accounting records are transported from APEL Publishers at Grid sites to either a regionalised accounting system or the central one, by choice, via a common ActiveMQ message broker network. This provides an open transport layer for other accounting systems to publish relevant accounting data to a central accounting repository via a unified interface provided by an APEL Publisher, and also gives regional/National Grid Initiative (NGI) Grids flexibility in their choice of accounting system. The robust and secure delivery of accounting record messages at the NGI level, and between NGI accounting instances and the central one, is achieved by using configurable APEL Publishers and an ActiveMQ message broker network.

  12. Design and Implementation of Real-Time Off-Grid Detection Tool Based on FNET/GridEye

    SciTech Connect

    Guo, Jiahui; Zhang, Ye; Liu, Yilu; Young II, Marcus Aaron; Irminger, Philip; Dimitrovski, Aleksandar D; Willging, Patrick

    2014-01-01

    Real-time situational awareness tools are of critical importance to power system operators, especially during emergencies. The availability of electric power has become a linchpin of most post-disaster response efforts, as it is the primary dependency for public and private sector services, as well as individuals. Knowledge of the scope and extent of facilities impacted, as well as the duration of their dependence on backup power, enables emergency response officials to plan for contingencies and provide better overall response. Based on real-time data acquired by Frequency Disturbance Recorders (FDRs) deployed in the North American power grid, a real-time detection method is proposed. This method monitors critical electrical loads and detects the transition of these loads from an on-grid state, where the loads are fed by the power grid, to an off-grid state, where the loads are fed by an Uninterruptible Power Supply (UPS) or a backup generation system. The details of the proposed detection algorithm are presented, and some case studies and off-grid detection scenarios are also provided to verify its effectiveness and robustness. The algorithm has already been implemented on the Grid Solutions Framework (GSF) and has effectively detected several off-grid situations.
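
    A minimal sketch of the detection idea: a load running on backup power has a locally measured frequency that decouples from the wide-area grid frequency, so a sustained divergence flags an off-grid transition. The threshold and window length below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def is_off_grid(local_freq, grid_freq, tol_hz=0.02, window=30):
    """Flag a transition to off-grid operation when the locally measured
    frequency diverges from the wide-area (FDR-derived) grid frequency
    for a sustained window of samples."""
    local = np.asarray(local_freq[-window:])
    grid = np.asarray(grid_freq[-window:])
    # Every sample in the window must disagree beyond the tolerance.
    return bool(np.all(np.abs(local - grid) > tol_hz))
```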

  13. Multilayer neural network models based on grid methods

    NASA Astrophysics Data System (ADS)

    Lazovskaya, T.; Tarkhov, D.

    2016-11-01

    The article discusses the building of hybrid models that relate classical numerical methods for solving ordinary and partial differential equations to the universal neural network approach developed by D. Tarkhov and A. Vasilyev. Different ways of constructing multilayer neural network structures based on grid methods are considered. A technique for building a continuous approximation using one simple modification of classical schemes is presented. The introduction of non-linear relationships into the classical models, with and without posterior learning, is investigated. Numerical experiments are presented.

  14. A grid-based approach for simulating stream temperature

    NASA Astrophysics Data System (ADS)

    Yearsley, John

    2012-03-01

    Applications of grid-based systems are widespread in many areas of environmental analysis. In this study, the concept is adapted to the modeling of water temperature by integrating a macroscale hydrologic model, variable infiltration capacity (VIC), with a computationally efficient and accurate water temperature model. The hydrologic model has been applied to many river basins at scales from 0.0625° to 1.0°. The water temperature model, which uses a semi-Lagrangian numerical scheme to solve the one-dimensional, time-dependent equations for thermal energy balance in advective river systems, has been applied and tested on segmented river systems in the Pacific Northwest. The state-space structure of the water temperature model described in previous work is extended to include propagation of uncertainty. Model results focus on proof of concept by comparing statistics from a study of a test basin with results from other studies that have used either process models or statistical models to estimate water temperature. The results from this study compared favorably with those of selected case studies using data-driven statistical models. The results for deterministic process models of water temperature were generally better than the grid-based method, particularly for those models developed from site-specific, data-intensive studies. Biases in the results from the grid-based system are attributed to heterogeneity in hydraulic characteristics and the method of estimating headwater temperatures.
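
    For context, the one-dimensional advective thermal energy balance that such stream temperature models solve can be written in its standard textbook form below; the exact source terms used by the paper's model may differ.

```latex
% Generic 1-D advective thermal energy balance: temperature T is carried
% at stream velocity u and forced by the net surface heat flux H_net,
% with water density rho, specific heat c_p, and mean depth d.
\frac{\partial T}{\partial t} + u\,\frac{\partial T}{\partial x}
    = \frac{H_{\mathrm{net}}}{\rho\, c_p\, d}
```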

  15. Invulnerability of power grids based on maximum flow theory

    NASA Astrophysics Data System (ADS)

    Fan, Wenli; Huang, Shaowei; Mei, Shengwei

    2016-11-01

    Invulnerability analysis against cascading failures is of great significance in evaluating the reliability of power systems. In this paper, we propose a novel cascading failure model based on maximum flow theory to analyze the invulnerability of power grids. In the model, initial node loads are built from the feasible flows of the nodes, with a tunable parameter γ used to control the initial node load distribution. The simulation results show that both the invulnerability against cascades and the tolerance parameter threshold αT are strongly affected by the node load distribution. As γ grows, the invulnerability changes in distinct ways under different attack strategies and different tolerance parameters α. These results are useful in power grid planning and cascading failure prevention.

  16. Silicon-based metallic micro grid for electron field emission

    NASA Astrophysics Data System (ADS)

    Kim, Jaehong; Jeon, Seok-Gy; Kim, Jung-Il; Kim, Geun-Ju; Heo, Duchang; Shin, Dong Hoon; Sun, Yuning; Lee, Cheol Jin

    2012-10-01

    A micro-scale metal grid based on a silicon frame for application to electron field emission devices is introduced and experimentally demonstrated. A silicon lattice containing aperture holes with an area of 80 × 80 µm² and a thickness of 10 µm is precisely manufactured by dry etching the silicon on one side of a double-polished silicon wafer and by wet etching the opposite side. Because a silicon lattice is more rigid than a pure metal lattice, a thin layer of Au/Ti deposited on the silicon lattice for voltage application can be more resistant to the geometric stress caused by the applied electric field. The micro-fabrication process, images of the fabricated grid with 88% geometric transparency, and surface profile measurements after thermal feasibility testing up to 700 °C are presented.

  17. New method adaptive to geospatial information acquisition and share based on grid

    NASA Astrophysics Data System (ADS)

    Fu, Yingchun; Yuan, Xiuxiao

    2005-11-01

    As is well known, it is difficult and time-consuming to acquire and share multi-source geospatial information in a grid computing environment, especially for data with different geo-reference benchmarks. Although middleware for data format transformation has been applied in many grid applications and GIS software systems, on-demand spatial data assembly across various geo-reference benchmarks remains difficult because of the complex computation required by rigorous coordinate transformation models. To address the problem, an efficient hierarchical quadtree structure referred to as multi-level grids is designed and coded to express multi-scale global geo-space. A geospatial object located in a certain cell of the multi-level grids may be expressed as an increment relative to the cell's central point, a value that is constant across different geo-reference benchmarks. A mediator responsible for geo-reference transformation with multi-level grids has been developed and aligned with grid services. With the help of the mediator, maps or query result sets from sources with different geo-references can be merged into a uniform composite result. Instead of requiring complex data pre-processing prior to spatial integration, the introduced method is well suited to integration with grid-enabled services.
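
    The increment-relative-to-cell-center idea can be sketched as follows: a position is stored as a cell index plus an offset from that cell's center, and the offset is unchanged when a uniform datum shift is applied to the cell centers. This is a simplified Python illustration assuming an equal-angle grid, not the paper's actual encoding.

```python
def encode(lon, lat, level):
    """Encode a position as (cell index, offset from the cell center)
    on an equal-angle quadtree-style grid at the given level."""
    n = 2 ** level                      # cells per axis at this level
    dx, dy = 360.0 / n, 180.0 / n
    ix = int((lon + 180.0) // dx)       # integer cell indices
    iy = int((lat + 90.0) // dy)
    cx = -180.0 + (ix + 0.5) * dx       # cell center coordinates
    cy = -90.0 + (iy + 0.5) * dy
    # The (lon - cx, lat - cy) increment survives a uniform datum shift
    # applied to all cell centers, which is the property exploited here.
    return (ix, iy), (lon - cx, lat - cy)

cell, offset = encode(9.12, 39.25, level=8)
```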

  18. Agent-based modeling supporting the migration of registry systems to grid based architectures.

    PubMed

    Cryer, Martin E; Frey, Lewis

    2009-03-01

    As the core technologies of the existing NCI SEER platform age and their operating costs rise, essential resources in the fight against cancer such as these will eventually have to be migrated to Grid-based systems. In order to model this migration, a simulation is proposed based upon agent modeling technology. This modeling technique allows for the simulation of complex and distributed services provided by a large-scale Grid computing platform such as the caBIG™ project's caGRID. To investigate such a migration to a Grid-based platform technology, this paper proposes using agent-based modeling simulations to predict the performance of current and Grid configurations of the NCI SEER system integrated with the existing translational opportunities afforded by caGRID. The model illustrates how the use of Grid technology can potentially improve system response time as the systems under test are scaled. In modeling SEER nodes accessing multiple registry silos, we show that the performance of SEER applications re-implemented in a Grid-native manner exhibits a nearly constant user response time with increasing numbers of distributed registry silos, whereas the current application architecture exhibits a linear increase in response time for increasing numbers of silos.

  19. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily-life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on a ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors assembled under the bottom of the robot in an equilateral-triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation of the inertial positioning. The proposed method is analyzed theoretically in terms of its error bound and has also been implemented and tested on a custom-developed two-wheel autonomous mobile robot.

  20. A grid-based coulomb collision model for PIC codes

    SciTech Connect

    Jones, M.E.; Lemons, D.S.; Mason, R.J.; Thomas, V.A.; Winske, D.

    1996-01-01

    A new method is presented to model the intermediate regime between collisionless and Coulomb-collision-dominated plasmas in particle-in-cell codes. Collisional processes between particles of different species are treated through the concept of a grid-based "collision field," which can be particularly efficient for multi-dimensional applications. In this method, particles are scattered using a force determined from the moments of the distribution functions accumulated on the grid. The form of the force is such as to reproduce the multi-fluid transport equations through the second (energy) moment. Collisions between particles of the same species require a separate treatment. For this, a Monte Carlo-like scattering method based on the Langevin equation is used. The details of both methods are presented, and their implementation in a new hybrid (particle ion, massless fluid electron) algorithm is described. Aspects of the collision model are illustrated through several one- and two-dimensional test problems as well as examples involving laser-produced colliding plasmas.
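
    For the same-species case, a single Langevin scattering step has the familiar drag-plus-noise form sketched below. This is the schematic Ornstein-Uhlenbeck update, with the collision rate and temperature term left as generic parameters rather than the paper's exact coefficients.

```python
import numpy as np

def langevin_scatter(v, nu, vth2, dt, rng=None):
    """One Langevin step for same-species collisions: deterministic drag
    at rate nu plus a stochastic kick whose variance reproduces the
    correct velocity-space diffusion (vth2 is the thermal speed squared)."""
    rng = np.random.default_rng() if rng is None else rng
    drag = -nu * v * dt
    kick = np.sqrt(2.0 * nu * vth2 * dt) * rng.standard_normal(np.shape(v))
    return v + drag + kick
```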

  1. Performance-based contracting: turning vocational policy into jobs.

    PubMed

    Gates, Lauren B; Klein, Suzanne W; Akabas, Sheila H; Myers, Robert; Schwager, Marian; Kaelin-Kee, Jan

    2004-01-01

    The New York State Office of Mental Health has implemented a 2-year demonstration to determine if performance-based contracting (PBC) improves rates of competitive employment for people with serious persistent mental health conditions, and promotes best practice among providers. This article reports the interim findings from the demonstration. Initial results suggest that PBC is reaching the target population and promoting employment for a significant proportion of participants. It is also stimulating agency re-evaluation of consumer recruitment strategies, job development models, staffing patterns, coordination with support services, methods of post-placement support, and commitment to competitive employment for consumers.

  2. GSIMF: a web service based software and database management system for the generation grids.

    SciTech Connect

    Wang, N.; Ananthan, B.; Gieraltowski, G.; May, E.; Vaniachine, A.; Tech-X Corp.

    2008-01-01

    To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. This set of Grid services provides a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids.

  3. Grid regulation services for energy storage devices based on grid frequency

    SciTech Connect

    Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K

    2014-04-15

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).
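
    The control law described here, charging harder when frequency is above nominal and discharging when below, can be sketched as a deadband plus a saturating proportional response. Below is a minimal Python illustration; the nominal frequency, deadband, gain, and charger rating are assumptions for the sketch, not values from the patent.

```python
def charge_rate(freq_hz, nominal_hz=60.0, max_rate_kw=6.6, deadband_hz=0.01):
    """Map measured grid frequency to a charge (+) or discharge (-) command:
    charge when frequency is above nominal (excess generation), discharge
    when below (shortage of generation)."""
    dev = freq_hz - nominal_hz
    if abs(dev) < deadband_hz:
        return 0.0  # hold steady inside the deadband
    # Saturating proportional response: full rate at +/-0.05 Hz deviation.
    return max(-max_rate_kw, min(max_rate_kw, max_rate_kw * dev / 0.05))
```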

  4. Grid regulation services for energy storage devices based on grid frequency

    SciTech Connect

    Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K

    2013-07-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).

  5. Grid regulation services for energy storage devices based on grid frequency

    DOEpatents

    Pratt, Richard M.; Hammerstrom, Donald J.; Kintner-Meyer, Michael C. W.; Tuffner, Francis K.

    2017-09-05

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).

  6. Modeling earthquake activity using a memristor-based cellular grid

    NASA Astrophysics Data System (ADS)

    Vourkas, Ioannis; Sirakoulis, Georgios Ch.

    2013-04-01

    Earthquakes are among the most devastating natural phenomena because of their immediate and long-term severe consequences. Earthquake activity modeling, especially in areas known to experience frequent large earthquakes, could lead to improvements in infrastructure development that will prevent possible loss of lives and property damage. An earthquake process is inherently a nonlinear complex system, and scientists have lately become interested in finding possible analogues of earthquake dynamics. The majority of the models developed so far were based on a mass-spring model of either one or two dimensions. An early approach toward reordering and improving existing models, presenting the capacitor-inductor (LC) analogue in which the LC circuit resembles a mass-spring system and simulates earthquake activity, was also published recently. Electromagnetic oscillation occurs when energy is transferred between the capacitor and the inductor. This energy transformation is similar to the mechanical oscillation that takes place in the mass-spring system. A few years ago, memristor-based oscillators were used as learning circuits exposed to a train of voltage pulses that mimic environment changes. The mathematical foundation of the memristor (memory resistor) as the fourth fundamental passive element was expounded by Leon Chua and later extended to a broader class of memristors, known as memristive devices and systems. This class of two-terminal passive circuit elements with memory performs both information processing and storage of computational data on the same physical platform. Importantly, the states of these devices adjust to input signals and provide analog capabilities unavailable in standard circuit elements, resulting in adaptive circuitry and providing analog parallel computation. In this work, a memristor-based cellular grid is used to model earthquake activity. An LC contour along with a memristor is used to model seismic activity.

  7. GPU based contouring method on grid DEM data

    NASA Astrophysics Data System (ADS)

    Tan, Liheng; Wan, Gang; Li, Feng; Chen, Xiaohui; Du, Wenlong

    2017-08-01

    This paper presents a novel method to generate contour lines from grid DEM data based on the programmable GPU pipeline. Previous contouring approaches often use the CPU to construct a finite element mesh from the raw DEM data and then extract contour segments from the elements. They also need a tracing or sorting strategy to generate the final continuous contours. These approaches can be heavily CPU-bound and time-consuming, and the generated contours can be unsmooth if the raw data are sparsely distributed. Unlike the CPU approaches, we employ the GPU's vertex shader to generate a triangular mesh with arbitrary user-defined density, in which the height of each vertex is calculated through a third-order Cardinal spline function. Then, in the same frame, segments are extracted from the triangles by the geometry shader and transferred to the CPU side, with an internal order, in the GPU's transform feedback stage. Finally, we propose a "Grid Sorting" algorithm that achieves continuous contour lines by traversing the segments only once. Our method makes use of multiple stages of the GPU pipeline for computation, generates smooth contour lines, and is significantly faster than the previous CPU approaches. The algorithm can be easily implemented with the OpenGL 3.3 API or higher on consumer-level PCs.

  8. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  9. Transaction-Based Controls for Building-Grid Integration: VOLTTRON™

    SciTech Connect

    Akyol, Bora A.; Haack, Jereme N.; Hernandez, George; Katipamula, Srinivas; Widergren, Steven E.

    2015-07-01

    The U.S. Department of Energy's (DOE's) Building Technologies Office (BTO) is supporting the development of a "transactional network" concept that supports energy, operational, and financial transactions between building systems (e.g., rooftop units -- RTUs) and the electric power grid, using applications, or "agents", that reside either on the equipment, on local building controllers, or in the Cloud. The transactional network vision is delivered using a real-time, scalable reference platform called VOLTTRON that supports the needs of the changing energy system. VOLTTRON is an agent execution platform and an innovative distributed control and sensing software platform that supports modern control strategies, including agent-based and transaction-based controls. It enables mobile and stationary software agents to perform information gathering, processing, and control actions.

  10. DICOM image communication in globus-based medical grids.

    PubMed

    Vossberg, Michal; Tolxdorff, Thomas; Krefting, Dagmar

    2008-03-01

    Grid computing, the collaboration of distributed resources across institutional borders, is an emerging technology to meet the rising demand on computing power and storage capacity in fields such as high-energy physics, climate modeling, or more recently, life sciences. A secure, reliable, and highly efficient data transport plays an integral role in such grid environments and even more so in medical grids. Unfortunately, many grid middleware distributions, such as the well-known Globus Toolkit, lack the integration of the world-wide medical image communication standard Digital Imaging and Communication in Medicine (DICOM). Currently, the DICOM protocol first needs to be converted to the file transfer protocol (FTP) that is offered by the grid middleware. This effectively reduces most of the advantages and security an integrated network of DICOM devices offers. In this paper, a solution is proposed that adapts the DICOM protocol to the Globus grid security infrastructure and utilizes routers to transparently route traffic to and from DICOM systems. Thus, all legacy DICOM devices can be seamlessly integrated into the grid without modifications. A prototype of the grid routers with the most important DICOM functionality has been developed and successfully tested in the MediGRID test bed, the German grid project for life sciences.

  11. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    Analyzing the characteristics of multi-Agent systems and geographic Ontology, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and its architecture is advanced. ASISG is composed of Multi-Agents and geographic Ontology. The Multi-Agent Systems comprise User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, a Task Execution Agent, and a Monitor Agent. The architecture of ASISG has three layers: the fabric layer, the grid management layer, and the application layer. The fabric layer, composed of the Data Access Agent, Resource Agent, and Geo-Agent, encapsulates the data of spatial information systems and exhibits a conceptual interface for the grid management layer. The grid management layer, composed of the General Ontology Agent, Task Execution Agent, Monitor Agent, and Data Analysis Agent, uses a hybrid method to manage all resources registered in the General Ontology Agent, which is described by a General Ontology System. The hybrid method combines resource dissemination and resource discovery: resource dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, and resource discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a special domain and describes the semantic information of a local GIS. The Local Ontology Agents can be filtered to construct a virtual organization that provides a global schema. The virtual organization lightens users' burdens because they need not search for information site by site manually. The application layer, composed of the User Agent, Geo-Agent, and Task Execution Agent, provides a corresponding interface to a domain user. The functions that ASISG should provide are: 1) integration of different spatial information systems on the semantic Grid

  12. On the applications of algebraic grid generation methods based on transfinite interpolation

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung Lee

    1989-01-01

    Algebraic grid generation methods based on transfinite interpolation, called the two-boundary and four-boundary methods, are applied to generate grids with highly complex boundaries. These methods yield grid point distributions that allow for accurate application to regions of sharp gradients in the physical domain or to time-dependent problems with small-length-scale phenomena. Algebraic grids are derived using the two-boundary and four-boundary methods for applications in both two- and three-dimensional domains. Grids are developed for distinctly different geometrical problems, and the two-boundary and four-boundary methods are demonstrated to be applicable to a wide class of geometries.
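
    For reference, the four-boundary method is built on the classical Coons-patch form of transfinite interpolation, which blends the four boundary curves and subtracts the doubly counted corner contributions; the two-boundary method retains only the bottom/top blending. This is the generic textbook form, not necessarily the exact variant used in the paper.

```latex
% Four-boundary (Coons) transfinite interpolation on the unit square:
% x_b, x_t, x_l, x_r are the bottom, top, left, and right boundary curves.
\mathbf{x}(\xi,\eta) =
    (1-\eta)\,\mathbf{x}_b(\xi) + \eta\,\mathbf{x}_t(\xi)
  + (1-\xi)\,\mathbf{x}_l(\eta) + \xi\,\mathbf{x}_r(\eta)
  - \bigl[(1-\xi)(1-\eta)\,\mathbf{x}(0,0) + \xi(1-\eta)\,\mathbf{x}(1,0)
  + (1-\xi)\eta\,\mathbf{x}(0,1) + \xi\eta\,\mathbf{x}(1,1)\bigr]
```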

  13. Relationship between bases of power and job stresses: role of mentoring.

    PubMed

    Lo, May-Chiun; Thurasamy, Ramayah; Liew, Wei Tak

    2014-01-01

    Building upon social exchange theory, this paper hypothesized a direct effect of bases of power on job stress, with mentoring as a moderator. Power bases and job stresses were conceptualized as 7- and 3-dimensional constructs, respectively. One hundred and ninety-five Malaysian managers and executives working in large-scale multinational companies participated in this study. The results indicated that the bases of power possessed by supervisors have a strong effect on employees' job stress, and mentoring was found to moderate the relationship between power bases and job stress. Implications of the findings, potential limitations of the study, and directions for future research are discussed.

  14. Competency-based certification project. Phase I: Job analysis.

    PubMed

    Gessaroli, M E; Poliquin, M

    1994-08-01

    The Canadian Association of Medical Radiation Technologists (C.A.M.R.T.) is transforming its existing certification process into a competency-based process, consistent with the knowledge and skills required by entry-level radiography, radiation therapy and nuclear medicine technology practitioners. The project concurs with the change in focus advocated by the Conjoint Committee on Allied Medical Education Accreditation. The Committee supports new accreditation requirements that, among other things, place more emphasis on competency-based learning outcomes. Following is the first of three papers prepared by the C.A.M.R.T. to explain the project and the strategy for its implementation, focusing respectively on each phase. This paper discusses Phase One: the job analysis.

  15. Machine learning based job status prediction in scientific clusters

    SciTech Connect

    Yoo, Wucherl; Sim, Alex; Wu, Kesheng

    2016-09-01

    Large high-performance computing systems are built with an increasing number of components, with more CPU cores, more memory, and more storage space. At the same time, scientific applications have been growing in complexity. Together, these trends are leading to more frequent unsuccessful job statuses on HPC systems. From measured job statuses, 23.4% of CPU time was spent on unsuccessful jobs. Here, we set out to study whether these unsuccessful job statuses could be anticipated from known job characteristics. To explore this possibility, we have developed a job status prediction method for the execution of jobs on scientific clusters. The Random Forests algorithm was applied to extract and characterize the patterns of unsuccessful job statuses. Experimental results show that our method can predict unsuccessful job statuses from the monitored ongoing job executions in 99.8% of cases, with 83.6% recall and 94.8% precision. This prediction accuracy is sufficiently high that it can be used to trigger mitigation procedures for predicted failures.
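
    A minimal scikit-learn sketch of the classification setup follows. The features and labels are synthetic placeholders (the paper's monitored job characteristics are not reproduced here); only the roughly 23% unsuccessful-job rate is taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

# Hypothetical per-job features (e.g. requested cores, memory, queue wait,
# early resource-usage counters) and a binary unsuccessful/successful label.
rng = np.random.default_rng(0)
X = rng.random((10_000, 8))
y = (rng.random(10_000) < 0.234).astype(int)  # ~23.4% unsuccessful, as measured

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("recall:", recall_score(y_te, pred), "precision:", precision_score(y_te, pred))
```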

  17. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework; it implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
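
    A toy Python sketch of the genetic-algorithm scheduling loop follows: individuals are job orderings, crossover and mutation produce new orderings, and a greedy makespan estimate stands in for the paper's cluster-performance estimation module. All parameters and the fitness function are illustrative assumptions.

```python
import random

def makespan(order, runtimes, slots=4):
    """Greedy list-scheduling cost of a job order on identical slots
    (a stand-in for the performance estimation module)."""
    loads = [0.0] * slots
    for j in order:
        loads[loads.index(min(loads))] += runtimes[j]
    return max(loads)

def evolve(runtimes, pop_size=50, generations=200, seed=0):
    rng = random.Random(seed)
    n = len(runtimes)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: makespan(o, runtimes))
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            head = a[:cut]
            child = head + [j for j in b if j not in head]  # order crossover
            if rng.random() < 0.2:                          # swap mutation
                i, k = rng.sample(range(n), 2)
                child[i], child[k] = child[k], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: makespan(o, runtimes))

best = evolve([3.0, 1.5, 4.2, 2.1, 0.7, 5.5, 2.8, 1.1])
```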

  18. The Construction of an Ontology-Based Ubiquitous Learning Grid

    ERIC Educational Resources Information Center

    Liao, Ching-Jung; Chou, Chien-Chih; Yang, Jin-Tan David

    2009-01-01

    The purpose of this study is to incorporate adaptive ontology into ubiquitous learning grid to achieve seamless learning environment. Ubiquitous learning grid uses ubiquitous computing environment to infer and determine the most adaptive learning contents and procedures in anytime, any place and with any device. To achieve the goal, an…

  20. A peer-to-peer resource scheduling approach for photonic grid network based on OBGP

    NASA Astrophysics Data System (ADS)

    Wu, Runze; Ji, Yuefeng

    2005-11-01

    In this paper we present a resource scheduling mechanism for providing dynamic lightpaths to a photonic grid network and point out that grids enabled by optical networks have huge potential to drive the next generation of optical network applications. Furthermore, we investigate a photonic grid architecture, and a peer-to-peer-based control plane is provided to dynamically control optical network communication resources. We also validate the idea of extending BGP to optical networks, called the Optical Border Gateway Protocol (OBGP), which provides IP-based protocols to control the optical network, and we give a dynamic lightpath scheduling approach over a multi-wavelength optical network as a new grid service based on OBGP.

  1. Jobs, Jobs, Jobs!

    ERIC Educational Resources Information Center

    Jacobson, Linda

    2011-01-01

    Teaching is not the safe career bet that it once was. The thinking used to be: New students will always be entering the public schools, and older teachers will always be retiring, so new teachers will always be needed. But teaching jobs aren't secure enough to stand up to the "Great Recession," as this drawn-out downturn has been called. Across…

  3. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and to expected analytic results when available. For example, grids are adapted to analytic metric fields, and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  4. Environmental applications based on GIS and GRID technologies

    NASA Astrophysics Data System (ADS)

    Demontis, R.; Lorrai, E.; Marrone, V. A.; Muscas, L.; Spanu, V.; Vacca, A.; Valera, P.

    2009-04-01

    In recent decades, the collection and use of environmental data has increased enormously in a wide range of applications. Simultaneously, the explosive development of information technology and its ever-wider data accessibility have made it possible to store and manipulate huge quantities of data. In this context, the GRID approach is emerging worldwide as a tool for provisioning a computational task with administratively distant resources. The aim of this paper is to present three environmental applications (Land Suitability, Desertification Risk Assessment, Georesources and Environmental Geochemistry) foreseen within the AGISGRID (Access and query of a distributed GIS/Database within the GRID infrastructure, http://grida3.crs4.it/enginframe/agisgrid/index.xml) activities of the GRIDA3 (Administrator of sharing resources for data analysis and environmental applications, http://grida3.crs4.it) project. This project, co-funded by the Italian Ministry of Research, is based on the use of shared environmental data through GRID technologies, accessible by a WEB interface and aimed at public and private users in the field of environmental management and land use planning. The technologies used for AGISGRID include: - the client-server middleware iRODS™ (Integrated Rule-Oriented Data System) (https://irods.org); - the EnginFrame system (http://www.nice-italy.com/main/index.php?id=32), the grid portal that supplies a frame to make available, via Intranet/Internet, the developed GRID applications; - the GIS software GRASS (Geographic Resources Analysis Support System) (http://grass.itc.it); - the relational database PostgreSQL (http://www.posgresql.org) and the spatial database extension PostGIS; - the open source multiplatform MapServer (http://mapserver.gis.umn.edu), used to represent the geospatial data through typical WEB GIS functionalities. Three GRID nodes are directly involved in the applications: the application workflow is implemented at the CRS4 (Pula

  5. Grid Work

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Pointwise Inc.'s Gridgen software is a system for the generation of 3D (three-dimensional) multiple-block, structured grids. Gridgen is a visually-oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces, and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.

  6. Jobs to Manufacturing Careers: Work-Based Courses. Work-Based Learning in Action

    ERIC Educational Resources Information Center

    Kobes, Deborah

    2016-01-01

    This case study, one of a series of publications exploring effective and inclusive models of work-based learning, finds that work-based courses bring college to the production line by using the job as a learning lab. Work-based courses are an innovative way to give incumbent workers access to community college credits and degrees. They are…

  7. Classroom-Based Interventions and Teachers' Perceived Job Stressors and Confidence: Evidence from a Randomized Trial in Head Start Settings

    ERIC Educational Resources Information Center

    Zhai, Fuhua; Raver, C. Cybele; Li-Grining, Christine

    2011-01-01

    Preschool teachers' job stressors have received increasing attention but have been understudied in the literature. We investigated the impacts of a classroom-based intervention, the Chicago School Readiness Project (CSRP), on teachers' perceived job stressors and confidence, as indexed by their perceptions of job control, job resources, job…

  9. Faster GPU-based convolutional gridding via thread coarsening

    NASA Astrophysics Data System (ADS)

    Merry, B.

    2016-07-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.

  10. Using Grid for the BABAR Experiment

    SciTech Connect

    Bozzi, C.

    2005-02-11

    The BaBar experiment has been taking data since 1999. In 2001 the computing group started to evaluate the possibility of evolving toward a distributed computing model in a grid environment. We built a prototype system, based on the European Data Grid (EDG), to submit full-scale analysis and Monte Carlo simulation jobs. Computing elements, storage elements, and worker nodes have been installed at SLAC and at various European sites. A BaBar virtual organization (VO) and a test replica catalog (RC) are maintained in Manchester, U.K., and the experiment uses three EDG testbed resource brokers in the U.K. and in Italy. First analysis tests were performed under the assumption that a standard BaBar software release was available at the grid target sites, using the RC to register information about the executable and the produced n-tuples. Hundreds of analysis jobs accessing either Objectivity or Root data files ran on the grid. We tested Monte Carlo production using a farm of the INFN-grid testbed customized to install an Objectivity database and run the BaBar simulation software. First simulation production tests were performed using standard Job Description Language commands, and the output files were written to the closest storage element. A package that can be officially distributed to grid sites not specifically customized for BaBar has been prepared. We are studying the possibility of adding a user-friendly interface to access grid services for BaBar.

  11. Organizational and Environmental Predictors of Job Satisfaction in Community-based HIV/AIDS Service Organizations.

    ERIC Educational Resources Information Center

    Gimbel, Ronald W.; Lehrman, Sue; Strosberg, Martin A.; Ziac, Veronica; Freedman, Jay; Savicki, Karen; Tackley, Lisa

    2002-01-01

    Using variables measuring organizational characteristics and environmental influences, this study analyzed job satisfaction in community-based HIV/AIDS organizations. Organizational characteristics were found to predict job satisfaction among employees with varying intensity based on position within the organization. Environmental influences had…

  12. Risk Aware Overbooking for Commercial Grids

    NASA Astrophysics Data System (ADS)

    Birkenheuer, Georg; Brinkmann, André; Karl, Holger

    Commercial exploitation of the emerging Grid and Cloud markets requires SLAs for selling computing run time. Job traces show that users have a limited ability to estimate the resource needs of their applications. This opens the possibility of applying overbooking during negotiation, but overbooking increases the risk of SLA violations. This work presents an overbooking approach with an integrated risk assessment model. Simulations of this model, based on real-world job traces, show that overbooking offers significant opportunities for Grid and Cloud providers.

  13. The 2004 knowledge base parametric grid data software suite.

    SciTech Connect

    Wilkening, Lisa K.; Simons, Randall W.; Ballard, Sandy; Jensen, Lee A.; Chang, Marcus C.; Hipp, James Richard

    2004-08-01

    One of the most important types of data in the National Nuclear Security Administration (NNSA) Ground-Based Nuclear Explosion Monitoring Research and Engineering (GNEM R&E) Knowledge Base (KB) is parametric grid (PG) data. PG data can be used to improve signal detection, signal association, and event discrimination, but so far their greatest use has been for improving event location by providing ground-truth-based corrections to travel-time base models. In this presentation we discuss the latest versions of the complete suite of Knowledge Base PG tools developed by NNSA to create, access, manage, and view PG data. The primary PG population tool is the Knowledge Base calibration integration tool (KBCIT). KBCIT is an interactive computer application to produce interpolated calibration-based information that can be used to improve monitoring performance by improving precision of model predictions and by providing proper characterizations of uncertainty. It is used to analyze raw data and produce kriged correction surfaces that can be included in the Knowledge Base. KBCIT not only produces the surfaces but also records all steps in the analysis for later review and possible revision. New features in KBCIT include a new variogram autofit algorithm; the storage of database identifiers with a surface; the ability to merge surfaces; and improved surface-smoothing algorithms. The Parametric Grid Library (PGL) provides the interface to access the data and models stored in a PGL file database. The PGL represents the core software library used by all the GNEM R&E tools that read or write PGL data (e.g., KBCIT and LocOO). The library provides data representations and software models to support accurate and efficient seismic phase association and event location. Recent improvements include conversion of the flat-file database (FDB) to an Oracle database representation; automatic access of station/phase tagged models from the FDB during location; modification of the core

  14. Magnetic resonance imaging (MRI) simulation on EGEE grid architecture: a web portal design.

    PubMed

    Bellet, F; Nistoreanu, I; Pera, C; Benoit-Cattin, H

    2006-01-01

    In this paper, we present a web portal that enables the simulation of MRI images on the grid. The simulations are performed with the SIMRI MRI simulator, which is implemented on the grid using MPI and the LCG2 middleware. MRI simulations are mainly used to study MRI sequences and to validate image processing algorithms. As MRI simulation is computationally very expensive, grid technologies bring real added value to the MRI simulation task. Nevertheless, grid access should be simplified so that end users can run MRI simulations, which is why we developed this web portal offering a user-friendly interface for MRI simulation on the grid. The web portal is designed using a three-layer client/server architecture. Its main component is the process layer, which manages the simulation jobs. This part is mainly based on a Java thread that scans a database of simulation jobs. The thread submits new jobs to the grid and updates the status of running jobs. When a job terminates, the thread sends the simulated image to the user. Through a client web interface, the user can submit new simulation jobs, get a detailed status of the running jobs, and view the history of all terminated jobs, together with their status and corresponding simulated images.
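
    The process layer's polling thread can be sketched as a simple loop: submit new jobs, refresh running ones, and deliver results on completion. The `db`, `grid`, and `notifier` objects below are hypothetical interfaces invented for the sketch; they are not the portal's actual API.

```python
import time
import threading

def job_monitor(db, grid, notifier, poll_seconds=30):
    """Background loop mirroring the portal's process layer: submit NEW
    jobs to the grid, refresh RUNNING ones, and deliver results when DONE."""
    while True:
        for job in db.jobs(status="NEW"):
            job.grid_id = grid.submit(job.description)  # hand the job to the grid
            db.update(job, status="RUNNING")
        for job in db.jobs(status="RUNNING"):
            if grid.status(job.grid_id) == "DONE":
                notifier.send(job.user, grid.fetch_output(job.grid_id))
                db.update(job, status="DONE")
        time.sleep(poll_seconds)  # poll the job database periodically

# Started once at portal startup, e.g.:
# threading.Thread(target=job_monitor, args=(db, grid, notifier), daemon=True).start()
```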

  15. A windows-based job safety analysis program for mine safety management

    SciTech Connect

    Chakraborty, P.R.; Poukhovski, D.A.; Bise, C.J.

    1996-12-31

    Job Safety Analysis (JSA) is a process used to determine hazards of and safe procedures for each step of a job. With JSA, the most important steps needed to properly perform a job are first identified. Thus, a specific job or work assignment can be separated into a series of relatively simple steps; the hazards associated with each step are then identified. Finally, solutions can be developed to control each hazard. A Windows-based Job Safety Analysis program (WIN-JSA) was developed at Penn State to assist the safety officials at a mine location in creating new JSAs and regularly reviewing the existing JSAs. The program is an integrated collection of four databases that contain information regarding jobs, job steps, hazards associated with each job step, and recommendations for overcoming the hazards, respectively. This Windows-based personal-computer (PC) program allows the user to access these databases to build a new job configuration (essentially, a new JSA), modify an existing JSA, and print hard copies. It is designed to be used by safety and training supervisors who possess little or no previous computer experience. Therefore, the screen views are designed to be self-explanatory, and the print-outs simulate the commonly used JSA format. Overall, the PC-based approach of creating and maintaining JSAs provides flexibility, reduces paperwork, and can be successfully integrated into existing JSA programs to increase their effectiveness.
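    The four linked databases are described only in prose; below is a minimal sketch of that data model, with hypothetical table and column names (SQLite is used here purely for illustration, not as WIN-JSA's actual storage format):

```python
import sqlite3

# In-memory database standing in for WIN-JSA's four linked databases.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE jobs            (job_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE job_steps       (step_id INTEGER PRIMARY KEY,
                              job_id INTEGER REFERENCES jobs(job_id),
                              seq INTEGER, description TEXT);
CREATE TABLE hazards         (hazard_id INTEGER PRIMARY KEY,
                              step_id INTEGER REFERENCES job_steps(step_id),
                              description TEXT);
CREATE TABLE recommendations (rec_id INTEGER PRIMARY KEY,
                              hazard_id INTEGER REFERENCES hazards(hazard_id),
                              description TEXT);
""")

# A one-step example JSA: job -> step -> hazard -> recommendation.
con.execute("INSERT INTO jobs VALUES (1, 'Roof bolting')")
con.execute("INSERT INTO job_steps VALUES (1, 1, 1, 'Position the bolter')")
con.execute("INSERT INTO hazards VALUES (1, 1, 'Pinch points near boom')")
con.execute("INSERT INTO recommendations VALUES (1, 1, 'Keep hands clear')")

# Reassemble a printable JSA by joining the four tables in step order.
for row in con.execute("""
    SELECT s.seq, s.description, h.description, r.description
    FROM job_steps s
    JOIN hazards h ON h.step_id = s.step_id
    JOIN recommendations r ON r.hazard_id = h.hazard_id
    WHERE s.job_id = 1 ORDER BY s.seq"""):
    print(row)
```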

  16. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors they are able to achieve, their reduced power consumption, and the ease and flexibility of a design process with fast iterations between consecutive versions are examples of the benefits obtained from their use. However, some difficulties that need to be addressed remain when using reconfigurable platforms as accelerators: the need for an in-depth application study to identify potential acceleration, the lack of tools for deploying computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  17. Power Grid Construction Project Portfolio Optimization Based on Bi-level programming model

    NASA Astrophysics Data System (ADS)

    Zhao, Erdong; Li, Shangqi

    2017-08-01

    As the main body of power grid operation, county-level power supply enterprises undertake an important mission: guaranteeing the security of power grid operation and safeguarding orderly social power use. The optimization of grid construction project portfolios is therefore a key issue for the power supply capacity and service level of grid enterprises. According to the actual situation of power grid construction project optimization in county-level power enterprises, and on the basis of a qualitative analysis of the projects, this paper builds a bi-level programming model supported by quantitative analysis. The upper level of the model captures the target constraints of the optimal portfolio; the lower level captures the enterprise's financial restrictions on the size of the project portfolio. Finally, a real example illustrates the operation and the optimization results of the model. By combining qualitative and quantitative analysis, the bi-level programming model improves the accuracy and standardization of grid enterprises' project portfolio decisions.
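    The abstract does not reproduce the model itself; the following is a generic sketch of the bi-level structure it describes, with all symbols hypothetical: $x$ a binary project-selection vector, $F$ the upper-level portfolio objective, $y$ the lower-level financial plan with objective $f$, $c_i$ project costs, and $B$ the enterprise's financial limit.

```latex
\begin{aligned}
\max_{x \in \{0,1\}^n} \quad & F(x, y^*)
    && \text{(upper level: portfolio target)} \\
\text{s.t.} \quad & y^* \in \arg\min_{y} \; f(x, y)
    && \text{(lower level: enterprise financial plan)} \\
 & \textstyle\sum_{i=1}^{n} c_i x_i \le B
    && \text{(financial restriction on portfolio size)}
\end{aligned}
```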

  18. OPNET/Simulink Based Testbed for Disturbance Detection in the Smart Grid

    SciTech Connect

    Sadi, Mohammad A. H.; Dasgupta, Dipankar; Ali, Mohammad Hassan; Abercrombie, Robert K

    2015-01-01

    The important backbone of the smart grid is the cyber/information infrastructure, which is primarily used to communicate with different grid components. A smart grid is a complex cyber-physical system containing numerous and varied sources, devices, controllers and loads, and it is therefore vulnerable to grid-related disturbances. For such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink- and OPNET-based co-simulation platform for carrying out cyber-intrusion studies on the communication network of modern power systems and the smart grid. The IEEE 30-bus power system model is used to demonstrate the effectiveness of the simulated testbed. The experiments were performed by disturbing the circuit breakers' reclosing time through a cyber-attack. Several disturbance situations in the test system are examined, and the results indicate the effectiveness of the proposed co-simulation scheme.

  19. Partitioning medical image databases for content-based queries on a Grid.

    PubMed

    Montagnat, J; Breton, V; E Magnin, I

    2005-01-01

    In this paper we study the impact of executing a medical image database query application on the grid. To lower the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to partition the database efficiently. We show results demonstrating that smart partitioning of the database can lead to significant improvements in total computation time, making grids promising for content-based image retrieval in medical databases.
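    The paper's complexity model is not given in the abstract; below is a minimal sketch of the trade-off it implies, assuming a hypothetical serial job-submission overhead and a constant per-image processing cost:

```python
import math

def makespan(n_nodes, n_images, t_image, t_submit):
    """Modeled total time: jobs are submitted serially (t_submit each),
    then nodes process their share of the images in parallel."""
    return n_nodes * t_submit + math.ceil(n_images / n_nodes) * t_image

def best_partition(n_images, t_image, t_submit, max_nodes):
    """Pick the node count minimizing the modeled makespan."""
    return min(range(1, max_nodes + 1),
               key=lambda n: makespan(n, n_images, t_image, t_submit))

# Toy numbers: 10,000 images, 0.5 s each, 5 s submission overhead per job.
n = best_partition(10_000, 0.5, 5.0, 64)
print(n, makespan(n, 10_000, 0.5, 5.0))   # -> 31 nodes, ~316.5 s
```

    Under this toy model the optimum balances overhead (growing with the number of subsets) against parallel work (shrinking with it), which is the partitioning decision the paper formalizes.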

  20. An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement.

    SciTech Connect

    Edgel, Jared; Benzley, Steven E.; Owen, Steven James

    2010-08-01

    Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, often yields a mesh that is too coarse in some areas and over-refined in others: because it generates 27 new hexes in place of a single hex, it offers little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that employs a 2-refinement method, in which the hex to be refined is divided into eight new hexes. This allows much finer control over mesh density than a 3-refinement procedure and provides a mesh that is efficient for analysis, with high element density in specific locations and reduced density elsewhere. In addition, the tool can be used effectively for inside-out hexahedral grid-based schemes, using Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. The algorithm uses a two-layer transition zone to increase element quality and keep transitions from lower to higher mesh densities smooth, and templates were introduced to allow both convex and concave refinement.

  1. The relationships among nurses' job characteristics and attitudes toward web-based continuing learning.

    PubMed

    Chiu, Yen-Lin; Tsai, Chin-Chung; Fan Chiang, Chih-Yun

    2013-04-01

    The purpose of this study was to explore the relationships between job characteristics (job demands, job control, and social support) and nurses' attitudes toward web-based continuing learning. A total of 221 in-service nurses from hospitals in Taiwan were surveyed. The Attitudes toward Web-based Continuing Learning Survey (AWCL) was employed for the outcome variables, and the Chinese version of the Job Characteristic Questionnaire (C-JCQ) was administered to assess the predictors explaining the nurses' attitudes toward web-based continuing learning. To examine the relationships among these variables, hierarchical regression was conducted. The results of the regression analysis revealed that job control and social support were positively associated with nurses' attitudes toward web-based continuing learning. However, the relationship of job demands to such learning was not significant. Moreover, a significant job demands × job control interaction was found, but the job demands × social support interaction had no significant relationship with attitudes toward web-based continuing learning.

  2. A personality trait-based interactionist model of job performance.

    PubMed

    Tett, Robert P; Burnett, Dawn D

    2003-06-01

    Evidence for situational specificity of personality-job performance relations calls for better understanding of how personality is expressed as valued work behavior. On the basis of an interactionist principle of trait activation (R. P. Tett & H. A. Guterman, 2000), a model is proposed that distinguishes among 5 situational features relevant to trait expression (job demands, distracters, constraints, releasers, and facilitators), operating at task, social, and organizational levels. Trait-expressive work behavior is distinguished from (valued) job performance in clarifying the conditions favoring personality use in selection efforts. The model frames linkages between situational taxonomies (e.g., J. L. Holland's [1985] RIASEC model) and the Big Five and promotes useful discussion of critical issues, including situational specificity, personality-oriented job analysis, team building, and work motivation.

  3. School Based Job Placement Service Model. Final Report 1974-75.

    ERIC Educational Resources Information Center

    Fehnel, Barry J.; Grande, Joseph J.

    The school-based job placement model described in the report was implemented as a cooperative effort between the Reading-Muhlenberg Area Vocational-Technical School and the Bureau of Employment Security. It was designed to help students trained for entry-level positions to find jobs in the areas of their training. General objectives of the program…

  4. Implications of Method-Based Differences in Measuring Job Characteristics

    DTIC Science & Technology

    1988-08-01

    1950s and grew into a theory of job enrichment and work motivation (Herzberg, 1966; Herzberg, Mausner, & Snyderman, 1959). This motivation-hygiene… [(Skill Variety + Task Identity + Task Significance) / 3] × Autonomy × Feedback. In a sample of 658 employees in 62 different jobs in seven organizations, this composite correlated in… [residue of a numbered list of job-activity dimensions: …environment (37); 9. Engaging in physical activities (18); 10. Supervising/directing/estimating (11); 11. Public/customer-related contacts (5); 12. Working in an…]

  5. Fast and precise dense grid size measurement method based on coaxial dual optical imaging system

    NASA Astrophysics Data System (ADS)

    Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei

    2015-10-01

    Test sieves with dense grid structures are widely used in many fields, and accurate grid size calibration is critical for successful grading analysis and test sieving. Traditional calibration methods, however, suffer from low measurement efficiency and from sampling too few grids, which can lead to quality-judgment risk. Here, a fast and precise test sieve inspection method is presented. First, a coaxial imaging system with low- and high-magnification optical probes is designed to capture grid images of the test sieve. A scaling ratio between the low- and high-magnification probes is then obtained from the corresponding grids in the captured images. With this ratio, all grid dimensions in the low-magnification image can be obtained with high accuracy by measuring only a few corresponding grids in the high-magnification image. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be inspected quickly. Experimental results show that the proposed method measures test sieves more efficiently than traditional methods: it can measure 0.15 million grids (grid size 0.1 mm) within 60 seconds, and it can precisely measure grid sizes ranging from 20 μm to 5 mm. In short, the presented method calibrates the grid size of test sieves automatically, with high efficiency and accuracy; surface evaluation based on statistical methods can therefore be implemented effectively, making quality judgments more reasonable.
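    A small worked example of the cross-calibration step described above, with hypothetical numbers: the high-magnification probe fixes the physical scale, and the ratio transfers that scale to the low-magnification image in which all grids are visible.

```python
# High-magnification probe: one grid opening measured with high accuracy.
grid_size_um_high = 100.0   # physical size of the opening (micrometres)
pixels_high = 500.0         # the same opening in high-mag pixels

# The same grid opening as seen in the coaxial low-magnification image.
pixels_low_same_grid = 50.0

# Physical size per low-magnification pixel, fixed by the shared grid.
um_per_low_pixel = grid_size_um_high / pixels_low_same_grid   # 2.0 um/px

# Every grid in the wide low-mag field can now be sized from pixels alone.
low_mag_measurements_px = [49.5, 50.2, 51.0, 48.8]
sizes_um = [px * um_per_low_pixel for px in low_mag_measurements_px]
print(sizes_um)   # [99.0, 100.4, 102.0, 97.6]
```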

  6. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    SciTech Connect

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.; Pebay, Philippe Pierre; Gentile, Ann C.; Thompson, David C.; Roe, Diana C.; De Sapio, Vincent; Brandt, James M.

    2010-08-01

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective means of discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
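    A minimal sketch of the edge-synthesis idea using networkx; the job records and weighting rules here are hypothetical and far simpler than the report's ontology:

```python
import itertools
import networkx as nx

# Toy job records: (job id, user, compute nodes used, start hour).
jobs = [
    ("j1", "alice", {"n01", "n02"}, 0),
    ("j2", "bob",   {"n02", "n03"}, 1),
    ("j3", "alice", {"n07"},        2),
]

G = nx.Graph()
for jid, user, nodes, start in jobs:
    G.add_node(jid, user=user, nodes=nodes, start=start)

# Edges express relationships: shared nodes, same user, temporal proximity.
for (a, ua, na, ta), (b, ub, nb, tb) in itertools.combinations(jobs, 2):
    w = 0.0
    w += len(na & nb)                        # shared compute nodes
    w += 0.5 if ua == ub else 0.0            # same user
    w += 0.25 if abs(ta - tb) <= 1 else 0.0  # close in time
    if w > 0:
        G.add_edge(a, b, weight=w)

print(list(G.edges(data=True)))
```

    Once the weighted graph exists, standard algorithms (clustering, shortest paths, centrality) become the analysis engine the report describes.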

  7. Grid-based Model of The Volga Basin

    NASA Astrophysics Data System (ADS)

    Tate, E.; Georgievsky, M.; Shalygin, A.; Yezhov, A.

    The Volga is the largest river in Europe and is of great significance for the economy of Russia. The Volga basin, of about 1.4 million km2, displays a wide range of topography, hydrometeorology and water resource problems. Its cascade of 12 large reservoirs controls the river flow. The Volga contributes about 80% of the total water inflow to the Caspian Sea and thus forms the main influence on Sea level fluctuations. Variability in climate and climate change give uncertainty to the current and future availability and distribution of water resources in the Volga basin. This Volga model was part of a larger study that aimed to develop a realistic and consistent methodology, including the facility to take into account the effects of climate change scenarios for the year 2050, indicating possible changes in future river inflows to the Caspian Sea. The methodology involved examining flows and water demands on a 0.5 by 0.5 grid. This choice was a compromise between that needed to represent spatial variability and the availability of suitable data. The modelling approach was based on work aimed at examining water resources availability on a world-wide scale (Meigh et al., 1998). At a preliminary stage the main direction of flow for each cell is determined, assuming that all the flow from one cell flows into one of the adjoining cells. Based on these flow directions, the order in which the cells must be processed is determined so that the flows from upstream cells have always been calculated before processing the cell into which they flow. The processing order also takes into account the artificial transfers between cells. Surface runoff is generated for each cell by using a rainfall-runoff model; the model chosen was the probability-distributed model (PDM) developed by Moore (1985). The flows are then routed through the linked cells to estimate total runoff for each cell. The effects of lakes and wetlands, water abstractions, return flows, artificial water
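    A minimal sketch of the cell-ordering step, assuming each cell's flow direction has already been resolved to a single downstream cell; Kahn's topological sort guarantees that upstream cells are always processed first (cell names hypothetical):

```python
from collections import deque

# Hypothetical drainage: cell -> the one cell it flows into (None = outlet).
downstream = {"A": "C", "B": "C", "C": "D", "D": None}

# Count upstream contributors of each cell.
n_upstream = {c: 0 for c in downstream}
for c, d in downstream.items():
    if d is not None:
        n_upstream[d] += 1

# Kahn's algorithm: start from headwater cells and release a cell only
# once all of its upstream neighbours have been processed.
order, queue = [], deque(c for c, n in n_upstream.items() if n == 0)
while queue:
    c = queue.popleft()
    order.append(c)
    d = downstream[c]
    if d is not None:
        n_upstream[d] -= 1
        if n_upstream[d] == 0:
            queue.append(d)

print(order)   # e.g. ['A', 'B', 'C', 'D']: runoff can be routed in this order
```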

  8. Parallel and Grid-Based Data Mining - Algorithms, Models and Systems for High-Performance KDD

    NASA Astrophysics Data System (ADS)

    Congiusta, Antonio; Talia, Domenico; Trunfio, Paolo

    Data Mining is often a computing-intensive and time-consuming process. For this reason, several Data Mining systems have been implemented on parallel computing platforms to achieve high performance in the analysis of large data sets. Moreover, when large data repositories are coupled with geographical distribution of data, users and systems, more sophisticated technologies are needed to implement high-performance distributed KDD systems. Since computational Grids emerged as privileged platforms for distributed computing, a growing number of Grid-based KDD systems have been proposed. In this chapter we first discuss different ways to exploit parallelism in the main Data Mining techniques and algorithms, and then we discuss Grid-based KDD systems. Finally, we introduce the Knowledge Grid, an environment which makes use of standard Grid middleware to support the development of parallel and distributed knowledge discovery applications.

  9. SARS Grid--an AG-based disease management and collaborative platform.

    PubMed

    Hung, Shu-Hui; Hung, Tsung-Chieh; Juang, Jer-Nan

    2006-01-01

    This paper describes the development of the NCHC's Severe Acute Respiratory Syndrome (SARS) Grid project, an Access Grid (AG)-based disease management and collaborative platform that allowed SARS patients' medical data to be dynamically shared and discussed between hospitals and doctors using AG's video teleconferencing (VTC) capabilities. During the height of the SARS epidemic in Asia, SARS Grid and the SARShope website significantly curbed the spread of SARS by helping doctors manage the in-hospital and in-home care of quarantined SARS patients through medical data exchange and the monitoring of patients' symptoms. Now that the SARS epidemic has ended, the primary function of the SARS Grid project is that of a web-based informatics tool to increase public awareness of SARS and other epidemic diseases. Additionally, the SARS Grid project can be viewed and further studied as an outstanding model of epidemic disease prevention and containment.

  10. Grid-based medical image workflow and archiving for research and enterprise PACS applications

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Dixit, Manasee; Chen, Vincent; Chervenak, Ann; Nelson, Marvin D.; Kesselmann, Carl

    2006-03-01

    PACS provides a consistent model to communicate and store images, with recent additions for fault tolerance and disaster reliability. However, PACS still lacks fine-grained user-based authentication and authorization, flexible data distribution, and semantic associations between images and their embedded information. These are critical components for future enterprise operations in dynamic medical research and health care environments. Here we introduce a flexible Grid-based model of a PACS that adds these methods, and we describe its implementation in the Children's Oncology Group (COG) Grid. The combination of the existing standard for medical images, DICOM, with the abstraction to files and meta-catalog information in the Grid domain provides new flexibility beyond traditional PACS design. We conclude that Grid technology provides a reliable and efficient distributed informatics infrastructure that is well suited to medical informatics as described in this work. Grid technology will provide new opportunities for PACS deployment and, subsequently, new medical image applications.

  11. VIM-based dynamic sparse grid approach to partial differential equations.

    PubMed

    Mei, Shu-Li

    2014-01-01

    Combining the variational iteration method (VIM) with sparse grid theory, a dynamic sparse grid approach for nonlinear PDEs is proposed in this paper. In this method, a multilevel interpolation operator is first constructed based on sparse grid theory; the operator is based on a linear combination of the basis functions yet independent of them. Second, by means of the precise integration method (PIM), the VIM is developed to solve the nonlinear system of ODEs obtained from the discretization of the PDEs. In addition, a dynamic choice scheme for both the inner and external grid points is proposed; this differs from the traditional interval wavelet collocation method in that the choice of both the inner and external grid points is dynamic. The numerical experiments show that our method outperforms the traditional wavelet collocation method, especially in solving PDEs with Neumann boundary conditions.

  12. Analyzing data flows of WLCG jobs at batch job level

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-05-01

    With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses, firewall configurations, and the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at the batch job level, a new tool has been developed and put into operation at the GridKa Tier 1 center for monitoring continuous data streams and characteristics of WLCG jobs and pilots. Long-term measurements and data collection are in progress, and these measurements have already proven useful for analyzing misbehavior and various issues. We therefore aim for an automated, real-time approach to anomaly detection, which first requires prototypes for standard workflows to be examined. Based on several months of measurements, different features of HEP jobs are evaluated regarding their effectiveness for data mining approaches that identify these common workflows. The paper introduces the measurement approach and statistics, as well as the general concept and first results in classifying different HEP job workflows derived from the measurements at GridKa.

  13. Grid-based visual aid for enhanced microscopy screening in diagnostic cytopathology

    NASA Astrophysics Data System (ADS)

    Riziotis, Christos; Tsiambas, Evangelos

    2016-10-01

    A grid acting as a spatial reference and calibration aid, fabricated on glass cover slips by laser micromachining and attached to the carrier microscope slide, is proposed as a visual aid for improving the microscopy diagnostic procedure in the screening of cytological slides. A set of borderline and abnormal PAP test cases (according to the Bethesda 2014 revised terminology) was analyzed by conventional and grid-based screening procedures, and statistical analysis showed that the introduced grid-based microscopy led to improved diagnosis, identifying a larger number of abnormal cells, especially pre-neoplastic and neoplastic/cancerous cells, in a shorter period of time.

  14. Fast and accurate grid representations for atom-based docking with partner flexibility.

    PubMed

    de Vries, Sjoerd J; Zacharias, Martin

    2017-06-30

    Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied. © 2017 Wiley Periodicals, Inc.
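    The ATTRACT grid format is not detailed in the abstract; the following is a minimal numpy sketch of the kind of lookup a precalculated grid potential enables, trilinear interpolation of the potential at an atom position (grid contents and names are hypothetical):

```python
import numpy as np

def trilinear(grid, origin, spacing, pos):
    """Interpolate a 3D potential grid at an arbitrary point.
    grid: (nx, ny, nz) array of potential values at lattice points."""
    t = (np.asarray(pos) - origin) / spacing   # fractional grid coordinates
    i = np.floor(t).astype(int)                # lower-corner lattice index
    f = t - i                                  # offsets in [0, 1)
    v = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                v += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return v

# Toy grid: the potential equals x + y + z, so interpolation is exact.
ax = np.arange(4.0)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
g = X + Y + Z
print(trilinear(g, origin=np.zeros(3), spacing=1.0, pos=(1.25, 0.5, 2.0)))  # 3.75
```

    The precomputation cost is paid once per receptor; afterwards each atom's energy contribution is a constant-time lookup, which is where the reported speed-ups come from.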

  15. Decision Support Method with AHP Based on Evaluation Grid Method

    NASA Astrophysics Data System (ADS)

    Yumoto, Masaki

    In decision support with AHP, accuracy tends to fall markedly when alternatives are evaluated using only qualitative criteria. To solve this problem, it is necessary to define the method for setting criteria clearly. The Evaluation Grid Method can construct the recognition structure that forms the elements of the target causality model, and through verification of the hypothesis, the criteria for AHP can be extracted. This paper proposes how to model a human's recognition structure with the Evaluation Grid Method, and how to support decisions with AHP using the criteria that construct the model. In practical experiments, the proposed method contributed to the creation of objective criteria, and examinees were able to receive good decision support.

  16. Head and neck 192Ir HDR-brachytherapy dosimetry using a grid-based Boltzmann solver

    PubMed Central

    Wolf, Sabine; Kóvacs, George

    2013-01-01

    Purpose: To compare dosimetry for head and neck cancer patients, calculated with the TG-43 formalism and a commercially available grid-based Boltzmann solver. Material and methods: This study included 3D dosimetry of 49 consecutive brachytherapy head and neck cancer patients, computed both by a grid-based Boltzmann solver that takes tissue inhomogeneities into account and by the TG-43 formalism. 3D treatment planning was carried out using computed tomography. Results: The dosimetric indices D90 and V100 for the target volume were about 3% lower (median value) for the grid-based Boltzmann solver relative to the TG-43-based computation (p < 0.01). The V150 dose parameter was 1.6% higher for TG-43 than for the grid-based Boltzmann solver (p < 0.01). Conclusions: Dose differences to the target volume between the results of a grid-based Boltzmann solver and the TG-43 formalism were found for high-dose-rate head and neck brachytherapy patients. Differences in D90 of the CTV were small (mean 2.63 Gy for the grid-based Boltzmann solver vs. 2.71 Gy for TG-43). In our clinical practice, prescription doses remain unchanged for high-dose-rate head and neck brachytherapy for the time being. PMID:24474973

  17. Head and neck (192)Ir HDR-brachytherapy dosimetry using a grid-based Boltzmann solver.

    PubMed

    Siebert, Frank-André; Wolf, Sabine; Kóvacs, George

    2013-12-01

    To compare dosimetry for head and neck cancer patients, calculated with the TG-43 formalism and a commercially available grid-based Boltzmann solver. This study included 3D dosimetry of 49 consecutive brachytherapy head and neck cancer patients, computed both by a grid-based Boltzmann solver that takes tissue inhomogeneities into account and by the TG-43 formalism. 3D treatment planning was carried out using computed tomography. The dosimetric indices D90 and V100 for the target volume were about 3% lower (median value) for the grid-based Boltzmann solver relative to the TG-43-based computation (p < 0.01). The V150 dose parameter was 1.6% higher for TG-43 than for the grid-based Boltzmann solver (p < 0.01). Dose differences to the target volume between the results of a grid-based Boltzmann solver and the TG-43 formalism were found for high-dose-rate head and neck brachytherapy patients. Differences in D90 of the CTV were small (mean 2.63 Gy for the grid-based Boltzmann solver vs. 2.71 Gy for TG-43). In our clinical practice, prescription doses remain unchanged for high-dose-rate head and neck brachytherapy for the time being.

  18. Interviewing for the Principal's Job: A Behavior-Based Approach

    ERIC Educational Resources Information Center

    Clement, Mary C.

    2009-01-01

    The stakes are high when one decides to leave a tenured teaching position or an assistant principalship to interview for a principal's position. However, the stakes are high for the future employer as well. The school district needs to know that the applicant is ready for a job that is very complex. As a new principal, the applicant will be…

  20. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    ERIC Educational Resources Information Center

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  1. Micro-grid platform based on NODE.JS architecture, implemented in electrical network instrumentation

    NASA Astrophysics Data System (ADS)

    Duque, M.; Cando, E.; Aguinaga, A.; Llulluna, F.; Jara, N.; Moreno, T.

    2016-05-01

    In this paper, we propose a theory about the impact of micro-grid-based systems in non-industrialized countries that aim to improve energy exploitation through alternative methods of clean and renewable energy generation, and we present an application to manage the behavior of the micro-grids, built on the NodeJS, Django and IOJS technologies. Micro-grids allow energy flow to be managed optimally by injecting electricity directly into small urban cells of the electric network, in a low-cost and readily available way. Unlike conventional systems, micro-grids can communicate with each other to carry energy to places with higher demand at the right moments. The system does not require energy storage, so costs are lower than for conventional systems such as fuel cells or solar panels; and even though micro-grids are independent systems, they are not isolated. The expected impact of this analysis is an improvement of the electrical network without requiring more control than an intelligent network (SMART GRID), leading to up to a 20% increase in energy use within a given network. This suggests that other sources of energy generation are available; but for today's needs, methods must be standardized and kept in place to support all future technologies, and the best options are smart grids and micro-grids.

  2. Development and pilot trial of a web-based job placement information network.

    PubMed

    Chan, Eliza W C; Tam, S F

    2005-01-01

    The purpose of this project was to develop and pilot a web-based job placement information network aimed at enhancing the work trial and job placement opportunities of people with disabilities (PWD). Efficient use of information technology in vocational rehabilitation has been suggested as a way to improve PWD employment opportunities, enabling them to contribute to society as responsible citizens. In this preliminary study, a web-based employer network was developed to explore Hong Kong employers' needs and intentions in employing PWD. The results indicated that Hong Kong employers generally agreed to arrange work trials for PWD whose work abilities match job requirements. They also expressed that they would offer permanent job placements to those PWD who showed satisfactory performance in work trials. The present study showed that using an information network can expedite communications between employers and job placement services, and thus improve job placement service outcomes. It is hoped that a job placement databank can be developed by accumulating responses from potential employers.

  3. Smart Energy Management and Control for Fuel Cell Based Micro-Grid Connected Neighborhoods

    SciTech Connect

    Dr. Mohammad S. Alam

    2006-03-15

    Fuel cell power generation promises to be an efficient, pollution-free, reliable power source for both large-scale and small-scale, remote applications. DOE formed the Solid State Energy Conversion Alliance with the intention of breaking one of the last barriers remaining to cost-effective fuel cell power generation. The Alliance’s goal is to produce a core solid-state fuel cell module at a cost of no more than $400 per kilowatt, ready for commercial application by 2010. With their inherently high, 60-70% conversion efficiencies, significantly reduced carbon dioxide emissions, and negligible emissions of other pollutants, fuel cells will be the obvious choice for a broad variety of commercial and residential applications once their cost effectiveness is improved. In a research program funded by the Department of Energy, the research team has been investigating smart fuel-cell-operated residential micro-grid communities. This research has focused on using smart control systems in conjunction with fuel cell power plants, with the goal of reducing energy consumption and demand peaks while still meeting the energy requirements of any household in a micro-grid community environment. In Phases I and II, a smart energy management and control (SEMaC) system was developed and extended to a micro-grid community, and an optimal configuration was determined for a single fuel cell power plant supplying power to a ten-home micro-grid community. In Phase III, the plan is to expand this work to fuel cell based micro-grid connected neighborhoods (mini-grids). The economic implications of hydrogen cogeneration will be investigated. These efforts are consistent with DOE’s mission to decentralize domestic electric power generation and to accelerate the onset of the hydrogen economy. A major challenge facing the routine implementation and use of a fuel cell based mini-grid is the varying electrical demand of the individual micro-grids, and analyzing these issues is therefore vital. Efforts are needed to determine

  4. Skill-based job descriptions for sterile processing technicians--a total quality approach.

    PubMed

    Doyle, F F; Marriott, M A

    1994-05-01

    Rochester General Hospital in Rochester, NY, included as part of its total quality management effort the task of revising job descriptions for its sterile processing technicians as a way to decrease turnover and increase job satisfaction, teamwork and quality output. The department's quality team developed "skill banding," a tool that combines skill-based pay with large salary ranges that span job classifications normally covered by several separate salary ranges. They defined the necessary competencies needed to move through five skill bands and worked with the rest of the department to fine-tune the details. The process has only recently been implemented, but department employees are enthusiastic about it.

  5. A grid-based infrastructure for ecological forecasting of rice land Anopheles arabiensis aquatic larval habitats

    PubMed Central

    Jacob, Benjamin G; Muturi, Ephantus J; Funes, Jose E; Shililu, Josephat I; Githure, John I; Kakoma, Ibulaimu I; Novak, Robert J

    2006-01-01

    Background: For remote identification of mosquito habitats, the first step is often to construct a discrete tessellation of the region. In applications where complex geometries do not need to be represented, such as urban habitats, regular orthogonal grids are constructed in GIS and overlaid on satellite images. However, rice land vector mosquito aquatic habitats are rarely uniform in space or character. An orthogonal grid overlaid on satellite data of rice-land areas may fail to capture physical or man-made structures, i.e., paddies, canals, and berms, at these habitats. Unlike an orthogonal grid, digitizing each habitat converts a polygon into a grid cell, which may conform to rice-land habitat boundaries. This research illustrates the application of a random sampling methodology, comparing an orthogonal and a digitized grid for assessment of rice land habitats. Methods: A land cover map was generated in Erdas Imagine V8.7® using QuickBird data acquired in July 2005 for three villages within the Mwea Rice Scheme, Kenya. An orthogonal grid was overlaid on the images. In the digitized dataset, each habitat was traced in Arc Info 9.1®. All habitats in each study site were stratified based on levels of rice stage. Results: The orthogonal grid did not identify any habitat, while the digitized grid identified every habitat by stratum and study site. An analysis of variance test indicated the relative abundance of An. arabiensis at the three study sites to be significantly higher during the post-transplanting stage of the rice cycle. Conclusion: Regions of higher Anopheles abundance, based on digitized grid cell information, probably reflect underlying differences in the abundance of mosquito habitats in a rice land environment, which is where limited control resources could be concentrated to reduce vector abundance. PMID:17062142

  6. A grid-based infrastructure for ecological forecasting of rice land Anopheles arabiensis aquatic larval habitats.

    PubMed

    Jacob, Benjamin G; Muturi, Ephantus J; Funes, Jose E; Shililu, Josephat I; Githure, John I; Kakoma, Ibulaimu I; Novak, Robert J

    2006-10-24

    For remote identification of mosquito habitats, the first step is often to construct a discrete tessellation of the region. In applications where complex geometries do not need to be represented, such as urban habitats, regular orthogonal grids are constructed in GIS and overlaid on satellite images. However, rice land vector mosquito aquatic habitats are rarely uniform in space or character. An orthogonal grid overlaid on satellite data of rice-land areas may fail to capture physical or man-made structures, i.e., paddies, canals, and berms, at these habitats. Unlike an orthogonal grid, digitizing each habitat converts a polygon into a grid cell, which may conform to rice-land habitat boundaries. This research illustrates the application of a random sampling methodology, comparing an orthogonal and a digitized grid for assessment of rice land habitats. A land cover map was generated in Erdas Imagine V8.7 using QuickBird data acquired in July 2005 for three villages within the Mwea Rice Scheme, Kenya. An orthogonal grid was overlaid on the images. In the digitized dataset, each habitat was traced in Arc Info 9.1. All habitats in each study site were stratified based on levels of rice stage. The orthogonal grid did not identify any habitat, while the digitized grid identified every habitat by stratum and study site. An analysis of variance test indicated the relative abundance of An. arabiensis at the three study sites to be significantly higher during the post-transplanting stage of the rice cycle. Regions of higher Anopheles abundance, based on digitized grid cell information, probably reflect underlying differences in the abundance of mosquito habitats in a rice land environment, which is where limited control resources could be concentrated to reduce vector abundance.

  7. Research and design of smart grid monitoring control via terminal based on iOS system

    NASA Astrophysics Data System (ADS)

    Fu, Wei; Gong, Li; Chen, Heli; Pan, Guangji

    2017-06-01

    Aiming at a series of problems in current smart grid monitoring and control terminals, such as high cost, poor portability, simplistic monitoring systems, poor software extensibility, low reliability of information transmission, single-purpose man-machine interfaces, and poor security, a smart grid remote monitoring system based on the iOS system has been designed. The system interacts with the smart grid server to acquire grid data through WiFi/3G/4G networks and to monitor the running status of each grid line as well as the operating conditions of power plant equipment. When an exception occurs in the power plant, incident information can be sent to the user's iOS terminal in a timely manner, providing troubleshooting information that helps grid staff make the right decisions promptly and avoid further accidents. Field tests have shown that the system realizes integrated grid monitoring functions with low maintenance cost, a friendly interface, and high security and reliability, and that it has practical applicable value.

  8. The job competency of radiological technologists in Korea based on specialists opinion and questionnaire survey

    PubMed Central

    2017-01-01

    Purpose: Although there are over 40,000 licensed radiological technologists (RTs) in Korea, job competency standards have yet to be defined. This study aims to clarify the job competency of Korean RTs. Methods: A task force team of 11 professional RTs was recruited to analyze the job competency of domestic and international RTs, and a draft of the job competency of Korean RTs was prepared. A survey of RTs was then conducted from May 21 to July 30, 2016, recording their attitudes toward these competencies. Results: We identified five modules (professionalism, patient management, health and safety, operation of equipment, and procedure management) and 131 detailed job competencies for RTs in Korea. "Health and safety" had the highest average score and "professionalism" the lowest for both job performance and importance. The content validity ratios for the 131 subcompetencies were mostly valid. Conclusion: Establishment of standard guidelines for RT job competency for multidisciplinary healthcare at medical institutions may be possible based on our results, which will help educators of RT training institutions clarify their training and education. PMID:28502973

  9. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project addressing ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure for the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, while Grid-oriented technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is interoperability between geospatial and Grid infrastructures, providing both the basic and the extended features of the two technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues they introduce (data management, secure data transfer, data distribution and data computation), an infrastructure capable of managing all of these problems becomes an important requirement. The Grid promotes and facilitates the secure interoperation of heterogeneous, distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for the communication and transfer of messages based on certificates. This presentation analyzes and discusses the most significant use cases for enabling the interoperability of OGC Web services with the Grid environment, and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between computational grid and

  10. Effects of an ergonomics-based job stress management program on job strain, psychological distress, and blood cortisol among employees of a national private bank in Denpasar Bali.

    PubMed

    Purnawati, Susy; Kawakami, Norito; Shimazu, Akihito; Sutjana, Dewa Putu; Adiputra, Nyoman

    2016-08-06

    The present work describes a newly developed ergonomics-based job stress management program, Ergo-JSI (Ergonomics-based Job Stress Intervention), including a pilot study to ascertain the effects of the program on job strain, psychological distress, and blood cortisol levels among bank employees in Indonesia. A single-group, pre- and post-test experimental study was conducted in a sample of employees of a national bank in Denpasar, Bali, Indonesia. The outcomes of the study focused on reductions in the job strain index and psychological distress, measured by the Indonesian version of the Brief Job Stress Questionnaire (BJSQ), and on improvement in blood cortisol levels. A total of 25 male employees, with an average age of 39, received an eight-week intervention with the Ergo-JSI. Compared to baseline, the job strain index decreased by 46% (p<0.05) and psychological distress decreased by 28% (p<0.05). These changes were accompanied by a 24% reduction in blood cortisol levels (p<0.05). The newly developed Ergo-JSI program may hence be effective for decreasing job strain, psychological distress, and blood cortisol among employees in Indonesia.

  11. Grid technology in tissue-based diagnosis: fundamentals and potential developments

    PubMed Central

    Görtler, Jürgen; Berghoff, Martin; Kayser, Gian; Kayser, Klaus

    2006-01-01

    Tissue-based diagnosis still remains the most reliable and specific diagnostic medical procedure. It is involved in all technological developments in medicine and biology and incorporates tools of quite different applications, ranging from molecular genetics to image acquisition and recognition algorithms (for image analysis), and from tissue culture to electronic communication services. Grid technology seems to possess all the features needed to efficiently address the specific constellation of an individual patient, obtaining a detailed and accurate diagnosis while providing all relevant information and references. Grid technology can be briefly explained as so-called nodes that are linked together and share certain communication rules using open standards. The number of nodes can vary, as can their functionality, depending on the needs of a specific user at a given point in time. In the beginning of Grid technology, the nodes were used as supercomputers, combining and enhancing computational power. At present, at least five different Grid functions can be distinguished, comprising 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. The general structures and functions of a Grid are described, and their potential implementation into virtual tissue-based diagnosis is analyzed. As a result, Grid technology offers a new dimension in accessing distributed information and knowledge and in improving the quality of tissue-based diagnosis, and thereby medical quality. PMID:16930477

  12. The Particle Physics Data Grid. Final Report

    SciTech Connect

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.

  13. An open source software for fast grid-based data-mining in spatial epidemiology (FGBASE).

    PubMed

    Baker, David M; Valleron, Alain-Jacques

    2014-10-30

    Examining whether disease cases are clustered in space is an important part of epidemiological research. Another important part of spatial epidemiology is testing whether patients suffering from a disease are more, or less, exposed to environmental factors of interest than adequately defined controls. Both approaches involve determining the number of cases and controls (or population at risk) in specific zones. For cluster searches, this often must be done for millions of different zones, and doing it by calculating distances can lead to very lengthy computations. In this work we discuss the computational advantages of geographical grid-based methods, and we introduce open source software (FGBASE) that we have created for this purpose. Geographical grids based on the Lambert azimuthal equal-area projection are well suited to spatial epidemiology because they preserve area: each cell of the grid has the same area. We describe how data are projected onto such a grid, as well as grid-based algorithms for spatial epidemiological data-mining; the software program (FGBASE) that we have developed implements these grid-based methods. The grid-based algorithms perform extremely fast, particularly for cluster searches. When applied to a cohort of French Type 1 Diabetes (T1D) patients, as an example, the grid-based algorithms detected potential clusters in a few seconds on a modern laptop. This compares very favorably to an equivalent cluster search using distance calculations instead of a grid, which took over 4 hours on the same computer. In the case study we discovered 4 potential clusters of T1D cases near the cities of Le Havre, Dunkerque, Toulouse and Nantes. One example of environmental analysis with our software was to study whether a significant association could be found between the disease and distance to vineyards with heavy pesticide use; none was found. In both examples, the software facilitates the rapid testing of hypotheses. Grid-based algorithms for mining
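    FGBASE's internals are not shown in the abstract; below is a minimal sketch of why grid binning avoids pairwise distance scans, assuming points already projected to equal-area (x, y) coordinates in metres (all data hypothetical):

```python
from collections import Counter

def cell_of(x, y, cell_size):
    """Map projected coordinates to an integer grid-cell key in O(1)."""
    return (int(x // cell_size), int(y // cell_size))

def counts_per_cell(points, cell_size):
    """One pass over the data instead of O(n * zones) distance tests."""
    return Counter(cell_of(x, y, cell_size) for x, y in points)

cases    = [(1200.0, 340.0), (1250.0, 310.0), (9000.0, 50.0)]
controls = [(1190.0, 360.0), (8800.0, 80.0), (9100.0, 20.0)]

case_counts = counts_per_cell(cases, cell_size=1000.0)
ctrl_counts = counts_per_cell(controls, cell_size=1000.0)

# A cell's case/control counts are a cheap first screen for clustering.
for cell in case_counts:
    print(cell, case_counts[cell], ctrl_counts.get(cell, 0))
```

    Because the projection is equal-area, equal cell counts correspond to equal ground areas, so per-cell rates are directly comparable across the map.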

  14. A population-based job exposure matrix for power-frequency magnetic fields.

    PubMed

    Bowman, Joseph D; Touchstone, Jennifer A; Yost, Michael G

    2007-09-01

    A population-based job exposure matrix (JEM) was developed to assess personal exposures to power-frequency magnetic fields (MF) for epidemiologic studies. The JEM compiled 2,317 MF measurements taken on or near workers by 10 studies in the United States, Sweden, New Zealand, Finland, and Italy. A database was assembled from the original data for six studies plus summary statistics grouped by occupation from four other published studies. The job descriptions were coded into the 1980 Standard Occupational Classification system (SOC) and then translated to the 1980 job categories of the U.S. Bureau of the Census (BOC). For each job category, the JEM database calculated the arithmetic mean, standard deviation, geometric mean, and geometric standard deviation of the workday-average MF magnitude from the combined data. Analysis of variance demonstrated that the combining of MF data from the different sources was justified, and that the homogeneity of MF exposures in the SOC occupations was comparable to JEMs for solvents and particulates. BOC occupation accounted for 30% of the MF variance (p < 10^-6), and the contrast (ratio of the between-job variance to the total of within- and between-job variances) was 88%. Jobs lacking data had their exposures inferred from measurements on similar occupations. The JEM provided MF exposures for 97% of the person-months in a population-based case-control study and 95% of the jobs on death certificates in a registry study covering 22 states. Therefore, we expect this JEM to be useful in other population-based epidemiologic studies.
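    A minimal sketch of the per-occupation summaries a JEM of this kind tabulates, with hypothetical measurements; the geometric mean (GM) and geometric standard deviation (GSD) come from the log-transformed values:

```python
import math
from statistics import mean, stdev

# Hypothetical workday-average MF measurements (uT) keyed by occupation.
measurements = {
    "electrician":  [1.2, 0.8, 2.5, 1.6, 0.9],
    "office clerk": [0.10, 0.20, 0.15, 0.12],
}

for job, mf in measurements.items():
    logs = [math.log(x) for x in mf]
    am, sd = mean(mf), stdev(mf)                        # arithmetic scale
    gm, gsd = math.exp(mean(logs)), math.exp(stdev(logs))  # log scale
    print(f"{job:13s} AM={am:.2f} SD={sd:.2f} GM={gm:.2f} GSD={gsd:.2f}")
```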

  15. Operational flash flood forecasting platform based on grid technology

    NASA Astrophysics Data System (ADS)

    Thierion, V.; Ayral, P.-A.; Angelini, V.; Sauvagnargues-Lesage, S.; Nativi, S.; Payrastre, O.

    2009-04-01

    Flash flood events in the south of France, such as those of 8 and 9 September 2002 in the Grand Delta territory, caused major economic and human damage. Following this catastrophic hydrological situation, a reform of the flood warning services was initiated (put in place in 2006). This political reform transformed the 52 existing flood warning services (SAC) into 22 flood forecasting services (SPC), assigning them more hydrologically consistent territories and a new, effective hydrological forecasting mission. Furthermore, a national central service (SCHAPI) was created to ease this transformation and support the local services in their new objectives. New operational requirements were identified: SPC and SCHAPI carry the responsibility of clearly disseminating crucial hydrological information to public organizations, civil protection actors and the population, so that potentially dramatic flood events can be better anticipated; and an effective hydrological forecasting capability in these services appears essential, particularly for the flash flood phenomenon. Model improvement and optimization was thus one of the most critical requirements. Initially dedicated to supporting forecasters in their monitoring mission through the analysis of measuring stations and rainfall radar images, hydrological models must become more efficient in their capacity to anticipate the hydrological situation. Understanding the natural phenomena occurring during flash floods is the main focus of current hydrological research; rather than trying to explain such complex processes, the research presented here addresses the well-known need for computational power and data storage capacity in these services. In recent years, Grid technology has emerged as a technological revolution in high performance computing (HPC), allowing large-scale resource sharing, use of computational power, and collaboration across networks. Nowadays, the EGEE (Enabling Grids for E-science in Europe) project represents the most important

  16. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    PubMed

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Because of this structure, a smart grid system faces potential security threats in its network connectivity. To address this problem, we develop and apply a novel scheme to measure vulnerability in the smart grid domain. Vulnerability quantification can be the first step in security analysis because it helps prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that network connectivity needs to be considered for better-optimized vulnerability quantification.
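
    The abstract does not give the scoring formula, but the general idea, scoring an attack route by combining per-hop network vulnerability with an end-to-end security term, can be sketched roughly as follows (all names, weights, and the aggregation rule here are hypothetical, not the authors'):

```python
def route_vulnerability(route, node_scores, e2e_security, w_net=0.6, w_e2e=0.4):
    """Toy attack-route score: combine the mean per-hop vulnerability with an
    end-to-end security term for the route. Weights are illustrative only.

    route        : ordered list of node ids along the attack path
    node_scores  : dict node id -> vulnerability score in [0, 10] (CVSS-like)
    e2e_security : security level of the route in [0, 10]; higher = safer
    """
    hop_score = sum(node_scores[n] for n in route) / len(route)
    return w_net * hop_score + w_e2e * (10.0 - e2e_security)

# Example: a 3-hop route through an advanced metering infrastructure
scores = {"smart_meter": 6.1, "concentrator": 4.8, "head_end": 7.4}
print(route_vulnerability(["smart_meter", "concentrator", "head_end"],
                          scores, e2e_security=3.5))
```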

  17. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    PubMed Central

    Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Because of this structure, a smart grid system faces potential security threats in its network connectivity. To address this problem, we develop and apply a novel scheme to measure vulnerability in the smart grid domain. Vulnerability quantification can be the first step in security analysis because it helps prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that network connectivity needs to be considered for better-optimized vulnerability quantification. PMID:25152923

  18. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large-scale applications in a heterogeneous distributed environment. It is likely that the large-scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects: they have different grids, the data is in different unit systems, and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes, some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed, in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues, there exist operational issues such as platform stability and resource management.
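
    The translation layer described above, interpolating a field from one model's grid to another and converting units, can be illustrated in a few lines. A 1-D linear-interpolation sketch (our own; production couplers use multi-dimensional, often conservative, remapping):

```python
import numpy as np

def translate_field(src_x, src_f, dst_x, unit_factor=1.0):
    """Map a field from a source grid to a destination grid and convert units.
    A pure scale factor is assumed here; conversions like Celsius -> Kelvin
    also need an offset, applied by the caller below."""
    return unit_factor * np.interp(dst_x, src_x, src_f)

# Ocean-model temperature on a coarse grid -> atmosphere model's finer grid
ocean_x = np.linspace(0.0, 100.0, 11)        # km
ocean_T = 15.0 + 2.0 * np.sin(ocean_x / 20)  # deg C
atmos_x = np.linspace(0.0, 100.0, 51)
atmos_T = translate_field(ocean_x, ocean_T, atmos_x) + 273.15  # Kelvin
print(atmos_T[:5])
```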

  19. Risk-based generation dispatch in the power grid for resilience against extreme weather events

    NASA Astrophysics Data System (ADS)

    Javanbakht, Pirooz

    Natural disasters are among the main causes of the largest blackouts in North America. When it comes to power grid resiliency against natural hazards, different solutions exist that are mainly categorized based on the time frame of analysis. At the design stage, robustness and resiliency may be improved through redundant designs and the inclusion of advanced measurement, monitoring, control and protection systems. However, since massive destructive energy may be released during the course of a natural disaster (such as a hurricane), causing large-scale and widespread disturbances, design-stage remedies may not be sufficient for ensuring power grid robustness. As a result, to limit the consequent impacts on the operation of the power grid, the system operator may be forced to take immediate remedial actions in real time. To effectively manage the disturbances caused by severe weather events, weather forecast information should be incorporated into the operational model of the power grid in order to predict imminent contingencies. In this work, a weather-driven generation dispatch model is developed based on stochastic programming to provide a proactive solution for power grid resiliency against imminent large-scale disturbances. Hurricanes and ice storms are studied as example disaster events to provide numerical results. In this approach, the statistics of the natural disaster event are taken into account along with the expected impact on various power grid components in order to determine the availability of the grid. Then, a generation dispatch strategy is devised that helps operate the grid subject to weather-driven operational constraints.
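
    As a rough illustration of the stochastic-programming idea (not the author's actual model), one can minimize first-stage generation cost plus the expected cost of load shedding over weather-driven availability scenarios. The deterministic equivalent below is a small linear program; the scenario data, costs, and value of lost load are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Two generators, three weather scenarios with outage-driven availability.
cost = np.array([20.0, 30.0])          # $/MWh for G1, G2
cap = np.array([100.0, 80.0])          # MW name-plate capacity
avail = np.array([[1, 1],              # scenario 1: both survive
                  [1, 0],              # scenario 2: G2 lost
                  [0, 1]], float)      # scenario 3: G1 lost
prob = np.array([0.7, 0.2, 0.1])       # scenario probabilities
demand, voll = 120.0, 1000.0           # MW, $/MWh value of lost load

# Variables: [P1, P2, shed_s1, shed_s2, shed_s3]
c = np.concatenate([cost, prob * voll])
# For each scenario s: sum_g avail[s, g] * P_g + shed_s >= demand
A_ub = np.hstack([-avail, -np.eye(3)])
b_ub = -demand * np.ones(3)
bounds = [(0, cap[0]), (0, cap[1])] + [(0, None)] * 3

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("dispatch:", res.x[:2], "expected shed per scenario:", res.x[2:])
```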

  20. A grid-based pseudo-cache solution for MISD biomedical problems with high confidentiality and efficiency.

    PubMed

    Dai, Yuan-Shun; Palakal, Mathew; Hartanto, Shielly; Wang, Xiaolong; Guo, Yanming

    2006-01-01

    The complexity of most biomedical/bioinformatics problems requires efficient solutions using collaborative/parallel computing. One promising solution is to implement Grid computing, as an emerging new field called BioGrid. However, one of the most stringent requirements in such a Grid-based solution is data privacy. This paper presents a novel solution, called the Grid-Based Pseudo-Cache (GBPC) solution, that provides confidentiality when using the Grid to efficiently solve MISD biomedical problems. It is proved to have equal or better performance than a traditional MIMD solution. Our theories are validated in practice via case studies, and data dependence is also addressed.

  1. MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler

    NASA Astrophysics Data System (ADS)

    Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre

    This paper presents a simulator for a decentralized modular grid scheduler named MaGate. MaGate’s design emphasizes scheduler interoperability, providing intelligent scheduling that serves the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions and continuously arriving grid jobs. Received jobs are either allocated on local resources or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on the GridSim toolkit and the Alea simulator, and abstracts the features and behaviors of complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of the behaviors of different collaborative policies among a community of MaGates is provided. The results support the use of the proposed approach as a functionally ready grid scheduler simulator.

  2. Design and implementation of GRID-based PACS in a hospital with multiple imaging departments

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Jin, Jin; Sun, Jianyong; Zhang, Jianguo

    2008-03-01

    In an enterprise healthcare environment, multiple clinical departments, such as radiology, oncology, pathology, and cardiology, provide imaging-enabled healthcare services. The picture archiving and communication system (PACS) is therefore required not only to support radiology-based image display and workflow and data flow management, but also to offer more specialized image processing and management tools for the other departments providing imaging-guided diagnosis and therapy, and there is an urgent demand to integrate the multiple PACSs so as to provide patient-oriented imaging services for enterprise-wide collaborative healthcare. In this paper, we give the design method and implementation strategy for a grid-based PACS (Grid-PACS) for a hospital with multiple imaging departments or centers. The Grid-PACS functions as a middleware layer between the traditional PACS archiving servers and the workstations or image viewing clients, and provides DICOM image communication and WADO services to end users. Images can be stored in multiple distributed archiving servers but managed centrally. The Grid-PACS has automatic image backup and disaster recovery services and can provide the best image retrieval path to image requesters based on optimal algorithms. The designed grid-based PACS has been implemented in Shanghai Huadong Hospital and has been running smoothly for two years.

  3. Triaging jobs in a community-based case-control study to increase efficiency of the expert occupational assessment method.

    PubMed

    Fritschi, Lin; Sadkowsky, Troy; Benke, Geza P; Thomson, Allyson; Glass, Deborah C

    2012-05-01

    Expert assessment is useful to assess occupational exposures in cases where measured exposure data are not available. However, the process may be inefficient in a community-based study with low prevalence of exposure. This study aimed to determine if formally triaging the jobs as to likelihood of exposure before the experts review those jobs could improve study efficiency. One thousand nine hundred and sixty-one jobs from a case-control study were triaged by study staff (non-occupational health professionals) into four groups depending on the likelihood of exposure to solvents. For jobs in one group, we had additional information available in the form of job-specific modules and automatic exposure assignments for solvents based on rules pre-programmed into the job-specific module. After the automatic assignment, two experts reviewed the jobs to assign exposure to solvents in order to evaluate the process. The prevalence of exposure and the agreement between the two raters and between the raters' and the automatic assignments were compared for the four triage groups. The majority of jobs (76%) were triaged as unexposed by study staff and very few of these jobs were assigned as exposed by the raters (1%). For jobs with automatic assignment (18% of total), the raters tended to agree with the automatic assignment if that assignment was unexposed or probably exposed. There was less agreement for jobs in which the automatic assignment was possible exposure. For jobs triaged as ones with potential exposure based only on job title but with no further information available, the level of disagreement between the raters tended to be higher. Formal triaging of jobs can improve the efficiency of the expert assessment process. Of the 75% of jobs initially triaged as unexposed, virtually no exposures were found, and omitting manual review of this group would save considerable time.

  4. CMS Configuration Editor: GUI based application for user analysis job

    NASA Astrophysics Data System (ADS)

    de Cosa, A.

    2011-12-01

    We present the user interface and the software architecture of the Configuration Editor for the CMS experiment. The analysis workflow is organized in a modular way within the CMS framework, which organizes user analysis code flexibly. The Python scripting language is adopted to define the job configuration that drives the analysis workflow. Developing analysis jobs and managing the configuration of the many required modules can be a challenging task for users, especially newcomers. For this reason, a graphical tool has been conceived for editing and inspecting configuration files. A set of common analysis tools defined in the CMS Physics Analysis Toolkit (PAT) can be steered and configured using the Config Editor. A user-defined analysis workflow can be produced starting from a standard configuration file, applying and configuring PAT tools according to the specific user requirements. CMS users can adopt this tool to create their analyses while visualizing the effects of their actions in real time. They can visualize the structure of their configuration, look at the modules included in the workflow, inspect the dependencies among the modules, and check the data flow. They can see the values to which parameters are set and change them according to what their analysis task requires. Integrating common tools into the GUI required adopting an object-oriented structure in the Python definition of the PAT tools and defining a layer of abstraction from which all PAT tools inherit.

  5. A Grid Middleware Framework Support for a Workflow Model Based on Virtualized Resources

    NASA Astrophysics Data System (ADS)

    Lee, Jinbock; Lee, Sangkeon; Choi, Jaeyoung

    Nowadays, virtualization technologies are widely used to overcome the difficulty of managing Grid computing infrastructures. The virtual account and the virtual workspace are well suited to allocating Grid resources to specific users, but they lack the capability for interaction between portal services and virtualized resources that a Grid portal requires. The virtual application is well suited to wrapping a simple application as a Grid portal service, but integrating several applications to compose a larger application service is difficult. In this paper, we present a Grid middleware framework that supports a workflow model based on virtualized resources. Meta Services in the framework expose workflows as portal services; a service call is converted to a different workflow according to its parameters, and the workflow generated by the Meta Services is scheduled on a virtual cluster configured by the framework. Because a virtual application service can be composed as a workflow, and the service interface wrapping the workflow provides complex portal services, small applications can be effectively integrated into a Grid portal and scheduled on virtualized computing resources.

  6. Microcontroller based spectrophotometer using compact disc as diffraction grid

    NASA Astrophysics Data System (ADS)

    Bano, Saleha; Altaf, Talat; Akbar, Sunila

    2010-12-01

    This paper describes the design and implementation of a portable, inexpensive and cost-effective spectrophotometer. The device combines the use of compact disc (CD) media as a diffraction grid and a 60-watt bulb as a light source. Moreover, it employs a moving slit along with a stepper motor for obtaining monochromatic light, a photocell with spectral sensitivity in the visible region to determine the intensity of light, an amplifier with a very high gain, and an advanced virtual RISC (AVR) microcontroller ATmega32 as a control unit. The device was successfully applied to determine the absorbance and transmittance of KMnO4 and the unknown concentration of KMnO4 with the help of a calibration curve. For comparison purposes a commercial spectrophotometer was used. There are no significant differences between the absorbance and transmittance values estimated by the two instruments. Furthermore, good results are obtained at all visible wavelengths of light. Therefore, the designed instrument offers an economically feasible alternative for spectrophotometric sample analysis in small routine, research and teaching laboratories, because the components used in the design of the device are cheap and easy to acquire.
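
    The quantities such an instrument reports follow directly from the Beer-Lambert law. A minimal sketch, with invented photocell readings, of turning intensities into transmittance and absorbance and estimating an unknown concentration from a linear calibration curve:

```python
import numpy as np

def transmittance(i_sample, i_blank):
    """T = I/I0: light through the sample relative to light through the blank."""
    return i_sample / i_blank

def absorbance(i_sample, i_blank):
    """A = -log10(T), per the Beer-Lambert law."""
    return -np.log10(transmittance(i_sample, i_blank))

# Calibration: fit A = k*c + b through absorbances of known KMnO4 standards.
conc = np.array([2.0, 4.0, 6.0, 8.0])        # known concentrations (mg/L)
abso = np.array([0.21, 0.42, 0.61, 0.83])    # measured absorbances (made up)
k, b = np.polyfit(conc, abso, 1)             # slope and intercept

# Unknown sample: photocell readings for sample and blank (made up)
a_unknown = absorbance(412.0, 890.0)
print("estimated concentration:", (a_unknown - b) / k, "mg/L")
```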

  7. An Interoperable GridWorkflow Management System

    NASA Astrophysics Data System (ADS)

    Mirto, Maria; Passante, Marco; Epicoco, Italo; Aloisio, Giovanni

    A WorkFlow Management System (WFMS) is a fundamental component enabling the integration of data, applications and a wide set of project resources. Although a number of scientific WFMSs support this task, many analysis pipelines require large-scale Grid computing infrastructures to cope with their high compute and storage requirements. Such scientific workflows complicate the management of resources, especially in cases where they are offered by several resource providers, managed by different Grid middleware, since resource access must be synchronised in advance to allow reliable workflow execution. Different types of Grid middleware such as gLite, Unicore and Globus are used around the world and may cause interoperability issues if applications involve two or more of them. In this paper we describe the ProGenGrid Workflow Management System, whose main goal is to provide interoperability among these different Grid middleware stacks when executing workflows. It allows the composition of batch, parameter sweep and MPI based jobs. The ProGenGrid engine implements the logic to execute such jobs by using an OGF-compliant standard language such as JSDL, which has been extended for this purpose. Currently, we are testing our system on some bioinformatics case studies in the International Laboratory of Bioinformatics (LIBI) Project (www.libi.it).

  8. A New Family of Multilevel Grid Connected Inverters Based on Packed U Cell Topology.

    PubMed

    Pakdel, Majid; Jalilzadeh, Saeid

    2017-09-29

    In this paper a novel packed U cell (PUC) based multilevel grid-connected inverter is proposed. Unlike the U cell arrangement, which consists of two power switches and one capacitor, in the proposed converter topology a lower DC power supply from renewable energy resources such as photovoltaic (PV) arrays is used as the base power source. The proposed topology offers higher efficiency and lower cost, using a small number of power switches and a lower DC power source supplied from renewable energy resources. The other capacitor voltages are derived from the base DC power source using isolated DC-DC power converters. The operating principle of the proposed transformerless multilevel grid-connected inverter is analyzed theoretically, and its operation is verified through simulation studies. An experimental prototype using an STM32F407 Discovery controller board was built to verify the simulation results.

  9. Task-based estimation of mechanical job exposure in occupational groups.

    PubMed

    Mathiassen, Svend Erik; Nordander, Catarina; Svendsen, Susanne W; Wellman, Helen M; Dempsey, Patrick G

    2005-04-01

    This study examined the validity of a common belief in epidemiology with respect to work-related musculoskeletal disorders, that individual mechanical job exposure is better estimated from tasks performed in the job than from the mean exposure of the occupational group. Whole-day recordings of upper trapezius electromyography were obtained from 24 cleaners and 23 office workers. Trapezius activity was analyzed in the level (gap time) and frequency (jerk time) dimensions. On the same day, the job of each person was divided into periods of active work and breaks by means of continuous observations. The bootstrap re-sampling technique was used with this database to compare task-based job exposure estimates with estimates based on the occupational mean. For a particular person, the task-based estimate was obtained by combining the average work and break exposures in the occupation with the personal time proportions of the two tasks in the job. The task-based estimates were, in general, equivalent to, or less correct than, occupation-based estimates for both exposure parameters in both occupations and for individual exposures, as well as for group means. This was the result in spite of significant and consistent exposure differences between work and breaks, in particular among the cleaners. Even if task exposure contrasts are large, task-based estimates of job exposures can be less correct than estimates based on the occupational mean. Since collecting and processing task information is costly, it is recommended that task-based modeling of mechanical exposure be implemented in studies only after careful examination of its possible benefits.
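
    The task-based estimate evaluated here has a simple form: combine the occupation-level mean exposure of each task with the individual's time proportions in those tasks. A minimal sketch (function names and numbers are ours):

```python
def task_based_estimate(task_means, time_props):
    """Task-based job exposure: sum over tasks of the occupational mean task
    exposure weighted by this person's time proportion in the task.

    task_means : dict task -> occupational mean exposure for that task
    time_props : dict task -> this worker's share of the workday (sums to 1)
    """
    return sum(task_means[t] * time_props[t] for t in task_means)

# Example: trapezius gap time (%) during active work vs. breaks for a cleaner
occ_means = {"work": 4.0, "breaks": 18.0}       # made-up occupation task means
person = {"work": 0.85, "breaks": 0.15}         # observed time proportions
print(task_based_estimate(occ_means, person))   # compare to occupation mean
```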

  10. One-fifth of nonelderly Californians do not have access to job-based health insurance coverage.

    PubMed

    Lavarreda, Shana Alex; Cabezas, Livier

    2010-11-01

    Lack of job-based health insurance does not affect just workers, but entire families who depend on job-based coverage for their health care. This policy brief shows that in 2007 one-fifth of all Californians ages 0-64 who lived in households where at least one family member was employed did not have access to job-based coverage. Among adults with no access to job-based coverage through their own or a spouse's job, nearly two-thirds remained uninsured. In contrast, the majority of children with no access to health insurance through a parent obtained public health insurance, highlighting the importance of such programs. Low-income, Latino and small business employees were more likely to have no access to job-based insurance. Provisions enacted under national health care reform (the Patient Protection and Affordable Care Act of 2010) will aid some of these populations in accessing health insurance coverage.

  11. Application of remote debugging techniques in user-centric job monitoring

    NASA Astrophysics Data System (ADS)

    dos Santos, T.; Mättig, P.; Wulff, N.; Harenberg, T.; Volkmer, F.; Beermann, T.; Kalinin, S.; Ahrens, R.

    2012-06-01

    With the Job Execution Monitor, a user-centric job monitoring software developed at the University of Wuppertal and integrated into the job brokerage systems of the WLCG, job progress and grid worker node health can be supervised in real time. Imminent error conditions can thus be detected early by the submitter and countermeasures can be taken. Grid site admins can access aggregated data of all monitored jobs to infer the site status and to detect job misbehaviour. To remove the last "blind spot" from this monitoring, a remote debugging technique based on the GNU C compiler suite was developed and integrated into the software; its design concept and architecture are described in this paper and its application discussed.

  12. gLExec: gluing grid computing to the Unix world

    NASA Astrophysics Data System (ADS)

    Groep, D.; Koeroo, O.; Venekamp, G.

    2008-07-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.

  13. Direct grid-based quantum dynamics on propagated diabatic potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Richings, Gareth W.; Habershon, Scott

    2017-09-01

    We present a method for performing non-adiabatic, grid-based nuclear quantum dynamics calculations using diabatic potential energy surfaces (PESs) generated "on-the-fly". Gaussian process regression is used to interpolate PESs by using electronic structure energies, calculated at points in configuration space determined by the nuclear dynamics, and diabatising the results using the propagation diabatisation method reported recently (Richings and Worth, 2015). Our new method is successfully demonstrated using a grid-based approach to model the non-adiabatic dynamics of the butatriene cation. Overall, our scheme offers a route towards accurate quantum dynamics on diabatic PESs learnt on-the-fly.
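
    The Gaussian process regression step, interpolating electronic energies at configurations visited by the dynamics, can be sketched with a plain squared-exponential kernel. This is our own toy version on a 1-D stand-in surface; a real application would optimize the hyperparameters and work with diabatic matrices rather than a single surface:

```python
import numpy as np

def rbf(XA, XB, length=0.5, sigma=1.0):
    """Squared-exponential covariance between two sets of configurations."""
    d2 = ((XA[:, None, :] - XB[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * d2 / length**2)

def gp_fit_predict(X, y, Xs, noise=1e-8):
    """GP regression: condition on energies y at points X, predict at Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf(Xs, X) @ alpha       # posterior mean energies at Xs

# Toy 1-D "surface": energies at a few ab initio points, predicted on a grid
X = np.array([[0.0], [0.4], [0.9], [1.5], [2.0]])
y = np.cos(2 * X[:, 0]) * np.exp(-X[:, 0])     # stand-in for computed energies
Xs = np.linspace(0.0, 2.0, 50)[:, None]
print(gp_fit_predict(X, y, Xs)[:5])
```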

  14. A methodology toward manufacturing grid-based virtual enterprise operation platform

    NASA Astrophysics Data System (ADS)

    Tan, Wenan; Xu, Yicheng; Xu, Wei; Xu, Lida; Zhao, Xianhua; Wang, Li; Fu, Liuliu

    2010-08-01

    Virtual enterprises (VEs) have become one of the main types of organisations in the manufacturing sector through which the consortium companies organise their manufacturing activities. To be competitive, a VE relies on the complementary core competences among members through resource sharing and agile manufacturing capacity. Manufacturing grid (M-Grid) is a platform in which the production resources can be shared. In this article, an M-Grid-based VE operation platform (MGVEOP) is presented, as it enables the sharing of production resources among geographically distributed enterprises. The performance management system of the MGVEOP is based on the balanced scorecard and has the capacity of self-learning. The study shows that an MGVEOP can make a semi-automated process possible for a VE, and the proposed MGVEOP is efficient and agile.

  15. Design of a nonlinear backstepping control strategy of grid interconnected wind power system based PMSG

    NASA Astrophysics Data System (ADS)

    Errami, Y.; Obbadi, A.; Sahnoun, S.; Benhmida, M.; Ouassaid, M.; Maaroufi, M.

    2016-07-01

    This paper presents nonlinear backstepping control for a Wind Power Generation System (WPGS) based on a Permanent Magnet Synchronous Generator (PMSG) and connected to the utility grid. The block diagram of the WPGS with the PMSG and the grid-side back-to-back converter is established in the dq frame of axes. This control scheme emphasises the regulation of the dc-link voltage and the control of the power factor under changing wind speed. Besides, the proposed control strategy for the WPGS provides a Maximum Power Point Tracking (MPPT) technique and pitch control. The stability of the regulators is assured by employing Lyapunov analysis. The proposed control strategy for the system has been validated by MATLAB simulations under varying wind velocity and grid fault conditions. In addition, a comparison of simulation results between the proposed backstepping strategy and conventional vector control is provided.

  16. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.

  17. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ruuth, S. J.

    2016-05-01

    Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces, however the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.
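
    For the static-surface part, the original closest point method alternates a standard Cartesian-grid PDE step with a closest-point extension. A toy sketch for heat flow on the unit circle embedded in a 2-D grid (our own illustration; the grid spacing, interpolation order, and test problem are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Heat equation on the unit circle, solved on a plain 2-D Cartesian grid.
x = np.linspace(-2.0, 2.0, 81)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

# Closest point on the unit circle for every grid node.
r = np.hypot(X, Y)
r[r == 0] = 1.0                     # arbitrary at the centre; unused near the circle
ci = np.array([(X / r + 2.0) / h,   # index coordinates of the closest points
               (Y / r + 2.0) / h])

u = np.sin(np.arctan2(Y, X))        # initial data sin(theta), constant along normals
dt = 0.1 * h**2
for _ in range(200):
    # standard 5-point Laplacian step in the embedding space
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
    u = u + dt * lap
    # closest point extension: make u constant along surface normals again
    u = map_coordinates(u, ci, order=3)

# At the point (0, 1) on the circle the exact solution is exp(-t) ~ 0.951
n = len(x) // 2
print(u[n, n + int(1.0 / h)])
```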

  18. Model Predictive Control of A Matrix-Converter Based Solid State Transformer for Utility Grid Interaction

    SciTech Connect

    Xue, Yaosuo

    2016-01-01

    The matrix converter solid state transformer (MC-SST), formed from the back-to-back connection of two three-to-single-phase matrix converters, is studied for use in the interconnection of two ac grids. The matrix converter topology provides light-weight, low-volume, single-stage bidirectional ac-ac power conversion without the need for a dc link. Thus, the lifetime limitations of dc-bus storage capacitors are avoided. However, space vector modulation of this type of MC-SST requires computing vectors for each of the two MCs, which must be carefully coordinated to avoid commutation failure. An additional controller is also required to control the power exchange between the two ac grids. In this paper, model predictive control (MPC) is proposed for an MC-SST connecting two different ac power grids. The proposed MPC predicts the circuit variables based on the discrete model of the MC-SST system, and the cost function is formulated so that the optimal switch vector for the next sample period is selected, thereby generating the required grid currents for the SST. Simulation and experimental studies are carried out to demonstrate the effectiveness and simplicity of the proposed MPC for such MC-SST-based grid interfacing systems.
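
    The finite-control-set MPC logic, predicting each candidate switch vector's effect one sample ahead and choosing the one minimizing a cost function, is generic. A toy single-phase sketch with an RL grid filter (the discrete model, parameters, and candidate voltage levels are ours, not the paper's):

```python
import numpy as np

# Toy finite-control-set MPC for one converter phase with an RL filter:
# i[k+1] = i[k] + Ts/L * (v_conv - R*i[k] - v_grid)
R, L, Ts, Vdc = 0.5, 10e-3, 50e-6, 400.0
levels = np.array([-1.0, -0.5, 0.0, 0.5, 1.0]) * Vdc   # candidate voltages

def mpc_step(i_now, i_ref_next, v_grid):
    """Pick the voltage level minimizing the predicted current error."""
    i_pred = i_now + Ts / L * (levels - R * i_now - v_grid)
    cost = (i_ref_next - i_pred) ** 2
    return levels[np.argmin(cost)]

# One fundamental period of reference-current tracking
i, out = 0.0, []
for tk in np.arange(0.0, 0.02, Ts):
    v_grid = 325.0 * np.sin(2 * np.pi * 50 * tk)
    i_ref = 10.0 * np.sin(2 * np.pi * 50 * (tk + Ts))
    v = mpc_step(i, i_ref, v_grid)
    i = i + Ts / L * (v - R * i - v_grid)   # plant follows the same model
    out.append(i)
print("final current sample:", out[-1])
```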

  19. Grid-based asynchronous migration of execution context in Java virtual machines

    SciTech Connect

    von Laszewski, G.; Shudo, K.; Muraoka, Y.

    2000-06-15

    Previous research efforts for building thread migration systems have concentrated on the development of frameworks dealing with a small local environment controlled by a single user. Computational Grids provide the opportunity to utilize a large-scale environment controlled over different organizational boundaries. Using this class of large-scale computational resources as part of a thread migration system provides a significant challenge previously not addressed by this community. In this paper the authors present a framework that integrates Grid services to enhance the functionality of a thread migration system. To accommodate future Grid services, the design of the framework is both flexible and extensible. Currently, the thread migration system contains Grid services for authentication, registration, lookup, and automatic software installation. In the context of distributed applications executed on a Grid-based infrastructure, the asynchronous migration of an execution context can help solve problems such as remote execution, load balancing, and the development of mobile agents. The prototype is based on the migration of Java threads, allowing asynchronous and heterogeneous migration of the execution context of the running code.

  20. An adaptive grid for graph-based segmentation in retinal OCT

    PubMed Central

    Lang, Andrew; Carass, Aaron; Calabresi, Peter A.; Ying, Howard S.; Prince, Jerry L.

    2016-01-01

    Graph-based methods for retinal layer segmentation have proven to be popular due to their efficiency and accuracy. These methods build a graph with nodes at each voxel location and use edges connecting nodes to encode the hard constraints of each layer’s thickness and smoothness. In this work, we explore deforming the regular voxel grid to allow adjacent vertices in the graph to more closely follow the natural curvature of the retina. This deformed grid is constructed by fixing node locations based on a regression model of each layer’s thickness relative to the overall retina thickness, thus generating a subject-specific grid. Graph vertices are not at voxel locations, which allows for control over the resolution that the graph represents. By incorporating soft constraints between adjacent nodes, segmentation on this grid will favor smoothly varying surfaces consistent with the shape of the retina. Our final segmentation method then follows our previous work: boundary probabilities are estimated using a random forest classifier, followed by an optimal graph search algorithm on the new adaptive grid to produce a final segmentation. Our method is shown to produce a more consistent segmentation with an overall accuracy of 3.38 μm across all boundaries. PMID:27773959

  1. A Cosmic Dust Sensor Based on an Array of Grid Electrodes

    NASA Astrophysics Data System (ADS)

    Li, Y. W.; Bugiel, S.; Strack, H.; Srama, R.

    2014-04-01

    We describe a low-mass, high-sensitivity cosmic dust trajectory sensor using an array of grid segments [1]. The sensor determines the particle velocity vector and the particle mass. An impact target is used for detection of the impact plasma of high-speed particles such as interplanetary dust grains or high-speed ejecta. Slower particles are measured by three planes of grid electrodes using charge induction. In contrast to conventional dust trajectory sensors based on wire electrodes, grid electrodes provide a robust and sensitive design with a trajectory resolution of a few degrees. Coulomb simulations and laboratory tests were performed in order to verify the instrument design. The signal shapes are used to derive the particle-plane intersection points and hence the exact particle trajectory. The accuracy of the instrument for the incident angle depends on the particle charge, the position of the intersection point, and the signal-to-noise ratio of the charge sensitive amplifier (CSA). This grid-electrode based design has some advantages over conventional trajectory sensors using individual wire electrodes: the grid segment electrodes show higher amplitudes (close to 100% induced charge) and the overall number of measurement channels can be reduced. This allows a compact instrument with low power and mass requirements.

  2. Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY

    NASA Astrophysics Data System (ADS)

    Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan

    2014-06-01

    The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite middleware. In this framework, a monitoring system is designed for the H1 experiment to identify and recognize within the GRID the resources best suited for the execution of CPU-intensive Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computing Elements (CEs), Storage Elements (SEs), WMS servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE, and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues, submitted through various WMSs as well as directly to the CREAM-CEs. Real H1 MC production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed, and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in the H1 MC production. The monitoring system allows for identification of problems in the GRID sites and prompt reaction to them (for example by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl with the insertion of a few shell scripts. In addition to the test monitoring system, we use information from real production jobs to monitor the availability and quality of the GRID resources. The monitoring tools register the number of job resubmissions, the percentage of failed and finished jobs relative to all jobs on the CEs and determine the average values of waiting and running time for the
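
    The test-job cycle described above (submit short MC jobs to each CE queue, poll their status, derive waiting and running times, and keep the healthy queues) has a simple shape. In the sketch below, submit_test_job and job_status are fake stand-ins for the VO's actual gLite/CREAM submission machinery, included only so the example runs end to end:

```python
import random
import time

# Fake stand-ins for the real submission commands (WMS or direct CREAM-CE);
# they only simulate job states so the sketch is runnable.
def submit_test_job(ce_queue):
    return f"testjob-{ce_queue}"

def job_status(job_id):
    return random.choice(["waiting", "running", "done", "failed"])

def probe(ce_queues, poll=60):
    """Submit one short test job per CE queue, poll until completion and
    rank the healthy queues by measured waiting time."""
    jobs = {q: {"id": submit_test_job(q), "t0": time.time()} for q in ce_queues}
    stats = {}
    while jobs:
        time.sleep(poll)
        for q, j in list(jobs.items()):
            s = job_status(j["id"])
            if s == "running":
                j.setdefault("t_run", time.time())     # first seen running
            elif s in ("done", "failed"):
                now = time.time()
                stats[q] = {"ok": s == "done",
                            "wait": j.get("t_run", now) - j["t0"],
                            "run": now - j.get("t_run", now)}
                del jobs[q]
    good = [q for q, r in stats.items() if r["ok"]]
    return sorted(good, key=lambda q: stats[q]["wait"])

print(probe(["ce1.example.org/long", "ce2.example.org/short"], poll=0.1))
```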

  3. Direct care worker's perceptions of job satisfaction following implementation of work-based learning.

    PubMed

    Lopez, Cynthia; White, Diana L; Carder, Paula C

    2014-02-01

    The purpose of this study was to understand the impact of a work-based learning program on the work lives of Direct Care Workers (DCWs) at assisted living (AL) residences. The research questions were addressed using focus group data collected as part of a larger evaluation of a work-based learning (WBL) program called Jobs to Careers. The theoretical perspective of symbolic interactionism was used to frame the qualitative data analysis. Results indicated that the WBL program impacted DCWs' job satisfaction through the program curriculum and design and through three primary categories: relational aspects of work, worker identity, and finding time. This article presents a conceptual model for understanding how these categories are interrelated and the implications for WBL programs. Job satisfaction is an important topic that has been linked to quality of care and reduced turnover in long-term care settings.

  4. A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids.

    PubMed

    Boschitsch, Alexander H; Fenley, Marcia O

    2011-05-10

    An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent - analytical solutions are available for this case, thus allowing rigorous

  5. A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids

    PubMed Central

    Boschitsch, Alexander H.; Fenley, Marcia O.

    2011-01-01

    An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent – analytical solutions are available for this case, thus allowing rigorous

  6. Grid generation strategies for turbomachinery configurations

    NASA Technical Reports Server (NTRS)

    Lee, K. D.; Henderson, T. L.

    1991-01-01

    Turbomachinery flow fields involve unique grid generation issues due to their geometrical and physical characteristics. Several strategic approaches are discussed to generate quality grids. The grid quality is further enhanced through blending and adapting. Grid blending smooths the grids locally through averaging and diffusion operators. Grid adaptation redistributes the grid points based on a grid quality assessment. These methods are demonstrated with several examples.
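
    Grid blending of the kind described, local smoothing with averaging or diffusion operators, amounts to a few lines. A minimal 2-D sketch (ours) that relaxes interior nodes toward the average of their four neighbours while keeping the boundary fixed:

```python
import numpy as np

def smooth_grid(X, Y, passes=10, w=0.5):
    """Blend a structured 2-D grid by relaxing each interior node toward the
    average of its four neighbours (a discrete diffusion/averaging operator)."""
    X, Y = X.copy(), Y.copy()
    for _ in range(passes):
        for A in (X, Y):
            avg = 0.25 * (A[:-2, 1:-1] + A[2:, 1:-1] +
                          A[1:-1, :-2] + A[1:-1, 2:])
            A[1:-1, 1:-1] = (1 - w) * A[1:-1, 1:-1] + w * avg
    return X, Y

# A deliberately distorted grid: boundary kept, interior nodes jittered
u = np.linspace(0.0, 1.0, 21)
X, Y = np.meshgrid(u, u, indexing="ij")
rng = np.random.default_rng(1)
X[1:-1, 1:-1] += 0.02 * rng.standard_normal((19, 19))
Y[1:-1, 1:-1] += 0.02 * rng.standard_normal((19, 19))
Xs, Ys = smooth_grid(X, Y)
print("max node movement:", np.abs(Xs - X).max())
```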

  7. The CrossGrid project

    NASA Astrophysics Data System (ADS)

    Kunze, M.; CrossGrid Collaboration

    2003-04-01

    There are many large-scale problems that require new approaches to computing, such as earth observation, environmental management, biomedicine, and industrial and scientific modeling. The CrossGrid project addresses realistic problems in medicine, environmental protection, flood prediction, and physics analysis, and is oriented towards specific end users: medical doctors, who could obtain new tools to help them reach correct diagnoses and to guide them during operations; industries, which could be advised on the best timing for certain critical operations involving risk of pollution; flood crisis teams, which could predict the risk of a flood on the basis of historical records and actual hydrological and meteorological data; and physicists, who could optimize the analysis of massive volumes of data distributed across countries and continents. The corresponding applications will be based on Grid technology and could be complex and difficult to use: the CrossGrid project aims at developing several tools that will make the Grid more friendly for average users. Portals for specific applications will be designed that should allow for easy connection to the Grid, create a customized work environment, and provide users with all necessary information to get their job done.

  8. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J.D. Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  9. Elliptic Curve Cryptography-Based Authentication with Identity Protection for Smart Grids.

    PubMed

    Zhang, Liping; Tang, Shanyu; Luo, He

    2016-01-01

    In a smart grid, the power service provider enables the expected power generation amount to be measured according to current power consumption, thus stabilizing the power system. However, the data transmitted over smart grids are not protected, and therefore suffer from several types of security threats and attacks. Thus, a robust and efficient authentication protocol should be provided to strengthen the security of smart grid networks. As the Supervisory Control and Data Acquisition system provides the security protection between the control center and substations in most smart grid environments, we focus on how to secure the communications between the substations and smart appliances. Existing security approaches fail to address the performance-security balance. In this study, we suggest a mitigation authentication protocol based on Elliptic Curve Cryptography with privacy protection, using a tamper-resistant device at the smart appliance side, to achieve a delicate balance between the performance and security of smart grids. The proposed protocol provides some attractive features such as identity protection, mutual authentication, and key agreement. Finally, we demonstrate the completeness of the proposed protocol using the Gong-Needham-Yahalom logic.
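
    The key agreement building block mentioned in the abstract is standard elliptic-curve Diffie-Hellman. A minimal sketch with the Python cryptography package (the curve choice and HKDF parameters are our own; the paper's protocol adds the identity-protecting authentication layer on top):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side (substation, smart appliance) generates an EC key pair.
substation_priv = ec.generate_private_key(ec.SECP256R1())
appliance_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key ...
shared_1 = substation_priv.exchange(ec.ECDH(), appliance_priv.public_key())
shared_2 = appliance_priv.exchange(ec.ECDH(), substation_priv.public_key())
assert shared_1 == shared_2          # ... and arrives at the same secret

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"smart-grid session").derive(shared_1)
print(len(session_key), "byte session key established")
```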

  10. Elliptic Curve Cryptography-Based Authentication with Identity Protection for Smart Grids

    PubMed Central

    Zhang, Liping; Tang, Shanyu; Luo, He

    2016-01-01

    In a smart grid, the power service provider enables the expected power generation amount to be measured according to current power consumption, thus stabilizing the power system. However, the data transmitted over smart grids are not protected, and therefore suffer from several types of security threats and attacks. Thus, a robust and efficient authentication protocol should be provided to strengthen the security of smart grid networks. As the Supervisory Control and Data Acquisition system provides the security protection between the control center and substations in most smart grid environments, we focus on how to secure the communications between the substations and smart appliances. Existing security approaches fail to address the performance-security balance. In this study, we suggest a mitigation authentication protocol based on Elliptic Curve Cryptography with privacy protection, using a tamper-resistant device at the smart appliance side, to achieve a delicate balance between the performance and security of smart grids. The proposed protocol provides some attractive features such as identity protection, mutual authentication, and key agreement. Finally, we demonstrate the completeness of the proposed protocol using the Gong-Needham-Yahalom logic. PMID:27007951

  11. Cygrid: A fast Cython-powered convolution-based gridding module for Python

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

    Context. Data gridding is a common task in astronomy and many other science disciplines. It refers to the resampling of irregularly sampled data to a regular grid. Aims: We present cygrid, a library module for the general purpose programming language Python. Cygrid can be used to resample data to any collection of target coordinates, although its typical application involves FITS maps or data cubes. The FITS world coordinate system standard is supported. Methods: The regridding algorithm is based on the convolution of the original samples with a kernel of arbitrary shape. We introduce a lookup table scheme that allows us to parallelize the gridding and combine it with the HEALPix tessellation of the sphere for fast neighbor searches. Results: We show that for n input data points, cygrid's runtime scales between O(n) and O(n log n), and we analyze the performance gain that is achieved using multiple CPU cores. We also compare the gridding speed with other techniques, such as nearest-neighbor, and linear and cubic spline interpolation. Conclusions: Cygrid is a very fast and versatile gridding library that significantly outperforms other third-party Python modules, such as the linear and cubic spline interpolation provided by SciPy. https://github.com/bwinkel/cygrid
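
    Convolution-based gridding itself is compact: every irregular sample deposits its value onto nearby grid cells, weighted by a kernel, and the accumulated weights normalize the result. A naive NumPy sketch with a Gaussian kernel (cygrid adds the lookup tables, HEALPix neighbour search, and parallelization that make it fast):

```python
import numpy as np

def grid_samples(lon, lat, val, lon_grid, lat_grid, kernel_sigma):
    """Resample irregular samples onto a regular grid by convolving them
    with a Gaussian kernel (naive O(n * gridsize) version)."""
    LON, LAT = np.meshgrid(lon_grid, lat_grid, indexing="ij")
    num = np.zeros(LON.shape)
    den = np.zeros(LON.shape)
    for x, y, v in zip(lon, lat, val):
        w = np.exp(-((LON - x) ** 2 + (LAT - y) ** 2) / (2 * kernel_sigma**2))
        num += w * v
        den += w
    with np.errstate(invalid="ignore"):
        return num / den            # NaN wherever no sample contributes

# 1000 random samples of a smooth field, gridded onto a 50x50 map
rng = np.random.default_rng(0)
lon, lat = rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000)
val = np.sin(lon) * np.cos(lat)
m = grid_samples(lon, lat, val, np.linspace(0, 10, 50),
                 np.linspace(0, 10, 50), kernel_sigma=0.3)
print(m.shape, np.nanmax(m))
```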

  12. HPM-Based Dynamic Sparse Grid Approach for Perona-Malik Equation

    PubMed Central

    Mei, Shu-Li; Zhu, De-Hai

    2014-01-01

    The Perona-Malik equation is a famous edge-preserving image denoising model, represented as a nonlinear 2-dimensional partial differential equation. Based on the homotopy perturbation method (HPM) and multiscale interpolation theory, a dynamic sparse grid method for Perona-Malik is constructed in this paper. Compared with traditional multiscale numerical techniques, the proposed method is independent of the basis function. In this method, a dynamic choice scheme for the external grid points is proposed to eliminate the artifacts introduced by the partitioning technique. In order to decrease the amount of calculation introduced by the change of the external grid points, the Newton interpolation technique is employed instead of the traditional Lagrange interpolation operator, and the condition number of the matrix of the discretized equations is taken into account in the choice of the external grid points. Using the new numerical scheme, the time complexity of the sparse grid method for image denoising is decreased from O(4^(3J)) to O(4^(J+2j)), where j ≪ J. The experimental results show that the dynamic choice scheme for the external grid points can eliminate the boundary effect effectively and that efficiency is also greatly improved compared with classical interval wavelet numerical methods. PMID:25050394

  13. HPM-based dynamic sparse grid approach for Perona-Malik equation.

    PubMed

    Mei, Shu-Li; Zhu, De-Hai

    2014-01-01

    The Perona-Malik equation is a famous edge-preserving image denoising model, represented as a nonlinear 2-dimensional partial differential equation. Based on the homotopy perturbation method (HPM) and multiscale interpolation theory, a dynamic sparse grid method for Perona-Malik is constructed in this paper. Compared with traditional multiscale numerical techniques, the proposed method is independent of the basis function. In this method, a dynamic choice scheme for the external grid points is proposed to eliminate the artifacts introduced by the partitioning technique. In order to decrease the amount of calculation introduced by the change of the external grid points, the Newton interpolation technique is employed instead of the traditional Lagrange interpolation operator, and the condition number of the matrix of the discretized equations is taken into account in the choice of the external grid points. Using the new numerical scheme, the time complexity of the sparse grid method for image denoising is decreased from O(4^(3J)) to O(4^(J+2j)), where j ≪ J. The experimental results show that the dynamic choice scheme for the external grid points can eliminate the boundary effect effectively and that efficiency is also greatly improved compared with classical interval wavelet numerical methods.

  14. Information Security Risk Assessment of Smart Grid Based on Absorbing Markov Chain and SPA

    NASA Astrophysics Data System (ADS)

    Jianye, Zhang; Qinshun, Zeng; Yiyang, Song; Cunbin, Li

    2014-12-01

    To assess and prevent smart grid information security risks more effectively, this paper provides a quantitative risk index calculation method based on an absorbing Markov chain, overcoming the deficiencies of earlier studies in which links between system components were not taken into consideration and evaluation was mostly limited to static analysis. The method avoids the shortcomings of traditional expert scoring, with its significant subjective factors, and also considers the links between information system components, which makes the risk index system closer to reality. Then, a smart grid information security risk assessment model is established on the basis of set pair analysis (SPA) improved by a Markov chain. Using the identity, discrepancy, and contradiction of the connection degree to dynamically reflect the trend of smart grid information security risk, and combining this with the Markov chain to calculate the connection degree of the next period, the model implements smart grid information security risk assessment comprehensively and dynamically. Finally, this paper shows that the established model is scientific, effective, and feasible for dynamically evaluating smart grid information security risks.
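
    The absorbing-Markov-chain machinery behind such a risk index is standard: with the transition matrix in canonical form, the fundamental matrix gives expected visit counts, absorption probabilities, and expected time to absorption. A small sketch with invented states and probabilities:

```python
import numpy as np

# Canonical form: transient states first (attack stages), absorbing states
# last (e.g. "attack contained" vs. "asset compromised"); values invented.
Q = np.array([[0.0, 0.6, 0.1],     # transient -> transient transitions
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
R = np.array([[0.3, 0.0],          # transient -> absorbing transitions
              [0.2, 0.3],
              [0.4, 0.6]])

N = np.linalg.inv(np.eye(len(Q)) - Q)   # fundamental matrix: expected visits
B = N @ R                               # absorption probabilities
t = N.sum(axis=1)                       # expected steps before absorption

print("P(compromise | start at stage 0):", B[0, 1])
print("expected number of steps:", t[0])
```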

  15. Upper arm elevation and repetitive shoulder movements: a general population job exposure matrix based on expert ratings and technical measurements.

    PubMed

    Dalbøge, Annett; Hansson, Gert-Åke; Frost, Poul; Andersen, Johan Hviid; Heilskov-Hansen, Thomas; Svendsen, Susanne Wulff

    2016-08-01

    We recently constructed a general population job exposure matrix (JEM), The Shoulder JEM, based on expert ratings. The overall aim of this study was to convert expert-rated job exposures for upper arm elevation and repetitive shoulder movements to measurement scales. The Shoulder JEM covers all Danish occupational titles, divided into 172 job groups. For 36 of these job groups, we obtained technical measurements (inclinometry) of upper arm elevation and repetitive shoulder movements. To validate the expert-rated job exposures against the measured job exposures, we used Spearman rank correlations and the explained variance (R²) according to linear regression analyses (36 job groups). We used the linear regression equations to convert the expert-rated job exposures for all 172 job groups into predicted measured job exposures. Bland-Altman analyses were used to assess the agreement between the predicted and measured job exposures. The Spearman rank correlations were 0.63 for upper arm elevation and 0.64 for repetitive shoulder movements. The expert-rated job exposures explained 64% and 41% of the variance of the measured job exposures, respectively. The corresponding calibration equations were y = 0.5% time + 0.16 × expert rating and y = 27°/s + 0.47 × expert rating. The mean differences between predicted and measured job exposures were zero due to calibration; the 95% limits of agreement were ±2.9% time for upper arm elevation >90° and ±33°/s for repetitive shoulder movements. The updated Shoulder JEM can be used to present exposure-response relationships on measurement scales.
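
    Applying the calibration equations above to convert an expert rating into a predicted measured exposure is a one-liner per outcome. A tiny sketch using the two published equations (the example ratings are invented):

```python
# Calibration equations reported above: convert an expert-rated job exposure
# into a predicted measured exposure on the inclinometer scale.
def arm_elevation_pct_time(expert_rating):
    """% of time with upper arm elevated >90 deg: y = 0.5 + 0.16 * rating."""
    return 0.5 + 0.16 * expert_rating

def repetitive_movement_deg_per_s(expert_rating):
    """Shoulder angular velocity (deg/s): y = 27 + 0.47 * rating."""
    return 27.0 + 0.47 * expert_rating

# Example: a job group with invented expert ratings for the two exposures
print(arm_elevation_pct_time(10.0), repetitive_movement_deg_per_s(50.0))
```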

  16. Reducing the dimensionality of grid based methods for electron-atom scattering calculations below ionization threshold

    NASA Astrophysics Data System (ADS)

    Benda, Jakub; Houfek, Karel

    2017-04-01

    For total energies below the ionization threshold it is possible to dramatically reduce the computational burden of the solution of the electron-atom scattering problem based on grid methods combined with exterior complex scaling. As in the R-matrix method, the problem can be split into an inner and an outer problem, where the outer problem considers only the energetically accessible asymptotic channels. The (N + 1)-electron inner problem is coupled to the one-electron outer problems for every channel, resulting in a matrix that scales only linearly with the size of the outer grid.

  17. Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh

    2001-01-01

    A parallel version of an unstructured-grid based Navier-Stokes solver, USM3Dns, previously developed for efficient operation on a variety of parallel computers, has been enhanced to incorporate upgrades made to the serial version. The resultant parallel code has been extensively tested on a variety of problems of aerospace interest and on two sets of parallel computers to understand and document its characteristics. An innovative grid renumbering construct and use of non-blocking communication are shown to produce superlinear computing performance. Preliminary results from parallelization of a recently introduced "porous surface" boundary condition are also presented.

  18. The design and implementation of a remote sensing image processing system based on grid middleware

    NASA Astrophysics Data System (ADS)

    Zhong, Liang; Ma, Hongchao; Xu, Honggen; Ding, Yi

    2009-10-01

    In this article, a remote sensing image processing system is established to address the significant scientific problem of processing and distributing massive Earth-observation data quantitatively, intelligently, and efficiently under the Condor environment. The system includes remote task submission, grid middleware for mass image processing, and quick distribution of the remote sensing images. The application of this grid-based system shows it to be an effective way to solve the current problems of fast processing, quick distribution, and sharing of massive remote sensing images.

  19. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formulation can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
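
    The flavour of such a probability-based formulation can be sketched as a toy two-stage stochastic dispatch problem. This is a generic illustration, not Sandia's MMS software, and all numbers are invented.

        import numpy as np
        from scipy.optimize import linprog

        # Choose day-ahead generation g before wind output is known; buy
        # expensive recourse power r_s in each scenario s afterwards.
        demand = 100.0
        wind   = np.array([50.0, 30.0, 10.0])   # scenario wind output (MW)
        prob   = np.array([0.3, 0.5, 0.2])      # scenario probabilities
        c_gen, c_rec = 20.0, 80.0               # $/MW day-ahead and recourse

        # Decision vector x = [g, r_1, r_2, r_3];
        # minimise c_gen*g + sum_s prob_s * c_rec * r_s
        c = np.concatenate(([c_gen], prob * c_rec))
        # g + wind_s + r_s >= demand  ->  -g - r_s <= wind_s - demand
        A_ub = np.zeros((3, 4))
        A_ub[:, 0] = -1.0
        A_ub[np.arange(3), 1 + np.arange(3)] = -1.0
        b_ub = wind - demand
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
        print("day-ahead generation:", res.x[0], "MW; expected cost: $", res.fun)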

  20. Implementation of fuzzy-sliding mode based control of a grid connected photovoltaic system.

    PubMed

    Menadi, Abdelkrim; Abdeddaim, Sabrina; Ghamri, Ahmed; Betka, Achour

    2015-09-01

    The present work describes the optimal operation of a small-scale photovoltaic system connected to a micro-grid, based on both sliding mode and fuzzy logic control. Real-time implementation is done through a dSPACE 1104 single board, controlling a boost chopper on the PV array side and a voltage source inverter (VSI) on the grid side. The sliding mode controller continuously tracks the maximum power of the PV array regardless of atmospheric condition variations, while the fuzzy logic controller (FLC) regulates the DC-link voltage and, via current control of the VSI, ensures a quasi-total transfer of the extracted PV power to the grid under unity power factor operation. Simulation results, carried out with the Matlab-Simulink package, were confirmed by experiment, showing the effectiveness of the proposed control techniques. Copyright © 2015. Published by Elsevier Ltd.
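
    A rough sketch of sliding-mode MPPT logic of the kind described above: the sliding surface s = dP/dV is a common choice; the gain, clamping, and sign convention are illustrative, depend on the converter topology, and this is not the authors' dSPACE implementation.

        def mppt_sm_step(v, i, v_prev, p_prev, duty, k=0.005):
            # Sliding surface s = dP/dV, zero at the maximum power point.
            # The boost-chopper duty cycle is pushed along sign(s).
            p = v * i
            dv, dp = v - v_prev, p - p_prev
            s = dp / dv if abs(dv) > 1e-9 else 0.0
            duty += k * ((s > 0) - (s < 0))
            return max(0.0, min(duty, 0.95)), p   # clamp duty cycle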

  1. An unstructured grid, three-dimensional model based on the shallow water equations

    USGS Publications Warehouse

    Casulli, V.; Walters, R.A.

    2000-01-01

    A semi-implicit finite difference model based on the three-dimensional shallow water equations is modified to use unstructured grids. There are obvious advantages in using unstructured grids in problems with a complicated geometry. In this development, the concept of unstructured orthogonal grids is introduced and applied to this model. The governing differential equations are discretized by means of a semi-implicit algorithm that is robust, stable and very efficient. The resulting model is relatively simple, conserves mass, can fit complicated boundaries and yet is sufficiently flexible to permit local mesh refinements in areas of interest. Moreover, the simulation of the flooding and drying is included in a natural and straightforward manner. These features are illustrated by a test case for studies of convergence rates and by examples of flooding on a river plain and flow in a shallow estuary. Copyright © 2000 John Wiley & Sons, Ltd.

  2. Air Pollution Monitoring and Mining Based on Sensor Grid in London

    PubMed Central

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-01-01

    In this paper, we present a distributed infrastructure based on wireless sensor networks and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large-scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environments. The main informatics challenges with respect to constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and a distributed data mining algorithm as solutions to address the challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining results to examine the effectiveness of the algorithm. PMID:27879895

  3. Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu

    The objects of an HLA-based simulation can access model services to update their attributes. However, a grid server may become overloaded and refuse to let its model service handle object accesses. Because these objects accessed the model service during the last simulation loop and their intermediate states are stored on this server, such a refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into simulations. The traditional fault-tolerance methods cannot meet this need, because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and extending the HLA services and model services with additional interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates accessing model services. Benchmark experiments indicate that the extended HLA/RTI can make simulations run self-adaptively in the grid environment.

  4. Study on the model of distributed remote sensing data processing based on agent grid

    NASA Astrophysics Data System (ADS)

    Zhang, Xining; Li, Deren; Li, Jingliang

    2006-10-01

    The growth of high-resolution remote sensing data for the Digital Earth, and the distribution of those data among heterogeneous remote sites, have brought challenges to processing remote sensing data effectively. Traditional models of distributed computing are inadequate to support such complex applications. Agent technology provides a new method for understanding the features of distributed systems and solving distributed application problems. This paper proposes a model for distributed remote sensing data processing based on an agent grid. The model makes use of the grid to discover, compose, utilize and deploy agents, distributed image data, and image-processing algorithms. An "agents group" mode is used in the model to manage all the agents distributed in the grid; a group consists of one or more agents and accomplishes automatic and dynamic configuration of distributed image data resources, to efficiently support on-demand image processing in a distributed environment. The model, framework and implementation of a prototype are reported in this paper.

  5. Air Pollution Monitoring and Mining Based on Sensor Grid in London.

    PubMed

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-06-01

    In this paper, we present a distributed infrastructure based on wireless sensor networks and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large-scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environments. The main informatics challenges with respect to constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and a distributed data mining algorithm as solutions to address the challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining results to examine the effectiveness of the algorithm.

  6. An unstructured grid, three-dimensional model based on the shallow water equations

    NASA Astrophysics Data System (ADS)

    Casulli, Vincenzo; Walters, Roy A.

    2000-02-01

    A semi-implicit finite difference model based on the three-dimensional shallow water equations is modified to use unstructured grids. There are obvious advantages in using unstructured grids in problems with a complicated geometry. In this development, the concept of unstructured orthogonal grids is introduced and applied to this model. The governing differential equations are discretized by means of a semi-implicit algorithm that is robust, stable and very efficient. The resulting model is relatively simple, conserves mass, can fit complicated boundaries and yet is sufficiently flexible to permit local mesh refinements in areas of interest. Moreover, the simulation of the flooding and drying is included in a natural and straightforward manner. These features are illustrated by a test case for studies of convergence rates and by examples of flooding on a river plain and flow in a shallow estuary. Copyright © 2000 John Wiley & Sons, Ltd.

  7. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  8. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  9. AstroGrid-D: Grid technology for astronomical science

    NASA Astrophysics Data System (ADS)

    Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve

    2011-02-01

    We present the status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines with a set of commands as well as software interfaces. It allows simple use of compute and storage facilities and supports scheduling and monitoring of compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). These examples show how grid execution improves, for example, the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 focuses on the administrative aspects of the infrastructure, managing users and monitoring activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.

  10. A Cycle-Based Data Aggregation Scheme for Grid-Based Wireless Sensor Networks

    PubMed Central

    Chiang, Yung-Kuei; Wang, Neng-Chung; Hsieh, Chih-Hung

    2014-01-01

    In a wireless sensor network (WSN), a great number of sensor nodes are deployed to gather sensed data. These sensor nodes are typically powered by batteries so their energy is restricted. Sensor nodes mainly consume energy in data transmission, especially over a long distance. Since the location of the base station (BS) is remote, the energy consumed by each node to directly transmit its data to the BS is considerable and the node will die very soon. A well-designed routing protocol is thus essential to reduce the energy consumption. In this paper, we propose a Cycle-Based Data Aggregation Scheme (CBDAS) for grid-based WSNs. In CBDAS, the whole sensor field is divided into a grid of cells, each with a head. We prolong the network lifetime by linking all cell heads together to form a cyclic chain so that the gathered data can move in two directions. For data gathering in each round, the gathered data moves from node to node along the chain, getting aggregated. Finally, a designated cell head, the cycle leader, directly transmits to the BS. CBDAS performs data aggregation at every cell head so as to substantially reduce the amount of data that must be transmitted to the BS. Only cell heads need disseminate data so that the number of data transmissions is greatly diminished. Sensor nodes of each cell take turns as the cell head, and all cell heads on the cyclic chain also take turns being cycle leader. The energy depletion is evenly distributed so that the nodes' lifetime is extended. As a result, the lifetime of the whole sensor network is extended. Simulation results show that CBDAS outperforms protocols like Direct, PEGASIS, and PBDAS. PMID:24828579

  11. A cycle-based data aggregation scheme for grid-based wireless sensor networks.

    PubMed

    Chiang, Yung-Kuei; Wang, Neng-Chung; Hsieh, Chih-Hung

    2014-05-13

    In a wireless sensor network (WSN), a great number of sensor nodes are deployed to gather sensed data. These sensor nodes are typically powered by batteries so their energy is restricted. Sensor nodes mainly consume energy in data transmission, especially over long distances. Since the location of the base station (BS) is remote, the energy consumed by each node to directly transmit its data to the BS is considerable and the node will die very soon. A well-designed routing protocol is thus essential to reduce the energy consumption. In this paper, we propose a Cycle-Based Data Aggregation Scheme (CBDAS) for grid-based WSNs. In CBDAS, the whole sensor field is divided into a grid of cells, each with a head. We prolong the network lifetime by linking all cell heads together to form a cyclic chain so that the gathered data can move in two directions. For data gathering in each round, the gathered data moves from node to node along the chain, getting aggregated. Finally, a designated cell head, the cycle leader, directly transmits to the BS. CBDAS performs data aggregation at every cell head so as to substantially reduce the amount of data that must be transmitted to the BS. Only cell heads need disseminate data so that the number of data transmissions is greatly diminished. Sensor nodes of each cell take turns as the cell head, and all cell heads on the cyclic chain also take turns being cycle leader. The energy depletion is evenly distributed so that the nodes' lifetime is extended. As a result, the lifetime of the whole sensor network is extended. Simulation results show that CBDAS outperforms protocols like Direct, PEGASIS, and PBDAS.
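
    A toy sketch of the cyclic-chain aggregation idea: heads are linked in a cycle, data flows along both arcs toward the cycle leader, and the leader sends one fused value to the base station. Summation stands in for the aggregation function, and the real protocol's message handling and head rotation are omitted.

        def cbdas_round(head_values, leader_idx):
            # head_values: per-cell aggregated readings, in chain order;
            # leader_idx: index of this round's cycle leader.
            n = len(head_values)
            # split the cycle into the two arcs that meet at the leader
            clockwise = [(leader_idx + k) % n for k in range(1, n // 2 + 1)]
            counter   = [(leader_idx - k) % n for k in range(1, (n - 1) // 2 + 1)]
            total = head_values[leader_idx]
            for arc in (clockwise, counter):
                partial = 0.0
                for idx in reversed(arc):   # farthest head starts the relay
                    partial += head_values[idx]
                total += partial
            return total   # the leader transmits this to the base station

        print(cbdas_round([1.0, 2.0, 3.0, 4.0], leader_idx=0))  # -> 10.0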

  12. Simulating Runoff from a Grid Based Mercury Model: Flow Comparisons

    EPA Science Inventory

    Several mercury cycling models, including general mass balance approaches, mixed-batch reactors in streams or lakes, or regional process-based models, exist to assess the ecological exposure risks associated with anthropogenically increased atmospheric mercury (Hg) deposition, so...

  13. Simulating Runoff from a Grid Based Mercury Model: Flow Comparisons

    EPA Science Inventory

    Several mercury cycling models, including general mass balance approaches, mixed-batch reactors in streams or lakes, or regional process-based models, exist to assess the ecological exposure risks associated with anthropogenically increased atmospheric mercury (Hg) deposition, so...

  14. Academic Job Placements in Library and Information Science Field: A Case Study Performed on ALISE Web-Based Postings

    ERIC Educational Resources Information Center

    Abouserie, Hossam Eldin Mohamed Refaat

    2010-01-01

    The study investigated and analyzed the state of academic web-based job announcements in the Library and Information Science field. The purpose of the study was to gain an in-depth understanding of the main characteristics and trends of the academic job market in the Library and Information Science field. The study focused on the web-based version announcement as it was…

  15. Organizational Culture's Role in the Relationship between Power Bases and Job Stress

    ERIC Educational Resources Information Center

    Erkutlu, Hakan; Chafra, Jamel; Bumin, Birol

    2011-01-01

    The purpose of this research is to examine the moderating role of organizational culture in the relationship between leader's power bases and subordinate's job stress. A total of 622 lecturers and their superiors (deans) from 13 state universities, chosen at random in Ankara, Istanbul, Izmir, Antalya, Samsun, Erzurum and Gaziantep in 2008-2009…

  16. Computer-Based Job Aiding: Problem Solving at Work. Technical Report No. 11.

    ERIC Educational Resources Information Center

    Stone, David E.; Hutson, Barbara A.

    As part of an ongoing effort to understand the processes people employ in reading technical material and the ways in which computer based job aids can assist people in doing complex tasks, a study was conducted to determine how subjects engaged in an assembly task use a detailed and hierarchically organized information structure (Hypertext) to…

  17. Systematic Method for Establishing Officer Grade Requirements Based Upon Job Demands.

    ERIC Educational Resources Information Center

    Christal, Raymond E.

    This report presents interim results of a study developing a methodology for management engineering teams to determine the appropriate grade requirements for officer positions based on job content and responsibilities. The technology reported represents a modification and extension of methods developed between 1963 and 1966. Results indicated that…

  18. The Evaluation of Teachers' Job Performance Based on Total Quality Management (TQM)

    ERIC Educational Resources Information Center

    Shahmohammadi, Nayereh

    2017-01-01

    This study aimed to evaluate teachers' job performance based on total quality management (TQM) model. This was a descriptive survey study. The target population consisted of all primary school teachers in Karaj (N = 2917). Using Cochran formula and simple random sampling, 340 participants were selected as sample. A total quality management…

  19. Community Based Organizations. The Challenges of the Job Training Partnership Act.

    ERIC Educational Resources Information Center

    Brown, Larry

    The advent of the Job Training Partnership Act (JTPA) has not been favorable to community-based organizations (CBOs) serving unemployed young people. The overall decline in the amount of money available for employment training is one reason for the reduction in services, but it is not the sole reason. The transition to the new act itself is also…

  20. Organizational Culture's Role in the Relationship between Power Bases and Job Stress

    ERIC Educational Resources Information Center

    Erkutlu, Hakan; Chafra, Jamel; Bumin, Birol

    2011-01-01

    The purpose of this research is to examine the moderating role of organizational culture in the relationship between leader's power bases and subordinate's job stress. A total of 622 lecturers and their superiors (deans) from 13 state universities, chosen at random in Ankara, Istanbul, Izmir, Antalya, Samsun, Erzurum and Gaziantep in 2008-2009…

  1. Data Base for a Job Opportunity Vocational Agricultural Program Planning Model.

    ERIC Educational Resources Information Center

    Baggett, Connie D.; And Others

    A job opportunity-based curriculum planning model was developed for high school vocational agriculture programs. Three objectives were to identify the boundaries of the geographical area within which past program graduates obtained entry-level positions, the title and description of each position, and areas of high school specialization; the number and titles of…

  2. An In-depth Study of Grid-based Asteroseismic Analysis

    NASA Astrophysics Data System (ADS)

    Gai, Ning; Basu, Sarbani; Chaplin, William J.; Elsworth, Yvonne

    2011-04-01

    NASA's Kepler mission is providing basic asteroseismic data for hundreds of stars. One of the more common ways of determining stellar characteristics from these data is by the so-called grid-based modeling. We have made a detailed study of grid-based analysis techniques to study the errors (and error correlations) involved. As had been reported earlier, we find that it is relatively easy to get very precise values of stellar radii using grid-based techniques. However, we find that there are small, but significant, biases that can result because of the grid of models used. The biases can be minimized if metallicity is known. Masses cannot be determined as precisely as the radii and suffer from larger systematic effects. We also find that the errors in mass and radius are correlated. A positive consequence of this correlation is that log g can be determined both precisely and accurately with almost no systematic biases. Radii and log g can be determined with almost no model dependence to within 5% for realistic estimates of errors in asteroseismic and conventional observations. Errors in mass can be somewhat higher unless accurate metallicity estimates are available. Age estimates of individual stars are the most model dependent. The errors are larger, too. However, we find that for star clusters, it is possible to get a relatively precise age if one assumes that all stars in a given cluster have the same age.
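
    The essence of grid-based modelling can be sketched as a likelihood-weighted average over a grid of stellar models. The grid rows, observed values, and error bars below are invented for illustration; a real grid would come from stellar evolution calculations.

        import numpy as np

        # columns: mass (Msun), radius (Rsun), dnu (uHz), nu_max (uHz), Teff (K)
        grid = np.array([
            [0.9, 0.95, 145.0, 3400.0, 5600.0],
            [1.0, 1.00, 135.0, 3100.0, 5777.0],
            [1.1, 1.10, 120.0, 2800.0, 5900.0],
            [1.2, 1.25, 100.0, 2400.0, 6100.0],
        ])

        obs   = np.array([134.0, 3050.0, 5800.0])   # observed dnu, nu_max, Teff
        sigma = np.array([2.0,   80.0,   80.0])     # observational errors

        chi2 = (((grid[:, 2:] - obs) / sigma) ** 2).sum(axis=1)
        w = np.exp(-0.5 * chi2)
        w /= w.sum()                                # likelihood weights

        mass   = (w * grid[:, 0]).sum()             # weighted-mean estimates
        radius = (w * grid[:, 1]).sum()
        print(f"M = {mass:.3f} Msun, R = {radius:.3f} Rsun")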

  3. Effects of the job stress education for supervisors on psychological distress and job performance among their immediate subordinates: a supervisor-based randomized controlled trial.

    PubMed

    Takao, Soshi; Tsutsumi, Akizumi; Nishiuchi, Kyoko; Mineyama, Sachiko; Kawakami, Norito

    2006-11-01

    As job stress is now one of the biggest health-related problems in the workplace, several education programs for supervisors have been conducted to reduce job stress. We conducted a supervisor-based randomized controlled trial to evaluate the effects of an education program on their subordinates' psychological distress and job performance. The subjects were 301 employees (46 supervisors and 255 subordinates) in a Japanese sake brewery. First, we randomly allocated supervisors to the education group (24 supervisors) and the waiting-list group (22 supervisors). Then, for the allocated supervisors we introduced a single-session, 60-min education program according to the guidelines for employee mental health promotion along with training that provided consulting skills combined with role-playing exercises. We conducted pre- and post-intervention (after 3 months) surveys for all subordinates to examine psychological distress and job performance. We defined the intervention group as those subordinates whose immediate supervisors received the education, and the control group was defined as those subordinates whose supervisors did not. To evaluate the effects, we employed a repeated measures analysis of variance (ANOVA). Overall, the intervention effects (time x group) were not significant for psychological distress or job performance among both male (p=0.456 and 0.252) and female (p=0.714 and 0.106) subordinates. However, young male subordinates engaged in white-collar occupations showed significant intervention effects for psychological distress (p=0.012) and job performance (p=0.029). In conclusion, our study indicated a possible beneficial effect of supervisor education on the psychological distress and job performance of subordinates. This effect may vary according to specific groups.

  4. CDF way to the GRID

    NASA Astrophysics Data System (ADS)

    Delli Paoli, F.

    2006-11-01

    The improvements in the peak instantaneous luminosity of the Tevatron Collider require large increases in computing requirements for the CDF experiment, which has to be able to increase proportionally the amount of Monte Carlo data it produces and to satisfy the computing needs of future data analysis. This is, in turn, forcing the CDF Collaboration to move beyond dedicated resources and start exploiting Grid resources. CDF has been running a set of CDF Analysis Farms (CAFs), which are submission portals to dedicated pools. This paper presents the CDF strategy for accessing Grid resources. GlideCAF, a new CAF implementation based on Condor glide-in technology, has been developed to access resources at specific Grid sites and is currently in production at the CNAF Tier-1 in Italy. Recently, GlideCAFs have also been configured in San Diego (US), at Fermilab, and at the Lyon Tier-1 Center (France). The GlideCAF model has also been used to implement OsgCAF, a Fermilab project to exploit OSG resources in the US. LcgCAF is essentially a reimplementation of the CAF model to access Grid resources using the LCG/EGEE middleware components in a fully standard Grid way. LcgCAF consists of a set of services, each of them responsible for accepting, submitting, and monitoring CDF user jobs during their lifetimes in the Grid environment. An overview of the Grid environment and of the specific middleware services used is presented; the GlideCAF and LcgCAF implementations are discussed in detail. Some details on the OsgCAF project are also given.

  5. Grid occupancy estimation for environment perception based on belief functions and PCR6

    NASA Astrophysics Data System (ADS)

    Moras, Julien; Dezert, Jean; Pannetier, Benjamin

    2015-05-01

    In this contribution, we propose to improve the grid map occupancy estimation method developed so far, based on belief function modeling and the classical Dempster's rule of combination. A grid map offers a useful representation of the perceived world for mobile robot navigation. It will play a major role in the safety (obstacle avoidance) of next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the area surrounding the robot, must first be estimated from sensor measurements (typically LIDAR or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment in which the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework with an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of the available information is low and the sources of information appear to conflict. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination by the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
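
    To show the difference between the two combination rules on a single occupancy cell, here is a hedged sketch: for two sources PCR6 coincides with PCR5, and the sensor masses below are invented. Dempster renormalises the conflicting mass away, while PCR6 redistributes each partial conflict back to the hypotheses that produced it.

        O, F = frozenset("O"), frozenset("F")   # occupied, free
        OF = O | F                              # ignorance

        def combine(m1, m2, rule="PCR6"):
            out = {O: 0.0, F: 0.0, OF: 0.0}
            conflicts = []
            for x, mx in m1.items():
                for y, my in m2.items():
                    z = x & y
                    if z:
                        out[z] += mx * my
                    else:                       # x ∩ y = ∅: partial conflict
                        conflicts.append((x, y, mx * my))
            if rule == "Dempster":              # renormalise the conflict away
                k = sum(c for *_, c in conflicts)
                return {a: v / (1.0 - k) for a, v in out.items()}
            for x, y, c in conflicts:           # PCR6 (= PCR5 for two sources)
                out[x] += m1[x] * c / (m1[x] + m2[y])
                out[y] += m2[y] * c / (m1[x] + m2[y])
            return out

        m_lidar  = {O: 0.6, F: 0.1, OF: 0.3}    # illustrative cell masses
        m_camera = {O: 0.2, F: 0.5, OF: 0.3}
        for rule in ("Dempster", "PCR6"):
            m = combine(m_lidar, m_camera, rule)
            print(rule, {"O": m[O], "F": m[F], "OF": m[OF]})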

  6. Navigation in Grid Space with the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a navigational tool for computational grids. The navigational process is based on measuring the grid characteristics with the NAS Grid Benchmarks (NGB) and using the measurements to assign the tasks of a grid application to the grid machines. The tool allows the user to explore the grid space and to navigate the execution of a grid application to minimize its turnaround time. We introduce the notion of gridscape as a user view of the grid and show how it can be measured by NGB. Then we demonstrate how the gridscape can be used with two different schedulers to navigate a grid application through a rudimentary grid.

  7. An adaptive grid/Navier-Stokes methodology for the calculation of nozzle afterbody base flows with a supersonic freestream

    NASA Technical Reports Server (NTRS)

    Williams, Morgan; Lim, Dennis; Ungewitter, Ronald

    1993-01-01

    This paper describes an adaptive grid method for base flows in a supersonic freestream. The method is based on the direct finite-difference statement of the equidistribution principle. The weighting factor is a combination of the Mach number, density, and velocity first-derivative gradients in the radial direction. Two key ideas of the method are to smooth the weighting factor by using a type of implicit smoothing and to allow boundary points to move in the grid adaptation process. An AGARD nozzle afterbody base flow configuration is used to demonstrate the performance of the adaptive grid methodology. Computed base pressures are compared to experimental data. The adapted grid solutions offer a dramatic improvement in base pressure prediction compared to solutions computed on a nonadapted grid. A total-variation-diminishing (TVD) Navier-Stokes scheme is used to solve the governing flow equations.
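
    A one-dimensional analogue of the equidistribution principle with a smoothed weight, as a hedged sketch: the weight choice 1 + |du/dx| stands in for the paper's combination of Mach number, density, and velocity gradients, and the smoothing here is a simple 1-2-1 filter rather than the paper's implicit smoothing.

        import numpy as np

        def equidistribute(x, w, n_smooth=3):
            # Redistribute 1-D grid nodes so each cell holds equal weight.
            # x: current monotone grid; w: positive weight at the nodes.
            for _ in range(n_smooth):           # explicit 1-2-1 smoothing
                w = np.convolve(w, [0.25, 0.5, 0.25], mode="same")
                w[0], w[-1] = w[1], w[-2]       # patch the padded ends
            # cumulative weight (trapezoidal), then invert the map
            W = np.concatenate(([0.0],
                np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
            levels = np.linspace(0.0, W[-1], len(x))
            return np.interp(levels, W, x)

        # cluster points near a sharp gradient at x = 0.5
        x = np.linspace(0.0, 1.0, 41)
        u = np.tanh(50 * (x - 0.5))
        w = 1.0 + np.abs(np.gradient(u, x))
        x_new = equidistribute(x, w)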

  8. Spatial services grid

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Li, Qi; Cheng, Jicheng

    2005-10-01

    This paper discusses the concept, key technologies and main applications of the Spatial Services Grid (SSG). The technologies of grid computing and Web services are playing a revolutionary role in the study of spatial information services. The concept of the SSG is put forward based on the SIG (Spatial Information Grid) and OGSA (Open Grid Services Architecture). Firstly, grid computing is reviewed, together with the key technologies of the SIG and their main applications. Secondly, grid computing and the three kinds of SIG in the broad sense (the SDG, spatial data grid; the SIG, spatial information grid; and the SSG, spatial services grid), together with their relationships, are presented. Thirdly, the key technologies of the SSG are put forward. Finally, three representative applications of the SSG are discussed. The first application is an urban location-based services grid, a typical spatial services grid that can be constructed on OGSA and a digital city platform. The second application is a regional sustainable development grid, which is key to urban development. The third application is a regional disaster and emergency management services grid.

  9. The improved robustness of multigrid elliptic solvers based on multiple semicoarsened grids

    NASA Technical Reports Server (NTRS)

    Naik, Naomi H.; Vanrosendale, John

    1991-01-01

    Multigrid convergence rates degenerate on problems with stretched grids or anisotropic operators, unless one uses line or plane relaxation. For 3-D problems, only plane relaxation suffices, in general. While line and plane relaxation algorithms are efficient on sequential machines, they are quite awkward and inefficient on parallel machines. A new multigrid algorithm is presented, based on the use of multiple coarse grids, that eliminates the need for line or plane relaxation in anisotropic problems. This algorithm was developed and the standard multigrid theory was extended to establish rapid convergence for this class of algorithms. The new algorithm uses only point relaxation, allowing easy and efficient parallel implementation, yet achieves robustness and convergence rates comparable to line and plane relaxation multigrid algorithms. The algorithm described is a variant of Mulder's multigrid algorithm for hyperbolic problems. The latter uses multiple coarse grids to achieve robustness, but is unsuitable for elliptic problems, since its V-cycle convergence rate goes to one as the number of levels increases. The new algorithm combines the contributions from the multiple coarse grids via a local switch, based on the strength of the discrete operator in each coordinate direction.

  10. Grid Computing

    NASA Astrophysics Data System (ADS)

    Foster, Ian

    2001-08-01

    The term "Grid Computing" refers to the use, for computational purposes, of emerging distributed Grid infrastructures: that is, network and middleware services designed to provide on-demand and high-performance access to all important computational resources within an organization or community. Grid computing promises to enable both evolutionary and revolutionary changes in the practice of computational science and engineering based on new application modalities such as high-speed distributed analysis of large datasets, collaborative engineering and visualization, desktop access to computation via "science portals," rapid parameter studies and Monte Carlo simulations that use all available resources within an organization, and online analysis of data from scientific instruments. In this article, I examine the status of Grid computing circa 2000, briefly reviewing some relevant history, outlining major current Grid research and development activities, and pointing out likely directions for future work. I also present a number of case studies, selected to illustrate the potential of Grid computing in various areas of science.

  11. Agent-based simulation of building evacuation using a grid graph-based model

    NASA Astrophysics Data System (ADS)

    Tan, L.; Lin, H.; Hu, M.; Che, W.

    2014-02-01

    Shifting from macroscopic models to microscopic models, the agent-based approach has been widely used to model crowd evacuation as more attention is paid to individualized behaviour. Since indoor evacuation behaviour is closely related to the spatial features of the building, effective representation of indoor space is essential for the simulation of building evacuation. The traditional cell-based representation has limitations in reflecting spatial structure and is not suitable for topology analysis. Aiming at incorporating the powerful topology analysis functions of GIS to facilitate agent-based simulation of building evacuation, we used a grid graph-based model in this study to represent the indoor space. Such a model allows us to establish an evacuation network at a micro level. Potential escape routes from each node can thus be analysed through GIS network analysis functions, considering both the spatial structure and route capacity. This better supports agent-based modelling of evacuees' behaviour, including route choice and local movements. As a case study, we conducted a simulation of emergency evacuation from the second floor of an office building using Agent Analyst as the simulation platform. The results demonstrate the feasibility of the proposed method, as well as the potential of GIS in visualizing and analysing simulation results.
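
    A minimal sketch of the grid-graph idea using the networkx package; the floor layout, obstacles, and exits are invented, and the paper's actual platform is Agent Analyst with GIS network analysis, not networkx.

        import networkx as nx

        # Grid-graph model of a floor: nodes are walkable cells, edges connect
        # neighbouring cells. Obstacle cells are removed; exits are special nodes.
        G = nx.grid_2d_graph(10, 10)
        obstacles = [(4, y) for y in range(1, 9)]   # an internal wall
        G.remove_nodes_from(obstacles)
        exits = [(0, 0), (9, 9)]

        def escape_route(agent_pos):
            # shortest route from an agent's cell to the nearest exit
            paths = []
            for e in exits:
                try:
                    paths.append(nx.shortest_path(G, agent_pos, e))
                except nx.NetworkXNoPath:
                    pass
            return min(paths, key=len) if paths else None

        print(escape_route((7, 2)))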

  12. Computer-based coding of free-text job descriptions to efficiently identify occupations in epidemiological studies.

    PubMed

    Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C

    2016-06-01

    Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in 2 occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. Computer-based coding of free-text job descriptions to efficiently identify occupations in epidemiological studies

    PubMed Central

    Russ, Daniel E.; Ho, Kwan-Yuet; Colt, Joanne S.; Armenti, Karla R.; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P.; Karagas, Margaret R.; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T.; Johnson, Calvin A.; Friesen, Melissa C.

    2016-01-01

    Background Mapping job titles to standardized occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiologic studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Methods Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14,983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. Results For 11,991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6- and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (kappa: 0.6–0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Conclusions Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiologic studies. PMID:27102331

  14. MPDATA: An edge-based unstructured-grid formulation

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Szmelter, Joanna

    2005-07-01

    We present an advancement in the evolution of MPDATA (multidimensional positive definite advection transport algorithm). Over the last two decades, MPDATA has proven successful in applications using single-block structured cuboidal meshes (viz. Cartesian meshes), while employing homeomorphic mappings to accommodate time-dependent curvilinear domains. Motivated by the strengths of the Cartesian-mesh MPDATA, we develop a new formulation in an arbitrary finite-volume framework with a fully unstructured polyhedral hybrid mesh. In MPDATA, as in any Taylor-series based integration method for PDE, the choice of data structure has a pronounced impact on the technical details of the algorithm. Aiming at a broad range of applications with a large number of control-volume cells, we select a general, compact and computationally efficient, edge-based data structure. This facilitates the use of MPDATA for problems involving complex geometries and/or inhomogeneous anisotropic flows where mesh adaptivity is advantageous. In this paper, we describe the theory and implementation of the basic finite-volume MPDATA, and document extensions important for applications: a fully monotone scheme, diffusion scheme, and generalization to complete flow solvers. Theoretical discussions are illustrated with benchmark results in two and three spatial dimensions.

  15. Analysis of the Multi Strategy Goal Programming for Micro-Grid Based on Dynamic ant Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Qiu, J. P.; Niu, D. X.

    Micro-grids are one of the key technologies for future energy supply. Taking the economy, reliability, and environmental friendliness of a micro-grid as the basis, we analyse multi-strategy goal programming problems for a micro-grid that contains wind power, solar power, batteries, and a micro gas turbine. We establish mathematical models of the generation characteristics and energy dissipation of each source, and convert the multi-objective micro-grid planning function under different operating strategies into a single-objective model based on the AHP method. An example analysis shows that, in combination with a dynamic ant colony and genetic hybrid algorithm, the optimal power output of this model can be obtained.
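
    A small sketch of the AHP step that converts the multi-objective problem into a single objective; the pairwise comparison matrix and the normalised objective values are invented for illustration.

        import numpy as np

        # Saaty-scale pairwise comparisons for three criteria
        # (economy, reliability, environment); entries are illustrative.
        A = np.array([[1.0,   3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 1/2.0, 1.0]])

        eigval, eigvec = np.linalg.eig(A)
        k = np.argmax(eigval.real)
        w = np.abs(eigvec[:, k].real)
        w /= w.sum()                    # AHP weights: principal eigenvector

        CI = (eigval.real[k] - 3) / 2   # consistency index (n = 3)
        print("weights:", w, "CR:", CI / 0.58)   # RI = 0.58 for a 3x3 matrix

        # scalarise normalised objectives f = (cost, outage risk, emissions)
        f = np.array([0.42, 0.18, 0.25])
        print("single objective:", w @ f)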

  16. Job Stress, Stress Related to Performance-Based Accreditation, Locus of Control, Age, and Gender As Related to Job Satisfaction and Burnout in Teachers and Principals.

    ERIC Educational Resources Information Center

    Hipps, Elizabeth Smith; Halpin, Glennelle

    The purpose of the study described here was to: (1) determine the amount of variance in burnout and job satisfaction in public school teachers and principals which could be accounted for by stress related to the state's performance-based accreditation standards; (2) examine the relationship between stress related to state standards and the age and…

  17. Professional confidence and job satisfaction: an examination of counselors' perceptions in faith-based and non-faith-based drug treatment programs.

    PubMed

    Chu, Doris C; Sung, Hung-En

    2014-08-01

    Understanding substance abuse counselors' professional confidence and job satisfaction is important since such confidence and satisfaction can affect the way counselors go about their jobs. Analyzing data derived from a random sample of 110 counselors from faith-based and non-faith-based treatment programs, this study examines counselors' professional confidence and job satisfaction in both faith-based and non-faith-based programs. The multivariate analyses indicate years of experience and being a certified counselor were the only significant predictors of professional confidence. There was no significant difference in perceived job satisfaction and confidence between counselors in faith-based and non-faith-based programs. A majority of counselors in both groups expressed a high level of satisfaction with their job. Job experience in drug counseling and prior experience as an abuser were perceived by counselors as important components to facilitate counseling skills. Policy implications are discussed. © The Author(s) 2013.

  18. Efficient calibration of a distributed pde-based hydrological model using grid coarsening

    NASA Astrophysics Data System (ADS)

    von Gunten, D.; Wöhling, T.; Haslauer, C.; Merchán, D.; Causapé, J.; Cirpka, O. A.

    2014-11-01

    Partial-differential-equation-based integrated hydrological models are now regularly used at catchment scale. They rely on the shallow water equations for surface flow and on the Richards equation for subsurface flow, allowing a spatially explicit representation of properties and states. However, these models usually come at high computational cost, which limits their accessibility to state-of-the-art methods of parameter estimation and uncertainty quantification, because these methods require a large number of model evaluations. In this study, we present an efficient model calibration strategy based on a hierarchy of grid resolutions, each of them resolving the same zonation of subsurface and land-surface units. We first analyze which model outputs show the highest similarities between the original model and two differently coarsened grids. Then we calibrate the coarser models by comparing these similar outputs to the measurements. We finish the calibration using the fully resolved model, taking the result of the preliminary calibration as the starting point. We apply the proposed approach to the well-monitored Lerma catchment in north-east Spain, using the model HydroGeoSphere. The original model grid with 80,000 finite elements was complemented with two other model variants with approximately 16,000 and 10,000 elements, respectively. Comparing the model results for these different grids, we observe differences in peak discharge, evapotranspiration, and near-surface saturation. Hydraulic heads and low flow, however, are very similar for all tested parameter sets, which allows the use of these variables to calibrate our model. The calibration results are satisfactory and the duration of the calibration is greatly decreased by using the different model grid resolutions.

  19. Gridded sunshine duration climate data record for Germany based on combined satellite and in situ observations

    NASA Astrophysics Data System (ADS)

    Walawender, Jakub; Kothe, Steffen; Trentmann, Jörg; Pfeifroth, Uwe; Cremer, Roswitha

    2017-04-01

    The purpose of this study is to create a 1 km² gridded daily sunshine duration data record for Germany covering the period from 1983 to 2015 (33 years), based on satellite estimates of direct normalised surface solar radiation and in situ sunshine duration observations, using a geostatistical approach. The CM SAF SARAH direct normalized irradiance (DNI) satellite climate data record and in situ observations of sunshine duration from 121 weather stations operated by DWD are used as input datasets. The selected period of 33 years reflects the availability of satellite data. The number of ground stations is limited to 121 because only time series with less than 10% missing observations over the selected period are included, to keep the long-term consistency of the output sunshine duration data record. In the first step, the DNI data record is used to derive sunshine hours by applying the WMO threshold of 120 W/m² (SDU: DNI ≥ 120 W/m²), with weighting of sunny slots to correct the sunshine duration between two instantaneous images for cloud movement. In the second step, a linear regression between SDU and in situ sunshine duration is calculated to adjust the satellite product to the ground observations, and the resulting regression coefficients are applied to create a regression grid. In the last step, the regression residuals are interpolated with ordinary kriging and added to the regression grid. A comprehensive accuracy assessment of the gridded sunshine duration data record is performed by calculating prediction errors (cross-validation routine). "R" is used for data processing. A short analysis of the spatial distribution and temporal variability of sunshine duration over Germany based on the created dataset will be presented. The gridded sunshine duration data are useful for applications in various climate-related studies, agriculture, and solar energy potential calculations.
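
    One possible sketch of the regression-plus-kriged-residuals step, written in Python with the third-party pykrige package rather than the study's "R" workflow; the station coordinates and values are randomly generated stand-ins, not the CM SAF/DWD data.

        import numpy as np
        from pykrige.ok import OrdinaryKriging

        np.random.seed(0)
        x = np.random.uniform(6, 15, 121)     # station longitude (stand-in)
        y = np.random.uniform(47, 55, 121)    # station latitude (stand-in)
        sat = np.random.uniform(0, 12, 121)   # satellite sunshine hours
        ground = 0.3 + 0.9 * sat + np.random.normal(0, 0.5, 121)

        # step 1: linear regression of ground observations on the satellite SDU
        a, b = np.polyfit(sat, ground, 1)
        resid = ground - (a * sat + b)

        # step 2: ordinary kriging of the regression residuals onto the grid
        ok = OrdinaryKriging(x, y, resid, variogram_model="spherical")
        gx, gy = np.arange(6.0, 15.0, 0.5), np.arange(47.0, 55.0, 0.5)
        resid_grid, _ = ok.execute("grid", gx, gy)

        # step 3: regression prediction on the grid plus kriged residuals
        sat_grid = np.random.uniform(0, 12, (len(gy), len(gx)))  # stand-in field
        sdu_grid = a * sat_grid + b + resid_grid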

  20. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    PubMed Central

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed, mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent IL, the reverse diode saturation current Io, and the diode ideality factor n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems in fluctuating climatic conditions. PMID:27035575

  1. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results.

    PubMed

    Humada, Ali M; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M; Ahmed, Mushtaq N

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed, mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent IL, the reverse diode saturation current Io, and the diode ideality factor n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems in fluctuating climatic conditions.
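
    A hedged sketch of a three-parameter single-diode PV model of the kind named in the abstract; the module constants (cell count, parameter values) are invented, and series and shunt resistances are omitted, as in the three-parameter form.

        import numpy as np

        def pv_current(v, il=5.0, io=1e-9, n=1.3, t_cell=298.15, n_cells=60):
            # Three-parameter model: I = IL - Io*(exp(V/(n*Ns*Vt)) - 1), with
            # photocurrent IL, saturation current Io and ideality factor n.
            k, q = 1.380649e-23, 1.602176634e-19
            vt = k * t_cell / q                      # thermal voltage, ~25.7 mV
            return il - io * np.expm1(v / (n * n_cells * vt))

        v = np.linspace(0.0, 50.0, 500)
        i = pv_current(v)
        p = np.where(i > 0.0, v * i, 0.0)
        print("Voc ~", v[np.argmax(i < 0.0)], "V; max power ~", p.max(), "W")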

  2. A Current Sensor Based on the Giant Magnetoresistance Effect: Design and Potential Smart Grid Applications

    PubMed Central

    Ouyang, Yong; He, Jinliang; Hu, Jun; Wang, Shan X.

    2012-01-01

    Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A−1, linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C−1 with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids. PMID:23202221

  3. A current sensor based on the giant magnetoresistance effect: design and potential smart grid applications.

    PubMed

    Ouyang, Yong; He, Jinliang; Hu, Jun; Wang, Shan X

    2012-11-09

    Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A−1, linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C−1 with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.

  4. Job Stress of School-Based Speech-Language Pathologists

    ERIC Educational Resources Information Center

    Harris, Stephanie Ferney; Prater, Mary Anne; Dyches, Tina Taylor; Heath, Melissa Allen

    2009-01-01

    Stress and burnout contribute significantly to the shortages of school-based speech-language pathologists (SLPs). At the request of the Utah State Office of Education, the researchers measured the stress levels of 97 school-based SLPs using the "Speech-Language Pathologist Stress Inventory." Results indicated that participants' emotional-fatigue…

  5. A Computer-Based, Interactive Videodisc Job Aid and Expert System for Electron Beam Lithography Integration and Diagnostic Procedures.

    ERIC Educational Resources Information Center

    Stevenson, Kimberly

    This master's thesis describes the development of an expert system and interactive videodisc computer-based instructional job aid used for assisting in the integration of electron beam lithography devices. Comparable to all comprehensive training, expert system and job aid development require a criterion-referenced systems approach treatment to…

  7. A Correlational Study of Telework Frequency, Information Communication Technology, and Job Satisfaction of Home-Based Teleworkers

    ERIC Educational Resources Information Center

    Webster-Trotman, Shana P.

    2010-01-01

    In 2008, 33.7 million Americans teleworked from home. The Telework Enhancement Act (S. 707) and the Telework Improvements Act (H.R. 1722) of 2009 were designed to increase the number of teleworkers. The research problem addressed was the lack of understanding of factors that influence home-based teleworkers' job satisfaction. Job dissatisfaction…

  9. Creating Better Child Care Jobs: Model Work Standards for Teaching Staff in Center-Based Child Care.

    ERIC Educational Resources Information Center

    Center for the Child Care Workforce, Washington, DC.

    This document presents model work standards articulating components of the child care center-based work environment that enable teachers to do their jobs well. These standards establish criteria to assess child care work environments and identify areas to improve in order to assure good jobs for adults and good care for children. The standards are…

  11. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR THE JOB CORPS UNDER TITLE I OF THE WORKFORCE INVESTMENT ACT Program Activities and Center Operations...

  12. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 4 2012-04-01 2012-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) THE JOB CORPS UNDER TITLE I OF THE WORKFORCE INVESTMENT ACT Program Activities and...

  13. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 4 2014-04-01 2014-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) THE JOB CORPS UNDER TITLE I OF THE WORKFORCE INVESTMENT ACT Program Activities and...

  14. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  15. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  16. Burnout in Medical Residents: A Study Based on the Job Demands-Resources Model

    PubMed Central

    2014-01-01

    Purpose. Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job. The purpose of our cross-sectional study was to estimate the burnout rates among medical residents in the largest Greek hospital in 2012 and identify factors associated with it, based on the job demands-resources model (JD-R). Method. Job demands were examined via a 17-item questionnaire assessing 4 characteristics (emotional demands, intellectual demands, workload, and home-work demands' interface) and job resources were measured via a 14-item questionnaire assessing 4 characteristics (autonomy, opportunities for professional development, support from colleagues, and supervisor's support). The Maslach Burnout Inventory (MBI) was used to measure burnout. Results. Of the 290 eligible residents, 90.7% responded. In total 14.4% of the residents were found to experience burnout. Multiple logistic regression analysis revealed that each increased point in the JD-R questionnaire score regarding home-work interface was associated with an increase in the odds of burnout by 25.5%. Conversely, each increased point for autonomy, opportunities in professional development, and each extra resident per specialist were associated with a decrease in the odds of burnout by 37.1%, 39.4%, and 59.0%, respectively. Conclusions. Burnout among medical residents is associated with home-work interface, autonomy, professional development, and resident to specialist ratio. PMID:25531003
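
    The percentage changes in odds reported above correspond to exponentiated logistic-regression coefficients. A short sketch of that arithmetic; the coefficients are back-calculated from the reported odds changes purely for illustration.

    ```python
    import math

    # Illustrative coefficients recovered from the reported odds changes
    beta_homework = math.log(1.255)      # +25.5% odds per point (home-work interface)
    beta_autonomy = math.log(1 - 0.371)  # -37.1% odds per point (autonomy)

    def odds_change_pct(beta, delta=1.0):
        """Percentage change in the odds of burnout for a delta-point increase."""
        return (math.exp(beta * delta) - 1.0) * 100.0

    print(f"{odds_change_pct(beta_homework):+.1f}% per point of home-work interface")
    print(f"{odds_change_pct(beta_autonomy):+.1f}% per point of autonomy")
    ```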

  17. Burnout in medical residents: a study based on the job demands-resources model.

    PubMed

    Zis, Panagiotis; Anagnostopoulos, Fotios; Sykioti, Panagiota

    2014-01-01

    Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job. The purpose of our cross-sectional study was to estimate the burnout rates among medical residents in the largest Greek hospital in 2012 and identify factors associated with it, based on the job demands-resources model (JD-R). Job demands were examined via a 17-item questionnaire assessing 4 characteristics (emotional demands, intellectual demands, workload, and home-work demands' interface) and job resources were measured via a 14-item questionnaire assessing 4 characteristics (autonomy, opportunities for professional development, support from colleagues, and supervisor's support). The Maslach Burnout Inventory (MBI) was used to measure burnout. Of the 290 eligible residents, 90.7% responded. In total 14.4% of the residents were found to experience burnout. Multiple logistic regression analysis revealed that each increased point in the JD-R questionnaire score regarding home-work interface was associated with an increase in the odds of burnout by 25.5%. Conversely, each increased point for autonomy, opportunities in professional development, and each extra resident per specialist were associated with a decrease in the odds of burnout by 37.1%, 39.4%, and 59.0%, respectively. Burnout among medical residents is associated with home-work interface, autonomy, professional development, and resident to specialist ratio.

  18. Community and job satisfactions: an argument for reciprocal influence based on the principle of stimulus generalization

    SciTech Connect

    Gavin, J.; Montgomery, J.C.

    1982-10-01

    The principle of stimulus generalization provided the underlying argument for a test of hypotheses regarding the association of community and job satisfactions and a critique of related theory and research. Two-stage least squares (2SLS) analysis made possible the examination of reciprocal causation, a notion inherent in the theoretical argument. Data were obtained from 276 employees of a Western U.S. coal mine as part of a work attitudes survey. The 2SLS analysis indicated a significant impact of community satisfaction on job satisfaction and an effect of borderline significance of job on community satisfaction. Theory-based correlational comparisons were made on groups of employees residing in four distinct communities, high and low tenure groups, males and females, and different levels in the mine's hierarchy. The pattern of correlations was generally consistent with predictions, but significance tests for differences yielded equivocal support. When considered in the context of previous studies, the data upheld a reciprocal causal model and the explanatory principle of stimulus generalization for understanding the relation of community and job satisfactions. Sample characteristics necessitate cautious interpretation and the model per se might best be viewed as a heuristic framework for more definitive research.

  19. Refinements and practical implementation of a power based loss of grid detection algorithm for embedded generators

    NASA Astrophysics Data System (ADS)

    Barrett, James

    The incorporation of small, privately owned generation operating in parallel with, and supplying power to, the distribution network is becoming more widespread. This method of operation does however have problems associated with it. In particular, a loss of the connection to the main utility supply which leaves a portion of the utility load connected to the embedded generator will result in a power island. This situation presents possible dangers to utility personnel and the public, complications for smooth system operation and probable plant damage should the two systems be reconnected out-of-synchronism. Loss of Grid (or Islanding), as this situation is known, is the subject of this thesis. The work begins by detailing the requirements for operation of generation embedded in the utility supply with particular attention drawn to the requirements for a loss of grid protection scheme. The mathematical basis for a new loss of grid protection algorithm is developed and the inclusion of the algorithm in an integrated generator protection scheme described. A detailed description is given on the implementation of the new algorithm in a microprocessor based relay hardware to allow practical tests on small embedded generation facilities, including an in-house multiple generator test facility. The results obtained from the practical tests are compared with those obtained from simulation studies carried out in previous work and the differences are discussed. The performance of the algorithm is enhanced from the theoretical algorithm developed using the simulation results with simple filtering together with pattern recognition techniques. This provides stability during severe load fluctuations under parallel operation and system fault conditions and improved performance under normal operating conditions and for loss of grid detection. In addition to operating for a loss of grid connection, the algorithm will respond to load fluctuations which occur within a power island
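
    The abstract does not reproduce the algorithm itself, but the flavor of a power-based islanding test can be sketched as a threshold on an abrupt, sustained change in the power measured at the generator terminals; the window length and threshold below are illustrative assumptions, not values from the thesis.

    ```python
    import numpy as np

    def loss_of_grid(p_series, window=10, threshold=0.2):
        """Flag a possible loss-of-grid event when the mean measured power (per
        unit) steps by more than `threshold` between consecutive averaging
        windows; averaging suppresses short transient load fluctuations."""
        p = np.asarray(p_series)
        if p.size < 2 * window:
            return False
        before = p[-2 * window:-window].mean()
        after = p[-window:].mean()
        return abs(after - before) > threshold
    ```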

  20. A Comprehensive WSN-Based Approach to Efficiently Manage a Smart Grid

    PubMed Central

    Martinez-Sandoval, Ruben; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan; Flynn, David

    2014-01-01

    The Smart Grid (SG) is conceived as the evolution of the current electrical grid representing a big leap in terms of efficiency, reliability and flexibility compared to today's electrical network. To achieve this goal, Wireless Sensor Networks (WSNs) are considered by the scientific/engineering community to be one of the most suitable technologies for the SG, due to their low-cost, collaborative and long-standing nature. However, the SG has posed significant challenges to utility operators—mainly very harsh radio propagation conditions and the lack of appropriate systems to empower WSN devices—making most of the widespread commercial solutions inadequate. In this context, and as a main contribution, we have designed a comprehensive ad-hoc WSN-based solution for the Smart Grid (SENSED-SG) that focuses on specific implementations of the MAC, the network and the application layers to attain maximum performance and to successfully deal with any arising hurdles. Our approach has been exhaustively evaluated by computer simulations and mathematical analysis, as well as validated within real test-beds deployed in controlled environments. In particular, these test-beds cover two of the main scenarios found in a SG: on one hand, an indoor electrical substation environment, implemented in a High Voltage AC/DC laboratory, and, on the other hand, an outdoor case, deployed in the Transmission and Distribution segment of a power grid. The results obtained show that SENSED-SG performs better and is more suitable for the Smart Grid than the popular ZigBee WSN approach. PMID:25310468

  1. A comprehensive WSN-based approach to efficiently manage a Smart Grid.

    PubMed

    Martinez-Sandoval, Ruben; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan; Flynn, David

    2014-10-10

    The Smart Grid (SG) is conceived as the evolution of the current electrical grid representing a big leap in terms of efficiency, reliability and flexibility compared to today's electrical network. To achieve this goal, Wireless Sensor Networks (WSNs) are considered by the scientific/engineering community to be one of the most suitable technologies for the SG, due to their low-cost, collaborative and long-standing nature. However, the SG has posed significant challenges to utility operators-mainly very harsh radio propagation conditions and the lack of appropriate systems to empower WSN devices-making most of the widespread commercial solutions inadequate. In this context, and as a main contribution, we have designed a comprehensive ad-hoc WSN-based solution for the Smart Grid (SENSED-SG) that focuses on specific implementations of the MAC, the network and the application layers to attain maximum performance and to successfully deal with any arising hurdles. Our approach has been exhaustively evaluated by computer simulations and mathematical analysis, as well as validated within real test-beds deployed in controlled environments. In particular, these test-beds cover two of the main scenarios found in a SG: on one hand, an indoor electrical substation environment, implemented in a High Voltage AC/DC laboratory, and, on the other hand, an outdoor case, deployed in the Transmission and Distribution segment of a power grid. The results obtained show that SENSED-SG performs better and is more suitable for the Smart Grid than the popular ZigBee WSN approach.

  2. Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations

    SciTech Connect

    Tesfa, Teklu K.; Ruby Leung, L.; Huang, Maoyi; Li, Hong-Yi; Voisin, Nathalie; Wigmosta, Mark S.

    2014-03-27

    This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., abilities to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5° and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the “reference”, statistical skill metrics are calculated and compared across simulations at 0.25°, 0.5° and 1° spatial resolutions of each modeling approach at basin and topographic region levels. Results suggest a significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin-level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to improved scalability of the rain-driven saturated surface runoff component, particularly during winter. Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications for surface heat fluxes in coupled land
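
    The skill metric quoted above appears to be a basin-average relative error of each coarse run against the 0.125° reference; a sketch of that comparison (the exact averaging used by the authors is an assumption):

    ```python
    import numpy as np

    def relative_error(coarse_runoff, reference_runoff):
        """Annual-average relative error of a coarse-resolution runoff series
        against the 0.125-degree reference, for one basin."""
        c = np.asarray(coarse_runoff).mean()
        r = np.asarray(reference_runoff).mean()
        return abs(c - r) / r

    # e.g. relative_error(runoff_1deg, runoff_0125deg) -> ~0.11 for a grid-based run
    ```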

  3. A computational-grid based system for continental drainage network extraction using SRTM digital elevation models

    NASA Technical Reports Server (NTRS)

    Curkendall, David W.; Fielding, Eric J.; Pohl, Josef M.; Cheng, Tsan-Huei

    2003-01-01

    We describe a new effort for the computation of elevation derivatives using the Shuttle Radar Topography Mission (SRTM) results. Jet Propulsion Laboratory's (JPL) SRTM has produced a near global database of highly accurate elevation data. The scope of this database enables computing precise stream drainage maps and other derivatives on Continental scales. We describe a computing architecture for this computationally very complex task based on NASA's Information Power Grid (IPG), a distributed high performance computing network based on the GLOBUS infrastructure. The SRTM data characteristics and unique problems they present are discussed. A new algorithm for organizing the conventional extraction algorithms [1] into a cooperating parallel grid is presented as an essential component to adapt to the IPG computing structure. Preliminary results are presented for a Southern California test area, established for comparing SRTM and its results against those produced using the USGS National Elevation Data (NED) model.
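
    The extraction step parallelized here conventionally starts with a D8-style flow-direction pass over the elevation grid; the sketch below shows that building block (the D8 choice is an assumption, since the paper only cites the conventional extraction literature, and diagonal distance weighting is omitted for brevity).

    ```python
    import numpy as np

    def d8_flow_direction(dem):
        """For each interior cell of a DEM tile, return the index (0-7) of the
        steepest downslope neighbor, or -1 for pits; a basic building block of
        drainage network extraction."""
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        rows, cols = dem.shape
        direction = np.full((rows, cols), -1, dtype=int)
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                drops = [dem[r, c] - dem[r + dr, c + dc] for dr, dc in offsets]
                if max(drops) > 0:
                    direction[r, c] = int(np.argmax(drops))
        return direction
    ```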

  4. Grid-based modeling for land use planning and environmental resource mapping.

    SciTech Connect

    Kuiper, J. A.

    1999-08-04

    Geographic Information System (GIS) technology is used by land managers and natural resource planners for examining resource distribution and conducting project planning, often by visually interpreting spatial data representing environmental or regulatory variables. Frequently, many variables influence the decision-making process, and modeling can improve results with even a small investment of time and effort. Presented are several grid-based GIS modeling projects, including: (1) land use optimization under environmental and regulatory constraints; (2) identification of suitable wetland mitigation sites; and (3) predictive mapping of prehistoric cultural resource sites. As different as the applications are, each follows a similar process of problem conceptualization, implementation of a practical grid-based GIS model, and evaluation of results.
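
    Grid-based suitability models of the kind described typically combine weighted raster layers with constraint masks; a generic sketch (the layer names and weights are invented for illustration):

    ```python
    import numpy as np

    def suitability(slope, wetness, road_access, exclusion_mask):
        """Weighted-overlay suitability score on co-registered raster layers
        (each normalized to 0-1); cells inside the exclusion mask, e.g.
        regulatory no-build zones, score zero."""
        score = 0.5 * (1.0 - slope) + 0.3 * wetness + 0.2 * road_access
        return np.where(exclusion_mask, 0.0, score)
    ```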

  5. Development of a Microcontroller-based Battery Charge Controller for an Off-grid Photovoltaic System

    NASA Astrophysics Data System (ADS)

    Rina, Z. S.; Amin, N. A. M.; Hashim, M. S. M.; Majid, M. S. A.; Rojan, M. A.; Zaman, I.

    2017-08-01

    The development of a microcontroller-based charge controller for a 12 V battery is explained in this paper. The system is designed around a novel algorithm that couples an existing solar photovoltaic (PV) charging source with main-grid-supply charging. One of the main purposes of the hybrid charge controller is to supply a continuous charging source to the battery; it was also developed to shorten the battery charging time. The algorithm is programmed into an Arduino Uno R3 microcontroller that monitors the battery voltage and generates the appropriate commands for charging-source selection. Solar energy is used whenever the solar irradiation is high; the main grid supply is consumed only when the solar irradiation is low. This scheme ensures a continuous charging supply and faster charging of the battery.
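
    The source-selection rule described above can be sketched in a few lines; the voltage thresholds are illustrative assumptions, and irradiation is inferred here from the PV terminal voltage rather than measured directly.

    ```python
    PV_MIN_VOLTS = 13.0  # PV output treated as "high irradiation" above this (assumed)
    BATT_FULL = 14.4     # stop charging above this battery voltage (assumed)

    def select_source(pv_volts, batt_volts):
        """Return which charging source the controller should enable."""
        if batt_volts >= BATT_FULL:
            return "none"  # battery charged: disconnect both sources
        if pv_volts >= PV_MIN_VOLTS:
            return "pv"    # enough sun: charge from the PV array
        return "grid"      # low irradiation: fall back to the mains supply
    ```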

  6. Creating analytically divergence-free velocity fields from grid-based data

    NASA Astrophysics Data System (ADS)

    Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.

    2016-10-01

    We present a method, based on B-splines, to calculate a C2 continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10] this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories that results in more accurate identification of Lagrangian coherent structures.
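
    The guarantee exploited here is the vector identity ∇·(∇×A) = 0: any velocity field obtained as the curl of a potential is divergence-free by construction. A finite-difference sketch of that identity on an arbitrary smooth potential (not the B-spline potential of the paper):

    ```python
    import numpy as np

    H = 1e-4  # central-difference step

    def potential(x, y, z):
        """An arbitrary smooth vector potential A(x, y, z)."""
        return np.array([np.sin(y * z), x * z**2, np.cos(x * y)])

    def velocity(x, y, z):
        """v = curl(A), evaluated by central differences."""
        def dA(axis, comp):  # partial of A_comp w.r.t. coordinate `axis`
            e = np.zeros(3); e[axis] = H
            p, m = np.array([x, y, z]) + e, np.array([x, y, z]) - e
            return (potential(*p)[comp] - potential(*m)[comp]) / (2 * H)
        return np.array([dA(1, 2) - dA(2, 1),
                         dA(2, 0) - dA(0, 2),
                         dA(0, 1) - dA(1, 0)])

    def div_velocity(x, y, z):
        """Divergence of the curl; vanishes up to truncation error."""
        def dv(axis, comp):
            e = np.zeros(3); e[axis] = H
            p, m = np.array([x, y, z]) + e, np.array([x, y, z]) - e
            return (velocity(*p)[comp] - velocity(*m)[comp]) / (2 * H)
        return dv(0, 0) + dv(1, 1) + dv(2, 2)

    print(div_velocity(0.3, 0.7, 1.1))  # ~0, independent of the potential chosen
    ```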

  7. Robust optimization based energy dispatch in smart grids considering demand uncertainty

    NASA Astrophysics Data System (ADS)

    Nassourou, M.; Puig, V.; Blesa, J.

    2017-01-01

    In this study we discuss the application of robust optimization to the problem of economic energy dispatch in smart grids. Robust optimization based MPC strategies for tackling uncertain load demands are developed. Unexpected additive disturbances are modelled by defining an affine dependence between the control inputs and the uncertain load demands. The developed strategies were applied to a hybrid power system connected to an electrical power grid. Furthermore, to demonstrate the superiority of standard economic MPC over tracking MPC, a comparison (e.g., average daily cost) between standard tracking MPC, standard economic MPC, and the integration of both in one-layer and two-layer approaches was carried out. The goal of this research is to design a controller based on economic MPC strategies that tackles uncertainties, in order to minimise economic costs and guarantee the service reliability of the system.
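
    In its simplest form, the robust element of such a dispatch amounts to committing enough grid power to cover the worst-case demand in the uncertainty set; a one-step sketch (the affine feedback and MPC horizon of the study are omitted, and all numbers are placeholders):

    ```python
    def robust_dispatch(pv_forecast_kw, demand_nominal_kw, demand_margin_kw, price):
        """Buy enough grid power to cover the worst-case demand (nominal plus
        uncertainty margin) not met by the renewable forecast."""
        worst_demand = demand_nominal_kw + demand_margin_kw
        grid_kw = max(0.0, worst_demand - pv_forecast_kw)
        return grid_kw, grid_kw * price  # power to purchase and its cost

    print(robust_dispatch(pv_forecast_kw=3.0, demand_nominal_kw=5.0,
                          demand_margin_kw=1.0, price=0.12))
    ```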

  8. Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris

    2012-01-01

    A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.
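
    The complex-variable verification mentioned above rests on the complex-step derivative approximation df/dx ≈ Im f(x + ih)/h, which avoids subtractive cancellation and so can confirm adjoint sensitivities to machine precision. A minimal sketch:

    ```python
    import cmath

    def complex_step_derivative(f, x, h=1e-30):
        """Machine-precision derivative estimate for real-analytic f."""
        return f(complex(x, h)).imag / h

    f = lambda z: cmath.exp(z) * cmath.sin(z)
    print(complex_step_derivative(f, 0.7))                             # complex-step
    print((cmath.exp(0.7) * (cmath.sin(0.7) + cmath.cos(0.7))).real)   # analytic
    ```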

  9. Enabling Campus Grids with Open Science Grid Technology

    NASA Astrophysics Data System (ADS)

    Weitzel, Derek; Bockelman, Brian; Fraser, Dan; Pordes, Ruth; Swanson, David

    2011-12-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  10. PLL Based Energy Efficient PV System with Fuzzy Logic Based Power Tracker for Smart Grid Applications.

    PubMed

    Rohini, G; Jamuna, V

    This work aims at improving the dynamic performance of the available photovoltaic (PV) system and maximizing the power obtained from it by the use of cascaded converters with intelligent control techniques. A fuzzy logic based maximum power point tracking technique is embedded in the first conversion stage to obtain the maximum power from the available PV array. The cascading of the second converter is needed to maintain the terminal voltage at grid potential. The soft-switching region of the three-stage converter is increased with the proposed phase-locked loop based control strategy. The proposed strategy leads to a reduction in the ripple content, the rating of components, and switching losses. The PV array is mathematically modeled, and the system is simulated and the results analyzed. The performance of the system is compared with existing maximum power point tracking algorithms. The authors have endeavored to accomplish maximum power and improved reliability for the same insolation of the PV system. Hardware results are also discussed to prove the validity of the simulation results.
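
    The paper's fuzzy rule base is not reproduced in the abstract; as a rough stand-in, a hill-climbing MPPT step with a power-scaled step size captures the same intent. Everything below, including the duty-cycle sign convention, is an illustrative assumption.

    ```python
    def mppt_step(dP, dV, duty, k=0.01):
        """Move the converter duty cycle toward the maximum power point; the
        step grows with the observed power change, a crude stand-in for fuzzy
        membership functions. Assumes increasing duty raises PV voltage."""
        if dV == 0:
            return duty
        step = k * min(abs(dP), 1.0)
        return duty + step if dP / dV > 0 else duty - step
    ```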

  11. PLL Based Energy Efficient PV System with Fuzzy Logic Based Power Tracker for Smart Grid Applications

    PubMed Central

    Rohini, G.; Jamuna, V.

    2016-01-01

    This work aims at improving the dynamic performance of the available photovoltaic (PV) system and maximizing the power obtained from it by the use of cascaded converters with intelligent control techniques. A fuzzy logic based maximum power point tracking technique is embedded in the first conversion stage to obtain the maximum power from the available PV array. The cascading of the second converter is needed to maintain the terminal voltage at grid potential. The soft-switching region of the three-stage converter is increased with the proposed phase-locked loop based control strategy. The proposed strategy leads to a reduction in the ripple content, the rating of components, and switching losses. The PV array is mathematically modeled, and the system is simulated and the results analyzed. The performance of the system is compared with existing maximum power point tracking algorithms. The authors have endeavored to accomplish maximum power and improved reliability for the same insolation of the PV system. Hardware results are also discussed to prove the validity of the simulation results. PMID:27294189

  12. An Analysis for an Internet Grid to Support Space Based Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Currently, and in the past, dedicated communication circuits and "network services" with very stringent performance requirements have been used to support manned and unmanned mission critical ground operations at GSFC, JSC, MSFC, KSC and other NASA facilities. Because of the evolution of network technology, it is time to investigate other approaches to providing mission services for space ground and flight operations. In various scientific disciplines, effort is under way to develop network/computing grids. These grids, consisting of networks and computing equipment, are enabling lower cost science. Specifically, earthquake research is headed in this direction. With a standard for network and computing interfaces using a grid, a researcher would not be required to develop and engineer NASA/DoD specific interfaces with the attendant increased cost. Use of the Internet Protocol (IP), the CCSDS packet specification, Reed-Solomon coding for satellite error correction, etc., can be adopted/standardized to provide these interfaces. Generally, most interfaces are developed at least to some degree end to end. This study would investigate the feasibility of using existing standards and protocols necessary to implement a SpaceOps Grid. New interface definitions, or the adoption/modification of existing ones, would be required for the various space operational services; voice (both space-based and ground), video, telemetry, commanding and planning may each play a role to some undefined level. Security will be a separate focus in the study, since security is such a large issue in using public networks. This SpaceOps Grid would be transparent to users. It would be analogous to the Ethernet protocol's ease of use, in that a researcher would plug in their experiment or instrument at one end and would be connected to the appropriate host or server without further intervention. Free flyers would be in this category as well. They would be launched and would transmit without any further intervention with the researcher or

  14. Job-based health benefits in 2002: some important trends.

    PubMed

    Gabel, Jon; Levitt, Larry; Holve, Erin; Pickreign, Jeremy; Whitmore, Heidi; Dhont, Kelley; Hawkins, Samantha; Rowland, Diane

    2002-01-01

    Based on a national survey of 2,014 randomly selected public and private firms with three or more workers, this paper reports changes in employer-based health insurance from spring 2001 to spring 2002. The cost of health insurance rose 12.7 percent, the highest rate of growth since 1990. Employee contributions for health insurance rose in 2002, from $30 to $38 for single coverage and from $150 to $174 for family coverage. Deductibles and copayments rose also, and employers adopted formularies and three-tier cost-sharing formulas to control prescription drug expenses. PPO and HMO enrollment rose, while the percentage of small employers offering health benefits fell. Because increasing claims expenses rather than the underwriting cycle are the major driver of rising premiums, double-digit growth appears likely to continue.

  15. A new service-oriented grid-based method for AIoT application and implementation

    NASA Astrophysics Data System (ADS)

    Zou, Yiqin; Quan, Li

    2017-07-01

    The traditional three-layer Internet of Things (IoT) model, which includes the physical perception layer, information transfer layer and service application layer, cannot fully express the complexity and diversity of the agricultural engineering domain. It is hard to categorize, organize and manage agricultural things with these three layers. Based on the above requirements, we propose a new service-oriented grid-based method to set up and build the agricultural IoT. Considering the heterogeneity, limitation, transparency and leveling attributes of agricultural things, we propose an abstract model for all agricultural resources. This model is service-oriented and expressed with the Open Grid Services Architecture (OGSA). Information and data on agricultural things are described and encapsulated using XML in this model. Every agricultural engineering application provides a service by enabling one application node in this service-oriented grid. The description of the Web Service Resource Framework (WSRF)-based Agricultural Internet of Things (AIoT) and the encapsulation method used for resource management in this model are also discussed in this paper.
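
    As a concrete illustration of the XML encapsulation step, the sketch below serializes one hypothetical agricultural resource; every element name and URL is invented, since the paper's schema is not given in the abstract.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical encapsulation of one agricultural "thing" as a grid resource
    resource = ET.Element("AgriResource", id="soil-sensor-042")
    ET.SubElement(resource, "Type").text = "soil-moisture-sensor"
    ET.SubElement(resource, "Location").text = "field-7;row-12"
    ET.SubElement(resource, "ServiceEndpoint").text = "http://example.org/wsrf/soil/042"
    print(ET.tostring(resource, encoding="unicode"))
    ```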

  16. Automatic building extraction from LiDAR data fusion of point and grid-based features

    NASA Astrophysics Data System (ADS)

    Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang

    2017-08-01

    This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the features used and account for neighborhood context information. As grid-feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed in this paper. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results both at the area level and at the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large LiDAR datasets.
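
    The discriminating point feature proposed above can be sketched directly: planar roof patches give low variance of neighborhood normals, vegetation gives high variance. The neighborhood search and normal estimation themselves are omitted.

    ```python
    import numpy as np

    def normal_variance(normals):
        """Variance of unit normal vectors within one point's neighborhood
        (array of shape (k, 3)); low on roofs, high in vegetation."""
        n = np.asarray(normals, dtype=float)
        mean_n = n.mean(axis=0)
        return float(np.mean(np.sum((n - mean_n) ** 2, axis=1)))
    ```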

  17. Creating Motivating Job Aids.

    ERIC Educational Resources Information Center

    Tilaro, Angie; Rossett, Allison

    1993-01-01

    Explains how to create job aids that employees will be motivated to use, based on a review of pertinent literature and interviews with professionals. Topics addressed include linking motivation with job aids; Keller's ARCS (Attention, Relevance, Confidence, Satisfaction) model of motivation; and design strategies for job aids based on Keller's…

  19. Effects of a Peer Assessment System Based on a Grid-Based Knowledge Classification Approach on Computer Skills Training

    ERIC Educational Resources Information Center

    Hsu, Ting-Chia

    2016-01-01

    In this study, a peer assessment system using the grid-based knowledge classification approach was developed to improve students' performance during computer skills training. To evaluate the effectiveness of the proposed approach, an experiment was conducted in a computer skills certification course. The participants were divided into three…

  1. Design and implementation of a web-based data grid management system for enterprise PACS backup and disaster recovery

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng; Ma, Kevin; Talini, Elisa; Documet, Jorge; Lee, Jasper; Liu, Brent

    2007-03-01

    A cross-continental Data Grid infrastructure has been developed at the Image Processing and Informatics (IPI) research laboratory as a fault-tolerant image data backup and disaster recovery solution for Enterprise PACS. The Data Grid stores multiple copies of the imaging studies as well as the metadata, such as patient and study information, in geographically distributed computers and storage devices spanning three different continents: America, Asia and Europe. This effectively prevents loss of image data and accelerates data recovery in the case of disaster. However, the lack of a centralized management system makes the administration of the current Data Grid difficult. Three major challenges exist in current Data Grid management: 1. No single user interface to access and administer each geographically separate component; 2. No graphical user interface available, resulting in command-line-based administration; 3. No single sign-on access to the Data Grid; administrators have to log into every Grid component with different corresponding user names/passwords. In this paper we present a prototype of a unique web-based access interface for both Data Grid administrators and users. The interface has been designed to be user-friendly; it provides the necessary instruments to constantly monitor the current status of the Data Grid components and their contents from any location, contributing to longer system up-time.

  2. Experience with Remote Job Execution

    SciTech Connect

    Lynch, Vickie E; Cobb, John W; Green, Mark L; Kohl, James Arthur; Miller, Stephen D; Ren, Shelly; Smith, Bradford C; Vazhkudai, Sudharshan S

    2008-01-01

    The Neutron Science Portal at Oak Ridge National Laboratory submits jobs to the TeraGrid for remote job execution. The TeraGrid is a network of high performance computers supported by the US National Science Foundation. There are eleven partner facilities with over a petaflop of peak computing performance and sixty petabytes of long-term storage. Globus is installed on a local machine and used for job submission. The graphical user interface is produced by Java code that reads an XML file. After submission, the status of the job is displayed in a Job Information Service window, which queries Globus for the status. The output folder produced in the scratch directory of the TeraGrid machine is returned to the portal with the globus-url-copy command, which uses the GridFTP servers on the TeraGrid machines. This folder is copied from the stage-in directory of the community account to the user's results directory, where the output can be plotted using the portal's visualization services. The primary problem with remote job execution is diagnosing execution problems. We have daily tests of submitting multiple remote jobs from the portal. When these jobs fail on a computer, it is difficult to diagnose the problem from the Globus output. Successes and problems will be presented.
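
    The retrieval step described above is a recursive GridFTP copy; a sketch of driving it from Python (host and directory names are placeholders, and the recursive flag assumes a globus-url-copy build that supports it):

    ```python
    import subprocess

    def fetch_results(remote_host, remote_dir, local_dir):
        """Copy a completed job's output folder back from a TeraGrid machine
        with globus-url-copy over GridFTP. Directory paths must be absolute."""
        src = f"gsiftp://{remote_host}{remote_dir}/"
        dst = f"file://{local_dir}/"
        subprocess.run(["globus-url-copy", "-r", src, dst], check=True)
    ```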

  3. Are health workers motivated by income? Job motivation of Cambodian primary health workers implementing performance-based financing.

    PubMed

    Khim, Keovathanak

    2016-01-01

    Background Financial incentives are widely used in performance-based financing (PBF) schemes, but their contribution to health workers' incomes and job motivation is poorly understood. Cambodia undertook health sector reform from the middle of 2009 and PBF was employed as a part of the reform process. Objective This study examines job motivation for primary health workers (PHWs) under PBF reform in Cambodia and assesses the relationship between job motivation and income. Design A cross-sectional self-administered survey was conducted on 266 PHWs, from 54 health centers in the 15 districts involved in the reform. The health workers were asked to report all sources of income from public sector jobs and provide answers to 20 items related to job motivation. Factor analysis was conducted to identify the latent variables of job motivation. Factors associated with motivation were identified through multivariable regression. Results PHWs reported multiple sources of income and an average total income of US$190 per month. Financial incentives under the PBF scheme account for 42% of the average total income. PHWs had an index motivation score of 4.9 (on a scale from one to six), suggesting they had generally high job motivation that was related to a sense of community service, respect, and job benefits. Regression analysis indicated that income and the perception of a fair distribution of incentives were both statistically significant in association with higher job motivation scores. Conclusions Financial incentives used in the reform formed a significant part of health workers' income and influenced their job motivation. Improving job motivation requires fixing payment mechanisms and increasing the size of incentives. PBF is more likely to succeed when income, training needs, and the desire for a sense of community service are addressed and institutionalized within the health system.

  4. Are health workers motivated by income? Job motivation of Cambodian primary health workers implementing performance-based financing

    PubMed Central

    Khim, Keovathanak

    2016-01-01

    Background Financial incentives are widely used in performance-based financing (PBF) schemes, but their contribution to health workers’ incomes and job motivation is poorly understood. Cambodia undertook health sector reform from the middle of 2009 and PBF was employed as a part of the reform process. Objective This study examines job motivation for primary health workers (PHWs) under PBF reform in Cambodia and assesses the relationship between job motivation and income. Design A cross-sectional self-administered survey was conducted on 266 PHWs, from 54 health centers in the 15 districts involved in the reform. The health workers were asked to report all sources of income from public sector jobs and provide answers to 20 items related to job motivation. Factor analysis was conducted to identify the latent variables of job motivation. Factors associated with motivation were identified through multivariable regression. Results PHWs reported multiple sources of income and an average total income of US$190 per month. Financial incentives under the PBF scheme account for 42% of the average total income. PHWs had an index motivation score of 4.9 (on a scale from one to six), suggesting they had generally high job motivation that was related to a sense of community service, respect, and job benefits. Regression analysis indicated that income and the perception of a fair distribution of incentives were both statistically significant in association with higher job motivation scores. Conclusions Financial incentives used in the reform formed a significant part of health workers’ income and influenced their job motivation. Improving job motivation requires fixing payment mechanisms and increasing the size of incentives. PBF is more likely to succeed when income, training needs, and the desire for a sense of community service are addressed and institutionalized within the health system. PMID:27319575

  6. Microgrid Restraining Strategy Based on Improved DC Grid Connected DFIG Torque Ripple

    NASA Astrophysics Data System (ADS)

    Fei, Xia; Yang, Zhixiong; Zongze, Xia

    2017-05-01

    Since the stator-side voltage in the improved topology is generated by the modulation of the stator-side converter (SSC), the DFIG's electromagnetic torque ripples, especially under asymmetric stator-side faults, and amplifies the ripple of the power fed to the grid. A novel control method for the stator-side and rotor-side converters based on a reduced-order resonant controller (RORC) is proposed in this paper, improving the DFIG's torque and output power performance. The transfer functions of the stator current and torque control systems under RORC control are established, and the amplitude characteristics and stability of the RORC-controlled system are analysed. Simulation results in Matlab/Simulink verify the correctness and validity of the proposed method.

  7. Novel grid-based optical Braille conversion: from scanning to wording

    NASA Astrophysics Data System (ADS)

    Yoosefi Babadi, Majid; Jafari, Shahram

    2011-12-01

    Grid-based optical Braille conversion (GOBCO) is explained in this article. The grid-fitting technique involves processing scanned images taken from old hard-copy Braille manuscripts, recognising the dot patterns and converting them into English ASCII text documents inside a computer. The resulting words are verified against the relevant dictionary to produce the final output. The algorithms employed in this article can be easily modified for other visual pattern recognition systems and text extraction applications. This technique has several advantages, including simplicity of the algorithm, high speed of execution, the ability to help visually impaired and blind people work with fax machines and the like, and the ability to help sighted people with no prior knowledge of Braille understand hard-copy Braille manuscripts.
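
    The final stage of grid fitting is a lookup from each detected 2×3 dot cell to a character; a toy sketch of that decoding (dot detection and the full Braille table are omitted):

    ```python
    # Dots are ordered 1-3 down the left column, 4-6 down the right column.
    BRAILLE = {
        (1, 0, 0, 0, 0, 0): "a",
        (1, 1, 0, 0, 0, 0): "b",
        (1, 0, 0, 1, 0, 0): "c",
        (1, 0, 0, 1, 1, 0): "d",
    }

    def decode_cell(dots):
        """dots: six 0/1 flags for dots 1-6 found inside one grid cell."""
        return BRAILLE.get(tuple(dots), "?")

    print(decode_cell([1, 0, 0, 1, 1, 0]))  # -> "d"
    ```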

  8. A Global “Natural” Grid Model Based on the Morse Complex

    NASA Astrophysics Data System (ADS)

    Wang, Hongbin; Zhao, Xuesheng; Zhu, Xinying; Li, Jiebiao

    2016-11-01

    In the exploration and interpretation of extensive or global natural phenomena, such as environmental monitoring, climatic analysis, hydrological analysis, meteorological services and the simulation of sea level rise, knowledge about the shape properties of the earth's surface and terrain features is urgently needed. However, traditional discrete global grids (DGG) cannot directly provide it and are confronted with the challenge of rapid data volume growth as modern earth surveying technology develops. In this paper, a global "natural" grid (GNG) model based on the Morse complex is proposed, and a relatively comprehensive theoretical comparison with traditional DGG models is analysed in detail, along with some issues to be resolved in the future. Finally, the experimental and analysis results indicate that this distinct GNG model, built from DGG, is better suited to advances in geospatial data acquisition technology and to the interpretation of extensive or global natural phenomena.

  9. Price Response Can Make the Grid Robust: An Agent-based Discussion

    SciTech Connect

    Roop, Joseph M.; Fathelrahman, Eihab M.; Widergren, Steven E.

    2005-11-07

    There is considerable agreement that a more price responsive system would make for a more robust grid. This raises the issue of how the end-user can be induced to accept a system that relies more heavily on price signals than the current system. From a modeling perspective, how should the software ‘agent’ representing the consumer of electricity be modeled so that this agent exhibits some price responsiveness in a realistic manner? To address these issues, we construct an agent-based approach that is realistic in the sense that it can transition from the current system behavior to one that is more price responsive. Evidence from programs around the country suggests that there are ways to implement such a program that could add robustness to the grid.
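
    A minimal version of the price-responsive consumer agent argued for here is a demand curve with a small constant elasticity; the baseline load, reference price and elasticity below are illustrative.

    ```python
    class ConsumerAgent:
        """Toy price-responsive load: demand falls below the baseline as the
        price rises above a reference level (constant-elasticity response)."""

        def __init__(self, baseline_kw=2.0, ref_price=0.10, elasticity=-0.3):
            self.baseline_kw = baseline_kw
            self.ref_price = ref_price
            self.elasticity = elasticity

        def demand(self, price):
            return self.baseline_kw * (price / self.ref_price) ** self.elasticity

    agent = ConsumerAgent()
    print(agent.demand(0.10), agent.demand(0.20))  # baseline vs. high-price demand
    ```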

  10. A goal-directed spatial navigation model using forward trajectory planning based on grid cells.

    PubMed

    Erdem, Uğur M; Hasselmo, Michael

    2012-03-01

    A goal-directed navigation model is proposed based on forward linear look-ahead probe of trajectories in a network of head direction cells, grid cells, place cells and prefrontal cortex (PFC) cells. The model allows selection of new goal-directed trajectories. In a novel environment, the virtual rat incrementally creates a map composed of place cells and PFC cells by random exploration. After exploration, the rat retrieves memory of the goal location, picks its next movement direction by forward linear look-ahead probe of trajectories in several candidate directions while stationary in one location, and finds the one activating PFC cells with the highest reward signal. Each probe direction involves activation of a static pattern of head direction cells to drive an interference model of grid cells to update their phases in a specific direction. The updating of grid cell spiking drives place cells along the probed look-ahead trajectory similar to the forward replay during waking seen in place cell recordings. Directions are probed until the look-ahead trajectory activates the reward signal and the corresponding direction is used to guide goal-finding behavior. We report simulation results in several mazes with and without barriers. Navigation with barriers requires a PFC map topology based on the temporal vicinity of visited place cells and a reward signal diffusion process. The interaction of the forward linear look-ahead trajectory probes with the reward diffusion allows discovery of never-before experienced shortcuts towards a goal location.
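
    Stripped of the neural machinery, the decision rule amounts to probing candidate headings and keeping the one whose simulated trajectory collects the most reward; a schematic sketch in which the grid-cell phase updating is abstracted into a straight-line probe:

    ```python
    import math

    def pick_direction(pos, reward_at, n_directions=8, probe_len=20, step=0.5):
        """Probe straight look-ahead trajectories in candidate directions and
        return the heading whose probe accumulates the most reward signal."""
        best_theta, best_reward = 0.0, float("-inf")
        for k in range(n_directions):
            theta = 2 * math.pi * k / n_directions
            x, y = pos
            reward = 0.0
            for _ in range(probe_len):
                x += step * math.cos(theta)
                y += step * math.sin(theta)
                reward += reward_at(x, y)  # e.g. the diffused PFC reward signal
            if reward > best_reward:
                best_theta, best_reward = theta, reward
        return best_theta
    ```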

  11. Evaluation of a Positive Youth Development Program Based on the Repertory Grid Test

    PubMed Central

    Shek, Daniel T. L.

    2012-01-01

    The repertory grid test, based on personal construct psychology, was used to evaluate the effectiveness of Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong. One hundred and four program participants (n = 104) were randomly invited to complete a repertory grid based on personal construct theory in order to provide both quantitative and qualitative data for measuring self-identity changes after joining the program. Findings generally showed that the participants perceived that they understood themselves better and had stronger resilience after joining the program. Participants also saw themselves as closer to their ideal selves and other positive role figures (but farther away from a loser) after joining the program. This study provides additional support for the effectiveness of the Tier 1 Program of Project P.A.T.H.S. in the Chinese context. This study also shows that the repertory grid test is a useful evaluation method to measure self-identity changes in participants in positive youth development programs. PMID:22593680

  12. A Goal-Directed Spatial Navigation Model Using Forward Trajectory Planning Based on Grid Cells

    PubMed Central

    Erdem, Uğur Murat; Hasselmo, Michael E.

    2012-01-01

    A goal-directed navigation model is proposed based on forward linear look-ahead probe of trajectories in a network of head direction cells, grid cells, place cells, and prefrontal cortex (PFC) cells. The model allows selection of new goal-directed trajectories. In a novel environment, the virtual rat incrementally creates a map composed of place cells and PFC cells by random exploration. After exploration, the rat retrieves memory of the goal location, picks its next movement direction by forward linear look-ahead probe of trajectories in several candidate directions while stationary in one location, and finds the one activating PFC cells with the highest reward signal. Each probe direction involves activation of a static pattern of head direction cells to drive an interference model of grid cells to update their phases in a specific direction. The updating of grid cell spiking drives place cells along the probed look-ahead trajectory similar to the forward replay during waking seen in place cell recordings. Directions are probed until the look-ahead trajectory activates the reward signal and the corresponding direction is used to guide goal-finding behavior. We report simulation results in several mazes with and without barriers. Navigation with barriers requires a PFC map topology based on the temporal vicinity of visited place cells and a reward signal diffusion process. The interaction of the forward linear look-ahead trajectory probes with the reward diffusion allows discovery of never before experienced shortcuts towards a goal location. PMID:22393918

  13. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data points. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
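
    For intuition, the regular sparse grid construction can be enumerated in a few lines: keep only the hierarchical level combinations whose level sum is small, which thins the full tensor product grid dramatically. The level convention below (levels starting at 1, interior points only) is one common choice and may differ in detail from the paper's discretization.

```python
from itertools import product

def sparse_grid_levels(dim, level):
    """Level multi-indices l (each l_i >= 1) of a regular sparse grid
    with |l|_1 <= level + dim - 1, thinning the full tensor grid."""
    return [l for l in product(range(1, level + 1), repeat=dim)
            if sum(l) <= level + dim - 1]

def count_points(levels):
    # Each 1D hierarchical level l carries 2**(l-1) interior points,
    # so a multi-index l contributes prod(2**(l_i - 1)) = 2**(sum(l) - dim).
    return sum(2 ** (sum(l) - len(l)) for l in levels)

dim, level = 3, 5
sg = sparse_grid_levels(dim, level)
full = (2 ** level - 1) ** dim          # interior points of the full grid
print(f"sparse levels: {len(sg)}, sparse points: {count_points(sg)}, full points: {full}")
```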

  14. A New Wall Function Model for RANS Equations Based on Overlapping Grids

    NASA Astrophysics Data System (ADS)

    Lampropoulos, Nikolaos; Papadimitriou, Dimitrios; Zervogiannis, Thomas

    2013-04-01

    This paper presents a new numerical method for the modeling of turbulent flows based on a new wall model for computing Reynolds-Averaged-Navier-Stokes (RANS) equations with the Spalart-Allmaras (SA) turbulence model. The basic objective is the reduction of the total central processing unit (CPU) cost of the numerical simulation without harming the accuracy of the results. The main idea of this study is based on the use of two overlapping computational grids covering the two distinct regions of the flow (i.e., the boundary layer and the outer region), and the implementation of appropriate (different) numerical schemes in each case. The seamless cooperation of the grids in the iterative algorithm is achieved by defining an alternative wall function concept. The unstructured grid (UG) covering the outer region consists of mixed type elements (i.e., quadrilaterals and triangles), with relatively small degrees of anisotropy, on which the full set of Navier-Stokes (NS) along with the turbulent model (TM) equations are relaxed. The inner structured grid (SG), which aims at resolving the boundary layer, is a body-fitted mesh with high element density in the normal to the wall direction. The slow relaxation of the governing equations on anisotropic SGs is alleviated by using the Tridiagonal Matrix Algorithm (TDMA) and a block Lower Upper Method (LU). These prove to be quite suitable for the relaxation of the discretized equations on SGs, which consist of banded arrays in tensor form. The application of the proposed algorithm in a couple of benchmark cases proves its superiority over the High-Reynolds SA model with standard wall functions when both methods are compared with the (more costly) Low-Reynolds SA turbulence model and experimental results.
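
    The TDMA mentioned above (the Thomas algorithm) solves a tridiagonal system with one forward elimination sweep and one back substitution in O(n), which is why it suits the banded arrays that arise on structured, body-fitted grids. A minimal sketch, tested on a small 1D Poisson-style system:

```python
def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system:
    a: sub-diagonal (len n, a[0] unused), b: diagonal (len n),
    c: super-diagonal (len n, c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson-like test: -u'' = 1 on 4 interior nodes (h = 1); exact answer [2, 3, 3, 2]
print(tdma([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 1, 1, 1]))
```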

  15. SAM-GRID: A system utilizing grid middleware and SAM to enable full function grid computing

    NASA Astrophysics Data System (ADS)

    Baranovski, Andrew; Garzoglio, Gabriele; Lueking, Lee; Skow, Dane; Terekhov, Igor; Walker, Rodney

    2003-06-01

    We present a grid system, which is in development, employing an architecture comprising the primary functional components of job handling, data handling, and monitoring and information services. Each component is built using existing Grid middleware. Job handling utilizes the Condor Match Making Service to broker job submissions, Condor-G to schedule, and GRAM to submit and execute jobs on remote compute resources. The information services provide strategic information about the system, including a file replica catalogue, compute availability, and network data-throughput rate predictions, which are made available to the other components. Data handling services are provided by SAM, the data management system built for the DZero experiment at Fermilab, to optimize data delivery and to cache and replicate data as needed at the processing nodes. The SAM-Grid system is being built to give experiments in progress at Fermilab the ability to utilize worldwide computing resources to process enormous quantities of data for complex physics analyses.

  16. SoilGrids1km — Global Soil Information Based on Automated Mapping

    PubMed Central

    Hengl, Tomislav; de Jesus, Jorge Mendes; MacMillan, Robert A.; Batjes, Niels H.; Heuvelink, Gerard B. M.; Ribeiro, Eloi; Samuel-Rosa, Alessandro; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Gonzalez, Maria Ruiperez

    2014-01-01

    Background Soils are widely recognized as a non-renewable natural resource and as biophysical carbon sinks. As such, there is a growing requirement for global soil information. Although several global soil information systems already exist, these tend to suffer from inconsistencies and limited spatial detail. Methodology/Principal Findings We present SoilGrids1km — a global 3D soil information system at 1 km resolution — containing spatial predictions for a selection of soil properties (at six standard depths): soil organic carbon (g kg−1), soil pH, sand, silt and clay fractions (%), bulk density (kg m−3), cation-exchange capacity (cmol+/kg), coarse fragments (%), soil organic carbon stock (t ha−1), depth to bedrock (cm), World Reference Base soil groups, and USDA Soil Taxonomy suborders. Our predictions are based on global spatial prediction models which we fitted, per soil variable, using a compilation of major international soil profile databases (ca. 110,000 soil profiles), and a selection of ca. 75 global environmental covariates representing soil forming factors. Results of regression modeling indicate that the most useful covariates for modeling soils at the global scale are climatic and biomass indices (based on MODIS images), lithology, and taxonomic mapping units derived from conventional soil survey (Harmonized World Soil Database). Prediction accuracies assessed using 5–fold cross-validation were between 23–51%. Conclusions/Significance SoilGrids1km provide an initial set of examples of soil spatial data for input into global models at a resolution and consistency not previously available. Some of the main limitations of the current version of SoilGrids1km are: (1) weak relationships between soil properties/classes and explanatory variables due to scale mismatches, (2) difficulty to obtain covariates that capture soil forming factors, (3) low sampling density and spatial clustering of soil profile locations. However, as the SoilGrids system is highly automated and flexible, increasingly accurate predictions can be generated with each new update.

  17. SoilGrids1km--global soil information based on automated mapping.

    PubMed

    Hengl, Tomislav; de Jesus, Jorge Mendes; MacMillan, Robert A; Batjes, Niels H; Heuvelink, Gerard B M; Ribeiro, Eloi; Samuel-Rosa, Alessandro; Kempen, Bas; Leenaars, Johan G B; Walsh, Markus G; Gonzalez, Maria Ruiperez

    2014-01-01

    Soils are widely recognized as a non-renewable natural resource and as biophysical carbon sinks. As such, there is a growing requirement for global soil information. Although several global soil information systems already exist, these tend to suffer from inconsistencies and limited spatial detail. We present SoilGrids1km--a global 3D soil information system at 1 km resolution--containing spatial predictions for a selection of soil properties (at six standard depths): soil organic carbon (g kg-1), soil pH, sand, silt and clay fractions (%), bulk density (kg m-3), cation-exchange capacity (cmol+/kg), coarse fragments (%), soil organic carbon stock (t ha-1), depth to bedrock (cm), World Reference Base soil groups, and USDA Soil Taxonomy suborders. Our predictions are based on global spatial prediction models which we fitted, per soil variable, using a compilation of major international soil profile databases (ca. 110,000 soil profiles), and a selection of ca. 75 global environmental covariates representing soil forming factors. Results of regression modeling indicate that the most useful covariates for modeling soils at the global scale are climatic and biomass indices (based on MODIS images), lithology, and taxonomic mapping units derived from conventional soil survey (Harmonized World Soil Database). Prediction accuracies assessed using 5-fold cross-validation were between 23-51%. SoilGrids1km provide an initial set of examples of soil spatial data for input into global models at a resolution and consistency not previously available. Some of the main limitations of the current version of SoilGrids1km are: (1) weak relationships between soil properties/classes and explanatory variables due to scale mismatches, (2) difficulty to obtain covariates that capture soil forming factors, (3) low sampling density and spatial clustering of soil profile locations. However, as the SoilGrids system is highly automated and flexible, increasingly accurate predictions can be generated with each new update.

  18. A grid-based model for integration of distributed medical databases.

    PubMed

    Luo, Yongxing; Jiang, Lijun; Zhuang, Tian-ge

    2009-12-01

    Grid has emerged recently as an integration infrastructure for sharing and coordinated use of diverse resources in dynamic, distributed environments. In this paper, we present a prototype system for the integration of heterogeneous medical databases based on Grid technology, which provides a uniform access interface and an efficient query mechanism for different medical databases. After presenting the architecture of the prototype system, which employs the corresponding Grid services and middleware technologies, we analyze in detail its basic functional components, including OGSA-DAI, the metadata model, transaction management, and query processing, which cooperate with each other to enable uniform access to, and seamless integration of, the underlying heterogeneous medical databases. We then test the effectiveness and performance of the system through a query instance, analyze the experimental results, and discuss some issues relevant to practical medical applications. Although the prototype system has so far been implemented and tested in a simulated hospital information environment, the underlying principles are applicable to practical applications.

  19. Observation-based gridded runoff estimates for Europe (E-RUN version 1.1)

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2016-07-01

    River runoff is an essential climate variable as it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, there are to date no pan-European observation-based runoff estimates available. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid in Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are subsequently tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 12) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950-December 2015) on a 0.5° × 0.5° grid. The performance of the newly derived runoff estimates is assessed by cross-validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring. The newly derived data are made publicly available at doi:10.1594/PANGAEA.861371.
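
    A minimal sketch of the statistical step: relate gridded atmospheric covariates to observed runoff with a machine-learning regressor and score it by cross-validation. The synthetic variables and the choice of a random forest are illustrative assumptions, not the exact model of the record above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
precip = rng.gamma(2.0, 30.0, n)            # monthly precipitation (mm), synthetic
temp = rng.normal(8.0, 6.0, n)              # monthly mean temperature (degC), synthetic
runoff = 0.6 * precip - 2.0 * temp + rng.normal(0, 10, n)  # synthetic target

X = np.column_stack([precip, temp])
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, runoff, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", scores.round(2))
```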

  20. Grid-based steered thermodynamic integration accelerates the calculation of binding free energies.

    PubMed

    Fowler, Philip W; Jha, Shantenu; Coveney, Peter V

    2005-08-15

    The calculation of binding free energies is important in many condensed matter problems. Although formally exact computational methods have the potential to complement, add to, and even compete with experimental approaches, they are difficult to use and extremely time consuming. We describe a Grid-based approach for the calculation of relative binding free energies, which we call Steered Thermodynamic Integration calculations using Molecular Dynamics (STIMD), and its application to Src homology 2 (SH2) protein cell signalling domains. We show that the time taken to compute free energy differences using thermodynamic integration can be significantly reduced: potentially from weeks or months to days of wall-clock time. To be able to perform such accelerated calculations requires the ability to both run concurrently and control in real time several parallel simulations on a computational Grid. We describe how the RealityGrid computational steering system, in conjunction with a scalable classical MD code, can be used to dramatically reduce the time to achieve a result. This is necessary to improve the adoption of this technique and further allows more detailed investigations into the accuracy and precision of thermodynamic integration. Initial results for the Src SH2 system are presented and compared to a reported experimental value. Finally, we discuss the significance of our approach.
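
    The quadrature behind the method is simple: thermodynamic integration estimates ΔF = ∫₀¹ ⟨∂H/∂λ⟩ dλ from ensemble averages computed in independent λ windows, which is exactly what makes the windows natural candidates for concurrent, steerable Grid execution. A sketch with fabricated window averages standing in for the per-window MD runs:

```python
import numpy as np

# Lambda windows: each would normally be an independent (steerable) MD simulation.
lambdas = np.linspace(0.0, 1.0, 11)

def mean_dH_dlambda(lam, rng):
    # Stand-in for the ensemble average <dH/dlambda> from one window.
    return 10.0 * (lam - 0.3) ** 2 - 2.0 + rng.normal(0, 0.05)

rng = np.random.default_rng(1)
avg = [mean_dH_dlambda(lam, rng) for lam in lambdas]

# Trapezoidal quadrature over lambda gives the free energy difference.
delta_F = sum((avg[i] + avg[i + 1]) / 2 * (lambdas[i + 1] - lambdas[i])
              for i in range(len(lambdas) - 1))
print(f"estimated free energy difference: {delta_F:.3f} (arbitrary units)")
```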

  1. Supercontinuum optimization for dual-soliton based light sources using genetic algorithms in a grid platform.

    PubMed

    Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A

    2014-09-22

    We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
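
    A toy version of the optimization loop: a genetic algorithm searching over the three input-pulse parameters named above (wavelength, temporal width, peak power). The fitness function is a hypothetical stand-in; a real run would score the simulated supercontinuum spectrum against the two target peaks, with each evaluation dispatched to a Grid node.

```python
import random

# Parameter bounds are illustrative, not taken from the paper.
BOUNDS = {"wavelength_nm": (800, 1300), "width_fs": (50, 500), "power_w": (1e3, 1e5)}

def random_individual():
    return {k: random.uniform(*v) for k, v in BOUNDS.items()}

def fitness(ind):
    # Placeholder score: reward pulses near an arbitrary operating point.
    return -((ind["wavelength_nm"] - 1064) ** 2 / 1e4
             + (ind["width_fs"] - 200) ** 2 / 1e3
             + (ind["power_w"] - 2e4) ** 2 / 1e8)

def mutate(ind, rate=0.3):
    child = dict(ind)
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:                     # Gaussian perturbation, clamped
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.05 * (hi - lo))))
    return child

pop = [random_individual() for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]                                    # truncation selection
    pop = elite + [mutate(random.choice(elite)) for _ in range(15)]
print("best parameters:", {k: round(v, 1) for k, v in max(pop, key=fitness).items()})
```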

  2. High-Efficiency Food Production in a Renewable Energy Based Micro-Grid Power System

    NASA Technical Reports Server (NTRS)

    Bubenheim, David; Meiners, Dennis

    2016-01-01

    Controlled Environment Agriculture (CEA) systems can be used to produce high-quality, desirable food year round, and the fresh produce can positively contribute to the health and well-being of residents in communities with difficult supply logistics. While CEA has many positive outcomes for a remote community, the associated high electric demands have prohibited widespread implementation in what is typically an already fully subscribed power generation and distribution system. Recent advances in CEA technologies as well as renewable power generation, storage, and micro-grid management are increasing system efficiency and expanding the possibilities for enhancing community-supporting infrastructure without increasing demands for outside-supplied fuels. We will present examples of how new lighting, nutrient delivery, and energy management and control systems can enable significant increases in food production efficiency while maintaining high yields in CEA. Examples from Alaskan communities where initial incorporation of renewable power generation, energy storage, and grid management techniques have already reduced diesel fuel consumption for electric generation by more than 40% and expanded grid capacity will be presented. We will discuss how renewable power generation, efficient grid management to extract maximum community service per kW, and novel energy storage approaches can expand food production, water supply, waste treatment, sanitation, and other community support services without traditional increases in consumable fuels supplied from outside the community. These capabilities offer communities a range of choices for enhancing local services and infrastructure. The examples represent a synergy of technology advancement efforts to develop sustainable community support systems for future space-based human habitats and the practical implementation of infrastructure components to increase efficiency and enhance health and well-being in remote communities today and tomorrow.

  3. Grid-based methods for biochemical ab initio quantum chemical applications

    SciTech Connect

    Colvin, M.E.; Nelson, J.S.; Mori, E.

    1997-01-01

    Ab initio quantum chemical methods are seeing increased application in a large variety of real-world problems, including biomedical applications ranging from drug design to the understanding of environmental mutagens. The vast majority of these quantum chemical methods are "spectral", that is, they describe the charge distribution around the nuclear framework in terms of a fixed analytic basis set. Despite the additional complexity they bring, methods involving grid representations of the electron or solvent charge can provide more efficient schemes for evaluating spectral operators, inexpensive methods for calculating electron correlation, and methods for treating the electrostatic energy of solvation in polar solvents. The advantage of mixed or "pseudospectral" methods is that they allow individual non-linear operators in the partial differential equations, such as Coulomb operators, to be calculated in the most appropriate regime. Moreover, these molecular grids can be used to integrate empirical functionals of the electron density. These so-called density functional theory (DFT) methods are an extremely promising alternative to conventional post-Hartree-Fock quantum chemical methods. The introduction of a grid at the molecular solvent-accessible surface allows a very sophisticated treatment of a polarizable continuum solvent model (PCM). Where most PCM approaches use a truncated expansion of the solute's electric multipole expansion, e.g. net charge (Born model) or dipole moment (Onsager model), such a grid-based boundary-element method (BEM) yields a nearly exact treatment of the solute's electric field. This report describes the use of both DFT and BEM methods in several biomedical chemical applications.

  4. Strain analysis in Banda Sea using grid strain based on GPS observation

    NASA Astrophysics Data System (ADS)

    Herawati, Yola Asis; Meilano, Irwan; Sarsito, Dina Anggreni; Effendi, Jony

    2017-07-01

    Eastern Indonesia undergoes very high deformation due to tectonic activity in a triple junction area. Convergence between plates in Eastern Indonesia gives rise to several microblocks. Tectonic blocks, as one deformation phenomenon arising from the interaction between plates, can be understood using strain analysis, which describes the change of position, shape, and dimension of an object. This research uses 80 GPS stations from a previous study by Koulali et al. (2015) and 7 continuous GPS stations in the Bird's Head to calculate strain rates, in order to relate tectonic activity to strain rates in the Banda Sea and to identify block boundaries. The GPS data are processed with the GAMIT/GLOBK software to obtain a time series at each station. Strain rates are then computed with a software package named grid strain, which interpolates discretized geodetic measurements to produce strain rates on a grid; the data distribution and the algorithm in grid strain both influence the resulting strain rates. The calculated strain lies in the range -16.421×10⁻⁸ to -0.194×10⁻⁸ for the shortening parameter and 1.653×10⁻⁸ to 18.92×10⁻⁸ for the extension parameter. The strain analysis shows that strain rates can identify tectonic activity, but not block boundaries accurately. The Banda Block, Timor Block, and Bird's Head Block have different strain patterns, especially near their boundaries. Timor and the eastern part of the Banda Block are dominated by shortening, consistent with the back arc located there, while the western part of the Banda Block and most of the Bird's Head are dominated by very low shortening, consistent with the collision between the Eurasian and Australian Plates. Further analysis requires additional data, such as GPS site density, seismicity, and gravity data.
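
    The core computation can be sketched as a least-squares fit of a uniform velocity-gradient tensor to site velocities; the symmetric part is the strain-rate tensor, and its eigenvalues are the principal shortening (negative) and extension (positive) rates. The coordinates and velocities below are invented, and the grid strain package typically adds distance weighting around each grid node, which is omitted here.

```python
import numpy as np

# GPS site positions (km) and horizontal velocities (mm/yr), invented values.
sites = np.array([[0, 0], [50, 10], [20, 60], [80, 40]], dtype=float)
vels = np.array([[1.0, 0.5], [2.2, 0.3], [0.8, -0.9], [2.6, -0.5]])

# Model v_i = v0 + L @ x_i; unknowns: translation v0 (2) and gradient L (4).
G, d = [], []
for (x, y), (vx, vy) in zip(sites, vels):
    G.append([1, 0, x, y, 0, 0]); d.append(vx)
    G.append([0, 1, 0, 0, x, y]); d.append(vy)
m, *_ = np.linalg.lstsq(np.array(G), np.array(d), rcond=None)

L = np.array([[m[2], m[3]], [m[4], m[5]]])   # velocity gradient (mm/yr per km = 1e-6 / yr)
E = 0.5 * (L + L.T)                          # symmetric strain-rate tensor
vals, vecs = np.linalg.eigh(E)               # principal strain rates
print("principal strain rates (1e-6 / yr):", vals)  # negative = shortening, positive = extension
```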

  5. Integration of an MPP System into the INFN-GRID

    NASA Astrophysics Data System (ADS)

    Costa, A.; Calanducci, A. S.; Becciani, U.

    2005-12-01

    We are going to present the middleware changes we have made to integrate an IBM-SP parallel computer into the INFN-GRID and the results of the application runs made on the IBM-SP to test its operation within the grid. The IBM-SP is an 8-processor 1.1 GHz machine using the AIX 5.2 operating system. Its hardware architecture represents a major challenge for integration into the grid infrastructure because it does not support the LCFGng (Local ConFiGuration system Next Generation) facilities. In order to obtain the goal without the advantages of the LCFGng server (RPM based), we properly tuned and compiled the middleware on the IBM-SP: in particular, we installed the Grid Services toolkit and a scheduler for job execution and monitoring. The testing phase was successfully passed by submitting a set of MPI jobs through the grid onto the IBM-SP. Specifically the tests were made by using MARA, a public code for the analysis of light curve sequences, that was made accessible through the Astrocomp portal, a web based interface for astrophysical parallel codes. The IBM-SP integration into the INFN-GRID did not require us to stop production on the system. It can be considered as a demonstration case for the integration of machines using different operating systems.

  6. Incentive-compatible demand-side management for smart grids based on review strategies

    NASA Astrophysics Data System (ADS)

    Xu, Jie; van der Schaar, Mihaela

    2015-12-01

    Demand-side load management is able to significantly improve the energy efficiency of smart grids. Since the electricity production cost depends on the aggregate energy usage of multiple consumers, an important incentive problem emerges: self-interested consumers want to increase their own utilities by consuming more than the socially optimal amount of energy during peak hours since the increased cost is shared among the entire set of consumers. To incentivize self-interested consumers to take the socially optimal scheduling actions, we design a new class of protocols based on review strategies. These strategies work as follows: first, a review stage takes place in which a statistical test is performed based on the daily prices of the previous billing cycle to determine whether or not the other consumers schedule their electricity loads in a socially optimal way. If the test fails, the consumers trigger a punishment phase in which, for a certain time, they adjust their energy scheduling in such a way that everybody in the consumer set is punished due to an increased price. Using a carefully designed protocol based on such review strategies, consumers then have incentives to take the socially optimal load scheduling to avoid entering this punishment phase. We rigorously characterize the impact of deploying protocols based on review strategies on the system's as well as the users' performance and determine the optimal design (optimal billing cycle, punishment length, etc.) for various smart grid deployment scenarios. Even though this paper considers a simplified smart grid model, our analysis provides important and useful insights for designing incentive-compatible demand-side management schemes based on aggregate energy usage information in a variety of practical scenarios.

  7. Production of BaBar Skimmed Analysis Datasets Using the Grid

    SciTech Connect

    Brew, C.A.J.; Wilson, F.F.; Castelli, G.; Adye, T.; Roethel, W.; Luppi, E.; Andreotti, D.; Smith, D.; Khan, A.; Barrett, M.; Barlow, R.; Bailey, D.; /Manchester U.

    2011-11-10

    The BABAR Collaboration, based at the Stanford Linear Accelerator Center (SLAC), Stanford, US, has been performing physics reconstruction, simulation studies, and data analysis for 8 years using a number of compute farms around the world. Recent developments in Grid technologies could provide a way to manage the distributed resources in a single coherent structure. We describe enhancements to the BABAR experiment's distributed skimmed dataset production system to make use of European Grid resources and present the results with regard to BABAR's latest cycle of skimmed dataset production. We compare the benefits of local and Grid-based systems, the ease with which the system is managed, and the challenges of integrating the Grid with legacy software. We compare job success rates and manageability issues between Grid and non-Grid production.

  8. Development of a fully automated CFD system for three-dimensional flow simulations based on hybrid prismatic-tetrahedral grids

    SciTech Connect

    Berg, J.W. van der; Maseland, J.E.J.; Oskam, B.

    1996-12-31

    In this paper an assessment of CFD methods based on the underlying grid type is made. It is safe to say that emerging CFD methods based on hybrid body-fitted grids of tetrahedral and prismatic cells using unstructured data storage schemes have the potential to satisfy the basic requirements of problem-turnaround-time and accuracy for complex geometries. The CFD system described in this paper is based on the hybrid prismatic-tetrahedral grid approach. In an analysis it is shown that the cells in the prismatic layer have to satisfy a central symmetry property in order to obtain a second-order accurate approximation of the viscous terms in the Reynolds-averaged Navier-Stokes equations. Prismatic grid generation is demonstrated for the ONERA M6 wing-alone configuration and the AS28G wing/body configuration.

  9. Predictors of job satisfaction among nurses working in Ethiopian public hospitals, 2014: institution-based cross-sectional study.

    PubMed

    Semachew, Ayele; Belachew, Tefera; Tesfaye, Temamen; Adinew, Yohannes Mehretie

    2017-04-24

    Nurses play a pivotal role in determining the efficiency, effectiveness, and sustainability of health care systems. Nurses' job satisfaction plays an important role in the delivery of quality health care. There is a paucity of studies addressing job satisfaction among nurses in the public hospital setting in Ethiopia. Thus, this study aimed to assess job satisfaction and factors influencing it among nurses in Jimma zone public hospitals, southwestern Ethiopia. An institution-based census was conducted among 316 nurses working in Jimma zone public hospitals from March to April 2014. A structured self-administered questionnaire based on a modified version of the McCloskey/Mueller Satisfaction Scale was used. Data were entered using Epi Info version 3.5.3 statistical software and analyzed using the SPSS version 20 statistical package. Mean satisfaction scores were compared across independent variables using an independent sample t test and ANOVA. Bivariate and multivariable linear regressions were done. A total of 316 nurses were included, yielding a response rate of 92.67%. The overall mean job satisfaction score was 67.43 ± 13.85. One third (33.5%) of the study participants had a low level of job satisfaction. Mutual understanding at work and professional commitment showed significant positive relationships with overall job satisfaction, while working at an inpatient unit and workload were negatively associated. One third of nurses had a low level of job satisfaction. Professional commitment, workload, working unit, and mutual understanding at work predicted the outcome variable.

  10. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    PubMed

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
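
    Cohen's κ, used above to compare OSCAR's automatic codes against the expert coder, is straightforward to compute from two label sequences: observed agreement corrected for the agreement expected by chance. A self-contained sketch with invented SOC codes:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two label sequences (e.g., 4-digit SOC codes
    assigned automatically vs. by an expert coder)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n      # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2         # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical toy data: the two coders agree on 3 of 5 job titles.
oscar = ["2314", "5223", "2314", "9139", "3537"]
expert = ["2314", "5223", "2315", "9139", "3538"]
print(f"kappa = {cohens_kappa(oscar, expert):.2f}")
```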

  11. A robust multi-grid pressure-based algorithm for multi-fluid flow at all speeds

    NASA Astrophysics Data System (ADS)

    Darwish, M.; Moukalled, F.; Sekar, B.

    2003-04-01

    This paper reports on the implementation and testing, within a full non-linear multi-grid environment, of a new pressure-based algorithm for the prediction of multi-fluid flow at all speeds. The algorithm is part of the mass conservation-based algorithms (MCBA) group in which the pressure correction equation is derived from overall mass conservation. The performance of the new method is assessed by solving a series of two-dimensional two-fluid flow test problems varying from turbulent low Mach number to supersonic flows, and from very low to high fluid density ratios. Solutions are generated for several grid sizes using the single grid (SG), the prolongation grid (PG), and the full non-linear multi-grid (FMG) methods. The main outcomes of this study are: (i) a clear demonstration of the ability of the FMG method to tackle the added non-linearity of multi-fluid flows, which is manifested through the performance jump observed when using the non-linear multi-grid approach as compared to the SG and PG methods; (ii) the extension of the FMG method to predict turbulent multi-fluid flows at all speeds. The convergence history plots and CPU-times presented indicate that the FMG method is far more efficient than the PG method and accelerates the convergence rate over the SG method, for the problems solved and the grids used, by a factor reaching a value as high as 15.

  12. QoS Differential Scheduling in Cognitive-Radio-Based Smart Grid Networks: An Adaptive Dynamic Programming Approach.

    PubMed

    Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun

    2016-02-01

    As the next-generation power grid, smart grid will be integrated with a variety of novel communication technologies to support the explosive data traffic and the diverse requirements of quality of service (QoS). Cognitive radio (CR), which has the favorable ability to improve the spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in the CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee the differential QoS, the SGUs are assigned to have different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channels allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. By the online network training, the HDP can learn from the activities of primary users and SGUs, and adjust the scheduling decision to achieve the purpose of transmission delay minimization. Simulation results illustrate that the proposed priority policy ensures the low transmission delay of high priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing the differential QoS in smart grid.

  13. An Adaptive Integration Model of Vector Polyline to DEM Data Based on Spherical Degeneration Quadtree Grids

    NASA Astrophysics Data System (ADS)

    Zhao, X. S.; Wang, J. J.; Yuan, Z. Y.; Gao, Y.

    2013-10-01

    Traditional geometry-based approaches can maintain the characteristics of vector data. However, complex interpolation calculations limit their application to high-resolution and multi-source spatial data integration at the spherical scale in digital earth systems. To overcome this deficiency, an adaptive integration model of vector polylines and spherical DEM is presented. Firstly, the Degenerate Quadtree Grid (DQG), one of the partition models for global discrete grids, is selected as the basic framework for the adaptive integration model. Secondly, a novel shift algorithm is put forward based on DQG proximity search. The main idea of the shift algorithm is that a vector node in a DQG cell moves to the cell corner-point when the displayed area of the cell is smaller than or equal to a pixel of the screen, in order to find a new vector polyline approximating the original one; this avoids numerous interpolation calculations and achieves seamless integration. Detailed operation steps are elaborated and the complexity of the algorithm is analyzed. Thirdly, a prototype system has been developed using the VC++ language and the OpenGL 3D API. The ASTER GDEM data and DCW road data sets of Jiangxi province in China are selected to evaluate the performance. The results show that the time consumption of the shift algorithm is about 76% lower than that of the geometry-based approach. An analysis of the mean shift error along different dimensions has also been carried out. In the end, conclusions and future work on the integration of vector data and DEM based on discrete global grids are given.
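
    The heart of the shift algorithm can be sketched in planar form: once a cell's displayed size shrinks to a screen pixel or less, each vector node snaps to the nearest cell corner, so the drawn polyline coincides with grid corners and per-node DEM interpolation is avoided. The planar square cells and thresholds below are illustrative stand-ins for the spherical DQG cells.

```python
def shift_polyline(polyline, cell_size, pixel_size):
    """Shift-algorithm sketch: when a cell projects to at most one screen
    pixel, snap each vector node to the nearest cell corner (planar
    stand-in for the spherical degenerate-quadtree cells)."""
    out = []
    for x, y in polyline:
        if cell_size <= pixel_size:                  # cell invisible at this zoom
            x = round(x / cell_size) * cell_size     # snap to nearest corner
            y = round(y / cell_size) * cell_size
        if not out or out[-1] != (x, y):             # drop duplicate nodes
            out.append((x, y))
    return out

road = [(0.12, 0.33), (0.18, 0.41), (0.74, 0.92)]    # invented polyline
print(shift_polyline(road, cell_size=0.25, pixel_size=0.3))
```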

  14. New gridded daily climatology of Finland: Permutation-based uncertainty estimates and temporal trends in climate

    NASA Astrophysics Data System (ADS)

    Aalto, Juha; Pirinen, Pentti; Jylhä, Kirsti

    2016-04-01

    Long-term time series of key climate variables with a relevant spatiotemporal resolution are essential for environmental science. Moreover, such spatially continuous data, based on weather observations, are commonly used in, e.g., downscaling and bias correcting of climate model simulations. Here we conducted a comprehensive spatial interpolation scheme where seven climate variables (daily mean, maximum, and minimum surface air temperatures, daily precipitation sum, relative humidity, sea level air pressure, and snow depth) were interpolated over Finland at the spatial resolution of 10 × 10 km². More precisely, (1) we produced daily gridded time series (FMI_ClimGrid) of the variables covering the period of 1961-2010, with a special focus on evaluation and permutation-based uncertainty estimates, and (2) we investigated temporal trends in the climate variables based on the gridded data. National climate station observations were supplemented by records from the surrounding countries, and kriging interpolation was applied to account for topography and water bodies. For daily precipitation sum and snow depth, a two-stage interpolation with a binary classifier was deployed for an accurate delineation of areas with no precipitation or snow. A robust cross-validation indicated a good agreement between the observed and interpolated values especially for the temperature variables and air pressure, although the effect of seasons was evident. Permutation-based analysis suggested increased uncertainty toward northern areas, thus identifying regions with suboptimal station density. Finally, several variables had a statistically significant trend indicating a clear but locally varying signal of climate change during the last five decades.

  15. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that the new development of GPU architecture benefits the fluid dynamics computing significantly.

  16. The CMS integration grid testbed

    SciTech Connect

    Graham, Gregory E.

    2004-08-26

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Grid-wide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent-based MonALISA. Domain-specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two-month span in the fall of 2002, over 1 million official CMS GEANT-based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.

  17. Web-based interactive visualization in a Grid-enabled neuroimaging application using HTML5.

    PubMed

    Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar

    2012-01-01

    Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To allow certain interaction in the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web browser plugins. The possibilities of web-based visualization are presented along the FreeSurfer-pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging.

  18. Power-based control with integral action for wind turbines connected to the grid

    NASA Astrophysics Data System (ADS)

    Peña, R. R.; Fernández, R. D.; Mantz, R. J.; Battaiotto, P. E.

    2015-10-01

    In this paper, a power shaping control with integral action is employed to control active and reactive powers of wind turbines connected to the grid. As it is well known, power shaping allows finding a Lyapunov function which ensures stability. In contrast to other passivity-based control theories, the power shaping controller design allows to use easily measurable variables, such as voltages and currents which simplify the physical interpretation and, therefore, the controller synthesis. The strategy proposed is evaluated in the context of severe operating conditions, such as abrupt changes in the wind speed and voltage drops.

  19. A MPPT Algorithm Based PV System Connected to Single Phase Voltage Controlled Grid

    NASA Astrophysics Data System (ADS)

    Sreekanth, G.; Narender Reddy, N.; Durga Prasad, A.; Nagendrababu, V.

    2012-10-01

    Future ancillary services provided by photovoltaic (PV) systems could facilitate their penetration in power systems. In addition, low-power PV systems can be designed to improve the power quality. This paper presents a single-phase PV system that provides grid voltage support and compensation of harmonic distortion at the point of common coupling thanks to a repetitive controller. The power provided by the PV panels is controlled by a Maximum Power Point Tracking algorithm based on the incremental conductance method, specifically modified to control the phase of the PV inverter voltage. Simulation and experimental results validate the presented solution.
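
    The incremental conductance rule compares the incremental conductance dI/dV with the instantaneous conductance -I/V; the two are equal exactly at the maximum power point. Below is the generic textbook form of the rule on a toy PV curve; the paper's modification, which steers the phase of the inverter voltage instead of the operating voltage, is not reproduced.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_step=0.5):
    """One iteration of the incremental-conductance MPPT rule:
    at the maximum power point dI/dV = -I/V; otherwise move the
    operating voltage toward that condition."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return v                 # no change: stay put
        return v + v_step if di > 0 else v - v_step
    if di / dv == -i / v:
        return v                     # condition met: at the MPP
    return v + v_step if di / dv > -i / v else v - v_step

# Hypothetical PV curve: i = 8 - 0.02 * v**2 (toy model), MPP near v = 11.5 V.
v_prev, i_prev = 10.0, 8 - 0.02 * 10.0 ** 2
v = 10.5
for _ in range(10):
    i = 8 - 0.02 * v ** 2
    v, v_prev, i_prev = inc_cond_step(v, i, v_prev, i_prev), v, i
    print(f"v = {v_prev:.1f} V, p = {v_prev * i_prev:.2f} W")
```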

  20. A GridPix-based X-ray detector for the CAST experiment

    NASA Astrophysics Data System (ADS)

    Krieger, C.; Kaminski, J.; Lupberger, M.; Desch, K.

    2017-09-01

    The CAST experiment has been searching for axions and axion-like particles for more than 10 years. The continuous improvements in the detector designs have increased the physics reach of the experiment far beyond what was originally conceived. As part of this development, a new detector based on a GridPix readout had been developed in 2014 and was mounted on the CAST experiment during the end of the data taking period of 2014 and the complete period in 2015. We report on the detector design, its advantages and the performance during both periods.

  1. Classroom-based Interventions and Teachers' Perceived Job Stressors and Confidence: Evidence from a Randomized Trial in Head Start Settings.

    PubMed

    Zhai, Fuhua; Raver, C Cybele; Li-Grining, Christine

    2011-09-01

    Preschool teachers' job stressors have received increasing attention but have been understudied in the literature. We investigated the impacts of a classroom-based intervention, the Chicago School Readiness Project (CSRP), on teachers' perceived job stressors and confidence, as indexed by their perceptions of job control, job resources, job demands, and confidence in behavior management. Using a clustered randomized controlled trial (RCT) design, the CSRP provided multifaceted services to the treatment group, including teacher training and mental health consultation, which were accompanied by stress-reduction services and workshops. Overall, 90 teachers in 35 classrooms at 18 Head Start sites participated in the study. After adjusting for teacher and classroom factors and site fixed effects, we found that the CSRP had significant effects on the improvement of teachers' perceived job control and work-related resources. We also found that the CSRP decreased teachers' confidence in behavior management and had no statistically significant effects on job demands. Overall, we did not find significant moderation effects of teacher race/ethnicity, education, teaching experience, or teacher type. The implications for research and policy are discussed.

  2. Social adversity in adolescence increases the physiological vulnerability to job strain in adulthood: a prospective population-based study.

    PubMed

    Westerlund, Hugo; Gustafsson, Per E; Theorell, Töres; Janlert, Urban; Hammarström, Anne

    2012-01-01

    It has been argued that the association between job strain and health could be confounded by early life exposures, and studies have shown early adversity to increase individual vulnerability to later stress. We therefore investigated whether early life exposure to adversity increases the individual's physiological vulnerability to job strain in adulthood. In a population-based cohort (343 women and 330 men, 83% of the eligible participants), we examined the association of exposure to adversity in adolescence (measured at age 16) and job strain (measured at age 43) with allostatic load at age 43. Adversity was operationalised as an index comprising residential mobility and crowding, parental loss, parental unemployment, and parental physical and mental illness (including substance abuse). Allostatic load summarised body fat, blood pressure, inflammatory markers, glucose, blood lipids, and cortisol regulation. There was an interaction between adversity in adolescence and job strain (B = 0.09, 95% CI 0.02 to 0.16 after adjustment for socioeconomic status), particularly psychological demands, indicating that job strain was associated with increased allostatic load only among participants with adversity in adolescence. Job strain was associated with lower allostatic load in men (β = -0.20, 95% CI -0.35 to -0.06). Exposure to adversity in adolescence was associated with increased levels of biological stress among those reporting job strain in mid-life, indicating increased vulnerability to environmental stressors.

  3. Developing physical exposure-based back injury risk models applicable to manual handling jobs in distribution centers.

    PubMed

    Lavender, Steven A; Marras, William S; Ferguson, Sue A; Splittstoesser, Riley E; Yang, Gang

    2012-01-01

    Using our ultrasound-based "Moment Monitor," exposures to biomechanical low back disorder risk factors were quantified in 195 volunteers who worked in 50 different distribution center jobs. Low back injury rates, determined from a retrospective examination of each company's Occupational Safety and Health Administration (OSHA) 300 records over the 3-year period immediately prior to data collection, were used to classify each job's back injury risk level. The analyses focused on the factors differentiating the high-risk jobs (those having had 12 or more back injuries/200,000 hr of exposure) from the low-risk jobs (those defined as having no back injuries in the preceding 3 years). Univariate analyses indicated that measures of load moment exposure and force application could distinguish between high (n = 15) and low (n = 15) back injury risk distribution center jobs. A three-factor multiple logistic regression model capable of predicting high-risk jobs with very good sensitivity (87%) and specificity (73%) indicated that risk could be assessed using the mean across the sampled lifts of the peak forward and or lateral bending dynamic load moments that occurred during each lift, the mean of the peak push/pull forces across the sampled lifts, and the mean duration of the non-load exposure periods. A surrogate model, one that does not require the Moment Monitor equipment to assess a job's back injury risk, was identified although with some compromise in model sensitivity relative to the original model.
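
    A schematic of such a three-factor model: fit a logistic regression of the job risk class on the three exposure summaries named above, then read sensitivity and specificity off the confusion matrix. The data and coefficients are synthetic; this shows the generic technique, not the published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
n = 30                                   # one row per distribution-center job
moment = rng.normal(60, 15, n)           # mean peak dynamic load moment (Nm), synthetic
force = rng.normal(120, 30, n)           # mean peak push/pull force (N), synthetic
recovery = rng.normal(20, 8, n)          # mean non-load period duration (s), synthetic

# Generate synthetic risk labels from an invented logistic relationship.
logit = 0.05 * moment + 0.02 * force - 0.15 * recovery - 3.0
high_risk = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([moment, force, recovery])
clf = LogisticRegression().fit(X, high_risk)
tn, fp, fn, tp = confusion_matrix(high_risk, clf.predict(X), labels=[0, 1]).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```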

  4. Classroom-based Interventions and Teachers’ Perceived Job Stressors and Confidence: Evidence from a Randomized Trial in Head Start Settings

    PubMed Central

    Zhai, Fuhua; Raver, C. Cybele; Li-Grining, Christine

    2011-01-01

    Preschool teachers’ job stressors have received increasing attention but have been understudied in the literature. We investigated the impacts of a classroom-based intervention, the Chicago School Readiness Project (CSRP), on teachers’ perceived job stressors and confidence, as indexed by their perceptions of job control, job resources, job demands, and confidence in behavior management. Using a clustered randomized controlled trial (RCT) design, the CSRP provided multifaceted services to the treatment group, including teacher training and mental health consultation, which were accompanied by stress-reduction services and workshops. Overall, 90 teachers in 35 classrooms at 18 Head Start sites participated in the study. After adjusting for teacher and classroom factors and site fixed effects, we found that the CSRP had significant effects on the improvement of teachers’ perceived job control and work-related resources. We also found that the CSRP decreased teachers’ confidence in behavior management and had no statistically significant effects on job demands. Overall, we did not find significant moderation effects of teacher race/ethnicity, education, teaching experience, or teacher type. The implications for research and policy are discussed. PMID:21927538

  5. Evaluation of high grid strip densities based on the moiré artifact analysis for quality assurance: Simulation and experiment

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Park, C. K.; Lim, H. W.; Cho, H. S.; Lee, D. Y.; Lee, H. W.; Kim, K. S.; Park, S. Y.; Kim, G. A.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Woo, T. H.

    2017-09-01

    We have recently developed precise x-ray grids having strip densities in the range of 100 - 250 lines/inch by adopting a precision sawing process and a carbon interspace material for the demands of specific x-ray imaging techniques. However, quality assurance in grid manufacturing has not yet been satisfactorily conducted because grid strips of high strip density are often invisible in x-ray nondestructive testing with a flat-panel detector of ordinary pixel resolution (>100 μm). In this work, we propose a useful method to evaluate actual grid strip densities above the Nyquist sampling rate based on moiré artifact analysis. We performed a systematic simulation and experiment with several sample grids and a detector having a 143-μm pixel resolution to verify the proposed quality assurance method. According to our results, the relative differences between the nominal and the evaluated grid strip densities were within 0.2% and 1.8% in the simulation and experiment, respectively, which demonstrates that the proposed method is viable with an ordinary detector having a moderate pixel resolution for quality assurance in grid manufacturing.
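
    The evaluation rests on the standard aliasing relation: a grid of spatial frequency f_g sampled at detector frequency f_s appears as a moiré of frequency |f_g - round(f_g/f_s)·f_s|, so a measured moiré frequency constrains the actual strip density even above the Nyquist rate. A sketch with example numbers (the 143-μm pixel pitch comes from the record; the nominal density is arbitrary):

```python
IN_PER_MM = 1 / 25.4                  # inches per millimetre

f_s = 1 / 0.143                       # detector sampling frequency, samples/mm (143-um pixels)
nominal_lpi = 200                     # nominal strip density, lines/inch (example value)
f_g = nominal_lpi * IN_PER_MM         # grid frequency in lines/mm

k = round(f_g / f_s)                  # nearest multiple of the sampling frequency
f_moire = abs(f_g - k * f_s)          # aliased (moire) frequency seen in the image
print(f"grid: {f_g:.2f} lp/mm, Nyquist: {f_s / 2:.2f} lp/mm, moire: {f_moire:.3f} lp/mm")

# Inverse use: an observed moire frequency maps back to candidate densities.
for cand in (k * f_s - f_moire, k * f_s + f_moire):
    print(f"candidate strip density: {cand / IN_PER_MM:.1f} lines/inch")
```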

  6. A Study of ATLAS Grid Performance for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    Panitkin, Sergey; Fine, Valery; Wenaus, Torre

    2012-12-01

    In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining of data archived by the PanDA workload management system.

  7. CDF GlideinWMS usage in grid computing of high energy physics

    SciTech Connect

    Zvada, Marian; Benjamin, Doug; Sfiligoi, Igor; /Fermilab

    2010-01-01

    Many members of large science collaborations already have specialized grids available to advance their research, yet need ever more computing resources for data analysis. This has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment relies increasingly on glidein-based computing pools for data reconstruction, Monte Carlo production, and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor has a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting the Grid by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and the setup used, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10000 running jobs at a time.

  8. CDF GlideinWMS usage in Grid computing of high energy physics

    NASA Astrophysics Data System (ADS)

    Zvada, Marian; Benjamin, Doug; Sfiligoi, Igor

    2010-04-01

    Many members of large science collaborations already have specialized grids available to advance their research, yet need ever more computing resources for data analysis. This has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment relies increasingly on glidein-based computing pools for data reconstruction, Monte Carlo production, and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor has a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting the Grid by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and the setup used, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10000 running jobs at a time.

  9. Grid management

    NASA Technical Reports Server (NTRS)

    Hwang, Danny

    1992-01-01

    A computational environment that allows many Computational Fluid Dynamics (CFD) engineers to work on the same project exists in the Special Project Office (SPO). This environment enables several users to carry out grid generation tasks. A brief overview of the grid management system used by the engineers is given. Topics include the grid file naming system, the grid-generation procedure, grid storage, and the grid format standard.

  10. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the effectively received traffic data in a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform the fixed-parameter (sensing time and transmission time) algorithms, and the power cost is reduced efficiently.
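
    The sensing/transmission trade-off can be made concrete with the classical energy-detection sensing-throughput model: a longer sensing time lowers the false-alarm probability but shrinks the fraction of each frame left for transmission. The sketch below grid-searches the sensing time; all numbers (frame length, sampling rate, SNR, target detection probability) are illustrative assumptions, not the article's actual system model.

      import math
      from statistics import NormalDist

      # Frame length (s), sampling rate (Hz), SNR (-15 dB), target Pd, capacity (bit/s/Hz)
      T, fs, snr, Pd, C0 = 0.10, 6e6, 10 ** (-15 / 10), 0.9, 6.6

      def throughput(tau):
          """Average secondary throughput for sensing time tau."""
          q_inv_pd = NormalDist().inv_cdf(1.0 - Pd)            # Q^{-1}(Pd)
          x = math.sqrt(2 * snr + 1) * q_inv_pd + math.sqrt(tau * fs) * snr
          pf = 0.5 * math.erfc(x / math.sqrt(2))               # false-alarm prob. Q(x)
          return (T - tau) / T * C0 * (1.0 - pf)

      # Simple grid search over the sensing time within one frame.
      best_tau = max((t * 1e-4 for t in range(1, 1000)), key=throughput)
      print(f"optimal sensing time ~ {best_tau*1e3:.2f} ms, "
            f"throughput ~ {throughput(best_tau):.3f} bit/s/Hz")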

  11. Estimation of theoretical maximum speedup ratio for parallel computing of grid-based distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Liu, Junzhi; Zhu, A.-Xing; Qin, Cheng-Zhi

    2013-10-01

    The theoretical maximum speedup ratio (TMSR) can be used as a goal for improving parallel computing methods for distributed hydrological models. Different types of distributed hydrological models need different TMSR estimation methods because of the models' different computing characteristics. Existing TMSR estimation methods, such as those for sub-basin-based distributed hydrological models, are inappropriate for grid-based distributed hydrological models. In this paper, we propose a TMSR estimation method suitable for grid-based distributed hydrological models. With this method, TMSRs for hillslope processes and channel routing processes are calculated separately and then combined to obtain the overall TMSR. A branch-and-bound algorithm and a critical-path heuristic algorithm are used to estimate the TMSRs for parallel computing of hillslope processes and channel routing processes, respectively. The overall TMSR is calculated according to the proportions of computing time spent on these two types of processes. A preliminary application showed that the larger the number of sub-basins, the larger the TMSRs, and that compact watersheds had larger TMSRs than long, narrow watersheds.
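
    Reading the combination rule as a weighted harmonic mean (an Amdahl-style assumption on our part; the paper's exact formula may differ), the overall TMSR follows from the component TMSRs and the fractions of sequential computing time they account for:

      def overall_tmsr(s_hill, s_chan, p_hill):
          """Overall speedup bound when a fraction p_hill of the serial time
          is hillslope work (speedup s_hill) and the rest is channel routing
          (speedup s_chan)."""
          p_chan = 1.0 - p_hill
          return 1.0 / (p_hill / s_hill + p_chan / s_chan)

      # Hillslope processes parallelise well, channel routing poorly:
      print(overall_tmsr(s_hill=40.0, s_chan=4.0, p_hill=0.8))  # ~14.3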

  12. Grid-based continual analysis of molecular interior for drug discovery, QSAR and QSPR.

    PubMed

    Potemkin, A V; Grishina, M A; Potemkin, V A

    2017-02-07

    In 1979, R. D. Cramer and M. Milne made a first realization of the above-mentioned principles, attempting to compare molecules by aligning them in space and by mapping their molecular fields to a 3D grid. This approach was later developed as DYLOMMS (DYnamic Lattice-Oriented Molecular Modelling System). In 1984, H. Wold and S. Wold proposed the use of partial least squares (PLS) analysis, instead of principal component analysis, to correlate the field values with biological activities. Then, in 1988, the method called CoMFA (Comparative Molecular Field Analysis) was introduced and the corresponding software became commercially available. Since 1988, many 3D QSAR methods, algorithms and modifications have been introduced for solving virtual drug discovery problems (e.g., CoMSIA, CoMMA, HINT, HASL, GOLPE, GRID, PARM, Raptor, BiS, CiS, ConGO). All the methods can be divided into two groups (classes): (1) methods studying the exterior of molecules; (2) methods studying the interior of molecules. A series of grid-based computational technologies for Continual Molecular Interior analysis (CoMIn) is presented in the current paper. The grid-based analysis is fulfilled by means of a lattice construction, analogously to many other grid-based methods. The further continual elucidation of molecular structure is performed in various ways. (i) In terms of intermolecular interaction potentials, represented as a superposition of Coulomb and Van der Waals interactions and hydrogen bonds. All the potentials are well-known continual functions and their values can be determined at all lattice points for a molecule. (ii) In terms of quantum functions such as the electron density distribution, the Laplacian and Hamiltonian of the electron density distribution, the potential energy distribution, the distributions of the highest occupied and the lowest unoccupied molecular orbitals, and their superposition. To reduce time of calculations using quantum methods based on the
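
    As a minimal illustration of evaluating a continual property at every lattice point, the numpy sketch below computes the Coulomb potential of two hypothetical point charges on a regular 3D grid; the charges, box size and grid spacing are invented, and real CoMIn calculations would of course use richer potentials and quantum functions.

      import numpy as np

      # Hypothetical molecule: (charge, x, y, z) in arbitrary units.
      atoms = [(+1.0, 0.0, 0.0, 0.0), (-1.0, 1.5, 0.0, 0.0)]

      # Regular lattice of 20^3 points spanning a 6x6x6 box around the origin.
      ax = np.linspace(-3.0, 3.0, 20)
      X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")

      potential = np.zeros_like(X)
      for q, x0, y0, z0 in atoms:
          r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + (Z - z0) ** 2)
          potential += q / np.maximum(r, 1e-6)  # clamp the on-site singularity

      print(potential.shape, potential.min(), potential.max())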

  13. Adult Competency Education Kit. Basic Skills in Speaking, Math, and Reading for Employment. Part J. ACE Competency Based Job Descriptions: Sales Core Job Description; #36--Sales, Automotive Parts; #37--Sales, Retail; #38--Salesperson, Garden & Housewares; #39--Salesperson, Women's Garments.

    ERIC Educational Resources Information Center

    San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.

    This seventh of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Salesperson, Automotive Parts; Sales Clerk, Retail; Salesperson, Garden and Housewares; and Salesperson, Women's Garments. Each begins with a fact sheet that includes this information: occupational title,…

  14. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm that it is based on unstructured and irregular grids; structures that contain curved conductors or complex three-dimensional geometries can therefore be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.
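
    To illustrate the "series of sparse matrix-vector multiplications" structure in the simplest possible setting, the sketch below time-marches a 1-D leapfrog analogue of the dual-grid update, with the two difference operators prebuilt as sparse matrices. It is a toy stand-in for the 3-D polyhedral scheme, assuming unit material constants and fixed ends.

      import numpy as np
      from scipy import sparse

      n, dt = 200, 0.5                     # grid size, time step (dx = 1)
      D = sparse.diags([-1.0, 1.0], [0, 1], shape=(n - 1, n), format="csr")
      e = np.exp(-((np.arange(n) - n / 2) ** 2) / 50.0)  # initial E pulse
      h = np.zeros(n - 1)                  # H lives on the staggered dual grid

      for _ in range(200):                 # each update is one sparse matvec
          h -= dt * (D @ e)                # Faraday-like update on dual faces
          e[1:-1] += dt * (D.T @ h)[1:-1]  # Ampere-like update, ends held fixed
      print(float(np.abs(e).max()))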

  15. Virtual screening and scaffold hopping based on GRID molecular interaction fields.

    PubMed

    Ahlström, Marie M; Ridderström, Marianne; Luthman, Kristina; Zamora, Ismael

    2005-01-01

    In this study, a set of strategies for structure-based design using GRID molecular interaction fields (MIFs) to derive a pharmacophoric representation of a protein is reported. Thrombin, one of the key enzymes involved in the blood coagulation cascade, was chosen as the model system since abundant published experimental data are available for both crystal structures and structurally diverse sets of inhibitors. First, a virtual screening methodology was developed using either a pharmacophore representation of the protein based on GRID MIFs or GRID MIFs from the 3D structures of a set of chosen thrombin inhibitors. The search was done in a 3D multiconformation version of the Available Chemical Directory (ACD) database, which had been spiked with 262 known thrombin inhibitors (multiple conformers available per compound). The model managed to find 80% of the known thrombin inhibitors among the 74,291 conformers in the ACD by searching only 5% of the database; hence, a 15-fold enrichment of the library was achieved. Second, a scaffold hopping methodology was developed using GRID MIFs, giving the scaffold interaction pattern and the shape of the scaffold, together with the distance between the anchor points. The scaffolds reported by Dolle in the Journal of Combinatorial Chemistry summaries (2000 and 2001), and scaffolds built or derived from ligands co-complexed with the thrombin enzyme, were parameterized using a new set of descriptors and saved into a searchable database. The scaffold representation from the database was then compared to a template scaffold (from a thrombin crystal structure), and the thrombin-derived scaffolds included in the database were found among the top solutions. To validate the usefulness of the methodology to replace the template scaffold, the entire molecule was built (scaffold and side chains) and the resulting compounds were docked into the active site of thrombin. The docking solutions showed the same binding pattern as the

  16. Efficient Dynamic Replication Algorithm Using Agent for Data Grid

    PubMed Central

    Vashisht, Priyanka; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    In data grids, scientific and business applications produce huge volumes of data which need to be transferred among the distributed and heterogeneous nodes of the grid. Data replication provides a solution for managing data files efficiently in large grids. Data replication enhances data availability, which reduces the overall access time of a file. In this paper an algorithm, namely EDRA (Efficient Dynamic Replication Algorithm) using agents for data grids, has been proposed and implemented. EDRA performs dynamic replication in a hierarchical structure and takes scheduling parameters into account for the selection of the best replica. The scheduling parameters are bandwidth, load gauge, and computing capacity of the node. Scheduling in the data grid helps in reducing the data access time, and the load is distributed evenly over the nodes of the data grid by considering the scheduling parameters. EDRA is implemented using the data grid simulator OptorSim. The European Data Grid CMS test bed topology is used in this experiment. Simulation results are obtained by comparing BHR, LRU, No Replication, and EDRA, and show the efficiency of the EDRA algorithm in terms of mean job execution time, network usage, and storage usage per node. PMID:25028680
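
    A minimal sketch of scoring candidate replicas by the three scheduling parameters named above; the weights and the linear form are our assumptions, since the abstract does not give EDRA's exact decision function.

      def best_replica(nodes):
          """Pick the node maximising a weighted score: favour high bandwidth
          and computing capacity, penalise load. Weights are hypothetical."""
          def score(n):
              return 0.5 * n["bandwidth"] - 0.3 * n["load"] + 0.2 * n["capacity"]
          return max(nodes, key=score)

      nodes = [
          {"name": "A", "bandwidth": 100.0, "load": 0.7, "capacity": 16},
          {"name": "B", "bandwidth": 40.0, "load": 0.1, "capacity": 32},
      ]
      print(best_replica(nodes)["name"])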

  17. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems

    PubMed Central

    Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
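
    For readers unfamiliar with the optimizer itself, below is a minimal particle swarm sketch searching a two-dimensional "component size" vector; the quadratic cost is a placeholder for the paper's price/reliability objective, and all PSO constants are conventional defaults rather than the authors' settings.

      import random

      def cost(x):  # toy stand-in for annualised cost + reliability penalty
          return (x[0] - 3.0) ** 2 + (x[1] - 7.0) ** 2

      def pso(dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=10.0):
          pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
          vel = [[0.0] * dim for _ in range(n)]
          pbest = [p[:] for p in pos]
          gbest = min(pbest, key=cost)
          for _ in range(iters):
              for i in range(n):
                  for d in range(dim):
                      r1, r2 = random.random(), random.random()
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                      pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                  if cost(pos[i]) < cost(pbest[i]):
                      pbest[i] = pos[i][:]
              gbest = min(pbest, key=cost)
          return gbest

      print(pso())  # converges near [3, 7]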

  18. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization (VO) formation algorithm is proposed. It restrains bad performers effectively by considering the global experience of the evaluator, and it evaluates the direct trust relation between two grid nodes accurately by rationally consulting previous trust values. It also improves the reputation evaluation process of the PathTrust model by taking the inter-organizational trust relationship into account and combining it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes the required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake-transaction attacks and bad-mouthing attacks effectively. It provides a clear advantage in the design of a VO infrastructure.
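
    The weighted combination of trust sources can be stated in two lines; the three weights here are illustrative assumptions (the paper derives its own weighting), but the pattern — direct trust, recommended trust and inter-organizational reputation mixed into one score — is the one described above.

      def combined_trust(direct, recommended, inter_org, w=(0.5, 0.3, 0.2)):
          """Weighted mix of the three trust sources; weights sum to one."""
          assert abs(sum(w) - 1.0) < 1e-9
          return w[0] * direct + w[1] * recommended + w[2] * inter_org

      print(combined_trust(0.9, 0.6, 0.8))  # ~0.79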

  19. A grouping method based on grid density and relationship for crowd evacuation simulation

    NASA Astrophysics Data System (ADS)

    Li, Yan; Liu, Hong; Liu, Guang-peng; Li, Liang; Moore, Philip; Hu, Bin

    2017-05-01

    Psychological factors affect the movement of people in the competitive or panic mode of evacuation, in which the density of pedestrians is relatively large and the distance among them is small. In this paper, a crowd is divided into groups according to social relations to simulate the actual movement of crowd evacuation more realistically, and a group attraction is added to the social force model. The force of group attraction is the synthesis of two forces: one is the attraction among individuals generated by their social relations, which makes them gather, and the other is the attraction of the group leader on the individuals within the group, which ensures that the individuals follow the leader. The synthetic force determines the trajectory of individuals. The evacuation process is demonstrated using the improved social force model, in which individuals with close social relations gradually present closer and more coordinated action while following the leader. A grouping algorithm based on grid density and relationship is proposed, and computer simulations illustrate the features of the improved social force model. The definitions of the parameters involved in the algorithm are given, and the effect of the relational value on the grouping is tested. Reasonable numbers of grids and weights are selected. The effectiveness of the algorithm is shown through simulation experiments. A simulation platform is also established using the proposed grouping algorithm and the improved social force model for crowd evacuation simulation.

  20. Sparse-grid-based adaptive model predictive control of HL60 cellular differentiation.

    PubMed

    Noble, Sarah L; Wendel, Lindsay E; Donahue, Maia M; Buzzard, Gregery T; Rundell, Ann E

    2012-02-01

    Quantitative methods such as model-based predictive control are known to facilitate the design of strategies to manipulate biological systems. This study develops a sparse-grid-based adaptive model predictive control (MPC) strategy to direct HL60 cellular differentiation. Sparse-grid sampling and interpolation support a computationally efficient adaptive MPC scheme in which multiple data-consistent regions of the model parameter space are identified and used to calculate a control compromise. The algorithm is evaluated in silico with structural model mismatch. Simulations demonstrate how the multiscenario control strategy more effectively manages the mismatch compared to a single scenario approach. Furthermore, the controller is evaluated in vitro to differentiate HL60 cells in both normal and perturbed environments. The controller-derived input sequence successfully achieves and sustains the specified target level of granulocytes when implemented in the laboratory. The results and analysis given here imply that adoption of this experiment planning technique to direct cell differentiation within more complex tissue engineered constructs will require the use of a reasonably accurate mathematical model and an extension of this algorithm to multiobjective controller design. © 2011 IEEE

  1. Percolation-Based Replica Discovery in Peer-to-Peer Grid Infrastructures

    NASA Astrophysics Data System (ADS)

    Palmieri, Francesco

    Peer-to-peer Grids are collaborative distributed computing/data processing systems, characterized by large scale, heterogeneity, lack of central control, unreliable components and frequent dynamic changes in both topology and configuration. In such systems, it is desirable to maintain, and make widely accessible, timely and up-to-date information about the shared resources available to the active participants. Accordingly, we introduce a scalable searching framework for locating and retrieving dataset replica information in random unstructured peer-to-peer Grids built on the Internet, based on a widely known uniform caching and searching algorithm. This algorithm is based on bond percolation, a mathematical phase transition model well suited for random walk searches in random power-law networks, which automatically shields low-connectivity nodes from traffic and reduces total traffic so that it scales sub-linearly with network size. The proposed scheme is able to find the requested information reliably and efficiently, even if every node in the network starts with a unique set of contents as its shared resources.
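
    A percolation-style query can be sketched as a walk that crosses each edge independently with bond probability p; above the percolation threshold the query reaches a large cluster of peers. The tiny graph and the value of p below are illustrative only.

      import random

      edges = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 3), (1, 5)]
      p = 0.7  # bond probability: chance the query is forwarded over an edge

      def percolation_reach(start):
          reached, frontier = {start}, [start]
          while frontier:
              node = frontier.pop()
              for a, b in edges:
                  if node in (a, b):
                      other = b if node == a else a
                      if other not in reached and random.random() < p:
                          reached.add(other)
                          frontier.append(other)
          return reached

      random.seed(1)
      print(percolation_reach(0))  # nodes the replica query reached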

  2. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.

    PubMed

    Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.

  3. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research

  4. A brief comparison between grid based real space algorithms andspectrum algorithms for electronic structure calculations

    SciTech Connect

    Wang, Lin-Wang

    2006-12-01

    Quantum mechanical ab initio calculations constitute the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC, it is very useful to have a prediction of the future trends of ab initio calculations in these areas in order to better serve these communities. Such a prediction can help us decide which future computer architectures will be most useful for these communities, and what should be emphasized in future supercomputer procurements. As the size of the computers and the size of the simulated physical systems increase, there is a renewed interest in using the real space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation because of its limited communication requirements, compared with the spectrum method, where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N^3) scaling methods. These O(N) methods are usually based on orbitals localized in real space, which can be described more naturally by a real space basis. In this report, the author compares the real space methods with the traditional plane wave (PW) spectrum methods, discussing their technical pros and cons and possible future trends. For the real space methods, the author focuses on the regular grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in materials science simulation. As for chemical science, the predominant methods are still the Gaussian basis method and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on the density functional theory (DFT), which is the

  5. Model atmospheres for M (sub)dwarf stars. 1: The base model grid

    NASA Technical Reports Server (NTRS)

    Allard, France; Hauschildt, Peter H.

    1995-01-01

    We have calculated a grid of more than 700 model atmospheres valid for a wide range of parameters encompassing the coolest known M dwarfs, M subdwarfs, and brown dwarf candidates: 1500 K ≤ T_eff ≤ 4000 K, 3.5 ≤ log g ≤ 5.5, and -4.0 ≤ [M/H] ≤ +0.5. Our equation of state includes 105 molecules and up to 27 ionization stages of 39 elements. In the calculations of the base grid of model atmospheres presented here, we include over 300 molecular bands of four molecules (TiO, VO, CaH, FeH) in the JOLA approximation, the water opacity of Ludwig (1971), collision-induced opacities, b-f and f-f atomic processes, as well as about 2 million spectral lines selected from a list with more than 42 million atomic and 24 million molecular (H2, CH, NH, OH, MgH, SiH, C2, CN, CO, SiO) lines. High-resolution synthetic spectra are obtained using an opacity sampling method. The model atmospheres and spectra are calculated with the generalized stellar atmosphere code PHOENIX, assuming LTE, plane-parallel geometry, energy (radiative plus convective) conservation, and hydrostatic equilibrium. The model spectra give close agreement with observations of M dwarfs across a wide spectral range from the blue to the near-IR, with one notable exception: the fit to the water bands. We discuss several practical applications of our model grid, e.g., broadband colors derived from the synthetic spectra. In light of current efforts to identify genuine brown dwarfs, we also show how low-resolution spectra of cool dwarfs vary with surface gravity, and how the high-resolution line profile of the Li I resonance doublet depends on the Li abundance.

  6. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
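
    The default time integrator named above, the 3-stage/3rd-order strong-stability-preserving Runge-Kutta scheme, is compact enough to quote in full (Shu-Osher form); the toy scalar ODE used to exercise it below is ours.

      def ssp_rk3_step(u, dt, L):
          """One SSP-RK3 step for du/dt = L(u)."""
          u1 = u + dt * L(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
          return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

      # Toy check on du/dt = -u, u(0) = 1: one step of size 0.1.
      print(ssp_rk3_step(1.0, 0.1, lambda u: -u))  # ~exp(-0.1) = 0.9048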

  7. Climate Simulations based on a different-grid nested and coupled model

    NASA Astrophysics Data System (ADS)

    Li, Dan; Ji, Jinjun; Li, Yinpeng

    2002-05-01

    An atmosphere-vegetation interaction model (AVIM) has been coupled with a nine-layer General Circulation Model (GCM) of the Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (IAP/LASG), which is rhomboidally truncated at zonal wave number 15, to simulate global climatic mean states. AVIM is a model with two-way feedback between land surface processes and eco-physiological processes on land. As the first step toward coupling land with atmosphere completely, the physiological processes are fixed and only the physical part (generally named the SVAT (soil-vegetation-atmosphere-transfer) scheme) of AVIM is nested into the IAP/LASG L9R15 GCM. The ocean part of the GCM is prescribed and its monthly sea surface temperature (SST) takes the climatic mean values. With respect to the low resolution of the GCM, i.e., each grid cell spanning 7.5° in longitude and 4.5° in latitude, the vegetation is given a high resolution of 1.5° by 1.5° to nest and couple the fine grid cells of land with the coarse grid cells of the atmosphere. The coupled model has been integrated for 15 years and the mean of its last ten years of output was chosen for analysis. Compared with observed data and the NCEP reanalysis, the coupled model simulates the main characteristics of the global atmospheric circulation and the fields of temperature and moisture. In particular, the simulated precipitation and surface air temperature are sound. The work creates a solid basis for coupling climate models with the biosphere.

  8. Thread Group Multithreading: Accelerating the Computation of an Agent-Based Power System Modeling and Simulation Tool -- GridLAB-D

    SciTech Connect

    Jin, Shuangshuang; Chassin, David P.

    2014-01-06

    GridLAB-D™ is an open source next generation agent-based smart-grid simulator that provides unprecedented capability to model the performance of smart grid technologies. Over the past few years, GridLAB-D has been used to conduct important analyses of smart grid concepts, but it is still quite limited by its computational performance. In order to break through the performance bottleneck and meet the need for large-scale power grid simulations, we developed a thread group mechanism to implement highly granular multithreaded computation in GridLAB-D. We achieve close-to-linear speedups with the multithreaded version over the single-threaded version of the same code running on general-purpose multi-core commodity hardware for a benchmark simple house model. The performance of the multithreaded code shows favorable scalability properties and resource utilization, and much shorter execution times for large-scale power grid simulations.
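
    The thread-group pattern can be sketched in a few lines: partition the agent population into groups and update each group on its own thread within every step. GridLAB-D itself is C/C++ and its house models are far richer than the toy objects below; note also that in CPython the GIL limits true parallelism, so this Python analogue only illustrates the decomposition.

      from concurrent.futures import ThreadPoolExecutor

      houses = [{"id": i, "temp": 20.0} for i in range(1000)]  # toy agents

      def update_group(group):
          for h in group:      # each group is touched by exactly one thread
              h["temp"] += 0.01 * (22.0 - h["temp"])  # toy thermal dynamics

      def step(n_threads=4):
          size = (len(houses) + n_threads - 1) // n_threads
          groups = [houses[i:i + size] for i in range(0, len(houses), size)]
          with ThreadPoolExecutor(max_workers=n_threads) as pool:
              list(pool.map(update_group, groups))

      for _ in range(10):
          step()
      print(houses[0]["temp"])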

  9. Using Job-title Based Physical Exposures from O*NET in an Epidemiological Study of Carpal Tunnel Syndrome

    PubMed Central

    Evanoff, Bradley; Zeringue, Angelique; Franzblau, Alfred; Dale, Ann Marie

    2014-01-01

    Objective: We studied associations between job title based measures of force and repetition and incident carpal tunnel syndrome (CTS). Background: Job exposure matrices (JEMs) are not commonly used in studies of work-related upper extremity disorders. Methods: We enrolled newly-hired workers into a prospective cohort study. We assigned a Standard Occupational Classification (SOC) code to each job held and extracted physical work exposure variables from the Occupational Information Network (O*NET). CTS case definition required both characteristic symptoms and abnormal median nerve conduction. Results: 751 (67.8%) of 1107 workers completed follow-up evaluations. 31 subjects (4.4%) developed CTS during an average of 3.3 years of follow-up. Repetitive Motion, Static Strength, and Dynamic Strength from the most recent job held were all significant predictors of CTS when included individually as physical exposures in models adjusting for age, gender, and BMI. Similar results were found using time-weighted exposure across all jobs held during the study. Repetitive Motion, Static Strength, and Dynamic Strength were correlated, precluding meaningful analysis of their independent effects. Conclusion: This study found strong relationships between workplace physical exposures assessed via a JEM and CTS, after adjusting for age, gender, and BMI. Though job title based exposures are likely to result in significant exposure misclassification, they can be useful for large population studies where more precise exposure data are not available. Application: JEMs can be used as a measure of workplace physical exposures for some studies of musculoskeletal disorders. PMID:24669551

  10. Using job-title-based physical exposures from O*NET in an epidemiological study of carpal tunnel syndrome.

    PubMed

    Evanoff, Bradley; Zeringue, Angelique; Franzblau, Alfred; Dale, Ann Marie

    2014-02-01

    We studied associations between job-title-based measures of force and repetition and incident carpal tunnel syndrome (CTS). Job exposure matrices (JEMs) are not commonly used in studies of work-related upper-extremity disorders. We enrolled newly hired workers in a prospective cohort study. We assigned a Standard Occupational Classification (SOC) code to each job held and extracted physical work exposure variables from the Occupational Information Network (O*NET). CTS case definition required both characteristic symptoms and abnormal median nerve conduction. Of 1,107 workers, 751 (67.8%) completed follow-up evaluations. A total of 31 respondents (4.4%) developed CTS during an average of 3.3 years of follow-up. Repetitive motion, static strength, and dynamic strength from the most recent job held were all significant predictors of CTS when included individually as physical exposures in models adjusting for age, gender, and BMI. Similar results were found using time-weighted exposure across all jobs held during the study. Repetitive motion, static strength, and dynamic strength were correlated, precluding meaningful analysis of their independent effects. This study found strong relationships between workplace physical exposures assessed via a JEM and CTS, after adjusting for age, gender, and BMI. Though job-title-based exposures are likely to result in significant exposure misclassification, they can be useful for large population studies where more precise exposure data are not available. JEMs can be used as a measure of workplace physical exposures for some studies of musculoskeletal disorders.
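
    The JEM mechanics reduce to a table lookup plus duration weighting, as in the sketch below; the SOC codes, O*NET-style scores and job history are invented for illustration.

      # Hypothetical job-exposure matrix: SOC code -> O*NET-style scores.
      jem = {"51-2092": {"repetitive_motion": 78, "static_strength": 55},
             "43-4051": {"repetitive_motion": 45, "static_strength": 20}}

      def time_weighted(jobs):
          """jobs: list of (soc_code, months held); returns the
          duration-weighted mean of each exposure score."""
          total = sum(m for _, m in jobs)
          return {k: sum(jem[s][k] * m for s, m in jobs) / total
                  for k in next(iter(jem.values()))}

      print(time_weighted([("51-2092", 24), ("43-4051", 12)]))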

  11. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu

    2014-08-31

    Serial Windows-based programs are widely used in power utilities. For applications that require high-volume simulations, the single-CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low-cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of HPC computers are still Linux-based, and many HPC applications have been custom developed external to the core simulation engine without consideration for ease of use. This has created a technical gap for applying HPC-based tools to today's power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of a generic HPC platform for running Windows-based power grid programs in a Linux-based HPC environment. Preliminary results show that the runtime can be reduced from weeks to hours, improving work efficiency.

  12. Differential Evolution Based IDWNN Controller for Fault Ride-Through of Grid-Connected Doubly Fed Induction Wind Generators.

    PubMed

    Manonmani, N; Subbiah, V; Sivakumar, L

    2015-01-01

    The key objective of wind turbine development is to ensure that output power is continuously increased. Wind turbines (WTs) are required to supply the necessary reactive power to the grid during and after a fault to support the grid voltage. This paper introduces a novel heuristic-based controller module employing differential evolution and a neural network architecture to improve the low-voltage ride-through capability of grid-connected wind turbines equipped with doubly fed induction generators (DFIGs). Traditional crowbar-based systems were applied to protect the rotor-side converter during grid faults. Such a controller does not satisfy the desired requirement, since the DFIG, while the crowbar is connected, acts like a squirrel-cage machine and absorbs reactive power from the grid. This limitation is addressed in this paper by introducing heuristic controllers that remove the use of the crowbar and ensure that wind turbines supply the necessary reactive power to the grid during faults. The controller is designed to enhance the DFIG converter behaviour during grid faults and handles the fault ride-through without any additional hardware modules. The paper introduces a double wavelet neural network controller that is appropriately tuned by differential evolution. To validate the proposed controller module, a case study of a wind farm with 1.5 MW wind turbines connected to a 25 kV distribution system, exporting power to a 120 kV grid through a 30 km, 25 kV feeder, is carried out by simulation.

  13. Differential Evolution Based IDWNN Controller for Fault Ride-Through of Grid-Connected Doubly Fed Induction Wind Generators

    PubMed Central

    Manonmani, N.; Subbiah, V.; Sivakumar, L.

    2015-01-01

    The key objective of wind turbine development is to ensure that output power is continuously increased. Wind turbines (WTs) are required to supply the necessary reactive power to the grid during and after a fault to support the grid voltage. This paper introduces a novel heuristic-based controller module employing differential evolution and a neural network architecture to improve the low-voltage ride-through capability of grid-connected wind turbines equipped with doubly fed induction generators (DFIGs). Traditional crowbar-based systems were applied to protect the rotor-side converter during grid faults. Such a controller does not satisfy the desired requirement, since the DFIG, while the crowbar is connected, acts like a squirrel-cage machine and absorbs reactive power from the grid. This limitation is addressed in this paper by introducing heuristic controllers that remove the use of the crowbar and ensure that wind turbines supply the necessary reactive power to the grid during faults. The controller is designed to enhance the DFIG converter behaviour during grid faults and handles the fault ride-through without any additional hardware modules. The paper introduces a double wavelet neural network controller that is appropriately tuned by differential evolution. To validate the proposed controller module, a case study of a wind farm with 1.5 MW wind turbines connected to a 25 kV distribution system, exporting power to a 120 kV grid through a 30 km, 25 kV feeder, is carried out by simulation. PMID:26516636
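
    For readers unfamiliar with the tuning algorithm itself, here is a minimal DE/rand/1/bin differential-evolution sketch; the sphere objective stands in for the controller performance index actually optimised in the paper, and the control parameters F and CR are conventional defaults.

      import random

      def de(objective, dim=3, n=20, iters=200, F=0.8, CR=0.9, lo=-5.0, hi=5.0):
          pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
          for _ in range(iters):
              for i in range(n):
                  # Mutation: three distinct donors, none equal to i.
                  a, b, c = random.sample(
                      [p for j, p in enumerate(pop) if j != i], 3)
                  j_rand = random.randrange(dim)  # guarantee one crossed gene
                  trial = [a[d] + F * (b[d] - c[d])
                           if (random.random() < CR or d == j_rand)
                           else pop[i][d]
                           for d in range(dim)]
                  trial = [min(max(t, lo), hi) for t in trial]
                  if objective(trial) <= objective(pop[i]):  # greedy selection
                      pop[i] = trial
          return min(pop, key=objective)

      print(de(lambda x: sum(v * v for v in x)))  # converges near the origin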

  14. Predictors of Evidence-Based Practice Implementation, Job Satisfaction, and Group Cohesion Among Regional Fellowship Program Participants.

    PubMed

    Kim, Son Chae; Stichler, Jaynelle F; Ecoff, Laurie; Brown, Caroline E; Gallo, Ana-Maria; Davidson, Judy E

    2016-10-01

    A regional, collaborative evidence-based practice (EBP) fellowship program utilizing institution-matched mentors was offered to a targeted group of nurses from multiple local hospitals to implement unit-based EBP projects. The Advancing Research and Clinical Practice through Close Collaboration (ARCC) model postulates that strong EBP beliefs result in high EBP implementation, which in turn causes high job satisfaction and group cohesion among nurses. This study examined the relationships among EBP beliefs, EBP implementation, job satisfaction, group cohesion, and group attractiveness among the fellowship program participants. A total of 175 participants from three annual cohorts between 2012 and 2014 completed the questionnaires at the beginning of each annual session. The questionnaires included the EBP beliefs, EBP implementation, job satisfaction, group cohesion, and group attractiveness scales. There were positive correlations between EBP beliefs and EBP implementation (r = 0.47; p < .001), as well as EBP implementation and job satisfaction (r = 0.17; p = .029). However, no statistically significant correlations were found between EBP implementation and group cohesion, or group attractiveness. Hierarchical multiple regression models showed that EBP beliefs was a significant predictor of both EBP implementation (β = 0.33; p < .001) and job satisfaction (β = 0.25; p = .011). However, EBP implementation was not a significant predictor of job satisfaction, group cohesion, or group attractiveness. In multivariate analyses where demographic variables were taken into account, although EBP beliefs predicted job satisfaction, no significant relationship was found between EBP implementation and job satisfaction or group cohesion. Further studies are needed to confirm these unexpected study findings. © 2016 Sigma Theta Tau International.

  15. 3D inversion based on multi-grid approach of magnetotelluric data from Northern Scandinavia

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Smirnov, M.; Korja, T. J.; Egbert, G. D.

    2012-12-01

    In this work we investigate the geoelectrical structure of the cratonic margin of the Fennoscandian Shield by means of magnetotelluric (MT) measurements carried out in Northern Norway and Sweden during the summers of 2011-2012. The project Magnetotellurics in the Scandes (MaSca) focuses on the investigation of the crust, upper mantle and lithospheric structure in the transition zone from a stable Precambrian cratonic interior to a passive continental margin beneath the Caledonian Orogen and the Scandes Mountains in western Fennoscandia. Recent MT profiles in the central and southern Scandes indicated a large contrast in resistivity between the Caledonides and the Precambrian basement. These profiles revealed the alum shales as a highly conductive layer between the resistive Precambrian basement and the overlying Caledonian nappes. Additional measurements in the Northern Scandes were required. Altogether, data from 60 synchronous long period (LMT) and about 200 broad band (BMT) sites were acquired. The array stretches from Lofoten and Bodo (Norway) in the west to Kiruna and Skeleftea (Sweden) in the east, covering an area of 500x500 square kilometers. LMT sites were occupied for about two months, while most of the BMT sites were measured during one day. We have used a new multi-grid approach for 3D electromagnetic (EM) inversion and modelling. Our approach is based on the OcTree discretization, where the spatial domain is represented by rectangular cells, each of which might be subdivided (recursively) into eight sub-cells. In this simplified implementation the grid is refined only in the horizontal direction, uniformly in each vertical layer. Using the multi-grid approach we manage to have a high grid resolution near the surface (for instance, to tackle galvanic distortions) and a lower resolution at greater depth, as the EM fields decay in the Earth according to the diffusion equation. We also benefit in computational cost as the number of unknowns decreases. The multi-grid forward

  16. Power system voltage stability and agent based distribution automation in smart grid

    NASA Astrophysics Data System (ADS)

    Nguyen, Cuong Phuc

    2011-12-01

    Our interconnected electric power system is presently facing many challenges that it was not originally designed and engineered to handle. The increased inter-area power transfers, aging infrastructure, and old technologies, have caused many problems including voltage instability, widespread blackouts, slow control response, among others. These problems have created an urgent need to transform the present electric power system to a highly stable, reliable, efficient, and self-healing electric power system of the future, which has been termed "smart grid". This dissertation begins with an investigation of voltage stability in bulk transmission networks. A new continuation power flow tool for studying the impacts of generator merit order based dispatch on inter-area transfer capability and static voltage stability is presented. The load demands are represented by lumped load models on the transmission system. While this representation is acceptable in traditional power system analysis, it may not be valid in the future smart grid where the distribution system will be integrated with intelligent and quick control capabilities to mitigate voltage problems before they propagate into the entire system. Therefore, before analyzing the operation of the whole smart grid, it is important to understand the distribution system first. The second part of this dissertation presents a new platform for studying and testing emerging technologies in advanced Distribution Automation (DA) within smart grids. Due to the key benefits over the traditional centralized approach, namely flexible deployment, scalability, and avoidance of single-point-of-failure, a new distributed approach is employed to design and develop all elements of the platform. A multi-agent system (MAS), which has the three key characteristics of autonomy, local view, and decentralization, is selected to implement the advanced DA functions. The intelligent agents utilize a communication network for cooperation and

  17. IDL Grid Web Portal

    NASA Astrophysics Data System (ADS)

    Massimino, P.; Costa, A.

    2008-08-01

    Interactive Data Language (IDL) is a software environment for data analysis, visualization and cross-platform application development. The potential of IDL is well known in the academic scientific world, especially in the astronomical environment, where thousands of procedures are developed in IDL. The typical use of IDL is the interactive mode, but it is also possible to run IDL programs that do not require any interaction with the user, submitting them in batch or background modality. In the interactive mode the user immediately receives the images or other data produced during the running phase of the program; in batch or background mode, the user has to wait for the end of the program, sometimes for many hours or days, to obtain the images or data that IDL produces as output: in fact, in the Grid environment it is possible to access or retrieve data only after completion of the program. The work that we present gives flexibility to IDL procedures submitted to the Grid computing infrastructure. For this purpose we have developed an IDL Grid Web Portal that allows the user to access the Grid and to submit IDL programs, granting full job control and access to the images and data generated during the running phase, without waiting for completion. We have used PHP technology and we provide the same level of security that the Grid normally offers to its users. In this way, when the user notices that the intermediate results of the program are not those expected, he can stop the job, change the parameters to better satisfy the computational algorithm and resubmit the program, without consuming CPU time and other Grid resources. The IDL Grid Web Portal allows you to obtain IDL-generated images, graphics and data tables by using a normal browser. All communication between the user and the Grid resources occurs via the Web, as do the authentication phases. The IDL user does not have to change the program source much because the Portal will automatically introduce the appropriate modification before

  18. Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model

    NASA Astrophysics Data System (ADS)

    Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled

    2017-05-01

    The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: the assignment problem and the scheduling problem. In this paper, we propose to solve the FJSP with a hybrid metaheuristics-based clustered holonic multiagent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search into promising regions of the search space and to improve the quality of the final NGA population. The efficiency of our approach comes from the flexible selection of promising parts of the search space by the clustering operator after the genetic algorithm process, and from the tabu search intensification technique, which allows the search to restart from a set of elite solutions and attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
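
    Whatever metaheuristic explores the space, every candidate FJSP solution must be decoded into a makespan; the sketch below does this for a toy two-job instance, given a machine assignment and a global operation sequence (both invented for the example).

      def makespan(ops, assignment, sequence):
          """ops[j][o] maps machine -> duration; assignment[(j, o)] is the
          chosen machine; sequence is a global order of (job, op) pairs that
          respects the operation order within each job."""
          job_ready = {}    # when each job's previous operation finishes
          mach_ready = {}   # when each machine becomes free
          end = 0.0
          for j, o in sequence:
              m = assignment[(j, o)]
              start = max(job_ready.get(j, 0.0), mach_ready.get(m, 0.0))
              finish = start + ops[j][o][m]
              job_ready[j], mach_ready[m] = finish, finish
              end = max(end, finish)
          return end

      ops = {0: [{"M1": 3, "M2": 5}, {"M2": 2}],
             1: [{"M1": 4}, {"M1": 1, "M2": 3}]}
      assignment = {(0, 0): "M1", (0, 1): "M2", (1, 0): "M1", (1, 1): "M2"}
      print(makespan(ops, assignment, [(0, 0), (1, 0), (0, 1), (1, 1)]))  # 10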

  19. The visibility-based tapered gridded estimator (TGE) for the redshifted 21-cm power spectrum

    NASA Astrophysics Data System (ADS)

    Choudhuri, Samir; Bharadwaj, Somnath; Chatterjee, Suman; Ali, Sk. Saiyad; Roy, Nirupam; Ghosh, Abhik

    2016-12-01

    We present an improved visibility-based tapered gridded estimator (TGE) for the power spectrum of the diffuse sky signal. The visibilities are gridded to reduce the total computation time for the calculation, and tapered through a convolution to suppress the contribution from the outer regions of the telescope's field of view. The TGE also internally estimates the noise bias, and subtracts this out to give an unbiased estimate of the power spectrum. An earlier version of the 2D TGE for the angular power spectrum Cℓ is improved and then extended to obtain the 3D TGE for the power spectrum P(k) of the 21-cm brightness temperature fluctuations. Analytic formulas are also presented for predicting the variance of the binned power spectrum. The estimator and its variance predictions are validated using simulations of 150-MHz Giant Metrewave Radio Telescope (GMRT) observations. We find that the estimator accurately recovers the input model for the 1D spherical power spectrum P(k) and the 2D cylindrical power spectrum P(k⊥, k∥), and that the predicted variance is in reasonably good agreement with the simulations.
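
    A heavily simplified numpy sketch of the estimator's core steps — grid the visibilities, square, and subtract the internally estimated noise bias — is given below. A nearest-cell kernel replaces the actual tapering convolution, and the flat-sky visibilities and noise level are invented, so this only illustrates why the bias subtraction leaves an unbiased power estimate.

      import numpy as np

      rng = np.random.default_rng(0)
      n_vis, n_grid, sigma_n = 5000, 64, 0.1
      u = rng.uniform(-32, 32, size=(n_vis, 2))   # baselines in grid units
      vis = np.ones(n_vis) + sigma_n * rng.standard_normal(n_vis)  # sky + noise

      grid = np.zeros((n_grid, n_grid), dtype=complex)
      wsum = np.zeros((n_grid, n_grid))
      for (ux, uy), v in zip(u, vis):
          ix, iy = int(ux) + n_grid // 2, int(uy) + n_grid // 2
          w = 1.0  # nearest-cell kernel; a real TGE uses a tapering window here
          grid[ix, iy] += w * v
          wsum[ix, iy] += w

      occupied = wsum > 0
      power = np.abs(grid[occupied] / wsum[occupied]) ** 2
      noise_bias = sigma_n ** 2 / wsum[occupied]  # expected noise power per cell
      print((power - noise_bias).mean())          # ~1.0, the input sky power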

  20. Web-based visualization of gridded datasets using OceanBrowser

    NASA Astrophysics Data System (ADS)

    Barth, Alexander; Watelet, Sylvain; Troupin, Charles; Beckers, Jean-Marie

    2015-04-01

    OceanBrowser is a web-based visualization tool for gridded oceanographic data sets. Those data sets are typically four-dimensional (longitude, latitude, depth and time). OceanBrowser allows one to visualize horizontal sections at a given depth and time in order to examine the horizontal distribution of a given variable. It also offers the possibility to display the results on an arbitrary vertical section. To study the evolution of the variable in time, the horizontal and vertical sections can also be animated. Vertical sections can be generated along a path at a fixed distance from the coast or at a fixed ocean depth. The user can customize the plot by changing the color-map, the range of the color-bar and the type of the plot (linearly interpolated color, simple contours, filled contours), and download the current view as a simple image or as a Keyhole Markup Language (KML) file for visualization in applications such as Google Earth. The data products can also be accessed as NetCDF files and through OPeNDAP. Third-party layers from a web map service can also be integrated. OceanBrowser is used in the frame of the SeaDataNet project (http://gher-diva.phys.ulg.ac.be/web-vis/) and EMODNET Chemistry (http://oceanbrowser.net/emodnet/) to distribute gridded data sets interpolated from in situ observations using DIVA (Data-Interpolating Variational Analysis).

  1. Branch-based centralized data collection for smart grids using wireless sensor networks.

    PubMed

    Kim, Kwangsoo; Jin, Seong-il

    2015-05-21

    A smart grid is one of the most important applications in smart cities. In a smart grid, a smart meter acts as a sensor node in a sensor network, and a central device collects the power usage of every smart meter. This paper focuses on the centralized data collection problem of how to collect the power usage of every meter without collisions, in an environment in which time synchronization among smart meters is not guaranteed. To solve the problem, we divide the tree that the sensor network constructs into several branches. A conflict-free query schedule is generated based on the branches, and each power usage reading is collected according to the schedule. The proposed method has two important features: it shortens query processing time and avoids collisions between a query and query responses. We evaluate this method using the ns-2 simulator. The experimental results show that this method achieves both collision avoidance and fast query processing at the same time. The success rate of data collection at a sink node executing this method is 100%. Its running time is about 35 percent faster than that of the round-robin method, and its memory size is reduced to about 10% of that of the depth-first search method.
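
    The branch decomposition can be sketched as a depth-first enumeration of root-to-leaf paths, each polled in its own slot so responses travelling up one branch never meet the queries of another; the topology below is a toy, and the paper's branch construction and scheduling details may differ.

      tree = {"sink": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"],
              "a1": [], "a2": [], "b1": []}

      def branches(node, path=()):
          """Enumerate root-to-leaf branches by depth-first traversal."""
          path = path + (node,)
          kids = tree[node]
          if not kids:
              yield path
              return
          for k in kids:
              yield from branches(k, path)

      schedule = list(branches("sink"))
      for slot, branch in enumerate(schedule):
          print(f"slot {slot}: query meters {' -> '.join(branch[1:])}")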

  2. Grid-cell-based crop water accounting for the famine early warning system

    USGS Publications Warehouse

    Verdin, J.; Klaver, R.

    2002-01-01

    Rainfall monitoring is a regular activity of food security analysts for sub-Saharan Africa due to the potentially disastrous impact of drought. Crop water accounting schemes are used to track rainfall timing and amounts relative to phenological requirements, to infer water limitation impacts on yield. Unfortunately, many rain gauge reports are available only after significant delays, and the gauge locations leave large gaps in coverage. As an alternative, a grid-cell-based formulation for the water requirement satisfaction index (WRSI) was tested for maize in Southern Africa. Grids of input variables were obtained from remote sensing estimates of rainfall, meteorological models, and digital soil maps. The spatial WRSI was computed for the 1996–97 and 1997–98 growing seasons. Maize yields were estimated by regression and compared with a limited number of reports from the field for the 1996–97 season in Zimbabwe. Agreement at a useful level (r = 0.80) was observed. This is comparable to results from traditional analysis with station data. The findings demonstrate the complementary role that remote sensing, modelling, and geospatial analysis can play in an era when field data collection in sub-Saharan Africa is suffering an unfortunate decline.
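
    A common per-cell WRSI formulation (our assumption; operational FEWS details may differ) starts the index at 100 and reduces it by the accumulated seasonal water deficit relative to the crop's total requirement:

      import numpy as np

      requirement = np.array([30.0, 40.0, 50.0, 40.0])  # dekadal crop need, mm
      supply = np.array([30.0, 25.0, 35.0, 40.0])       # rainfall-limited supply, mm

      deficit = np.clip(requirement - supply, 0.0, None)
      wrsi = 100.0 * (1.0 - deficit.sum() / requirement.sum())
      print(round(wrsi, 1))  # 100 means no water limitation over the season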

  3. Lambda Station: On-demand flow based routing for data intensive Grid applications over multitopology networks

    SciTech Connect

    Bobyshev, A.; Crawford, M.; DeMar, P.; Grigaliunas, V.; Grigoriev, M.; Moibenko, A.; Petravick, D.; Rechenmacher, R.; Newman, H.; Bunn, J.; Van Lingen, F.; Nae, D.; Ravot, S.; Steenberg, C.; Su, X.; Thomas, M.; Xia, Y.; /Caltech

    2006-08-01

    Lambda Station is an ongoing project of Fermi National Accelerator Laboratory and the California Institute of Technology. The goal of this project is to design, develop and deploy network services for path selection, admission control and flow based forwarding of traffic among data-intensive Grid applications such as are used in High Energy Physics and other communities. Lambda Station deals with the last-mile problem in local area networks, connecting production clusters through a rich array of wide area networks. Selective forwarding of traffic is controlled dynamically at the demand of applications. This paper introduces the motivation of this project, design principles and current status. Integration of Lambda Station client API with the essential Grid middleware such as the dCache/SRM Storage Resource Manager is also described. Finally, the results of applying Lambda Station services to development and production clusters at Fermilab and Caltech over advanced networks such as DOE's UltraScience Net and NSF's UltraLight is covered.

  4. Simulation of single grid-based phase-contrast x-ray imaging (g-PCXI)

    NASA Astrophysics Data System (ADS)

    Lim, H. W.; Lee, H. W.; Cho, H. S.; Je, U. K.; Park, C. K.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Lee, D. Y.; Park, Y. O.; Woo, T. H.; Lee, S. H.; Chung, W. H.; Kim, J. W.; Kim, J. G.

    2017-04-01

    The single grid-based phase-contrast x-ray imaging (g-PCXI) technique, recently proposed by Wen et al. to retrieve absorption, scattering, and phase-gradient images from the raw image of the examined object, seems a practical method for phase-contrast imaging with great simplicity and minimal requirements on the setup alignment. In this work, we developed a useful simulation platform for g-PCXI and performed a simulation to demonstrate its viability. We also established a table-top setup for g-PCXI which consists of a focused-linear grid (200-lines/in strip density), an x-ray tube (100-μm focal spot size), and a flat-panel detector (48-μm pixel size) and performed a preliminary experiment with some samples to show the performance of the simulation platform. We successfully obtained phase-contrast x-ray images of much enhanced contrast from both the simulation and experiment, and the simulated contrast seemed similar to the experimental contrast, which demonstrates the performance of the developed simulation platform. We expect that the simulation platform will be useful for designing an optimal g-PCXI system.

  5. A Generalized Grid-Based Fast Multipole Method for Integrating Helmholtz Kernels.

    PubMed

    Parkkinen, Pauli; Losilla, Sergio A; Solala, Eelis; Toivanen, Elias A; Xu, Wen-Hua; Sundholm, Dage

    2017-02-14

    A grid-based fast multipole method (GB-FMM) for optimizing three-dimensional (3D) numerical molecular orbitals in the bubbles and cube double basis has been developed and implemented. The present GB-FMM method is a generalization of our recently published GB-FMM approach for numerically calculating electrostatic potentials and two-electron interaction energies. The orbital optimization is performed by integrating the Helmholtz kernel in the double basis. The steep part of the functions in the vicinity of the nuclei is represented by one-center bubbles functions, whereas the remaining cube part is expanded on an equidistant 3D grid. The integration of the bubbles part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kind, analogously to the numerical inward and outward integration approach for calculating two-electron interaction potentials in atomic structure calculations. The expressions and algorithms for massively parallel calculations on general purpose graphics processing units (GPGPU) are described. The accuracy and the correctness of the implementation have been checked by performing Hartree-Fock self-consistent-field calculations (HF-SCF) on H2, H2O, and CO. Our calculations show that an accuracy of 10^-4 to 10^-7 Eh can be reached in HF-SCF calculations on general molecules.
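
    For reference, the one-center expansion referred to here is a standard identity (stated in one common convention for the modified spherical Bessel functions, i_0(x) = sinh(x)/x and k_0(x) = e^{-x}/x), not a detail taken from the paper itself:

      \frac{e^{-\kappa|\mathbf{r}-\mathbf{r}'|}}{|\mathbf{r}-\mathbf{r}'|}
        = \kappa \sum_{l=0}^{\infty} (2l+1)\, i_l(\kappa r_<)\, k_l(\kappa r_>)\, P_l(\cos\gamma)

    where r_< = min(|r|, |r'|), r_> = max(|r|, |r'|), and γ is the angle between r and r'. Truncating the sum at a finite l is what makes grid-based numerical evaluation of the kernel practical.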

  6. Optimized Equivalent Staggered-grid FD Method for Elastic Wave Modeling Based on Plane Wave Solutions

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Huang, Jianping; Li, Zhenchun; Liao, Wenyuan; Qu, Luping; Li, Qingyang; Liu, Peijun

    2016-12-01

    In the finite difference (FD) method, numerical dispersion is the dominant factor influencing the accuracy of seismic modeling. Various optimized FD schemes for scalar wave modeling have been proposed to reduce grid dispersion, while optimized time-space domain FD schemes for elastic wave modeling have not been fully investigated yet. In this paper, an optimized FD scheme with Equivalent Staggered Grid (ESG) for elastic modeling has been developed. We start from the constant P- and S-wave speed elastic wave equations and then deduce analytical plane wave solutions in the wavenumber domain with the eigenvalue decomposition method. Based on the elastic plane wave solutions, three new time-space domain dispersion relations of ESG elastic modeling are obtained, represented by three equations corresponding to the P-, S- and converted-wave terms in the elastic equations, respectively. By using these new relations, we can study the dispersion errors of different spatial FD terms independently. The dispersion analysis showed that different spatial FD terms have different errors. It is therefore suggested that different FD coefficients be used to approximate the three spatial derivative terms. In addition, the relative dispersion error in the L2-norm is minimized by optimizing the FD coefficients using Newton's method. Synthetic examples have demonstrated that this new optimized FD scheme has superior accuracy for elastic wave modeling compared to Taylor-series expansion and optimized space domain FD schemes.
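
    As a toy illustration of the coefficient-optimization step (not the paper's full time-space domain ESG scheme), the sketch below fits 1-D staggered-grid first-derivative coefficients so that the numerical wavenumber matches the exact one in the L2 sense over a wavenumber band. This 1-D objective is linear in the coefficients, so plain least squares suffices here, whereas the paper minimizes the full nonlinear relative error with Newton's method.

      import numpy as np

      M, h = 4, 1.0                                 # 2M-point staggered stencil
      k = np.linspace(1e-3, 0.8 * np.pi / h, 400)   # wavenumber band to fit
      # Numerical wavenumber of a staggered first derivative:
      # (2/h) * sum_m c_m * sin((2m - 1) * k * h / 2)  ~  k
      A = np.stack([(2.0 / h) * np.sin((2 * m - 1) * k * h / 2)
                    for m in range(1, M + 1)], axis=1)
      c, *_ = np.linalg.lstsq(A, k, rcond=None)     # L2-optimal coefficients
      print("coefficients:", c)
      print("max relative dispersion error:", np.max(np.abs(A @ c - k) / k))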

  7. Global Parameter Optimization of CLM4.5 Using Sparse-Grid Based Surrogates

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Gu, L.

    2016-12-01

    Calibration of the Community Land Model (CLM) is challenging because of its model complexity, large parameter sets, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time. The goal of this study is to calibrate some of the CLM parameters in order to improve model projection of carbon fluxes. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first use advanced sparse grid (SG) interpolation to construct a surrogate system of the actual CLM model, and then we calibrate the surrogate model in the optimization process. As the surrogate model is a polynomial whose evaluation is fast, it can be efficiently evaluated a sufficiently large number of times in the optimization, which facilitates the global search. We calibrate five parameters against 12 months of GPP, NEP, and TLAI data from the U.S. Missouri Ozark (US-MOz) tower. The results indicate that an accurate surrogate model can be created for CLM4.5 with a relatively small number of SG points (i.e., CLM4.5 simulations), and that the application of the optimized parameters leads to a higher predictive capacity than the default parameter values in CLM4.5 for the US-MOz site.
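
    A minimal sketch of the surrogate-then-optimize workflow is given below, with a cheap quadratic least-squares fit standing in for the paper's sparse-grid polynomial interpolant and an analytic toy function standing in for a CLM4.5 run; every name here is a placeholder.

      import numpy as np
      from scipy.optimize import minimize

      def expensive_model(p):                      # stand-in for a CLM4.5 run
          return (p[0] - 0.3) ** 2 + 2.0 * (p[1] + 0.1) ** 2

      rng = np.random.default_rng(0)
      pts = rng.uniform(-1, 1, size=(25, 2))       # a small design: 25 "runs"
      y = np.array([expensive_model(p) for p in pts])

      def features(p):                             # quadratic polynomial basis
          x1, x2 = p[..., 0], p[..., 1]
          return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2],
                          axis=-1)

      coef, *_ = np.linalg.lstsq(features(pts), y, rcond=None)
      surrogate = lambda p: features(np.asarray(p)) @ coef   # cheap polynomial
      best = minimize(surrogate, x0=np.zeros(2))   # the search is now inexpensive
      print("calibrated parameters:", best.x)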

  8. Parallel level-set methods on adaptive tree-based grids

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic

    2016-10-01

    We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.
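
    The semi-Lagrangian update that removes the time-step restriction is conceptually simple; here is a minimal serial sketch on a uniform grid. The paper's contribution, which this sketch does not attempt, is performing this step scalably on distributed adaptive Quadtree/Octree grids.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def semi_lagrangian_step(phi, u, v, dt):
          """One level-set advection step on a (ny, nx) unit-spacing grid:
          trace back along the characteristic, phi_new(x) = phi(x - u*dt)."""
          ny, nx = phi.shape
          jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
          dep_y = jj - dt * v                      # departure points
          dep_x = ii - dt * u
          # Linear interpolation of phi at the departure points; no CFL limit.
          return map_coordinates(phi, [dep_y, dep_x], order=1, mode="nearest")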

  9. Multivariate Empirical Mode Decomposition Based Signal Analysis and Efficient-Storage in Smart Grid

    SciTech Connect

    Liu, Lu; Albright, Austin P; Rahimpour, Alireza; Guo, Jiandong; Qi, Hairong; Liu, Yilu

    2017-01-01

    Wide-area-measurement systems (WAMSs) are used in smart grid systems to enable the efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede the effective and efficient analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm. Higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing the data storage requirements.

  10. Optimized equivalent staggered-grid FD method for elastic wave modelling based on plane wave solutions

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Huang, Jianping; Li, Zhenchun; Liao, Wenyuan; Qu, Luping; Li, Qingyang; Liu, Peijun

    2017-02-01

    In the finite-difference (FD) method, numerical dispersion is the dominant factor influencing the accuracy of seismic modelling. Various optimized FD schemes for scalar wave modelling have been proposed to reduce grid dispersion, while optimized time-space domain FD schemes for elastic wave modelling have not been fully investigated yet. In this paper, an optimized FD scheme with Equivalent Staggered Grid (ESG) for elastic modelling has been developed. We start from the constant P- and S-wave speed elastic wave equations and then deduce analytical plane wave solutions in the wavenumber domain with the eigenvalue decomposition method. Based on the elastic plane wave solutions, three new time-space domain dispersion relations of ESG elastic modelling are obtained, represented by three equations corresponding to the P-, S- and converted-wave terms in the elastic equations, respectively. By using these new relations, we can study the dispersion errors of different spatial FD terms independently. The dispersion analysis showed that different spatial FD terms have different errors. It is therefore suggested that different FD coefficients be used to approximate the three spatial derivative terms. In addition, the relative dispersion error in the L2-norm is minimized by optimizing the FD coefficients using Newton's method. Synthetic examples have demonstrated that this new optimized FD scheme has superior accuracy for elastic wave modelling compared to Taylor-series expansion and optimized space domain FD schemes.

  11. Branch-Based Centralized Data Collection for Smart Grids Using Wireless Sensor Networks

    PubMed Central

    Kim, Kwangsoo; Jin, Seong-il

    2015-01-01

    A smart grid is one of the most important applications in smart cities. In a smart grid, a smart meter acts as a sensor node in a sensor network, and a central device collects power usage from every smart meter. This paper focuses on a centralized data collection problem of how to collect every power usage from every meter without collisions in an environment in which the time synchronization among smart meters is not guaranteed. To solve the problem, we divide a tree that a sensor network constructs into several branches. A conflict-free query schedule is generated based on the branches. Each power usage is collected according to the schedule. The proposed method has important features: shortening query processing time and avoiding collisions between a query and query responses. We evaluate this method using the ns-2 simulator. The experimental results show that this method can achieve both collision avoidance and fast query processing at the same time. The success rate of data collection at a sink node executing this method is 100%. Its running time is about 35% faster than that of the round-robin method, and its memory size is reduced to about 10% of that of the depth-first search method. PMID:26007734

  12. Adaptive Hierarchical Voltage Control of a DFIG-Based Wind Power Plant for a Grid Fault

    SciTech Connect

    Kim, Jinho; Muljadi, Eduard; Park, Jung-Wook; Kang, Yong Cheol

    2016-11-01

    This paper proposes an adaptive hierarchical voltage control scheme of a doubly-fed induction generator (DFIG)-based wind power plant (WPP) that can secure more reserve of reactive power (Q) in the WPP against a grid fault. To achieve this, each DFIG controller employs an adaptive reactive power to voltage (Q-V) characteristic. The proposed adaptive Q-V characteristic is temporally modified depending on the available Q capability of a DFIG; it is dependent on the distance from a DFIG to the point of common coupling (PCC). The proposed characteristic secures more Q reserve in the WPP than the fixed one. Furthermore, it allows DFIGs to promptly inject up to the Q limit, thereby improving the PCC voltage support. To avert an overvoltage after the fault clearance, washout filters are implemented in the WPP and DFIG controllers; they can prevent a surplus Q injection after the fault clearance by eliminating the accumulated values in the proportional-integral controllers of both controllers during the fault. Test results demonstrate that the scheme can improve the voltage support capability during the fault and suppress transient overvoltage after the fault clearance under scenarios of various system and fault conditions; therefore, it helps ensure grid resilience by supporting the voltage stability.

  13. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
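
    The two discretization approaches compared here are easy to state concretely. The sketch below contrasts cell-center evaluation with exact cell integration for a 1-D Gaussian kernel (the circular 2-D case factorizes into two such 1-D kernels); the per-application discrepancy between the two compounds under repeated convolution, which is the effect the study quantifies. This is an illustration of the general idea, not the paper's code.

      import numpy as np
      from math import erf, sqrt

      def cell_center(sigma, cells):
          x = np.arange(-cells, cells + 1, dtype=float)
          w = np.exp(-x**2 / (2 * sigma**2))       # evaluate at cell centers
          return w / w.sum()

      def cell_integrated(sigma, cells):
          edges = np.arange(-cells - 0.5, cells + 1.5)
          cdf = np.array([0.5 * (1 + erf(e / (sigma * sqrt(2)))) for e in edges])
          return np.diff(cdf)                      # exact mass inside each cell

      for sigma in (0.22, 1.0):
          cc, ci = cell_center(sigma, 5), cell_integrated(sigma, 5)
          print(sigma, np.abs(cc - ci).max())      # per-step discretization error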

  14. Implementation of data node in spatial information grid based on WS resource framework and WS notification

    NASA Astrophysics Data System (ADS)

    Zhang, Dengrong; Yu, Le

    2006-10-01

    An approach to constructing a data node in a spatial information grid (SIG) based on the Web Service Resource Framework (WSRF) and Web Service Notification (WSN) is described in this paper. Attention is paid to constructing and implementing the SIG resource layer, which is the most important part. A study of this layer finds that a common SIG architecture cannot sustain persistent interaction with the clients of its services, because it inherits the "stateless" and "not persistent" limitations of Web Services. A WSRF/WSN-based data node is designed to overcome these shortcomings. Three different access modes are employed to test the availability of this node. Experimental results demonstrate that this service node can successfully respond to standard OGC requests and return specific spatial data in different network environments, and that it is stateful, dynamic and persistent.

  15. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    NASA Technical Reports Server (NTRS)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  16. Direction of regeneration waves in grid-based models for forest dynamics.

    PubMed

    Schlicht, Robert; Iwasa, Yoh

    2006-09-21

    Progressing waves of regeneration are observed in forest ecosystems such as Shimagare fir forests. The patterns generated by lattice models for forest dynamics often show similar waves of disturbance and recovery. This paper introduces a method to detect and quantify the directional movement of these waves. The method is based only on the disturbance times of the sites and makes it possible to distinguish three types of wave patterns: patterns with global direction, patterns with local direction, and patterns without direction. We apply this to several grid-based models for forest dynamics which are evaluated analytically or by simulation. The results reveal a clear distinction among the models which earlier studies were not able to detect.

  17. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    NASA Technical Reports Server (NTRS)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  18. Calcium-based multi-element chemistry for grid-scale electrochemical energy storage

    PubMed Central

    Ouchi, Takanari; Kim, Hojong; Spatocco, Brian L.; Sadoway, Donald R.

    2016-01-01

    Calcium is an attractive material for the negative electrode in a rechargeable battery due to its low electronegativity (high cell voltage), double valence, earth abundance and low cost; however, the use of calcium has historically eluded researchers due to its high melting temperature, high reactivity and unfavorably high solubility in molten salts. Here we demonstrate a long-cycle-life calcium-metal-based rechargeable battery for grid-scale energy storage. By deploying a multi-cation binary electrolyte in concert with an alloyed negative electrode, calcium solubility in the electrolyte is suppressed and operating temperature is reduced. These chemical mitigation strategies also engage another element in energy storage reactions resulting in a multi-element battery. These initial results demonstrate how the synergistic effects of deploying multiple chemical mitigation strategies coupled with the relaxation of the requirement of a single itinerant ion can unlock calcium-based chemistries and produce a battery with enhanced performance. PMID:27001915

  19. High-Capacity Hydrogen-Based Green-Energy Storage Solutions For The Grid Balancing

    NASA Astrophysics Data System (ADS)

    D'Errico, F.; Screnci, A.

    One of the main current challenges in green-power storage and smart grids is the lack of effective solutions for accommodating the imbalance between renewable energy sources, which offer an intermittent electricity supply, and a variable electricity demand. Energy management systems will have to be provided in the near future, but they still represent a major challenge. Integrating intermittent renewable energy sources through safe and cost-effective energy storage systems based on solid-state hydrogen is achievable today thanks to some recent technology breakthroughs. An optimized solid-state storage method based on magnesium hydrides guarantees very rapid absorption and desorption kinetics. Coupled with electrolyzer technology, high-capacity storage of green hydrogen is therefore practicable. Beyond these aspects, magnesium has been emerging as an environmentally friendly energy storage medium to sustain the integration, monitoring and control of large quantities of energy (GWh) from high-capacity renewable generation in the EU.

  20. High-Capacity Hydrogen-Based Green-Energy Storage Solutions for the Grid Balancing

    NASA Astrophysics Data System (ADS)

    D'Errico, F.; Screnci, A.

    One of the main current challenges in green-power storage and smart grids is the lack of effective solutions for accommodating the imbalance between renewable energy sources, which offer an intermittent electricity supply, and a variable electricity demand. Energy management systems will have to be provided in the near future, but they still represent a major challenge. Integrating intermittent renewable energy sources through safe and cost-effective energy storage systems based on solid-state hydrogen is achievable today thanks to some recent technology breakthroughs. An optimized solid-state storage method based on magnesium hydrides guarantees very rapid absorption and desorption kinetics. Coupled with electrolyzer technology, high-capacity storage of green hydrogen is therefore practicable. Beyond these aspects, magnesium has been emerging as an environmentally friendly energy storage medium to sustain the integration, monitoring and control of large quantities of energy (GWh) from high-capacity renewable generation in the EU.

  1. Calcium-based multi-element chemistry for grid-scale electrochemical energy storage

    NASA Astrophysics Data System (ADS)

    Ouchi, Takanari; Kim, Hojong; Spatocco, Brian L.; Sadoway, Donald R.

    2016-03-01

    Calcium is an attractive material for the negative electrode in a rechargeable battery due to its low electronegativity (high cell voltage), double valence, earth abundance and low cost; however, the use of calcium has historically eluded researchers due to its high melting temperature, high reactivity and unfavorably high solubility in molten salts. Here we demonstrate a long-cycle-life calcium-metal-based rechargeable battery for grid-scale energy storage. By deploying a multi-cation binary electrolyte in concert with an alloyed negative electrode, calcium solubility in the electrolyte is suppressed and operating temperature is reduced. These chemical mitigation strategies also engage another element in energy storage reactions resulting in a multi-element battery. These initial results demonstrate how the synergistic effects of deploying multiple chemical mitigation strategies coupled with the relaxation of the requirement of a single itinerant ion can unlock calcium-based chemistries and produce a battery with enhanced performance.

  2. A new algorithm for grid-based hydrologic analysis by incorporating stormwater infrastructure

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Yi, Huiuk; Park, Hyeong-Dong

    2011-08-01

    We developed a new algorithm, the Adaptive Stormwater Infrastructure (ASI) algorithm, to incorporate ancillary data sets related to stormwater infrastructure into the grid-based hydrologic analysis. The algorithm simultaneously considers the effects of the surface stormwater collector network (e.g., diversions, roadside ditches, and canals) and underground stormwater conveyance systems (e.g., waterway tunnels, collector pipes, and culverts). The surface drainage flows controlled by the surface runoff collector network are superimposed onto the flow directions derived from a DEM. After examining the connections between inlets and outfalls in the underground stormwater conveyance system, the flow accumulation and delineation of watersheds are calculated based on recursive computations. Application of the algorithm to the Sangdong tailings dam in Korea revealed superior performance to that of a conventional D8 single-flow algorithm in terms of providing reasonable hydrologic information on watersheds with stormwater infrastructure.
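
    A minimal sketch of the underlying idea, under the assumption of a simple dictionary representation: D8 steepest-descent directions are derived from the DEM and then overridden wherever a mapped stormwater element (a roadside ditch, or an inlet-to-outfall culvert connection) dictates the flow path. The actual ASI algorithm involves considerably more bookkeeping than this.

      import numpy as np

      D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

      def d8_directions(dem):
          """Map each cell to its steepest-descent neighbour (cells with no
          downhill neighbour are left out, i.e. treated as sinks)."""
          ny, nx = dem.shape
          flow = {}
          for j in range(ny):
              for i in range(nx):
                  drops = [(dem[j, i] - dem[j + dj, i + di], (j + dj, i + di))
                           for dj, di in D8
                           if 0 <= j + dj < ny and 0 <= i + di < nx]
                  drop, target = max(drops)
                  if drop > 0:
                      flow[(j, i)] = target
          return flow

      def apply_infrastructure(flow, overrides):
          """overrides: dict inlet_cell -> outfall_cell (e.g. a culvert)."""
          flow.update(overrides)               # infrastructure wins over terrain
          return flow

      dem = np.array([[3., 2., 1.], [3., 2., 1.], [3., 2., 1.]])
      flow = apply_infrastructure(d8_directions(dem), {(0, 2): (2, 0)})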

  3. NaradaBrokering as Middleware Fabric for Grid-based Remote Visualization Services

    NASA Astrophysics Data System (ADS)

    Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.

    2003-12-01

    Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. The simplicity of these approaches is offset by problems such as single points of failure, scaling and availability. Furthermore, as the complexity, scale and scope of the services hosted on this paradigm increase, this approach becomes increasingly unsuitable. We propose a scheme built on top of a distributed brokering infrastructure, NaradaBrokering, which comprises a distributed network of broker nodes. These broker nodes are organized in a cluster-based architecture that can scale to very large sizes. The broker network is resilient to broker failures and efficiently routes interactions to entities that expressed an interest in them. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, mediate access to specialized datasets and finally those that manage the execution of specified tasks. There could be multiple instances of each of these services and the system ensures that load for a given service is distributed efficiently over these service instances. Among the features provided in our approach are efficient discovery of services and asynchronous interactions between services and service requestors (which could themselves be other services). Entities need not be online during the execution of the service request. The system also ensures that entities can be notified about task executions, partial results and failures that might have taken place during service execution. The system also facilitates specification of task overrides, distribution of execution results to alternate devices (which were not used to originally request service execution) and to multiple users. These RVS services could of course be either OGSA (Open Grid Services Architecture) based Grid services or traditional

  4. FermiGrid - experience and future plans

    SciTech Connect

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Timm, S.; Yocum, D.; /Fermilab

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure: the successes and the problems.

  5. Comparison between staggered grid finite-volume and edge-based finite-element modelling of geophysical electromagnetic data on unstructured grids

    NASA Astrophysics Data System (ADS)

    Jahandari, Hormoz; Ansari, SeyedMasoud; Farquharson, Colin G.

    2017-03-01

    This study compares two finite-element (FE) and three finite-volume (FV) schemes which use unstructured tetrahedral grids for the modelling of electromagnetic (EM) data. All these schemes belong to a group of differential methods where the electric field is defined along the edges of the elements. The FE and FV schemes are based on both the EM-field and the potential formulations of Maxwell's equations. The EM-field FE scheme uses edge-based (vector) basis functions while the potential FE scheme uses vector and scalar basis functions. All the FV schemes use staggered tetrahedral-Voronoï grids. Three examples are used for comparisons in terms of accuracy and in terms of the computation resources required by generic iterative and direct solvers for solving the problems. Two of these examples represent survey scenarios with electric and magnetic sources and the results are compared with those from the literature while the third example is a comparison against analytical solutions for an electric dipole source. Exactly the same mesh is used for all examples to allow for direct comparison of the various schemes. The results show that while the FE and FV schemes are comparable in terms of accuracy and computation resources, the FE schemes are slightly more accurate but also more expensive than the FV schemes.

  6. Adaptive grid-based confidence assessment for synthetic optoelectronic images by Physical Reasonable Infrared Scene Simulation Engine (PRISSE)

    NASA Astrophysics Data System (ADS)

    Wu, Xin; Sun, Hao; Ma, Xiangchao; Wang, Yucheng; Han, Yiping

    2017-09-01

    Like the visible spectrum, synthetic infrared scenes reflect the invisible world of infrared features. Propagation of typical infrared radiation involves a variety of sources of different sizes, shapes, intensities, roughness, temperature, etc., all of which affect the fidelity of the synthetic images. Assessing the confidence of a synthetic infrared scene is therefore not as intuitive as evaluating the quality of a visible image. An adaptive grid-based method is proposed in this paper for similarity assessments between synthetic infrared images and the corresponding real infrared images, performed on a grid-by-grid basis. Different from many traditional methods, each grid in our work is weighted by a value that is simulated by a 2D Gamma distribution. Introducing adaptive grids and exerting a weighting value on each grid are the distinguishing aspects of our method. To investigate the effectiveness of our method, an experiment was conducted to take real mid-wavelength infrared (MWIR) images, and the corresponding synthetic MWIR images were simulated by the Physical Reasonable Infrared Scene Simulation Engine (PRISSE). The confidence of the similarity assessments produced by our method is then compared to some widely used traditional assessment methods.

  7. sLORETA allows reliable distributed source reconstruction based on subdural strip and grid recordings.

    PubMed

    Dümpelmann, Matthias; Ball, Tonio; Schulze-Bonhage, Andreas

    2012-05-01

    Source localization based on invasive recordings by subdural strip and grid electrodes is a topic of increasing interest. This simulation study addresses the question of which factors are relevant for reliable source reconstruction based on sLORETA. MRI and electrode positions of a patient undergoing invasive presurgical epilepsy diagnostics were the basis of sLORETA simulations. A boundary element head model derived from the MRI was used for the simulation of electrical potentials and source reconstruction. Focal dipolar sources distributed on a regular three-dimensional lattice and spatiotemporally distributed patches served as input for simulation. In addition to the distance between original and reconstructed source maxima, the activation volume of the reconstruction and the correlation of time courses between the original and reconstructed sources were investigated. Simulations were supplemented by the localization of the patient's spike activity. For noise-free simulated data, sLORETA achieved results with zero localization error. Added noise diminished the percentage of reliable source localizations with a localization error ≤15 mm to 67.8%. Only for source positions close to the electrode contacts did the activation volume correctly represent focal generators. Time courses of original and reconstructed sources were significantly correlated. The case study results showed accurate localization. sLORETA is a distributed source model which can be applied for reliable grid- and strip-based source localization. For distant source positions, overestimation of the extent of the generator has to be taken into account. sLORETA-based source reconstruction has the potential to improve the localization of distributed generators in presurgical epilepsy diagnostics and cognitive neuroscience.

  8. A novel approach to optimize workflow in grid-based teleradiology applications.

    PubMed

    Yılmaz, Ayhan Ozan; Baykal, Nazife

    2016-01-01

    This study proposes an infrastructure with a reporting workflow optimization algorithm (RWOA) in order to interconnect facilities, reporting units and radiologists on a single access interface, to increase the efficiency of the reporting process by decreasing the medical report turnaround time, and to increase the quality of medical reports by determining the optimum match between the inspection and the radiologist in terms of subspecialty, workload and response time. A workflow-centric network architecture with an enhanced caching, querying and retrieving mechanism is implemented by seamlessly integrating a Grid Agent and a Grid Manager into conventional digital radiology systems. The inspection and radiologist attributes are modelled using a hierarchical ontology structure. Attribute preferences rated by radiologists and technical experts are formed into reciprocal matrices, and weights for entities are calculated utilizing the Analytic Hierarchy Process (AHP). The assignment alternatives are processed by relation-based semantic matching (RBSM) and Integer Linear Programming (ILP). The results are evaluated based on both real case applications and simulated process data in terms of subspecialty, response time and workload success rates. Results obtained using simulated data are compared with the outcomes obtained by applying Round Robin, Shortest Queue and Random distribution policies. The proposed algorithm is also applied to process data from a real teleradiology application in which the medical reporting workflow was performed based on manual assignments by the chief radiologist for 6225 inspections. RBSM gives the highest subspecialty success rate, and integrating ILP with RBSM ratings as RWOA provides a better response time and workload distribution success rate. RWOA-based image delivery also prevents bandwidth-, storage- or hardware-related bottlenecks and latencies. When compared with a real case teleradiology application where inspection assignments were performed manually, the proposed

  9. Predicting the level of job satisfaction based on hardiness and its components among nurses with tension headache.

    PubMed

    Mahdavi, A; Nikmanesh, E; AghaeI, M; Kamran, F; Zahra Tavakoli, Z; Khaki Seddigh, F

    2015-01-01

    Nurses are the most significant part of human resources in a sanitary and health system. Job satisfaction results in the enhancement of organizational productivity, employee commitment to the organization, and the assurance of the employee's physical and mental health. The present research was conducted with the aim of predicting the level of job satisfaction based on hardiness and its components among nurses with tension headache. The research method was correlational. The population consisted of all the nurses with tension headache who were referred to the relevant specialists in Tehran. The sample consisted of 50 individuals who were chosen by using the convenience sampling method and were assessed by using the research tools of the "Job Satisfaction Test" of Davis, Lofkvist and Weiss and the "Personal Views Survey" of Kobasa. The data analysis was carried out by using the Pearson correlation coefficient and regression analysis. The research findings demonstrated that the correlation coefficient between "hardiness" and "job satisfaction" was 0.506, and this coefficient was significant at the 0.01 level. Moreover, it was found that the sense of commitment and challenge were the stronger predictors of job satisfaction among the components of hardiness for nurses with tension headache, and that about 16% of the variance of job satisfaction could be explained by these two components (sense of commitment and challenge).

  10. Predicting the level of job satisfaction based on hardiness and its components among nurses with tension headache

    PubMed Central

    Mahdavi, A; Nikmanesh, E; AghaeI, M; Kamran, F; Zahra Tavakoli, Z; Khaki Seddigh, F

    2015-01-01

    Nurses are the most significant part of human resources in a sanitary and health system. Job satisfaction results in the enhancement of organizational productivity, employee commitment to the organization, and the assurance of the employee's physical and mental health. The present research was conducted with the aim of predicting the level of job satisfaction based on hardiness and its components among nurses with tension headache. The research method was correlational. The population consisted of all the nurses with tension headache who were referred to the relevant specialists in Tehran. The sample consisted of 50 individuals who were chosen by using the convenience sampling method and were assessed by using the research tools of the “Job Satisfaction Test” of Davis, Lofkvist and Weiss and the “Personal Views Survey” of Kobasa. The data analysis was carried out by using the Pearson correlation coefficient and regression analysis. The research findings demonstrated that the correlation coefficient between “hardiness” and “job satisfaction” was 0.506, and this coefficient was significant at the 0.01 level. Moreover, it was found that the sense of commitment and challenge were the stronger predictors of job satisfaction among the components of hardiness for nurses with tension headache, and that about 16% of the variance of “job satisfaction” could be explained by these two components (sense of commitment and challenge). PMID:28316713

  11. Calibrating a population-based job-exposure matrix using inspection measurements to estimate historical occupational exposure to lead for a population-based cohort in Shanghai, China

    PubMed Central

    Koh, Dong-Hee; Bhatti, Parveen; Coble, Joseph B.; Stewart, Patricia A; Lu, Wei; Shu, Xiao-Ou; Ji, Bu-Tian; Xue, Shouzheng; Locke, Sarah J.; Portengen, Lutzen; Yang, Gong; Chow, Wong-Ho; Gao, Yu-Tang; Rothman, Nathaniel; Vermeulen, Roel; Friesen, Melissa C.

    2012-01-01

    The epidemiologic evidence for the carcinogenicity of lead is inconsistent and requires improved exposure assessment to estimate risk. We evaluated historical occupational lead exposure for a population-based cohort of women (n=74,942) by calibrating a job-exposure matrix (JEM) with lead fume (n=20,084) and lead dust (n=5,383) measurements collected over four decades in Shanghai, China. Using mixed-effect models, we calibrated intensity JEM ratings to the measurements using fixed-effects terms for year and JEM rating. We developed job/industry-specific estimates from the random-effects terms for job and industry. The model estimates were applied to subjects’ jobs when the JEM probability rating was high for either job or industry; remaining jobs were considered unexposed. The models predicted that exposure increased monotonically with JEM intensity rating and decreased 20–50-fold over time. The cumulative calibrated JEM estimates and job/industry-specific estimates were highly correlated (Pearson correlation=0.79–0.84). Overall, 5% of the person-years and 8% of the women were exposed to lead fume; 2% of the person-years and 4% of the women were exposed to lead dust. The most common lead-exposed jobs were manufacturing electronic equipment. These historical lead estimates should enhance our ability to detect associations between lead exposure and cancer risk in future epidemiologic analyses. PMID:22910004

  12. Calibrating a population-based job-exposure matrix using inspection measurements to estimate historical occupational exposure to lead for a population-based cohort in Shanghai, China.

    PubMed

    Koh, Dong-Hee; Bhatti, Parveen; Coble, Joseph B; Stewart, Patricia A; Lu, Wei; Shu, Xiao-Ou; Ji, Bu-Tian; Xue, Shouzheng; Locke, Sarah J; Portengen, Lutzen; Yang, Gong; Chow, Wong-Ho; Gao, Yu-Tang; Rothman, Nathaniel; Vermeulen, Roel; Friesen, Melissa C

    2014-01-01

    The epidemiologic evidence for the carcinogenicity of lead is inconsistent and requires improved exposure assessment to estimate risk. We evaluated historical occupational lead exposure for a population-based cohort of women (n=74,942) by calibrating a job-exposure matrix (JEM) with lead fume (n=20,084) and lead dust (n=5,383) measurements collected over four decades in Shanghai, China. Using mixed-effect models, we calibrated intensity JEM ratings to the measurements using fixed-effects terms for year and JEM rating. We developed job/industry-specific estimates from the random-effects terms for job and industry. The model estimates were applied to subjects' jobs when the JEM probability rating was high for either job or industry; remaining jobs were considered unexposed. The models predicted that exposure increased monotonically with JEM intensity rating and decreased 20-50-fold over time. The cumulative calibrated JEM estimates and job/industry-specific estimates were highly correlated (Pearson correlation=0.79-0.84). Overall, 5% of the person-years and 8% of the women were exposed to lead fume; 2% of the person-years and 4% of the women were exposed to lead dust. The most common lead-exposed jobs were manufacturing electronic equipment. These historical lead estimates should enhance our ability to detect associations between lead exposure and cancer risk in future epidemiologic analyses.

  13. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  14. Informatic infrastructure for Climatological and Oceanographic data based on THREDDS technology in a Grid environment

    NASA Astrophysics Data System (ADS)

    Tronconi, C.; Forneris, V.; Santoleri, R.

    2009-04-01

    CNR-ISAC-GOS is responsible for the Mediterranean Sea satellite operational system in the framework of the MOON Partnership. This Observing System acquires satellite data and produces Near Real Time, Delayed Time and Re-analysis Ocean Colour and Sea Surface Temperature products covering the Mediterranean and Black Seas and regional basins. In the framework of several projects (MERSEA, PRIMI, Adricosm Star, SeaDataNet, MyOcean, ECOOP), GOS is producing climatological/satellite datasets based on optimal interpolation and specific regional algorithms for chlorophyll, updated in Near Real Time and in Delayed mode. GOS has built:

    • an informatic infrastructure for data repository and delivery based on THREDDS technology. The datasets are generated in NETCDF format, compliant with both the CF convention and the international satellite-oceanographic specification, as prescribed by GHRSST (for SST). All data produced are made available to users through a THREDDS server catalog;

    • a LAS (Live Access Server), installed in order to exploit the potential of NETCDF data and the OPENDAP URL; it provides flexible access to geo-referenced scientific data;

    • a Grid environment based on Globus Technologies (GT4) connecting more than one institute; in particular, exploiting the CNR and ESA clusters makes it possible to reprocess 12 years of chlorophyll data in less than one month (estimated processing time on a single-core PC: nine months).

    In the poster we will give an overview of:

    • the features of the THREDDS catalogs, pointing out the powerful characteristics of this new middleware that has replaced the "old" OPENDAP server;

    • the importance of adopting a common format (such as NETCDF) for data exchange;

    • the tools (e.g. LAS) connected with THREDDS and the NETCDF format;

    • the Grid infrastructure at ISAC.

    We will also present specific basin-scale High Resolution products and Ultra High Resolution regional/coastal products available in these catalogs.

  15. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  16. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  17. Trust Management in an Agent-Based Grid Resource Brokering System-Preliminary Considerations

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Paprzycki, M.; Lirkov, I.

    2007-10-01

    It has been suggested that utilization of autonomous software agents in computational Grids may deliver the functionality needed to speed up Grid adoption. In our recent work we have outlined an approach in which agent teams facilitate Grid resource brokering and management. One of the interesting questions is how to manage trust in such a system. The aim of this paper is to outline our proposed solution.

  18. Creative Engineering Based Education with Autonomous Robots Considering Job Search Support

    NASA Astrophysics Data System (ADS)

    Takezawa, Satoshi; Nagamatsu, Masao; Takashima, Akihiko; Nakamura, Kaeko; Ohtake, Hideo; Yoshida, Kanou

    The Robotics Course in our Mechanical Systems Engineering Department offers “Robotics Exercise Lessons” as one of its Problem-Solution Based Specialized Subjects. This is intended to motivate students' learning, to help them acquire fundamental knowledge and skills in mechanical engineering, and to improve their understanding of Robotics Basic Theory. Our current curriculum was established to accomplish this objective based on two pieces of research in 2005: an evaluation questionnaire on the education of our Mechanical Systems Engineering Department given to graduates, and a survey on the kind of human resources which companies are seeking and their expectations for our department. This paper reports the academic results and reflections on job search support in recent years, as inherited and developed from the previous curriculum.

  19. Ab Initio potential grid based docking: From High Performance Computing to In Silico Screening

    NASA Astrophysics Data System (ADS)

    de Jonge, Marc R.; Vinkers, H. Maarten; van Lenthe, Joop H.; Daeyaert, Frits; Bush, Ian J.; van Dam, Huub J. J.; Sherwood, Paul; Guest, Martyn F.

    2007-09-01

    We present a new and completely parallel method for protein-ligand docking. The potential of the docking target structure is obtained directly from the electron density derived through an ab initio computation. A large subregion of the crystal structure of Isocitrate Lyase was selected as the docking target. To allow the full ab initio treatment of this region, special care was taken to assign optimal basis functions. The electrostatic potential is tested by docking a small charged molecule (succinate) into the binding site. The ab initio grid yields a superior result by producing the best binding orientation and position, and by recognizing it as the best. In contrast, the same docking procedure, but using a classical point-charge based potential, produces a number of additional incorrect binding poses, and does not recognize the correct pose as the best solution.

  20. Medical Data GRIDs as approach towards secure cross enterprise document sharing (based on IHE XDS).

    PubMed

    Wozak, Florian; Ammenwerth, Elske; Breu, Micheal; Penz, Robert; Schabetsberger, Thomas; Vogl, Raimund; Wurz, Manfred

    2006-01-01

    Quality and efficiency of health care services are expected to be improved by the electronic processing and trans-institutional availability of medical data. A prototype architecture based on the IHE-XDS profile is currently being developed. Due to legal and organizational requirements, specific adaptations to the IHE-XDS profile have been made. In this work the services of the health@net reference architecture are described in detail; they have been developed with a focus on compliance with both the IHE-XDS profile and the legal situation in Austria. We expect to gain knowledge about the development of a shared electronic health record using Medical Data Grids as an Open Source reference implementation, and about how proprietary Hospital Information Systems can be integrated into this environment.

  1. An octree based approach to multi-grid B-spline registration

    NASA Astrophysics Data System (ADS)

    Jiang, Pingge; Shackleford, James A.

    2017-02-01

    In this paper we propose a new strategy for the recovery of complex anatomical deformations that exhibit local discontinuities, such as the shearing found at the lung-ribcage interface, using multi-grid octree B-splines. B-spline based image registration is widely used in the recovery of respiration-induced deformations between CT images. However, the continuity imposed upon the computed deformation field by the parametrizing cubic B-spline basis function results in an inability to correctly capture discontinuities such as the sliding motion at organ boundaries. The proposed technique efficiently captures deformation within and at organ boundaries without the need for prior knowledge, such as segmentation, by selectively increasing deformation freedom within image regions exhibiting poor local registration. Experimental results show that the proposed method achieves more physically plausible deformations than traditional global B-spline methods.

  2. Grid-based mapping: A method for rapidly determining the spatial distributions of small features over very large areas

    NASA Astrophysics Data System (ADS)

    Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas

    2017-06-01

    The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than "mapping" with points and polygons, grid-based mapping uses a "tick box" approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle and polygonised ground). A grid of squares (e.g. 20 km by 20 km) is created over the mapping area. Then the basemap data are systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms while recording the presence or absence of selected landforms in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution-field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map. We find that grid-based mapping
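
    Because the bookkeeping involved is simple, a tiny sketch may help make the "tick box" idea concrete; the landform names and grid indexing below are illustrative only.

      # Presence/absence "tick box" bookkeeping for grid-based mapping.
      landforms = ["viscous_flow_features", "latitude_dependent_mantle",
                   "polygonised_ground"]
      grid = {}                          # (row, col) -> {landform: present?}

      def record(square, observed):
          """Mark presence/absence for one grid square (e.g. 20 km x 20 km)."""
          grid[square] = {lf: (lf in observed) for lf in landforms}

      record((0, 0), {"polygonised_ground"})
      record((0, 1), set())
      # Each landform's distribution can then be exported as a raster-like layer:
      layer = {sq: flags["polygonised_ground"] for sq, flags in grid.items()}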

  3. Job enrichment, work motivation, and job satisfaction in hospital wards: testing the job characteristics model.

    PubMed

    Kivimäki, M; Voutilainen, P; Koskinen, P

    1995-03-01

    This study investigated work motivation and job satisfaction in hospital wards with high and low levels of job enrichment. Primary nursing was assumed to represent a highly enriched job, whereas functional nursing represented a job with a low level of enrichment. Five surgical wards were divided into these two categories based on structured interviews with head nurses. Work motivation and job satisfaction among ward personnel were assessed by questionnaire. The ward personnel occupying highly enriched jobs reported significantly higher work motivation and satisfaction with management than the personnel occupying jobs with a low level of enrichment.

  4. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND...-based learning opportunities? Yes, a center operator may authorize a student to participate in...

  5. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 4 2013-04-01 2013-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND... than work-based learning opportunities? Yes, a center operator may authorize a student to...

  6. Structuring Job Related Information on the Intranet: An Experimental Comparison of Task vs. an Organization-Based Approach

    ERIC Educational Resources Information Center

    Cozijn, Reinier; Maes, Alfons; Schackman, Didie; Ummelen, Nicole

    2007-01-01

    In this article, we present a usability experiment in which participants were asked to make intensive use of information on an intranet in order to execute job-related tasks. Participants had to work with one of two versions of an intranet: one with an organization-based hyperlink structure, and one with a task-based hyperlink structure.…

  7. The Differences in Teachers' and Principals' General Job Stress and Stress Related to Performance-Based Accreditation.

    ERIC Educational Resources Information Center

    Hipps, Elizabeth Smith; Halpin, Glennelle

    Whether different amounts of general job stress and stress related to the Alabama Performance-Based Accreditation Standards were experienced by teachers and principals was studied in a sample of 65 principals and 242 teachers from 9 Alabama school systems. All subjects completed the Alabama Performance-Based Accreditation Standards Stress Measure,…

  8. Using Grid Benchmarks for Dynamic Scheduling of Grid Applications

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert

    2003-01-01

    Navigation, or dynamic scheduling, of applications on computational grids can be improved through the use of an application-specific characterization of grid resources. Current grid information systems provide a description of the resources but do not contain any application-specific information. We define a GridScape as the dynamic state of the grid resources. We measure the dynamic performance of these resources using the grid benchmarks. Then we use the GridScape for automatic assignment of the tasks of a grid application to grid resources. The scalability of the system is achieved by limiting the navigation overhead to a few percent of the application resource requirements. Our task submission and assignment protocol guarantees that the navigation system does not cause grid congestion. On a synthetic data mining application we demonstrate that GridScape-based task assignment reduces the application turnaround time.
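
    As an illustration of how benchmark-derived performance can drive assignment, here is a hypothetical greedy earliest-finish scheme (not the paper's protocol; names and numbers are invented):

    ```python
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Resource:
        est_finish: float                              # priority key: estimated finish time
        name: str = field(compare=False)
        ops_per_sec: float = field(compare=False)      # from grid benchmark runs

    def assign(tasks, resources):
        """Greedily map (task, work-in-ops) pairs onto the resource that finishes first."""
        heap = [Resource(0.0, name, perf) for name, perf in resources.items()]
        heapq.heapify(heap)
        plan = []
        for task, work in tasks:
            r = heapq.heappop(heap)
            r.est_finish += work / r.ops_per_sec
            plan.append((task, r.name))
            heapq.heappush(heap, r)
        return plan

    print(assign([("t1", 8e9), ("t2", 4e9), ("t3", 4e9)],
                 {"siteA": 2e9, "siteB": 1e9}))
    ```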

  9. Long Range Debye-Hückel Correction for Computation of Grid-based Electrostatic Forces Between Biomacromolecules

    SciTech Connect

    Mereghetti, Paolo; Martinez, M.; Wade, Rebecca C.

    2014-06-17

    Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme.
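
    The correction amounts to falling back from the gridded potential to the analytic screened-Coulomb (Debye-Hückel) form beyond the grid edge, which removes the finite-size truncation error. A sketch with illustrative parameter handling (the SDA implementation details differ):

    ```python
    import numpy as np

    EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

    def debye_huckel(q_eff, r, kappa, eps_r=78.5):
        """Screened Coulomb potential q*exp(-kappa*r)/(4*pi*eps0*eps_r*r), SI units."""
        return q_eff * np.exp(-kappa * r) / (4 * np.pi * EPS0 * eps_r * r)

    def potential(r, q_eff, kappa, grid_lookup, r_grid_max):
        # Inside the precomputed grid use the tabulated value; beyond it, the DH tail
        return grid_lookup(r) if r < r_grid_max else debye_huckel(q_eff, r, kappa)
    ```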

  10. A grid-based implementation of XDS-I as a part of a metropolitan EHR in Shanghai

    NASA Astrophysics Data System (ADS)

    Zhang, Jianguo; Zhang, Chenghao; Sun, Jianyong, Sr.; Yang, Yuanyuan; Jin, Jin; Yu, Fenghai; He, Zhenyu; Zheng, Xichuang; Qin, Huanrong; Feng, Jie; Zhang, Guozheng

    2007-03-01

    A number of hospitals in Shanghai are piloting the development of an EHR solution based on a grid concept with a service-oriented architecture (SOA). The first phase of the project targets the diagnostic imaging domain and allows seamless sharing of images and reports across the multiple hospitals. The EHR solution is fully aligned with the IHE XDS-I integration profile and consists of the components of the XDS-I Registry, Repository, Source and Consumer actors. Using SOA, the solution employs ebXML over secured HTTP for all transactions within the grid, whereas communication with the PACS and RIS uses DICOM and HL7 v3.x. The solution was installed in three hospitals and one data center in Shanghai and tested for the performance of data publication, user query, and image retrieval. The results are extremely positive and demonstrate that an EHR solution based on SOA with a grid concept can scale effectively to serve a regional implementation.

  11. An evaluation method of power quality about electrified railways connected to power grid based on PSCAD/EMTDC

    NASA Astrophysics Data System (ADS)

    Liang, Weibin; Ouyang, Sen; Huang, Xiang; Su, Weijian

    2017-05-01

    The existing modeling process for the power quality of electrified railways connected to the power grid is complicated, and the simulated scenarios are incomplete, so this paper puts forward a novel evaluation method of power quality based on PSCAD/EMTDC. Firstly, a model of the power quality of electrified railways connected to the power grid is established, based on testing reports or measured data. The equivalent model of an electrified locomotive captures its power and harmonic characteristics, which are represented by a load and a harmonic source. Secondly, to make the evaluation more complete, an analysis scheme is put forward that combines three dimensions of the electrified locomotive: type, working condition, and quantity. Finally, the Shenmao Railway is taken as an example to evaluate power quality under different scenarios, and the results show that electrified railways connected to the power grid have a significant effect on power quality.

  12. Safe Grid

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)

    2003-01-01

    The biggest users of GRID technologies come from the science and technology communities: government, industry, and academia (national and international). The NASA GRID is moving to a higher technology readiness level (TRL) today; as a joint effort among these leaders within government, academia, and industry, the NASA GRID plans to extend availability to enable scientists and engineers across these geographical boundaries to collaborate on important problems facing the world in the 21st century. In order to enable NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, GRID penetration across different firewalls is a concern for center security personnel. This is the reason why some IPG resources have been separated from the NASA center networks. Also, because of center network security and ITAR concerns, a NASA IPG resource owner may not have full control over who can gain remote access from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different GRID applications, the SAFE "micro security domain" provides a large number of professionally managed "micro firewalls" that allow NASA centers to accept remote IPG access without the worry of damaging other center resources. The SAFE policy-driven, capability-based federated security mechanism can enable joint organizational and resource owner approved remote

  13. Initial experiences with grid-based volume visualization of fluid flow simulations on PC clusters

    NASA Astrophysics Data System (ADS)

    Porter, David H.; Woodward, Paul R.; Iyer, Anusha

    2005-03-01

    Over the last 18 months, our team at the Laboratory for Computational Science & Engineering (LCSE) at the University of Minnesota has been moving our data analysis and visualization applications from small clusters of PCs within our lab to a Grid-based approach using multiple PC clusters with dynamically varying availability. Under support from an NSF CISE Research Resources grant, we have outfitted 52 Dell PCs in a student lab in our building that is operated by the University's Academic and Distributed Computing Services (ADCS) organization. This PC cluster supplements another PC cluster of 10 machines in our lab. As the students gradually leave this ADCS lab after 10 PM, the PCs are rebooted into an operating system image that sees the 400 GB disk subsystems we have installed on them and communicates with a central, 32-processor Unisys ES-7000 machine in our lab. The ES-7000 hosts databases that coordinate the work of these 52 PCs along with that of 10 additional Dell PCs in our lab that drive our PowerWall display. This equipment forms a local Grid that we coordinate to analyze and visualize data generated on remote clusters at NCSA. The PCs of the student lab offer a 20 TB pool of disk storage for our simulation data as well as a large movie rendering capability with their Nvidia graphics engines. However, these machines do not become available to us in force until after about 1 AM. This fact has forced us to automate our visualization process to an unusual degree. It has also forced us to address problems of security and run error diagnosis that we could easily avoid in a more standard environment. In this paper we report our methods of addressing these challenges and describe the software tools that we have developed and made available for this purpose on our Web site, www.lcse.umn.edu. We also report our experience in using this system to visualize 1.4 TB of vorticity volumetric data from a recent simulation of homogeneous, compressible turbulence with our

  14. Resource management and scheduling policy based on grid for AIoT

    NASA Astrophysics Data System (ADS)

    Zou, Yiqin; Quan, Li

    2017-07-01

    This paper presents research on a resource management and scheduling policy based on grid technology for the Agricultural Internet of Things (AIoT). AIoT involves a variety of complex, heterogeneous agricultural resources that are difficult to represent in a unified way; from an abstract perspective, however, there are common models that can express their characteristics and features. Based on this, we propose a high-level model called the Agricultural Resource Hierarchy Model (ARHM), which can be used for modeling various resources, and introduce the agricultural resource modeling method based on it. Compared with the traditional application-oriented three-layer model, ARHM hides the differences between applications and gives all applications a unified interface layer, allowing them to be implemented without distinction. Furthermore, the paper proposes a Web Service Resource Framework (WSRF)-based resource management method and its encapsulation structure. Finally, it focuses on a multi-agent-based AG resource scheduler, which is a collaborative service provider pattern spanning multiple agricultural production domains.
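
    A rough illustration of the unified-interface idea behind ARHM (class and method names are assumptions, not the paper's API):

    ```python
    from abc import ABC, abstractmethod

    class AgResource(ABC):
        """Uniform interface layer: schedulers need not distinguish resource kinds."""
        def __init__(self, name):
            self.name = name

        @abstractmethod
        def describe(self) -> dict: ...     # unified property/state description

        @abstractmethod
        def invoke(self, op, **kw): ...     # unified operation entry point

    class SoilSensor(AgResource):
        def describe(self):
            return {"name": self.name, "type": "sensor", "unit": "% moisture"}
        def invoke(self, op, **kw):
            return 23.5 if op == "read" else None

    class Irrigator(AgResource):
        def describe(self):
            return {"name": self.name, "type": "actuator"}
        def invoke(self, op, **kw):
            return f"valve set to {kw.get('level')}" if op == "set" else None

    # A scheduler sees only the uniform interface layer
    for r in (SoilSensor("field-3"), Irrigator("pump-1")):
        print(r.describe())
    ```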

  15. National job-exposure matrix in analyses of census-based estimates of occupational cancer risk.

    PubMed

    Pukkala, Eero; Guo, Johannes; Kyyrönen, Pentti; Lindbohm, Marja-Liisa; Sallmén, Markku; Kauppinen, Timo

    2005-04-01

    The aim of this study was to increase the understanding of alternative exposure metrics and analysis methods in studies applying job-exposure matrices to analyses of health outcomes, using the association between crystalline silica and cancer as an example. Observed and expected numbers of cancer cases during 1971-1995 among Finns born in 1906-1945 were calculated for 393 occupational categories, as defined in the 1970 population census. According to the Finnish Cancer Registry, there were 43 433 lung and 21 444 prostate cancer cases. The Finnish job-exposure matrix (FINJEM) provided estimates of the proportion of exposed persons and the mean level of exposure among the exposed in each occupation. The most comprehensive exposure metric included period- and age-specific estimates of exposure and an estimate of occupational stability, but even considerably simpler metrics gave significantly elevated risk ratio (RR) estimates, between 1.36 and 1.50, for lung cancer in occupations with the highest estimated cumulative silica exposure (≥10 mg/m³-years), allowing a lag time of 20 years. It proved important to adjust the risk ratios at least for socioeconomic status and occupational exposure to asbestos. The risk ratios for prostate cancer were close to 1.0 in every model. The results showed that the FINJEM-based analysis was able to replicate the well-known association between exposure to crystalline silica and lung cancer. The FINJEM-based method gives valid results, and it can be used to analyze large sets of register-based data on health outcomes.
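
    A toy sketch of how such a lagged cumulative-exposure metric can be assembled from FINJEM-style period estimates (the numbers and the helper function are illustrative, not FINJEM data):

    ```python
    def cumulative_exposure(periods, lag_years=20, at_year=1995):
        """periods: (start_year, end_year, proportion_exposed, mean_level_mg_m3) tuples."""
        cutoff = at_year - lag_years          # only exposure lagged >= lag_years counts
        total = 0.0
        for start, end, p_exposed, level in periods:
            end = min(end, cutoff)
            if end > start:
                total += p_exposed * level * (end - start)   # mg/m3-years
        return total

    # Hypothetical occupation: heavily exposed 1960-1975, less so afterwards
    print(cumulative_exposure([(1960, 1975, 0.8, 0.5), (1975, 1990, 0.6, 0.2)]))  # 6.0
    ```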

  16. An objective decision model of power grid environmental protection based on environmental influence index and energy-saving and emission-reducing index

    NASA Astrophysics Data System (ADS)

    Feng, Jun-shu; Jin, Yan-ming; Hao, Wei-hua

    2017-01-01

    Based on modelling an environmental influence index for power transmission and transformation projects and an energy-saving and emission-reducing index for the source-grid-load of the power system, this paper establishes an objective decision model for power grid environmental protection, with the constraints that the environmental protection objectives be legal and economical, and considering both positive and negative influences of the grid on the environment over the whole grid life cycle. This model can be used to guide the planning of power grid environmental protection. A numerical simulation of the objective decision model for Jiangsu province's power grid environmental protection was carried out, and the results show that the maximum goal of energy-saving and emission-reducing benefits is reached first as investment increases, followed by the minimum goal of environmental influence.

  17. Social Adversity in Adolescence Increases the Physiological Vulnerability to Job Strain in Adulthood: A Prospective Population-Based Study

    PubMed Central

    Westerlund, Hugo; Gustafsson, Per E.; Theorell, Töres; Janlert, Urban; Hammarström, Anne

    2012-01-01

    Background It has been argued that the association between job strain and health could be confounded by early life exposures, and studies have shown early adversity to increase individual vulnerability to later stress. We therefore investigated if early life exposure to adversity increases the individual's physiological vulnerability to job strain in adulthood. Methodology/Principal Findings In a population-based cohort (343 women and 330 men, 83% of the eligible participants), we examined the association between on the one hand exposure to adversity in adolescence, measured at age 16, and job strain measured at age 43, and on the other hand allostatic load at age 43. Adversity was operationalised as an index comprising residential mobility and crowding, parental loss, parental unemployment, and parental physical and mental illness (including substance abuse). Allostatic load summarised body fat, blood pressure, inflammatory markers, glucose, blood lipids, and cortisol regulation. There was an interaction between adversity in adolescence and job strain (B = 0.09, 95% CI 0.02 to 0.16 after adjustment for socioeconomic status), particularly psychological demands, indicating that job strain was associated with increased allostatic load only among participants with adversity in adolescence. Job strain was associated with lower allostatic load in men (β = −0.20, 95% CI −0.35 to −0.06). Conclusions/Significance Exposure to adversity in adolescence was associated with increased levels of biological stress among those reporting job strain in mid-life, indicating increased vulnerability to environmental stressors. PMID:22558285

  18. Adapting a commercial power system simulator for smart grid based system study and vulnerability assessment

    NASA Astrophysics Data System (ADS)

    Navaratne, Uditha Sudheera

    The smart grid is the future of the power grid. Smart meters and the associated network play a major role in the distribution system of the smart grid. Advanced Metering Infrastructure (AMI) can enhance the reliability of the grid, create efficient energy management opportunities, and enable many innovations around the future smart grid. These innovations require intense research not only on the AMI network itself but also on the influence an AMI network can have upon the rest of the power grid. This research describes a smart meter testbed with hardware in the loop that can facilitate future research on AMI networks. The smart meters in the testbed were developed such that their functionality can be customized to simulate any given scenario, such as integrating new hardware components into a smart meter or developing new encryption algorithms in firmware. These smart meters were integrated into a power system simulator to simulate the power flow variation in the power grid under different AMI activities. Each smart meter in the network also provides a communication interface to the home area network. This research delivers a testbed for emulating AMI activities and monitoring their effect on the smart grid.

  19. Analysis and Validation of Grid dem Generation Based on Gaussian Markov Random Field

    NASA Astrophysics Data System (ADS)

    Aguilar, F. J.; Aguilar, M. A.; Blanco, J. L.; Nemmaoui, A.; García Lorca, A. M.

    2016-06-01

    Digital Elevation Models (DEMs) are considered one of the most relevant geospatial data sources for carrying out land-cover and land-use classification. This work deals with the application of a mathematical framework based on a Gaussian Markov Random Field (GMRF) to interpolate grid DEMs from scattered elevation data. The performance of the GMRF interpolation model was tested on a set of LiDAR data (0.87 points/m²) provided by the Spanish Government (PNOA Programme) over a complex working area mainly covered by greenhouses in Almería, Spain. The original LiDAR data were decimated by randomly removing different fractions of the original points (from 10% up to 99% of points removed). In every case, the remaining scattered observed points were used to obtain a 1 m grid spacing GMRF-interpolated Digital Surface Model (DSM) whose accuracy was assessed by means of the set of previously extracted checkpoints. The GMRF accuracy results were compared with those provided by the widely known Triangulation with Linear Interpolation (TLI). Finally, the GMRF method was applied to a real-world case consisting of filling the LiDAR-derived DSM gaps after manually filtering out non-ground points to obtain a Digital Terrain Model (DTM). Regarding accuracy, both GMRF and TLI produced visually pleasing and similar results in terms of vertical accuracy. As an added bonus, the GMRF mathematical framework makes it possible both to retrieve the estimated uncertainty for every interpolated elevation point (the DEM uncertainty) and to include break lines or terrain discontinuities between adjacent cells to produce higher quality DTMs.
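
    A compact sketch of GMRF-style grid interpolation under an assumed Laplacian-based prior (not necessarily the authors' exact precision model):

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def laplacian2d(ny, nx):
        # 5-point Laplacian; its normal matrix L.T @ L serves as the GMRF precision
        Dx = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx))
        Dy = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(ny, ny))
        return sp.kron(sp.identity(ny), Dx) + sp.kron(Dy, sp.identity(nx))

    def gmrf_dem(ny, nx, obs_idx, obs_z, lam=1.0):
        """MAP grid DEM from scattered elevations: solve (S'S + lam L'L) z = S' z_obs.

        The posterior precision matrix is also what yields the per-cell
        uncertainty estimate mentioned in the abstract."""
        m, n = len(obs_idx), ny * nx
        S = sp.csr_matrix((np.ones(m), (np.arange(m), obs_idx)), shape=(m, n))
        L = laplacian2d(ny, nx)
        A = (S.T @ S + lam * (L.T @ L)).tocsc()
        return spla.spsolve(A, S.T @ obs_z).reshape(ny, nx)
    ```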

  20. The SAM-GRID project: architecture and plan

    NASA Astrophysics Data System (ADS)

    Baranovski, A.; Garzoglio, G.; Koutaniemi, H.; Lueking, L.; Patil, S.; Pordes, R.; Rana, A.; Terekhov, I.; Veseli, S.; Yu, J.; Walker, R.; White, V.

    2003-04-01

    SAM is a robust distributed file-based data management and access service, fully integrated with the D0 experiment at Fermilab and in the evaluation phase at the CDF experiment. The goal of the SAM-Grid project is to fully enable distributed computing for the experiments. The architecture of the project is composed of three primary functional blocks: the job handling, data handling, and monitoring and information services. Job handling and monitoring/information services are built on top of standard grid technologies (Condor-G/Globus Toolkit), which are integrated with the data handling system provided by SAM. The plan is devised to provide users with incrementally increasing levels of capability over the next two years.

  1. Job-based health insurance in 2001: inflation hits double digits, managed care retreats.

    PubMed

    Gabel, J; Levitt, L; Pickreign, J; Whitmore, H; Holve, E; Rowland, D; Dhont, K; Hawkins, S

    2001-01-01

    Drawing on the results of a national survey of 1,907 firms with three or more workers, this paper reports on several facets of job-based health insurance, including the cost to employers and workers; plan offerings and enrollments; patient cost sharing and benefits; eligibility, coverage, and take-up rates; and results from questions about employers' knowledge of market trends and health policy initiatives. Premiums increased 11 percent from spring 2000 to spring 2001, and the percentage of Americans in health maintenance organizations (HMOs) fell six percentage points to its lowest level since 1993, while preferred provider organization (PPO) enrollment rose to 48 percent. Despite premium increases, the percentage of firms offering coverage remained statistically unchanged, and a relatively strong labor market has continued to shield workers from the higher cost of coverage.

  2. Three hybridization models based on local search scheme for job shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Balbi Fraga, Tatiana

    2015-05-01

    This work presents three different hybridization models based on the general scheme of local search heuristics, named Hybrid Successive Application, Hybrid Neighborhood, and Hybrid Improved Neighborhood. Although similar approaches have already been presented in the literature in other contexts, in this work these models are applied to the solution of the job shop scheduling problem using the heuristics Taboo Search and Particle Swarm Optimization. In addition, we investigate some aspects that must be considered in order to achieve better solutions than those obtained by the original heuristics. The results demonstrate that the algorithms derived from these three hybrid models are more robust than the original algorithms and able to find better results than those found by Taboo Search alone.
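
    A toy sketch of the Hybrid Successive Application pattern, with simple random swap searches standing in for Taboo Search and Particle Swarm Optimization (the objective and data are invented):

    ```python
    import random

    def cost(perm, times):
        # Toy stand-in objective: total completion time on one machine
        t = total = 0
        for job in perm:
            t += times[job]
            total += t
        return total

    def swap_search(perm, times, iters=200):
        best = list(perm)
        for _ in range(iters):
            i, j = random.sample(range(len(best)), 2)
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]
            if cost(cand, times) < cost(best, times):
                best = cand
        return best

    def hybrid_successive(times, rounds=3):
        sol = list(range(len(times)))
        for _ in range(rounds):
            sol = swap_search(sol, times)   # phase 1: stands in for Taboo Search
            sol = swap_search(sol, times)   # phase 2: stands in for PSO, seeded by phase 1
        return sol, cost(sol, times)

    print(hybrid_successive([4, 2, 7, 1, 3]))
    ```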

  3. Software Based Barriers To Integration Of Renewables To The Future Distribution Grid

    SciTech Connect

    Stewart, Emma; Kiliccote, Sila

    2014-06-01

    The future distribution grid has complex analysis needs, which may not be met by existing processes and tools. In addition, a growing number of measured and grid-model data sources is becoming available. For these sources to be useful they must be accurate and interpreted correctly, and data accuracy is a key barrier to the growth of the future distribution grid. A key goal for California, and the United States, is increasing renewable penetration on the distribution grid. To increase this penetration, measured and modeled representations of generation must be accurate and validated, giving distribution planners and operators confidence in their performance. This study reviews the current state of these software and modeling barriers and the opportunities for the future distribution grid.

  4. Discretization of three-dimensional free surface flows and moving boundary problems via elliptic grid methods based on variational principles

    NASA Astrophysics Data System (ADS)

    Fraggedakis, D.; Papaioannou, J.; Dimakopoulos, Y.; Tsamopoulos, J.

    2017-09-01

    A new boundary-fitted technique to describe free surface and moving boundary problems is presented. We have extended the 2D elliptic grid generator developed by Dimakopoulos and Tsamopoulos (2003) [19] and further advanced by Chatzidai et al. (2009) [18] to 3D geometries. The set of equations arises from the fulfillment of the variational principles established by Brackbill and Saltzman (1982) [21], and refined by Christodoulou and Scriven (1992) [22]. These account for both smoothness and orthogonality of the grid lines of tessellated physical domains. The elliptic-grid equations are accompanied by new boundary constraints and conditions which are based either on the equidistribution of the nodes on boundary surfaces or on the existing 2D quasi-elliptic grid methodologies. The capabilities of the proposed algorithm are first demonstrated in tests with analytically described complex surfaces. The sequence in which these tests are presented is chosen to help the reader build up experience on the best choice of the elliptic grid parameters. Subsequently, the mesh equations are coupled with the Navier-Stokes equations, in order to reveal the full potential of the proposed methodology in free surface flows. More specifically, the problem of gas assisted injection in ducts of circular and square cross-sections is examined, where the fluid domain experiences extreme deformations. Finally, the flow-mesh solver is used to calculate the equilibrium shapes of static menisci in capillary tubes.
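
    For orientation, the smoothness and orthogonality measures in this variational framework take, in one common form (weights and exact notation vary across the cited papers), the shape

    ```latex
    % xi_i are the computational coordinates; the grid follows from minimizing
    % a weighted sum I = I_s + lambda * I_o over the physical domain Omega.
    \begin{align}
      I_s &= \int_\Omega \sum_{i} \nabla\xi_i \cdot \nabla\xi_i \,\mathrm{d}V
        && \text{(smoothness of grid lines)} \\
      I_o &= \int_\Omega \sum_{i \neq j} \left( \nabla\xi_i \cdot \nabla\xi_j \right)^2 \mathrm{d}V
        && \text{(orthogonality of grid lines)}
    \end{align}
    ```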

  5. A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies

    SciTech Connect

    Eça, L.; Hoekstra, M.

    2014-04-01

    This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well-known Grid Convergence Index. Examples of application of the procedure are included.
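
    A sketch of the core fitting step on made-up data: fit phi(h) = phi0 + a*h^p in the least-squares sense over systematically refined grids, then convert the finest-grid error estimate into an uncertainty with a safety factor (the factor rule and data are illustrative; the paper selects among four expansion types using the fits' standard deviations):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    h = np.array([1.0, 0.5, 0.25, 0.125])         # relative cell sizes
    phi = np.array([2.100, 2.031, 2.008, 2.002])  # computed flow quantity per grid

    def model(h, phi0, a, p):
        return phi0 + a * h ** p                  # truncated power-series expansion

    (phi0, a, p), _ = curve_fit(model, h, phi, p0=(phi[-1], 1.0, 2.0))
    err = abs(a * h[-1] ** p)                     # error estimate on the finest grid
    fs = 1.25 if 0.5 < p < 2.1 else 3.0           # GCI-like safety factor (assumed rule)
    print(f"observed order p = {p:.2f}, uncertainty = {fs * err:.4f}")
    ```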

  6. Securing smart grid technology

    NASA Astrophysics Data System (ADS)

    Chaitanya Krishna, E.; Kosaleswara Reddy, T.; Reddy, M. YogaTeja; Reddy G. M., Sreerama; Madhusudhan, E.; AlMuhteb, Sulaiman

    2013-03-01

    In developing countries electrical energy is very important for all-round improvement, saving thousands of dollars that can be invested in other sectors for development. The existing hierarchical, centrally controlled grid of the 20th century is not sufficient for the growing demand for power. To produce and utilize an effective power supply for industry and people, we need smarter electrical grids that address the challenges of the existing power grid. The smart grid can be considered a modern electric power grid infrastructure offering enhanced efficiency and reliability through automated control, high-power converters, a modern communications infrastructure along with modern IT services, sensing and metering technologies, and modern energy management techniques based on the optimization of demand, energy and network availability, and so on. The main objective of this paper is to provide a contemporary look at the current state of the art in smart grid communications as well as critical issues in smart grid technologies, primarily information and communication technology (ICT) issues such as security and efficiency at the communications layer. In this paper we propose a new model for security in smart grid technology that contains a Security Module (SM) along with DEM, which will enhance security in the grid. It is expected that this paper will provide a better understanding of the technologies, potential advantages and research challenges of the smart grid and provoke interest among the research community to further explore this promising research area.

  7. LEOPARD: A grid-based dispersion relation solver for arbitrary gyrotropic distributions

    NASA Astrophysics Data System (ADS)

    Astfalk, Patrick; Jenko, Frank

    2017-01-01

    Particle velocity distributions measured in collisionless space plasmas often show strong deviations from idealized model distributions. Despite this observational evidence, linear wave analysis in space plasma environments such as the solar wind or Earth's magnetosphere is still mainly carried out using dispersion relation solvers based on Maxwellians or other parametric models. To enable a more realistic analysis, we present the new grid-based kinetic dispersion relation solver LEOPARD (Linear Electromagnetic Oscillations in Plasmas with Arbitrary Rotationally-symmetric Distributions) which no longer requires prescribed model distributions but allows for arbitrary gyrotropic distribution functions. In this work, we discuss the underlying numerical scheme of the code and we show a few exemplary benchmarks. Furthermore, we demonstrate a first application of LEOPARD to ion distribution data obtained from hybrid simulations. In particular, we show that in the saturation stage of the parallel fire hose instability, the deformation of the initial bi-Maxwellian distribution invalidates the use of standard dispersion relation solvers. A linear solver based on bi-Maxwellians predicts further growth even after saturation, while LEOPARD correctly indicates vanishing growth rates. We also discuss how this complies with former studies on the validity of quasilinear theory for the resonant fire hose. In the end, we briefly comment on the role of LEOPARD in directly analyzing spacecraft data, and we refer to an upcoming paper which demonstrates a first application of that kind.

  8. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids

    PubMed Central

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-01-01

    Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability. PMID:28452925

  9. Distributed Fault Detection Based on Credibility and Cooperation for WSNs in Smart Grids.

    PubMed

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong

    2017-04-28

    Due to the increasingly important role in monitoring and data collection that sensors play, accurate and timely fault detection is a key issue for wireless sensor networks (WSNs) in smart grids. This paper presents a novel distributed fault detection mechanism for WSNs based on credibility and cooperation. Firstly, a reasonable credibility model of a sensor is established to identify any suspicious status of the sensor according to its own temporal data correlation. Based on the credibility model, the suspicious sensor is then chosen to launch fault diagnosis requests. Secondly, the sending time of fault diagnosis request is discussed to avoid the transmission overhead brought about by unnecessary diagnosis requests and improve the efficiency of fault detection based on neighbor cooperation. The diagnosis reply of a neighbor sensor is analyzed according to its own status. Finally, to further improve the accuracy of fault detection, the diagnosis results of neighbors are divided into several classifications to judge the fault status of the sensors which launch the fault diagnosis requests. Simulation results show that this novel mechanism can achieve high fault detection ratio with a small number of fault diagnoses and low data congestion probability.
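
    A schematic reading of the mechanism (thresholds and the credibility rule are illustrative, not the paper's formulas): a sensor whose new reading breaks its own temporal correlation flags itself as suspicious and asks its neighbors to vote.

    ```python
    import numpy as np

    def plausible(history, new_value, tol=3.0):
        # Credibility check against the sensor's own temporal statistics
        mu, sigma = np.mean(history), np.std(history) + 1e-9
        return abs(new_value - mu) / sigma < tol

    def diagnose(value, neighbor_values, tol=2.0):
        # Neighbors reply based on agreement with their own spatially correlated data
        votes = [abs(value - v) < tol for v in neighbor_values]
        return "faulty" if sum(votes) < len(votes) / 2 else "normal"

    history, reading = [20.1, 20.3, 19.9, 20.2], 35.0
    if not plausible(history, reading):                 # suspicious -> launch diagnosis
        print(diagnose(reading, [20.0, 20.4, 19.8]))    # -> faulty
    ```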

  10. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  11. DISTRIBUTED GRID-CONNECTED PHOTOVOLTAIC POWER SYSTEM EMISSION OFFSET ASSESSMENT: STATISTICAL TEST OF SIMULATED- AND MEASURED-BASED DATA

    EPA Science Inventory

    This study assessed the pollutant emission offset potential of distributed grid-connected photovoltaic (PV) power systems. Computer-simulated performance results were utilized for 211 PV systems located across the U.S. The PV systems' monthly electrical energy outputs were based ...

  13. Comparisons of the Anelastic and Unified Models Based on the Lorenz and Charney-Phillips Vertical Grids

    NASA Astrophysics Data System (ADS)

    Konor, Celal; Arakawa, Akio

    2010-05-01

    The anelastic and unified models based on the Lorenz and Charney-Phillips vertical grids are compared for nonhydrostatic simulation of buoyant bubbles. It is widely accepted that small-scale nonacoustic motions such as convection and turbulence are basically anelastic. The recently proposed unified system (Arakawa and Konor, 2009) unifies the anelastic and quasi-hydrostatic systems by including quasi-hydrostatic compressibility and, therefore, can be used for simulating a wide range of motions from turbulence to planetary scales. There are two basic grids for the vertical discretization of the governing equations. The most commonly used vertical grid is the Lorenz grid (L-grid), on which the thermodynamic variables and the horizontal momentum are staggered from the vertical momentum. The other is the less commonly used Charney-Phillips grid (CP-grid), on which the thermodynamic variables and the vertical momentum are staggered from the horizontal momentum. The existence of a computational mode in the vertical structure of temperature with the L-grid is well known. It should also be pointed out that, when the L-grid is used in a nonhydrostatic model, the buoyancy force cannot properly respond to dynamically generated noise in the vertical velocity field. With the unified system of equations, however, we find that the dynamical generation of noise tends to be suppressed. This can be interpreted as a result of including the quasi-hydrostatic compressibility. Even when the motion is basically nonhydrostatic, the generated noise tends to be quasi-stationary and, therefore, quasi-hydrostatic. Although the original intention of including the quasi-hydrostatic compressibility in the unified system was to improve the simulation of planetary waves, the results presented here indicate that the unified system can also better control small-scale computational noise without generating vertically propagating acoustic waves. In this presentation, we show results from

  14. Smart grid initialization reduces the computational complexity of multi-objective image registration based on a dual-dynamic transformation model to account for large anatomical differences

    NASA Astrophysics Data System (ADS)

    Bosman, Peter A. N.; Alderliesten, Tanja

    2016-03-01

    We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to a sufficiently refined regular grid, leading to (far) more efficient optimization or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume with and without a multi-resolution scheme and find a substantial benefit of using smart grid initialization.

  15. Predicting Teacher Job Satisfaction Based on Principals' Instructional Supervision Behaviours: A Study of Turkish Teachers

    ERIC Educational Resources Information Center

    Ilgan, Abdurrahman; Parylo, Oksana; Sungu, Hilmi

    2015-01-01

    This quantitative research examined instructional supervision behaviours of school principals as a predictor of teacher job satisfaction through the analysis of Turkish teachers' perceptions of principals' instructional supervision behaviours. There was a statistically significant difference found between the teachers' job satisfaction level and…

  17. Job Designs: A Community Based Program for Students with Emotional and Behavioral Disorders.

    ERIC Educational Resources Information Center

    Lehman, Constance

    1992-01-01

    The Job Designs Project, a 3-year federally funded project, provides students (ages 16-22) at an Oregon residential treatment center for youth with emotional and behavioral disorders with supported paid employment in the community. The project has provided job supported employment services to 36 students working in such positions as restaurant bus…

  18. Development of Smart Grid for Community and Cyber based Landslide Hazard Monitoring and Early Warning System

    NASA Astrophysics Data System (ADS)

    Karnawati, D.; Wilopo, W.; Fathani, T. F.; Fukuoka, H.; Andayani, B.

    2012-12-01

    A Smart Grid is a cyber-based tool that facilitates a network of sensors for monitoring and communicating landslide hazard and providing early warning. The sensors are designed both as electronic sensors installed in the existing monitoring and early warning instruments and as human sensors, comprising selected, committed people in the local community, such as local surveyors, local observers, members of the local task force for disaster risk reduction, and any person in the local community who has registered a commitment to send reports on landslide symptoms observed in their living environment. The tool is designed to be capable of receiving up to thousands of reports at the same time through the electronic sensors, text messages (mobile phone), an on-line participatory web site, and various social media such as Twitter and Facebook. The information recorded or reported by the sensors relates to the parameters of landslide symptoms, for example the progress of crack occurrence, ground subsidence, or ground deformation. Within 10 minutes, the tool can automatically elaborate and analyse the reported symptoms to predict the landslide hazard and risk levels. The predicted hazard/risk level can be sent back to the network of electronic and human sensors as early warning information. The key parameters indicating the symptoms of landslide hazard were recorded and monitored by the electronic and human sensors. Those parameters were identified based on investigation of geological and geotechnical conditions, supported by laboratory analysis. The cause and triggering mechanism of landslides in the study area were also analysed in order to define the critical condition for launching the early warning. However, not only the technical but also the social system was developed to raise community awareness and commitment to serve the mission as human sensors, which will

  19. Facilitating Integration of Electron Beam Lithography Devices with Interactive Videodisc, Computer-Based Simulation and Job Aids.

    ERIC Educational Resources Information Center

    Von Der Linn, Robert Christopher

    A needs assessment of the Grumman E-Beam Systems Group identified the requirement for additional skill mastery for the engineers who assemble, integrate, and maintain devices used to manufacture integrated circuits. Further analysis of the tasks involved led to the decision to develop interactive videodisc, computer-based job aids to enable…

  20. Faculty in Faith-Based Institutions: Participation in Decision-Making and Its Impact on Job Satisfaction

    ERIC Educational Resources Information Center

    Metheny, Glen A.; West, G. Bud; Winston, Bruce E.; Wood, J. Andy

    2015-01-01

    This study examined full-time faculty in Christian, faith-based colleges and universities and investigated the type of impact their participation in the decision-making process had on job satisfaction. Previous studies have examined relationships among faculty at state universities and community colleges, yet little research has been examined in…

  1. MAGNETIC GRID

    DOEpatents

    Post, R.F.

    1960-08-01

    An electronic grid is designed employing magnetic forces for controlling the passage of charged particles. The grid is particularly applicable to use in gas-filled tubes such as ignitrons, thyratrons, etc., since the magnetic grid action is impartial to the polarity of the charged particles and, accordingly, the sheath effects encountered with electrostatic grids are not present. The grid comprises a conductor having sections spaced apart and extending in substantially opposite directions in the same plane, the ends of the conductor being adapted for connection to a current source.

  2. Information Theoretically Secure, Enhanced Johnson Noise Based Key Distribution over the Smart Grid with Switched Filters

    PubMed Central

    2013-01-01

    We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions. PMID:23936164

  3. Information theoretically secure, enhanced Johnson noise based key distribution over the smart grid with switched filters.

    PubMed

    Gonzalez, Elias; Kish, Laszlo B; Balog, Robert S; Enjeti, Prasad

    2013-01-01

    We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions.

  4. Job requirements compared to dental school education: impact of a case-based learning curriculum.

    PubMed

    Keeve, Philip L; Gerhards, Ute; Arnold, Wolfgang A; Zimmer, Stefan; Zöllner, Axel

    2012-01-01

    Case-based learning (CBL) is suggested as a key educational method of knowledge acquisition to improve dental education. The purpose of this study was to assess graduates from a patient-oriented, case-based learning (CBL)-based curriculum with regard to key competencies required in their professional activity. 407 graduates from a patient-oriented, case-based learning (CBL) dental curriculum who graduated between 1990 and 2006 were eligible for this study. 404 graduates were contacted between 2007 and 2008 to self-assess nine competencies as required at their day-to-day work and as taught in dental school on a 6-point Likert scale. Baseline demographics and clinical characteristics were presented as mean ± standard deviation (SD) for continuous variables. To determine whether dental education sufficiently covers the job requirements of physicians, we calculated the mean difference ∆ between the ratings of competencies as required in day-to-day work and as taught in medical school by subtracting those from each other (a negative mean difference ∆ indicates a deficit; a positive mean difference ∆ indicates a surplus). Spearman's rank correlation coefficient was calculated to reveal statistical significance (statistical significance p<0.05). 41.6% of the questionnaire recipients responded (n=168 graduates). A homogeneous distribution of the graduate groups concerning gender, graduation date, professional experience and average examination grade was achieved. Comparing competencies required at work and taught in medical school, CBL was associated with benefits in "Research competence" (∆+0.6), "Interdisciplinary thinking" (∆+0.47), "Dental medical knowledge" (∆+0.43), "Practical dental skills" (∆+0.21), "Team work" (∆+0.16) and "Independent learning/working" (∆+0.08), whereas "Problem-solving skills" (∆-0.07), "Psycho-social competence" (∆-0.66) and "Business competence" (∆-2.86) needed improvement in the CBL-based curriculum. CBL demonstrated

  5. Job requirements compared to dental school education: impact of a case-based learning curriculum

    PubMed Central

    Keeve, Philip L.; Gerhards, Ute; Arnold, Wolfgang A.; Zimmer, Stefan; Zöllner, Axel

    2012-01-01

    Introduction: Case-based learning (CBL) is suggested as a key educational method of knowledge acquisition to improve dental education. The purpose of this study was to assess graduates from a patient-oriented, case-based learning (CBL)-based curriculum with regard to key competencies required in their professional activity. Methods: 407 graduates from a patient-oriented, case-based learning (CBL) dental curriculum who graduated between 1990 and 2006 were eligible for this study. 404 graduates were contacted between 2007 and 2008 to self-assess nine competencies as required at their day-to-day work and as taught in dental school on a 6-point Likert scale. Baseline demographics and clinical characteristics were presented as mean ± standard deviation (SD) for continuous variables. To determine whether dental education sufficiently covers the job requirements of physicians, we calculated the mean difference ∆ between the ratings of competencies as required in day-to-day work and as taught in medical school by subtracting those from each other (a negative mean difference ∆ indicates a deficit; a positive mean difference ∆ indicates a surplus). Spearman’s rank correlation coefficient was calculated to reveal statistical significance (statistical significance p<0.05). Results: 41.6% of the questionnaire recipients responded (n=168 graduates). A homogeneous distribution of the graduate groups concerning gender, graduation date, professional experience and average examination grade was achieved. Comparing competencies required at work and taught in medical school, CBL was associated with benefits in “Research competence” (∆+0.6), “Interdisciplinary thinking” (∆+0.47), “Dental medical knowledge” (∆+0.43), “Practical dental skills” (∆+0.21), “Team work” (∆+0.16) and “Independent learning/working” (∆+0.08), whereas “Problem-solving skills” (∆-0.07), “Psycho-social competence” (∆-0.66) and “Business competence” (∆-2

  6. World Jobs.

    ERIC Educational Resources Information Center

    Amirault, Thomas A.

    1995-01-01

    Although jobs in international corporations operating in the United States are not substantially different from those of their domestic counterparts, international job opportunities will be greatest for those who have prepared themselves through education, experience, and travel. (Author/JOW)

  7. Scatter reduction for grid-less mammography using the convolution-based image post-processing technique

    NASA Astrophysics Data System (ADS)

    Marimón, Elena; Nait-Charif, Hammadi; Khan, Asmar; Marsden, Philip A.; Diaz, Oliver

    2017-03-01

    X-ray mammography examinations are highly affected by scattered radiation, as it degrades the quality of the image and complicates the diagnostic process. Anti-scatter grids are currently used in planar mammography examinations as the standard physical scatter-reduction technique. This method has been found to be inefficient, as it increases the dose delivered to the patient, does not remove all the scattered radiation, and increases the price of the equipment. Alternative scatter-reduction methods, based on post-processing algorithms, are being investigated to substitute for anti-scatter grids. Methods such as convolution-based scatter estimation have lately become attractive, as they are quicker and more flexible than pure Monte Carlo (MC) simulations. In this study we make use of this specific method, which is based on the premise that the scatter in the system is spatially diffuse and can thus be approximated by a two-dimensional low-pass convolution filter of the primary image. This algorithm uses the narrow pencil beam method to obtain the scatter kernel used to convolve an image acquired without an anti-scatter grid. The results obtained show an image quality comparable, in the worst case, to the grid image in terms of uniformity and contrast-to-noise ratio. Further improvement is expected when using clinically representative phantoms.
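
    A minimal sketch of convolution-based scatter correction, assuming a Gaussian stands in for the measured pencil-beam kernel and using invented values for the kernel width and scatter-to-primary ratio:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def scatter_correct(image, sigma_px=80, spr=0.4, iterations=3):
        """Iteratively estimate diffuse scatter as a low-pass convolution of the
        current primary estimate, then subtract it from the grid-less image."""
        primary = image.astype(float)
        for _ in range(iterations):
            scatter = spr * gaussian_filter(primary, sigma_px)
            primary = np.clip(image - scatter, 0, None)
        return primary

    corrected = scatter_correct(np.random.poisson(1000, (256, 256)).astype(float))
    ```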

  8. Threshold-Based Random Charging Scheme for Decentralized PEV Charging Operation in a Smart Grid

    PubMed Central

    Kwon, Ojin; Kim, Pilkee; Yoon, Yong-Jin

    2016-01-01

    Smart grids have been introduced to replace conventional power distribution systems without real time monitoring for accommodating the future market penetration of plug-in electric vehicles (PEVs). When a large number of PEVs require simultaneous battery charging, charging coordination techniques have become one of the most critical factors to optimize the PEV charging performance and the conventional distribution system. In this case, considerable computational complexity of a central controller and exchange of real time information among PEVs may occur. To alleviate these problems, a novel threshold-based random charging (TBRC) operation for a decentralized charging system is proposed. Using PEV charging thresholds and random access rates, the PEVs themselves can participate in the charging requests. As PEVs with a high battery state do not transmit the charging requests to the central controller, the complexity of the central controller decreases due to the reduction of the charging requests. In addition, both the charging threshold and the random access rate are statistically calculated based on the average of supply power of the PEV charging system that do not require a real time update. By using the proposed TBRC with a tolerable PEV charging degradation, a 51% reduction of the PEV charging requests is achieved. PMID:28035963

  10. Sound Source Localization for HRI Using FOC-Based Time Difference Feature and Spatial Grid Matching.

    PubMed

    Li, Xiaofei; Liu, Hong

    2013-08-01

    In human-robot interaction (HRI), speech sound source localization (SSL) is a convenient and efficient way to obtain the relative position between a speaker and a robot. However, implementing an SSL system based on the TDOA method encounters many problems, such as noise in real environments, the need to solve nonlinear equations, and the switch between far-field and near-field models. In this paper, the fourth-order cumulant spectrum is derived, and based on it a time delay estimation (TDE) algorithm is proposed that is applicable to speech signals and immune to spatially correlated Gaussian noise. Furthermore, the time difference feature of a sound source and its spatial distribution are analyzed, and a spatial grid matching (SGM) algorithm is proposed for the localization step, which effectively handles several problems that geometric positioning methods face. A valid-feature detection algorithm and a decision tree method are also suggested to improve localization performance and reduce computational complexity. Experiments are carried out in real environments on a mobile robot platform, in which thousands of sets of noisy speech data collected by four microphones are tested in 3D space. The effectiveness of our TDE method and SGM algorithm is verified.
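
    The grid-matching step can be illustrated independently of the paper's fourth-order-cumulant TDE: precompute the time-difference signature of every candidate grid point, then return the point whose signature is closest to the measured delays. The microphone geometry and grid extent below are invented for the example.

    ```python
    # Sketch of spatial grid matching: pick the candidate grid point whose
    # precomputed TDOA signature best matches the measured time differences.
    import numpy as np

    C = 343.0  # speed of sound, m/s
    mics = np.array([[0, 0, 0], [0.2, 0, 0], [0, 0.2, 0], [0, 0, 0.2]])  # 4 mics

    def tdoa_signature(p):
        """Time differences of arrival w.r.t. microphone 0 for a source at p."""
        d = np.linalg.norm(mics - p, axis=1)
        return (d[1:] - d[0]) / C

    # candidate source positions on a coarse 3D grid
    axis = np.linspace(-2.0, 2.0, 21)
    grid = np.array([[x, y, z] for x in axis for y in axis for z in axis])
    signatures = np.array([tdoa_signature(p) for p in grid])

    def localize(measured_tdoas):
        """Return the grid point whose signature best matches the measurement."""
        errors = np.linalg.norm(signatures - measured_tdoas, axis=1)
        return grid[np.argmin(errors)]

    true_pos = np.array([1.0, -0.5, 0.3])
    print(localize(tdoa_signature(true_pos)))  # recovers a nearby grid point
    ```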

  11. High Energy IED measurements with MEMS-based Si grid technology inside a 300 mm Si wafer

    NASA Astrophysics Data System (ADS)

    Funk, Merritt

    2012-10-01

    The measurement of ion energy at the wafer surface for commercial equipment and process development, without extensive modification of the reactor geometry, has been an industry challenge. The base requirements are ion energy measurements that are accurate, contamination free, tolerant of process gases, and valid at high energies over a wide frequency range. In this work we report on the complete system developed to achieve these requirements. The system includes: a reusable silicon ion energy analyzer (IEA) wafer, signal feed-through, RF confinement, and high-voltage measurement and control. The IEA wafer design required careful understanding of the relationships between the plasma Debye length, the number of grids, intergrid charge exchange (spacing), capacitive coupling, materials, and dielectric flashover constraints. RF confinement with measurement transparency was addressed so as not to disturb the chamber plasma, wafer sheath, and DC self-bias, as well as to achieve spectral accuracy. The experimental results were collected using a commercial parallel plate etcher powered by a dual frequency source (VHF + LF). Modeling and simulations also confirmed the details captured in the IED measurements.
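
    As background to how a gridded analyzer yields an IED, the sketch below applies the standard retarding-field analysis, in which the ion energy distribution is proportional to the negative derivative of the collector current with respect to the retarding voltage. The I-V curve here is synthetic, not the authors' data.

    ```python
    # Generic retarding-field analysis sketch (not the authors' calibration):
    # the ion energy distribution is proportional to -dI/dV of the collector
    # current versus retarding voltage.
    import numpy as np

    def ied_from_iv(voltage, current):
        """Ion energy distribution f(E) ~ -dI/dV, with E = e*V (eV scale)."""
        dI_dV = np.gradient(current, voltage)
        return -dI_dV

    # synthetic I-V curve for a ~100 eV beam with a ~5 eV spread
    v = np.linspace(0.0, 200.0, 401)
    i_coll = 0.5 * (1.0 - np.tanh((v - 100.0) / 5.0))  # current cut off near 100 V
    f_e = ied_from_iv(v, i_coll)
    print(f"IED peaks at {v[np.argmax(f_e)]:.1f} eV")   # ~100 eV
    ```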

  12. Job Satisfaction.

    DTIC Science & Technology

    1979-07-01

    well include an "overall, global or unidimensional component" (p 184) but that additional specific factors were also evident, i.e. "job satisfaction is...between a person’s life style and organisational structure. They hypothesised that job satisfaction may be adversely affected if there is any significant...between job satisfaction and an independent life style, and, thirdly, that "job satisfaction is maximised when the individual places a high value

  13. ASCI Grid Services summary report.

    SciTech Connect

    Hiebert-Dodd, Kathie L.

    2004-03-01

    The ASCI Grid Services (initially called Distributed Resource Management) project was started under DisCom{sup 2} when distant and distributed computing was identified as a technology critical to the success of the ASCI Program. The goals of the Grid Services project have been, and continue to be, to provide easy, consistent access to all the ASCI hardware and software resources across the nuclear weapons complex using computational grid technologies; to increase the usability of ASCI hardware and software resources by providing interfaces for resource monitoring, job submission, job monitoring, and job control; and to enable the effective use of high-end computing capability through complex-wide resource scheduling and brokering. In order to increase acceptance of the new technology, the goals included providing these services in both the unclassified and the classified user environments. This paper summarizes the many accomplishments and lessons learned over approximately five years of the ASCI Grid Services project. It also provides suggestions on how to renew/restart the effort for grid services capability when the situation is right for that need.

  14. Job Club.

    ERIC Educational Resources Information Center

    Parsell, Ruth; Thompson, Gretchen

    1979-01-01

    Counselors at the UCLA Placement Center organized the Job Club to develop successful job search techniques with group support, direction, and encouragement. Specific goals were: (a) to provide a forum for sharing; (b) to assist in identifying job-related skills; (c) to provide basic information; (d) to establish guidelines; and (e) to assist decision…

  15. Initial Study on the Predictability of Real Power on the Grid based on PMU Data

    SciTech Connect

    Ferryman, Thomas A.; Tuffner, Francis K.; Zhou, Ning; Lin, Guang

    2011-03-23

    Operations on the electric power grid provide highly reliable power to end users. These operations involve hundreds of human operators and automated control schemes, yet the operations process can often take several minutes to complete. During these minutes, operations are often evaluated against a past state of the power system. Proper prediction methods could change this by making it possible to evaluate operations against the state of the power grid minutes in advance. Such information allows proactive, rather than reactive, actions on the power system and aids in improving the efficiency and reliability of the power grid as a whole. A successful demonstration of this prediction framework is necessary to evaluate the feasibility of utilizing such predicted states in grid operations.
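
    The report does not spell out its prediction algorithm here, but a minimal stand-in shows the general idea of forecasting a PMU real-power series a few steps ahead, using a least-squares autoregressive fit on a synthetic signal.

    ```python
    # Generic autoregressive prediction sketch (not the report's method):
    # fit AR coefficients to a PMU real-power series, then extrapolate.
    import numpy as np

    def fit_ar(series, order=10):
        """Least-squares AR(order) fit: x[t] ~ a . x[t-order:t]."""
        X = np.array([series[t - order:t] for t in range(order, len(series))])
        y = series[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def predict(series, coeffs, steps):
        """Roll the fitted model forward 'steps' samples."""
        hist = list(series)
        order = len(coeffs)
        for _ in range(steps):
            hist.append(float(np.dot(coeffs, hist[-order:])))
        return hist[-steps:]

    # synthetic 1 Hz real-power signal with a slow oscillation
    t = np.arange(600.0)
    power = 100.0 + 5.0 * np.sin(2 * np.pi * t / 120.0)
    a = fit_ar(power, order=10)
    print(predict(power, a, steps=5))  # next 5 samples
    ```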

  16. Grid-based precision aim system and method for disrupting suspect objects

    SciTech Connect

    Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.

    2014-06-10

    A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.

  17. Heterojunction solar cells based on single-crystal silicon with an inkjet-printed contact grid

    NASA Astrophysics Data System (ADS)

    Abolmasov, S. N.; Abramov, A. S.; Ivanov, G. A.; Terukov, E. I.; Emtsev, K. V.; Nyapshaev, I. A.; Bazeley, A. A.; Gubin, S. P.; Kornilov, D. Yu.; Tkachev, S. V.; Kim, V. P.; Ryndin, D. A.; Levchenkova, V. I.

    2017-01-01

    Results on the creation of a current-collecting grid for heterojunction silicon solar cells by ink-jet printing are presented. Characteristics of the obtained solar cells are compared with those of the samples obtained using standard screen printing.

  18. ReSS: A Resource Selection Service for the Open Science Grid

    SciTech Connect

    Garzoglio, Gabriele; Levshina, Tanya; Mhashilkar, Parag; Timm, Steve; /Fermilab

    2008-01-01

    The Open Science Grid offers access to hundreds of computing and storage resources via standard Grid interfaces. Before the deployment of an automated resource selection system, users had to submit jobs directly to these resources. They would manually select a resource and specify all relevant attributes in the job description prior to submitting the job. The necessity of human intervention in resource selection and attribute specification hinders automated job management components from accessing OSG resources, and it is inconvenient for users. The Resource Selection Service (ReSS) project addresses these shortcomings. The system integrates Condor technology, for the core matchmaking service, with the gLite CEMon component, for gathering and publishing resource information in the Glue Schema format. Each of these components communicates over secure protocols via web services interfaces. The system is currently used in production on OSG by the DZero Experiment, the Engagement Virtual Organization, and the Dark Energy Survey. It is also the resource selection service for the Fermilab Campus Grid, FermiGrid. ReSS is considered a lightweight solution to push-based workload management. This paper describes the architecture, performance, and typical usage of the system.
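
    A toy sketch of the Condor-style matchmaking at the heart of such a service: resources publish attribute sets, and the broker evaluates a job's requirements predicate and rank expression against them. The attribute names and values below are invented for the example, not ReSS code.

    ```python
    # Toy ClassAd-style matchmaking: filter resources by a job's requirements
    # predicate, then pick the best-ranked candidate.
    resources = [
        {"name": "site_a", "cpus_free": 120, "memory_mb": 2048, "vo": ["dzero"]},
        {"name": "site_b", "cpus_free": 8,   "memory_mb": 4096, "vo": ["des"]},
        {"name": "site_c", "cpus_free": 300, "memory_mb": 1024, "vo": ["dzero"]},
    ]

    job = {
        "requirements": lambda r: "dzero" in r["vo"] and r["memory_mb"] >= 2048,
        "rank": lambda r: r["cpus_free"],   # prefer the most free CPUs
    }

    def select_resource(job, resources):
        """Return the matching resource with the highest rank, or None."""
        candidates = [r for r in resources if job["requirements"](r)]
        return max(candidates, key=job["rank"]) if candidates else None

    print(select_resource(job, resources)["name"])  # -> site_a
    ```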

  19. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on improving heuristics for optimizing it. However, many obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To solve this problem, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of the processing time tolerance-based constraint features of the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
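
    As a compact illustration of the ACO machinery the paper builds on (not the authors' tolerance-constrained variant), the sketch below uses an operation-based encoding: ants construct job-id sequences under pheromone guidance, and the best makespan found reinforces the trail. The three-job instance is a toy example.

    ```python
    # Compact ACO sketch for a toy JSSP with an operation-based encoding.
    import random

    # each job is an ordered list of (machine, duration) operations
    JOBS = [[(0, 3), (1, 2)], [(1, 2), (0, 4)], [(0, 2), (1, 3)]]
    N_OPS = sum(len(j) for j in JOBS)

    def makespan(seq):
        """Decode a job-id sequence into a schedule; return its makespan."""
        next_op = [0] * len(JOBS)
        job_ready = [0] * len(JOBS)
        mach_ready = {}
        for j in seq:
            m, d = JOBS[j][next_op[j]]
            start = max(job_ready[j], mach_ready.get(m, 0))
            job_ready[j] = mach_ready[m] = start + d
            next_op[j] += 1
        return max(job_ready)

    def ant_solution(tau):
        """Build one feasible sequence, choosing jobs by pheromone weight."""
        remaining = [len(j) for j in JOBS]
        seq = []
        for slot in range(N_OPS):
            jobs = [j for j in range(len(JOBS)) if remaining[j] > 0]
            weights = [tau[slot][j] for j in jobs]
            j = random.choices(jobs, weights=weights)[0]
            seq.append(j)
            remaining[j] -= 1
        return seq

    tau = [[1.0] * len(JOBS) for _ in range(N_OPS)]
    best, best_len = None, float("inf")
    for _ in range(200):                      # iterations
        for _ant in range(10):                # ants per iteration
            seq = ant_solution(tau)
            length = makespan(seq)
            if length < best_len:
                best, best_len = seq, length
        for slot in range(N_OPS):             # evaporate, then reinforce best
            for j in range(len(JOBS)):
                tau[slot][j] *= 0.9
            tau[slot][best[slot]] += 1.0 / best_len
    print(best, best_len)
    ```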

  20. Internet 2 Access Grid.

    ERIC Educational Resources Information Center

    Simco, Greg

    2002-01-01

    Discussion of the Internet 2 Initiative, which is based on collaboration among universities, businesses, and government, focuses on the Access Grid, a Computational Grid that includes interactive multimedia within high-speed networks to provide resources to enable remote collaboration among the research community. (Author/LRW)

  1. Geometric grid generation

    NASA Technical Reports Server (NTRS)

    Ives, David

    1995-01-01

    This paper presents a highly automated hexahedral grid generator based on extensive geometrical and solid modeling operations, developed in response to a vision of a designer-driven, one-day-turnaround CFD process, which implies a designer-driven, one-hour grid generation process.

  2. A Mobile Phone-Based Sensor Grid for Distributed Team Operations

    DTIC Science & Technology

    2010-09-01

    When the grid is breached by a human, animal or machine, the individual phones capture signals generated by the intruders' movements. These signals are...microphone to capture sound in the area. ...secondary objective is to determine if the Bluetooth networks are reliable enough to create an ad hoc network and transfer alerts to a human sentry

  3. GNARE: an environment for Grid-based high-throughput genome analysis.

    SciTech Connect

    Sulakhe, D.; Rodriguez, A.; D'Souza, M.; Wilde, M.; Nefedova, V.; Foster, I.; Maltsev, N.; Mathematics and Computer Science; Univ. of Chicago

    2005-01-01

    Recent progress in genomics and experimental biology has brought exponential growth of the biological information available for computational analysis in public genomics databases. However, applying the potentially enormous scientific value of this information to the understanding of biological systems requires computing and data storage technology of an unprecedented scale. The grid, with its aggregated and distributed computational and storage infrastructure, offers an ideal platform for high-throughput bioinformatics analysis. To leverage this we have developed the Genome Analysis Research Environment (GNARE) - a scalable computational system for the high-throughput analysis of genomes, which provides an integrated database and computational backend for data-driven bioinformatics applications. GNARE efficiently automates the major steps of genome analysis, including acquisition of data from multiple genomic databases; data analysis by a diverse set of bioinformatics tools; and storage of results and annotations. High-throughput computations in GNARE are performed using distributed heterogeneous grid computing resources such as Grid2003, TeraGrid, and the DOE Science Grid. Multi-step genome analysis workflows involving massive data processing, the use of application-specific tools and algorithms, and updating of an integrated database to provide interactive Web access to results are all expressed and controlled by a 'virtual data' model which transparently maps computational workflows to distributed grid resources. This paper describes how Grid technologies such as Globus, Condor, and the GriPhyN virtual data system were applied in the development of GNARE. It focuses on our approach to Grid resource allocation and to the use of GNARE as a computational framework for the development of bioinformatics applications.
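
    A minimal sketch of the workflow idea, assuming a hypothetical four-step genome-analysis pipeline: tasks declare their predecessors and run in dependency order. This stands in for, and is far simpler than, the virtual data model described above.

    ```python
    # Toy dependency-driven workflow (not GNARE's virtual-data language):
    # tasks declare their predecessors; the runner executes them in order.
    from graphlib import TopologicalSorter

    workflow = {
        "fetch_genomes": [],
        "run_blast": ["fetch_genomes"],
        "run_domain_search": ["fetch_genomes"],
        "load_annotations": ["run_blast", "run_domain_search"],
    }

    def run(task):
        print(f"running {task}")  # stand-in for dispatching to a grid resource

    for task in TopologicalSorter(workflow).static_order():
        run(task)
    ```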

  4. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the factors that influence the investment capacity of a power grid, an investment capacity analysis model is built that takes depreciation cost, sales price and sales quantity, net profit, financing, and the GDP of the secondary industry as variables. Kolmogorov-Smirnov tests are carried out to obtain the probability distribution of each influence factor. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.
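
    A minimal Monte Carlo sketch of this procedure, with assumed (not fitted) factor distributions and an assumed linear capacity model standing in for the paper's:

    ```python
    # Monte Carlo propagation sketch; the distributions and the linear
    # capacity model below are illustrative assumptions, not fitted values.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    # illustrative factor distributions (would come from K-S-tested fits)
    net_profit = rng.normal(50.0, 8.0, N)      # arbitrary monetary units
    financing = rng.normal(30.0, 5.0, N)
    depreciation = rng.normal(20.0, 3.0, N)

    # assumed linear relation between factors and investment capacity
    capacity = 0.8 * net_profit + 0.6 * financing + 0.4 * depreciation

    lo, hi = np.percentile(capacity, [5, 95])
    print(f"median {np.median(capacity):.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
    ```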

  5. Creative Job Search Technique

    ERIC Educational Resources Information Center

    Canadian Vocational Journal, 1974

    1974-01-01

    Creative Job Search Technique is based on the premise that most people have never learned how to systematically look for a job. A person who is unemployed can be helped to take a hard look at his acquired skills and relate those skills to an employer's needs. (Author)

  6. Job Placement Handbook.

    ERIC Educational Resources Information Center

    Los Angeles Unified School District, CA. Div. of Career and Continuing Education.

    Designed to serve as a guide for job placement personnel, this handbook is written from the point of view of a school or job preparation facility, based on methodology applicable to the placement function in any setting. Factors identified as critical to a successful placement operation are utilization of a systems approach, establishment of…

  7. Long range Debye-Hückel correction for computation of grid-based electrostatic forces between biomacromolecules

    PubMed Central

    2014-01-01

    Background: Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme.

    Results: We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength.

    Conclusions: An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials. PMID:25045516
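
    A sketch of the screened-Coulomb (Debye-Hückel) pair energy such a correction relies on, with an assumed hybrid scheme: a precomputed grid inside its boundary and the analytic form outside. The units, constants, and the grid_lookup callback are illustrative, not the SDA implementation.

    ```python
    # Screened-Coulomb (Debye-Hueckel) long-range form, in reduced units.
    import numpy as np

    def debye_huckel_energy(q1, q2, r, kappa, eps_r=78.5):
        """U(r) = q1*q2 * exp(-kappa*r) / (eps_r * r), screened Coulomb."""
        return q1 * q2 * np.exp(-kappa * r) / (eps_r * r)

    def pairwise_energy(q1, q2, r, kappa, grid_radius, grid_lookup):
        """Use the precomputed grid inside its radius, the analytic DH
        form outside (grid_lookup is a hypothetical interpolation callback)."""
        if r <= grid_radius:
            return grid_lookup(r)
        return debye_huckel_energy(q1, q2, r, kappa)

    # e.g. at ~150 mM ionic strength the Debye length is roughly 7.8 Angstrom
    print(debye_huckel_energy(+5, -3, r=10.0, kappa=1.0 / 7.8))
    ```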

  8. Comparisons of purely topological model, betweenness based model and direct current power flow model to analyze power grid vulnerability.

    PubMed

    Ouyang, Min

    2013-06-01

    This paper selects three frequently used power grid models, a purely topological model (PTM), a betweenness-based model (BBM), and a direct current power flow model (DCPFM), to describe three different dynamical processes on a power grid under both single and multiple component failures. Each of the dynamical processes is then characterized by both a topology-based and a flow-based vulnerability metric to compare the three models with each other from the vulnerability perspective. Taking as an example the IEEE 300 power grid, with line capacity set proportional to a tolerance parameter tp, the results show nonlinear phenomena: under single node failures, there exists a critical value of tp = 1.36, above which the three models all produce identical topology-based vulnerability results and more than 85% of nodes have identical flow-based vulnerability from any two models; under multiple node failures in which each node fails with an identical failure probability fp, there exists a critical fp = 0.56, above which the three models produce almost identical topology-based vulnerability results at any tp ≥ 1, but producing identical flow-based vulnerability results only occurs at fp = . In addition, the topology-based vulnerability results can provide a good approximation for the flow-based vulnerability under large fp, and whether PTM or BBM better approximates the DCPFM for vulnerability analysis depends mainly on the value of fp. Similar results are also found for other failure types, other system operation parameters, and other power grids.
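
    A minimal purely topological (PTM-style) computation illustrating the multiple-failure setting with failure probability fp. The metric below, the fractional loss of the giant component, is a common topology-based choice, not necessarily the paper's exact metric, and the random graph stands in for a real grid topology.

    ```python
    # PTM-style topological vulnerability: fraction of the giant component
    # lost when each node fails independently with probability fp.
    import random
    import networkx as nx

    def topo_vulnerability(g, fp, rng=random):
        failed = [n for n in g if rng.random() < fp]
        h = g.copy()
        h.remove_nodes_from(failed)
        before = len(max(nx.connected_components(g), key=len))
        after = max((len(c) for c in nx.connected_components(h)), default=0)
        return 1.0 - after / before

    g = nx.erdos_renyi_graph(300, 0.02, seed=1)  # stand-in for a grid topology
    print(topo_vulnerability(g, fp=0.2))
    ```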

  9. Occupational stressors and hypertension: a multi-method study using observer-based job analysis and self-reports in urban transit operators.

    PubMed

    Greiner, Birgit A; Krause, Niklas; Ragland, David; Fisher, June M

    2004-09-01

    This multi-method study aimed to disentangle objective and subjective components of job stressors and determine the role of each for hypertension risk. Because research on job stressors and hypertension has been exclusively based on self-reports of stressors, the tendency of some individuals to use denial and repressive coping might be responsible for the inconclusive results in previous studies. Stressor measures with different degrees of objectivity were contrasted, including (1) an observer-based measure of stressors (job barriers, time pressure) obtained from experts, (2) self-reported frequency and appraised intensity of job problems and time pressures averaged per workplace (group level), (3) self-reported frequency of job problems and time pressures at the individual level, and (4) self-reported appraised intensity of job problems and time pressures at the individual level. The sample consisted of 274 transit operators working on 27 different transit lines and four different vehicle types. Objective stressors (job barriers and time pressure) were each significantly associated with hypertension (casual blood pressure readings and/or currently taking anti-hypertensive medication) after adjustment for age, gender and seniority. Self-reported stressors at the individual level were positively but not significantly associated with hypertension. At the group level, only appraisal of job problems significantly predicted hypertension. In a composite regression model, both observer-based job barriers and self-reported intensity of job problems were independently and significantly associated with hypertension. Associations between self-reported job problems (individual level) and hypertension were dependent on the level of objective stressors. When observer-based stressor level was low, the association between self-reported frequency of stressors and hypertension was high. When the observer-based stressor level was high the association was inverse; this might be

  10. Use of job aids to improve facility-based postnatal counseling and care in rural Benin.

    PubMed

    Jennings, L; Yebadokpo, A; Affo, J; Agbogbe, M

    2015-03-01

    This study examined the effect of a job aids-focused intervention on quality of facility-based postnatal counseling, and whether increased communication improved in-hospital newborn care and maternal knowledge of home practices and danger signs requiring urgent care. Ensuring mothers and newborns receive essential postnatal services, including health counseling, is integral to their survival. Yet, quality of clinic-based postnatal services is often low, and evidence on effective improvement strategies is scarce. Using a pre-post randomized design, data were drawn from direct observations and interviews with 411 mother-newborn pairs. Multi-level regression models with difference-in-differences analyses estimated the intervention's relative effect, adjusting for changes in the comparison arm. The mean percent of recommended messages provided to recently-delivered women significantly improved in the intervention arm as compared to the control (difference-in-differences [∆i - ∆c] +30.9, 95 % confidence interval (CI) 19.3, 42.5), and the proportion of newborns thermally protected within the first hour (∆i - ∆c +33.7, 95 % CI 19.0, 48.4) and delayed for bathing (∆i - ∆c +23.9, 95 % CI 9.4, 38.4) significantly increased. No significant changes were observed in early breastfeeding (∆i - ∆c +6.8, 95 % CI -2.8, 16.4) which was nearly universal. Omitting traditional umbilical cord substances rose slightly, but was insignificant (∆i - ∆c +8.5, 95 % CI -2.8, 19.9). The proportion of mothers with correct knowledge of maternal (∆i - ∆c +27.8, 95 % CI 11.0, 44.6) and newborn (∆i - ∆c +40.3, 95 % CI 22.2, 58.4) danger signs grew substantially, as did awareness of several home-care practices (∆i - ∆c +26.0, 95 % CI 7.7, 44.3). Counseling job aids can improve the quality of postnatal services. However, achieving reduction goals in maternal and neonatal mortality will likely require more comprehensive approaches to link enhanced facility services with

  11. AIRS Observations Based Evaluation of Relative Climate Feedback Strengths on a GCM Grid-Scale

    NASA Astrophysics Data System (ADS)

    Molnar, G. I.; Susskind, J.

    2012-12-01

    Climate feedback strengths, especially those associated with moist processes, still have a rather wide range in GCMs, the primary tools to predict future climate changes associated with man's ever-increasing influences on our planet. Here, we make use of the first 10 years of AIRS observations to evaluate interrelationships/correlations of atmospheric moist parameter anomalies computed from AIRS Version 5 Level-3 products, and demonstrate their usefulness to assess relative feedback strengths. Although one may argue about the possible usability of shorter-term, observed climate parameter anomalies for estimating the strength of various (mostly moist processes related) feedbacks, recent works, in particular analyses by Dessler [2008, 2010], have demonstrated their usefulness in assessing global water vapor and cloud feedbacks. First, we create AIRS-observed monthly anomaly time-series (ATs) of outgoing longwave radiation, water vapor, clouds and temperature profile over a 10-year long (Sept. 2002 through Aug. 2012) period using 1x1 degree resolution (a common GCM grid-scale). Next, we evaluate the interrelationships of ATs of the above parameters with the corresponding 1x1 degree, as well as global surface temperature ATs. The latter provides insight comparable with more traditional climate feedback definitions (e.g., Zelinka and Hartmann, 2012) whilst the former is related to a new definition of "local (in surface temperature too) feedback strengths" on a GCM grid-scale. Comparing the correlation maps generated provides valuable new information on the spatial distribution of relative climate feedback strengths. We argue that for GCMs to be trusted for predicting longer-term climate variability, they should be able to reproduce these observed relationships/metrics as closely as possible. For this time period the main climate "forcing" was associated with the El Niño/La Niña variability (e.g., Dessler, 2010), so these assessments may not be descriptive of longer
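
    A sketch of the anomaly bookkeeping such an analysis rests on: subtract each calendar month's climatology to form anomaly time series, then correlate a grid-cell anomaly series with a global surface-temperature anomaly series. The data below are synthetic placeholders, not AIRS products.

    ```python
    # Monthly-anomaly construction and correlation, on synthetic data.
    import numpy as np

    def monthly_anomalies(series):
        """Subtract the mean of each calendar month (series starts in month 0)."""
        out = series.astype(float).copy()
        for m in range(12):
            out[m::12] -= series[m::12].mean()
        return out

    months = 120                                   # ten years of monthly data
    rng = np.random.default_rng(0)
    cell_wv = rng.normal(size=months)              # e.g. grid-cell water vapor
    global_ts = 0.5 * cell_wv + rng.normal(size=months)  # correlated surrogate

    a1, a2 = monthly_anomalies(cell_wv), monthly_anomalies(global_ts)
    print(np.corrcoef(a1, a2)[0, 1])               # relative feedback proxy
    ```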

  12. Validation of elastic registration algorithms based on adaptive irregular grids for medical applications

    NASA Astrophysics Data System (ADS)

    Franz, Astrid; Carlsen, Ingwer C.; Renisch, Steffen; Wischmann, Hans-Aloys

    2006-03-01

    Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region-of-interest of clinical relevance. Using artificially but realistically deformed images from the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region-of-interest allows the analysis of the registration accuracy to be restricted to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used for identifying the best strategy for the initial placement of the control points.
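
    The quality measure used here, the displacement error averaged over all pixels or over a region-of-interest, is straightforward to state in code; the arrays below are synthetic.

    ```python
    # Mean displacement error between ground-truth and estimated deformation
    # fields, optionally restricted to a region-of-interest mask.
    import numpy as np

    def mean_displacement_error(true_field, est_field, roi=None):
        """Fields have shape (H, W, 2); roi is an optional boolean (H, W) mask."""
        err = np.linalg.norm(true_field - est_field, axis=-1)
        return err[roi].mean() if roi is not None else err.mean()

    true_f = np.zeros((64, 64, 2))
    est_f = true_f + np.random.default_rng(0).normal(0, 0.5, true_f.shape)
    roi = np.zeros((64, 64), dtype=bool)
    roi[16:48, 16:48] = True                 # clinically relevant subregion
    print(mean_displacement_error(true_f, est_f, roi))
    ```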

  13. Grid-based Infrastructure and Distributed Data Mining for Virtual Observatories

    NASA Astrophysics Data System (ADS)

    Karimabadi, H.; Sipes, T.; Ferenci, S.; Fujimoto, R.; Olschanowsky, R.; Balac, N.; Roberts, A.

    2006-12-01

    Data access and the analysis of geographically distributed data sets are challenges common to a wide variety of fields. To address this problem, we have been working on the development of two pieces of technology: IDDat, grid-based software that supports processing and remote data analysis of widely distributed data, and RemoteMiner, a parallel, distributed data mining software. IDDat and RemoteMiner work together seamlessly and provide the necessary backend functionality hidden from the user. The user accesses the system through a single web portal where data selection is performed and data mining functions are planned. The data mining functions are prepared for execution by IDDat services. Preparation can include moving data to the processing location via services built over the Storage Resource Broker (SRB), preprocessing data, and allocating computation and storage resources. IDDat services also initiate and monitor data mining functions and provide services that allow the results to be shared among other users. In this presentation, we illustrate a general user workflow and the provided functionalities. We will also provide an overview of the technical issues and design features such as storage scheduling, efficient network traffic management and resource selection.

  14. Mindfulness-Based Cognitive Therapy for Psychosis: Measuring Psychological Change Using Repertory Grids.

    PubMed

    Randal, Chloe; Bucci, Sandra; Morera, Tirma; Barrett, Moya; Pratt, Daniel

    2016-11-01

    There are an increasing, but limited, number of studies investigating the benefits of mindfulness interventions for people experiencing psychosis. To our knowledge, changes following mindfulness for psychosis have not yet been explored from a personal construct perspective. This study had two main aims: (i) to explore changes in the way a person construes their self, others and their experience of psychosis following a Mindfulness-Based Cognitive Therapy (MBCT) group; and (ii) to replicate the findings of other studies exploring the feasibility and potential benefits of MBCT for psychosis. Sixteen participants, with experience of psychosis, completed an 8-week MBCT group. Participants completed pre-group and post-group assessments including a repertory grid, in addition to a range of outcome measures. There was some evidence of changes in construing following MBCT, with changes in the way participants viewed their ideal self and recovered self, and an indication of increased self-understanding. Improvements were found in participants' self-reported ability to act with awareness and in recovery. This study demonstrates the feasibility and potential benefits of MBCT groups for people experiencing psychosis. Furthermore, it provides some evidence of changes in construal following MBCT that warrant further exploration. Large-scale controlled trials of MBCT for psychosis are needed, as well as studies investigating the mechanisms of change. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Modeling and assessment of civil aircraft evacuation based on finer-grid

    NASA Astrophysics Data System (ADS)

    Fang, Zhi-Ming; Lv, Wei; Jiang, Li-Xue; Xu, Qing-Feng; Song, Wei-Guo

    2016-04-01

    Studying the civil aircraft emergency evacuation process with computer models is an effective approach. In this study, the evacuation of an Airbus A380 is simulated using a Finer-Grid Civil Aircraft Evacuation (FGCAE) model. The model considers the effects of the seat area and other factors on the escape process, as well as pedestrians' "hesitation" before leaving exits, and defines an optimized rule of exit choice. Simulations reproduce typical characteristics of aircraft evacuation, such as movement synchronization between adjacent pedestrians and route choice, and indicate that evacuation efficiency is determined by pedestrians' "preference" and "hesitation". Based on the model, an assessment procedure for aircraft evacuation safety is presented. The assessment, and a comparison with an actual evacuation test, demonstrate that the available-exit setting of "one exit from each exit pair" used in the practical demonstration test is not the worst scenario. The worst scenario is when all exits at one end of the cabin are unavailable; this should receive more attention and could even be adopted in the certification test. The model and method presented in this study could be useful for assessing, validating and improving the evacuation performance of aircraft.
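
    A toy grid-evacuation update step, far simpler than the FGCAE model, but showing two ingredients discussed above: exit-directed movement on a grid and per-step "hesitation". Exit position, hesitation probability, and the unbounded toy floor are assumptions of the example.

    ```python
    # Toy grid evacuation step: each pedestrian moves to the free neighbouring
    # cell closest to an exit, with a probability of hesitating (staying put).
    import random

    EXITS = [(0, 5)]
    HESITATE = 0.1

    def dist_to_exit(cell):
        return min(abs(cell[0] - e[0]) + abs(cell[1] - e[1]) for e in EXITS)

    def step(pedestrians, rng=random):
        occupied = set(pedestrians)
        moved = []
        for (r, c) in pedestrians:
            if rng.random() < HESITATE:          # hesitation: stay this step
                moved.append((r, c))
                continue
            options = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            options = [p for p in options if p not in occupied] + [(r, c)]
            best = min(options, key=dist_to_exit)  # greedy move toward an exit
            occupied.discard((r, c))
            occupied.add(best)
            moved.append(best)
        return moved

    crowd = [(5, 1), (5, 2), (6, 1)]
    for _ in range(10):
        crowd = step(crowd)
    print(crowd)
    ```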

  16. Multifidelity Sparse-Grid-Based Uncertainty Quantification for the Hokkaido Nansei-oki Tsunami

    NASA Astrophysics Data System (ADS)

    de Baar, Jouke H. S.; Roberts, Stephen G.

    2017-08-01

    With uncertainty quantification, we aim to efficiently propagate the uncertainties in the input parameters of a computer simulation, in order to obtain a probability distribution of its output. In this work, we use multi-fidelity sparse grid interpolation to propagate the uncertainty in the shape of the incoming wave for the Okushiri test-case, which is a wave tank model of a part of the 1993 Hokkaido Nansei-oki tsunami. An important issue with many uncertainty quantification approaches is the `curse of dimensionality': the overall computational cost of the uncertainty propagation increases rapidly when we increase the number of uncertain input parameters. We aim to mitigate the curse of dimensionality by using a multifidelity approach. In the multifidelity approach, we combine results from a small number of accurate and expensive high-fidelity simulations with a large number of less accurate but also less expensive low-fidelity simulations. For the Okushiri test-case, we find an improved scaling when we increase the number of uncertain input parameters. This results in a significant reduction of the overall computational cost. For example, for four uncertain input parameters, accurate uncertainty quantification based on only high-fidelity simulations comes at a normalised cost of 219 high-fidelity simulations; when we use a multifidelity approach, this is reduced to a normalised cost of only 10 high-fidelity simulations.
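
    A one-dimensional additive-correction sketch of the multifidelity idea: a few expensive high-fidelity runs estimate the discrepancy from the cheap low-fidelity model, which is then corrected and sampled heavily. The model functions below are synthetic stand-ins for the tsunami solver, and simple interpolation stands in for sparse grid interpolation.

    ```python
    # Additive-correction multifidelity surrogate on synthetic 1D models.
    import numpy as np

    def lofi(x):  return np.sin(2 * x)                 # cheap, biased model
    def hifi(x):  return np.sin(2 * x) + 0.3 * x       # expensive "truth"

    x_hi = np.linspace(0.0, 1.0, 4)                    # only 4 expensive runs
    discrepancy = hifi(x_hi) - lofi(x_hi)

    def mf_model(x):
        """Low-fidelity model plus interpolated high/low discrepancy."""
        return lofi(x) + np.interp(x, x_hi, discrepancy)

    # propagate an input uncertainty through the multifidelity surrogate
    samples = np.random.default_rng(0).uniform(0.0, 1.0, 100_000)
    out = mf_model(samples)
    print(out.mean(), out.std())
    ```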

  17. Location-Aware Dynamic Session-Key Management for Grid-Based Wireless Sensor Networks

    PubMed Central

    Chen, Chin-Ling; Lin, I-Hsien

    2010-01-01

    Security is a critical issue for sensor networks used in hostile environments. When wireless sensor nodes in a wireless sensor network are distributed in an insecure hostile environment, the nodes must be protected: a secret key must be used to protect the messages they transmit. If the nodes are not protected and become compromised, many types of attacks against the network may result. Such is the case with existing schemes, which are vulnerable because they mostly provide a hop-by-hop paradigm that is insufficient to defend against known attacks. We propose a location-aware dynamic session-key management protocol for grid-based wireless sensor networks. The proposed protocol improves the security of the secret key. The proposed scheme also includes a key that is dynamically updated; this dynamic update lowers the probability of the key being guessed correctly, so currently known attacks can be defended against. By utilizing local information, the proposed scheme can also limit the flooding region in order to reduce the energy consumed in discovering routing paths. PMID:22163606
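
    One plausible construction for a location-bound, periodically updated session key (a generic sketch, not the paper's exact protocol): derive each key with an HMAC over the grid-cell coordinates and an epoch counter under a pre-shared master secret, so that keys differ per cell and change at every update epoch.

    ```python
    # Generic location-bound session-key derivation via HMAC-SHA256.
    import hmac
    import hashlib

    MASTER_SECRET = b"pre-shared-master-secret"   # assumed pre-deployed secret

    def session_key(grid_cell, epoch):
        """Derive the session key for one grid cell and key epoch."""
        msg = f"{grid_cell[0]},{grid_cell[1]}|{epoch}".encode()
        return hmac.new(MASTER_SECRET, msg, hashlib.sha256).digest()

    k0 = session_key((4, 7), epoch=0)
    k1 = session_key((4, 7), epoch=1)   # periodic update limits key guessing
    assert k0 != k1
    print(k0.hex()[:16], k1.hex()[:16])
    ```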

  18. Performance evaluation of four grid-based dispersion models in complex terrain

    NASA Astrophysics Data System (ADS)

    Tesche, T. W.; Haney, J. L.; Morris, R. E.

    Four numerical grid-based dispersion models (Mathew/ADPIC, SMOG, Hybrid, and 2DFLOW) were adapted to the Geysers-Calistoga geothermal area in northern California. The models were operated using five intensive meteorological and tracer diffusion data sets collected during the 1981 ASCOT field experiment at the Geysers (three nocturnal drainage and two daytime valley stagnation episodes). The 2DFLOW and Hybrid Models were found to perform best for drainage and limited-mixing conditions, respectively. These two models were subsequently evaluated using data from five 1980 ASCOT drainage experiments. The Hybrid Model was also tested using data from nine limited-mixing and downwash tracer experiments performed at the Geysers prior to the ASCOT program. Overall, the 2DFLOW Model performed best for drainage flow conditions, whereas the Hybrid Model performed best for valley stagnation (limited-mixing) and moderate crossridge wind conditions. To aid new source review studies at the Geysers, a series of source-receptor transfer matrices were generated for several different meteorological regimes under a variety of emission scenarios using the Hybrid Model. These matrices supply ready estimates of cumulative hydrogen sulfide impacts from various geothermal sources in the region.
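
    How such source-receptor transfer matrices are used: receptor impacts are a matrix-vector product of the regime's transfer matrix with the source emission rates. All values below are made up for illustration.

    ```python
    # Cumulative receptor impacts from a source-receptor transfer matrix.
    import numpy as np

    # T[i, j]: concentration at receptor i per unit emission from source j,
    # for one meteorological regime (e.g. nocturnal drainage)
    T = np.array([[0.8, 0.1, 0.0],
                  [0.3, 0.5, 0.2],
                  [0.0, 0.2, 0.9]])
    emissions = np.array([10.0, 4.0, 7.0])   # H2S emission rates per source

    impacts = T @ emissions                  # cumulative impact per receptor
    print(impacts)
    ```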

  20. Optimal RTP Based Power Scheduling for Residential Load in Smart Grid

    NASA Astrophysics Data System (ADS)

    Joshi, Hemant I.; Pandya, Vivek J.

    2015-12-01

    To match supply and demand, shifting load from the peak period to the off-peak period is one of the effective solutions. Presently, a flat-rate tariff is used in most parts of the world. This type of tariff does not give customers an incentive to use electrical energy during the off-peak period. If a real time pricing (RTP) tariff is used, consumers can be encouraged to use energy during the off-peak period. Due to advances in information and communication technology, two-way communication is possible between consumers and the utility. To implement this technique in the smart grid, a home energy controller (HEC), smart meters, a home area network (HAN), and a communication link between consumers and the utility are required. The HEC interacts automatically, running an algorithm to find the optimal energy consumption schedule for each consumer. However, not all consumers are allowed to shift their load simultaneously to the off-peak period, to avoid a rebound peak condition. The peak-to-average ratio (PAR) is considered in the minimization problem, and a linear programming (LP) formulation is used for the minimization. The simulation results of this work show the effectiveness of the minimization method adopted. The hardware work is in progress, and the program based on the method described here will be applied to solve real problems.
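
    A minimal LP sketch of the scheduling step, assuming illustrative hourly prices and a single shiftable appliance. A PAR constraint would add inequality rows bounding each hour's total load; it is omitted here for brevity.

    ```python
    # LP sketch of RTP-based load scheduling: minimise energy cost subject to
    # a total-energy requirement and a per-hour power limit.
    import numpy as np
    from scipy.optimize import linprog

    price = np.array([3.0, 3.2, 2.1, 1.5, 1.4, 2.8])  # $/kWh per hour (assumed)
    total_energy = 6.0                                 # kWh the appliance needs
    per_hour_max = 2.0                                 # kW power limit

    res = linprog(
        c=price,                                       # minimise total cost
        A_eq=np.ones((1, len(price))), b_eq=[total_energy],
        bounds=[(0.0, per_hour_max)] * len(price),
        method="highs",
    )
    print(res.x)   # energy shifted into the cheapest (off-peak) hours
    ```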