Science.gov

Sample records for based grid job

  1. A grid job monitoring system

    SciTech Connect

    Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir; /INFN, Pisa /Pisa, Scuola Normale Superiore

    2010-01-01

    This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow their jobs in detail in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax-powered web interface whose look-and-feel and controls resemble a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509-authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the available information, a user can only view the jobs of the Virtual Organizations (VOs) they belong to. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and use only the Ajax-powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.

  2. A Grid job monitoring system

    NASA Astrophysics Data System (ADS)

    Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir

    2010-04-01

    This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow their jobs in detail in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax-powered web interface whose look-and-feel and controls resemble a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509-authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the available information, a user can only view the jobs of the Virtual Organizations (VOs) they belong to. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and use only the Ajax-powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.

  3. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.

  4. Job scheduling in a heterogeneous grid environment

    SciTech Connect

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Smith, Warren

    2004-02-11

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
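    The migration policies described above weigh system performance, available network bandwidth, and per-job data volume. A minimal sketch of that kind of completion-time cost model is given below; all field names and numbers (`queue_wait_s`, `bandwidth_mbps`, `perf_units_per_s`) are illustrative assumptions, not the paper's algorithm.

```python
def completion_estimate(site, job):
    """Estimated turnaround at a site: queue wait + data transfer + execution."""
    # data volume in MB, link speed in Mbit/s -> seconds of transfer
    transfer_s = (job["input_mb"] + job["output_mb"]) * 8.0 / site["bandwidth_mbps"]
    execute_s = job["work_units"] / site["perf_units_per_s"]
    return site["queue_wait_s"] + transfer_s + execute_s

def pick_site(sites, job):
    """Send the job to whichever available site is expected to finish it soonest."""
    return min((s for s in sites if s["available"]),
               key=lambda s: completion_estimate(s, job))

job = {"input_mb": 800, "output_mb": 200, "work_units": 3600}
sites = [
    {"name": "local", "available": True, "queue_wait_s": 5000,
     "bandwidth_mbps": 1000.0, "perf_units_per_s": 1.0},
    {"name": "remote", "available": True, "queue_wait_s": 100,
     "bandwidth_mbps": 10.0, "perf_units_per_s": 2.0},
]
print(pick_site(sites, job)["name"])  # the remote site wins despite slow transfer
```

    A real policy would also refresh these estimates as queue states and link loads change; the point here is only how the three inputs combine into one migration decision.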

  5. Mediated definite delegation - Certified Grid jobs in ALICE and beyond

    NASA Astrophysics Data System (ADS)

    Schreiner, Steffen; Grigoras, Costin; Litmaath, Maarten; Betev, Latchezar; Buchmann, Johannes

    2012-12-01

    Grid computing infrastructures need to provide traceability and accounting of their users’ activity and protection against misuse and privilege escalation, where the delegation of privileges in the course of a job submission is a key concern. This work describes an improved handling of Multi-user Grid Jobs in the ALICE Grid Services. A security analysis of the ALICE Grid job model is presented with derived security objectives, followed by a discussion of existing approaches of unrestricted delegation based on X.509 proxy certificates and the Grid middleware gLExec. Unrestricted delegation has severe security consequences and limitations, most importantly allowing for identity theft and forgery of jobs and data. These limitations are discussed and formulated, both in general and with respect to an adoption in line with Multi-user Grid Jobs. A new general model of mediated definite delegation is developed, allowing a broker to dynamically process and assign Grid jobs to agents while providing strong accountability and long-term traceability. A prototype implementation allowing for fully certified Grid jobs is presented as well as a potential interaction with gLExec. The achieved improvements regarding system security, malicious job exploitation, identity protection, and accountability are emphasized, including a discussion of non-repudiation in the face of malicious Grid jobs.

  6. Pilot job accounting and auditing in Open Science Grid

    SciTech Connect

    Sfiligoi, Igor; Green, Chris; Quinn, Greg; Thain, Greg; /Wisconsin U., Madison

    2008-06-01

    The Grid accounting and auditing mechanisms were designed under the assumption that users would submit their jobs directly to the Grid gatekeepers. However, many groups are starting to use pilot-based systems, where users submit jobs to a centralized queue, from which the jobs are subsequently transferred to Grid resources by the pilot infrastructure. While this approach greatly improves the user experience, it disrupts the established accounting and auditing procedures. Open Science Grid deploys gLExec on the worker nodes to capture the pilot-related accounting and auditing information, and centralizes the accounting collection with GRATIA.

  7. Grid Service for User-Centric Job

    SciTech Connect

    Lauret, Jerome

    2009-07-31

    The User Centric Monitoring (UCM) project aimed to develop a toolkit that provides a Virtual Organization (VO) with tools to build systems that serve a rich set of intuitive job and application monitoring information to the VO's scientists so that they can be more productive. The tools help collect and serve status and error information through a Web interface. The UCM toolkit is composed of a set of library functions, a database schema, and a Web portal that collects and filters available job monitoring information from various resources and presents it to users in a user-centric rather than an administrator-centric view. The goal is to create a set of tools that can be used to augment grid job scheduling systems, meta-schedulers, applications, and script sets in order to provide the UCM information. The system provides several levels of an application programming interface that is useful throughout the Grid environment and at the application level for logging messages, which are combined with the other user-centric monitoring information in an abstracted "data store". A planned monitoring portal will also dynamically present the information to users in their web browser in a secure manner, and is easily integrated into any JSR-compliant portal deployment that a VO might employ. The UCM is meant to be flexible and modular in the ways it can be adopted, giving the VO many choices to build a solution that works for them, with special attention to smaller VOs that do not have the resources to implement home-grown solutions.

  8. Jobs masonry in LHCb with elastic Grid Jobs

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, Ph

    2015-12-01

    In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit enforced by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the jobs' execution. In many modern interwares, the jobs are actually executed by pilot jobs, which can use the whole available time to run multiple consecutive jobs. If at some point the time remaining in a pilot is too short for the execution of any job, the pilot must be released, even though the remaining time could have been used by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit the computing capabilities available to a pilot, even on resources with limited time capabilities, by adding elasticity to production Monte Carlo (MC) simulation jobs. With our approach, independently of the time available, LHCbDIRAC will always have the possibility to execute an MC job whose length is adapted to the available amount of time: the same job, running on different computing resources with different time limits, will therefore produce different numbers of events. The decision on the number of events to be produced is made just in time at the start of the job, when the capabilities of the resource are known. In order to know how many events an MC job will be instructed to produce, LHCbDIRAC simply requires three values: the CPU-work per event for that type of job, the power of the machine it is running on, and the time left for the job before being killed. Knowing these values, we can estimate the number of events the job will be able to simulate within the available CPU time. This paper will demonstrate that, using this simple but effective solution, LHCb manages to make a more efficient use of
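    The just-in-time event-count decision described above amounts to a one-line calculation from the three inputs the abstract names. The sketch below is a hedged illustration only; the helper name and the 10% safety margin are assumptions, not LHCbDIRAC code.

```python
import math

def events_to_produce(cpu_work_per_event, machine_power, seconds_left,
                      safety_margin=0.10):
    """Number of MC events that fit in the remaining time slot.

    cpu_work_per_event : normalised CPU-work needed to simulate one event
    machine_power      : normalised CPU-work the machine delivers per second
    seconds_left       : wall-clock seconds before the job would be killed
    """
    # total CPU-work budget, minus an assumed margin against being killed
    budget = seconds_left * machine_power * (1.0 - safety_margin)
    return max(0, math.floor(budget / cpu_work_per_event))

# The same job gets a different event target on different resources:
print(events_to_produce(250.0, 10.0, 36000))  # long slot  -> 1296 events
print(events_to_produce(250.0, 10.0, 3600))   # short slot -> 129 events
```

    The elasticity is entirely in the last argument: a pilot nearing its time limit still gets a runnable, smaller job rather than releasing the slot unused.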

  9. Smart Grid Cybersecurity: Job Performance Model Report

    SciTech Connect

    O'Neil, Lori Ross; Assante, Michael; Tobey, David

    2012-08-01

    This is the project report to DOE OE-30 for the completion of Phase 1 of a three-phase project. The report outlines the work done to develop a smart grid cybersecurity certification. This work is being done with the subcontractor NBISE.

  10. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.

  11. Data location-aware job scheduling in the grid. Application to the GridWay metascheduler

    NASA Astrophysics Data System (ADS)

    Delgado Peris, Antonio; Hernandez, Jose; Huedo, Eduardo; Llorente, Ignacio M.

    2010-04-01

    Grid infrastructures nowadays constitute the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have been proposed for the scheduling of grid jobs in the context of very data-intensive applications. We indicate some of the practical problems that such models face and describe what we consider the essential characteristics of an optimal scheduling system: it should aim to minimise not only job turnaround time but also data replication, offer the flexibility to support different virtual organisation requirements, and be able to coordinate data placement and job allocation while keeping their execution decoupled. These ideas have guided the development of an enhanced prototype for GridWay, a general-purpose metascheduler that is part of the Globus Toolkit and a member of EGEE's RESPECT program. GridWay's current scheduling algorithm is unaware of data location. Our prototype makes it possible for job requests to express data needs not only as absolute requirements but also as functions for resource ranking. As our tests show, this makes it more flexible than currently used resource brokers for implementing different data-aware scheduling algorithms.
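    The idea of treating data needs as a ranking function rather than an absolute requirement can be illustrated with a toy rank that blends a resource's performance with the fraction of the job's input files it already hosts. All names and weights below are illustrative assumptions, not GridWay code.

```python
def rank(resource, job, data_weight=0.5):
    """Higher is better: base performance rank, boosted by the fraction
    of the job's input files already present at the resource's site."""
    present = len(job["inputs"] & resource["hosted_files"])
    locality = present / len(job["inputs"]) if job["inputs"] else 1.0
    return (1.0 - data_weight) * resource["perf_rank"] + data_weight * locality

job = {"inputs": {"/lfn/run1.root", "/lfn/run2.root"}}
fast_far  = {"perf_rank": 1.0, "hosted_files": set()}
slow_near = {"perf_rank": 0.4,
             "hosted_files": {"/lfn/run1.root", "/lfn/run2.root"}}

# With equal weighting, the slower site holding the data outranks the
# faster site that would have to replicate everything.
best = max([fast_far, slow_near], key=lambda r: rank(r, job))
```

    A hard requirement corresponds to filtering resources out before ranking; expressing data location as a rank term instead lets each VO tune `data_weight` to its own turnaround/replication trade-off.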

  12. Multicore job scheduling in the Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.

    2015-12-01

    After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created to coordinate the joint effort of experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originating from the scheduling mechanisms, and without imposing unnecessary complexity on the way sites manage their resources. This paper describes the activities and progress of the Task Force on the aforementioned topics, including experiences from key sites on how best to use different batch system technologies, the evolution of workload submission tools by the experiments, and the knowledge gained from scale tests of the different proposed job submission strategies.

  13. Exploring virtualisation tools with a new virtualisation provisioning method to test dynamic grid environments for ALICE grid jobs over ARC grid middleware

    NASA Astrophysics Data System (ADS)

    Wagner, B.; Kileng, B.; Alice Collaboration

    2014-06-01

    The Nordic Tier-1 centre for the LHC is distributed over several computing centres. It uses ARC as the internal computing grid middleware. ALICE uses its own grid middleware, AliEn, to distribute jobs and the necessary application software stack. To reuse most of the AliEn infrastructure and software deployment methods for running ALICE grid jobs on ARC, we are investigating different possible virtualisation technologies. For this purpose, a testbed and a framework for bridging the different middleware systems are under development. They allow us to test a variety of virtualisation methods and software deployment technologies in the form of different virtual machines.

  14. Remote Job Testing for the Neutron Science TeraGrid Gateway

    SciTech Connect

    Lynch, Vickie E; Cobb, John W; Miller, Stephen D; Reuter, Michael A; Smith, Bradford C

    2009-01-01

    Remote job execution gives neutron science facilities access to high performance computing such as the TeraGrid. A scientific community can use community software with a community certificate and account through a common interface of a portal. Results show this approach is successful, but with more testing and problem solving, we expect remote job executions to become more reliable.

  15. The Grid[Way] Job Template Manager, a tool for parameter sweeping

    NASA Astrophysics Data System (ADS)

    Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.

    2011-04-01

    Parameter sweeping is a widely used algorithmic technique in computational science. It is especially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports useful features such as a multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value skipping and automatic indexation of job templates. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and their respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.

    Program summary
    Program title: Grid[Way] Job Template Manager (version 1.0)
    Catalogue identifier: AEIE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Apache license 2.0
    No. of lines in distributed program, including test data, etc.: 3545
    No. of bytes in distributed program, including test data, etc.: 126 879
    Distribution format: tar.gz
    Programming language: Perl 5.8.5 and above
    Computer: Any (tested on PC x86 and x86_64)
    Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
    RAM: 10 MB
    Classification: 6.5
    External routines: The GridWay Metascheduler [1]
    Nature of problem: To parameterize and manage an application running on a grid or cluster.
    Solution method: Generation of job templates as a cross product of

  16. Grid-based HPC astrophysical applications at INAF Catania.

    NASA Astrophysics Data System (ADS)

    Costa, A.; Calanducci, A.; Becciani, U.; Capuzzo Dolcetta, R.

    The research activity on grid computing at INAF Catania has been devoted to two main goals: the integration of a multiprocessor supercomputer (IBM SP4) within the INFN-GRID middleware, and the development of a web portal, Astrocomp-G, for the submission of astrophysical jobs to the grid infrastructure. Most of the existing grid infrastructure is based on commodity hardware, i.e. i386-architecture machines (Intel Celeron, Pentium III, IV, AMD Duron, Athlon) running Linux Red Hat. We were the first institute to integrate a totally different machine, an IBM SP with RISC architecture and AIX OS, as a powerful Worker Node inside a grid infrastructure. We identified and ported to AIX the grid components dealing with job monitoring and execution, and properly tuned the Computing Element to deliver jobs to this special Worker Node. For testing purposes we used MARA, an astrophysical application for the analysis of light curve sequences. Astrocomp-G is a user-friendly front end to our grid site. Users who want to submit the astrophysical applications already available in the portal need to own a valid personal X509 certificate in addition to a username and password released by the grid portal webmaster. The personal X509 certificate is a prerequisite for the creation of a short- or long-term proxy certificate that allows the grid infrastructure services to identify clearly whether the owner of the job has permission to use resources and data. X509 and proxy certificates are part of GSI (Grid Security Infrastructure), a standard security tool adopted by all major grid sites around the world.

  17. Impact of admission and cache replacement policies on response times of jobs on data grids

    SciTech Connect

    Otoo, Ekow J.; Rotem, Doron; Shoshani, Arie

    2003-04-21

    Caching techniques have been used widely to close the performance gaps of storage hierarchies in computing systems. Little is known, however, about the impact of such policies on the response times of jobs that access and process very large files in data grids, particularly when data and computations on the data must be co-located on the same host. In data-intensive applications that access large data files over a wide-area network, such as data grids, the combination of policies for job servicing (or scheduling), caching and cache replacement can significantly impact the performance of grid jobs. We present some preliminary results of a simulation study that combines an admission policy with a cache replacement policy when servicing jobs submitted to a storage resource manager. The results show that, in comparison to a first-come-first-served policy, the response times of jobs are significantly improved, for practical limits of disk cache sizes, when the jobs backlogged waiting for the same files are taken into consideration in scheduling the next file to be retrieved into the disk cache. Not only are the response times of jobs improved, but the metrics for caching policies, such as the hit ratio and the average cost per retrieval, are also improved irrespective of the cache replacement policy.
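    A toy version of the admission idea described above: when choosing the next file to stage into the disk cache, weigh the number of backlogged jobs waiting on each file against its retrieval cost, so that one retrieval unblocks as many jobs as possible. The field names and the jobs-per-MB cost model are illustrative assumptions, not the paper's simulator.

```python
def next_file_to_stage(pending_jobs):
    """pending_jobs: list of (file_name, file_size_mb), one entry per queued job.
    Returns the file whose retrieval unblocks the most jobs per MB staged."""
    waiting = {}
    for name, size in pending_jobs:
        count, _ = waiting.get(name, (0, size))
        waiting[name] = (count + 1, size)
    # benefit (backlogged jobs) per unit retrieval cost (file size)
    return max(waiting, key=lambda f: waiting[f][0] / waiting[f][1])

queue = [("a.dat", 500), ("b.dat", 100), ("b.dat", 100), ("b.dat", 100)]
print(next_file_to_stage(queue))  # "b.dat": 3 jobs / 100 MB beats 1 job / 500 MB
```

    A first-come-first-served policy would stage `a.dat` first; accounting for the backlog serves three jobs with one cheap retrieval, which is the effect the simulation study measures.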

  18. A modified ant colony optimization for the grid job scheduling problem with QoS requirements

    NASA Astrophysics Data System (ADS)

    Pu, Xun; Lu, XianLiang

    2011-10-01

    Job scheduling that honours customers' quality of service (QoS) requirements is challenging in a grid environment. In this paper, we present a modified ant colony optimization (MACO) algorithm for the job scheduling problem in grids. Instead of using the conventional construction approach to build feasible schedules, the proposed algorithm employs a decomposition method to satisfy the customer's deadline and cost requirements. In addition, a new mechanism for updating the state of service instances is embedded to improve the convergence of MACO. Experiments demonstrate the effectiveness of the proposed algorithm.

  19. An ACO Approach to Job Scheduling in Grid Environment

    NASA Astrophysics Data System (ADS)

    Kant, Ajay; Sharma, Arnesh; Agarwal, Sanchit; Chandra, Satish

    Due to recent advances in wide-area network technologies and the low cost of computing resources, grid computing has become an active research area. The efficiency of a grid environment largely depends on the scheduling method it follows. This paper proposes a framework for grid scheduling that uses dynamic information and an ant colony optimization algorithm to improve scheduling decisions. A notion of two types of ants, 'Red Ants' and 'Black Ants', is introduced. The purpose of the Red and Black Ants is explained, and algorithms are developed for optimizing resource utilization. The proposed method performs optimization at two levels and is found to be more efficient than existing methods.
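    A minimal ant-colony scheduling sketch may clarify the general mechanism both ACO records above rely on: ants build candidate schedules with pheromone-weighted random choices, pheromone evaporates, and the best schedule found is reinforced. This does not reproduce the two-level Red/Black Ant scheme; every parameter and formula here is an illustrative assumption.

```python
import random

def aco_schedule(jobs, machines, n_ants=20, rounds=30, rho=0.1, seed=1):
    """Assign each job (name -> CPU work) to a machine (name -> speed),
    trying to keep the makespan (max per-machine load) low."""
    rng = random.Random(seed)
    tau = {(j, m): 1.0 for j in jobs for m in machines}  # pheromone trails
    best, best_span = None, float("inf")
    for _ in range(rounds):
        for _ in range(n_ants):
            load = {m: 0.0 for m in machines}
            plan = {}
            for j in jobs:
                # desirability: pheromone scaled down by projected load
                weights = [tau[(j, m)] / (1.0 + load[m] + jobs[j] / machines[m])
                           for m in machines]
                m = rng.choices(list(machines), weights=weights)[0]
                plan[j] = m
                load[m] += jobs[j] / machines[m]
            span = max(load.values())
            if span < best_span:
                best, best_span = plan, span
        for k in tau:                      # evaporation
            tau[k] *= (1.0 - rho)
        for j, m in best.items():          # reinforce best schedule so far
            tau[(j, m)] += 1.0 / best_span
    return best, best_span

jobs = {"j1": 4.0, "j2": 4.0, "j3": 2.0, "j4": 2.0}  # CPU work per job
machines = {"m1": 1.0, "m2": 1.0}                    # relative speeds
plan, span = aco_schedule(jobs, machines)
```

    For this tiny instance the optimal makespan is 6.0 (one large and one small job per machine); the colony converges on or near it within a few rounds.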

  20. Using ssh and sshfs to virtualize Grid job submission with RCondor

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Dost, J. M.

    2014-06-01

    The HTCondor-based glideinWMS has become the product of choice for exploiting Grid resources in many communities. Unfortunately, its default operational model expects users to log into a machine running an HTCondor schedd before being able to submit their jobs. Many users would instead prefer to use their local workstation for everything. A product that addresses this problem is RCondor, a module delivered with the HTCondor package. RCondor provides command line tools that simulate the behavior of a local HTCondor installation, while using ssh under the hood to execute commands on the remote node instead. RCondor also interfaces with sshfs, virtualizing access to remote files, thus giving the user the impression of a truly local HTCondor installation. This paper presents a detailed description of RCondor, and compares it with the other methods currently available for accessing remote HTCondor schedds.

  1. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
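    The detection step described above can be illustrated with a single level of the Haar transform: detail coefficients are large only where the sampled function varies quickly, and thresholding them flags where grid points should be concentrated. The threshold value and the step-function example below are arbitrary demo choices, not the paper's code.

```python
import math

def haar_details(samples):
    """One Haar decomposition level: a detail coefficient per sample pair.
    The coefficient measures the local high-frequency content."""
    return [(samples[2 * i] - samples[2 * i + 1]) / math.sqrt(2)
            for i in range(len(samples) // 2)]

def refine_flags(samples, threshold):
    """True wherever the local detail is significant, i.e. where a
    wavelet-based generator would cluster grid points."""
    return [abs(d) > threshold for d in haar_details(samples)]

# Piecewise-constant function with a single jump: only the interval
# containing the jump is flagged for refinement.
f = [0.0] * 7 + [1.0] * 9
flags = refine_flags(f, threshold=0.1)
```

    Repeating the decomposition on the smoothed half recovers the multi-scale picture: each level localizes progressively coarser features, which is exactly the scale-per-location information grid generation needs.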

  2. Development of Job-Based Reading Tests

    DTIC Science & Technology

    1982-11-01

    representing the four types of Army job reading tasks identified in prior research: Locating Job Information in an Index, in Tables and Graphs, and in Narrative Descriptions... used as the index of general reading ability. This decision was based on a known correlation of approximately 0.80 between FA and the Metropolitan Reading

  3. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, such as the expected runtime, is defined beforehand by the Production Manager in the best case, and is otherwise fixed to arbitrary default values. LHCb's Workload Management System provides no mechanisms to automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. In the context of multicore jobs this presents a particular problem, since single- and multicore jobs must share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for moving to multicore jobs is the reduction of the overall memory footprint, so it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm is developed that performs history-based prediction. The aim is to learn over time how jobs' runtime and memory consumption evolve in response to changes in experiment conditions and software versions. It will be shown that the estimation can be notably improved if experiment conditions are taken into account.
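    As a deliberately simplified stand-in for the history-based estimator described above, one can group finished jobs by features that correlate with runtime (illustratively, software version and data-taking conditions) and predict from the per-group mean, falling back to the overall mean for unseen groups. All names and numbers below are assumptions for illustration, not LHCb's actual model.

```python
from collections import defaultdict

class HistoryEstimator:
    """Predict a job's runtime from the mean of past jobs with the same
    (version, conditions) features; fall back to the global mean."""

    def __init__(self):
        self.sums = defaultdict(lambda: [0.0, 0])  # key -> [total_s, count]

    def record(self, version, conditions, runtime_s):
        s = self.sums[(version, conditions)]
        s[0] += runtime_s
        s[1] += 1

    def predict(self, version, conditions):
        total, count = self.sums[(version, conditions)]
        if count:
            return total / count
        # unseen feature combination: fall back to the overall mean
        grand_total = sum(v[0] for v in self.sums.values())
        grand_count = sum(v[1] for v in self.sums.values())
        return grand_total / grand_count if grand_count else 0.0

est = HistoryEstimator()
for rt in (3600, 3700, 3500):
    est.record("v42", "2012-pp", rt)
est.record("v42", "2012-PbPb", 9000)
print(est.predict("v42", "2012-pp"))  # mean of the matching history: 3600.0
```

    Requesting the predicted runtime (plus a margin) instead of a fixed default is what reduces the over-requested CPU time the abstract describes; the same grouping idea extends to memory consumption.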

  4. Competitive coevolutionary learning of fuzzy systems for job exchange in computational grids.

    PubMed

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander; Schwiegelshohn, Uwe

    2009-01-01

    In our work, we address the problem of workload distribution within a computational grid. In this scenario, users submit jobs to local high-performance computing (HPC) systems which are, in turn, interconnected so that the exchange of jobs with other sites becomes possible. Providers are able to avoid local execution of jobs by offering them to other HPC sites. In our implementation, this distribution decision is made by a fuzzy system controller whose parameters can be adjusted to establish different exchange behaviors. In such a system, it is essential that HPC sites can only benefit if the workload is equitably (not necessarily equally) portioned among all participants. However, each site egoistically strives only to minimize the response times of its own jobs, often at the expense of other sites. This scenario is particularly suited to the application of a competitive coevolutionary algorithm: the fuzzy systems of the participating HPC sites are modeled as species that evolve in different populations while having to compete within the commonly shared ecosystem. Using real workload traces and grid setups, we show that opportunistic cooperation leads to significant improvements for each HPC site as well as for the overall system.

  5. Grid-based Meteorological and Crisis Applications

    NASA Astrophysics Data System (ADS)

    Hluchy, Ladislav; Bartok, Juraj; Tran, Viet; Lucny, Andrej; Gazak, Martin

    2010-05-01

    forecast model is subject to parameterization and parameter optimization before its real deployment. The parameter optimization requires tens of evaluations of the parameterized model's accuracy, and each evaluation of the model parameters requires re-running hundreds of meteorological situations collected over the years and comparing the model output with the observed data. The architecture and inherent heterogeneity of both examples, their computational complexity, and their interfaces to other systems and services make them well suited for decomposition into a set of web and grid services. Such decomposition has been performed within several projects in which we have participated in cooperation with academia, namely int.eu.grid (a dispersion model deployed as a pilot application on an interactive grid), SEMCO-WS (semantic composition of web and grid services), DMM (development of a significant-meteorological-phenomena prediction system based on data mining), VEGA 2009-2011 and EGEE III. We present useful and practical applications of high-performance computing technologies. The use of grid technology provides access to much higher computational power, not only for modeling and simulation but also for model parameterization and validation. This results in optimized model parameters and more accurate simulation outputs. Given that the simulations are used for aviation, road traffic and crisis management, even a small improvement in the accuracy of predictions may result in a significant improvement in safety as well as cost reduction. We have found grid computing useful for our applications. We are satisfied with this technology, and our experience encourages us to extend its use. Within an ongoing project (DMM) we plan to include the processing of satellite images, which will increase our computational requirements considerably. We believe that thanks to grid computing we will be able to handle the job almost in real time.

  6. Ganga: User-friendly Grid job submission and management tool for LHC and beyond

    NASA Astrophysics Data System (ADS)

    Vanderster, D. C.; Brochu, F.; Cowan, G.; Egede, U.; Elmsheuser, J.; Gaidoz, B.; Harrison, K.; Lee, H. C.; Liko, D.; Maier, A.; Mościcki, J. T.; Muraru, A.; Pajchel, K.; Reece, W.; Samset, B.; Slater, M.; Soroko, A.; Tan, C. L.; Williams, M.

    2010-04-01

    Ganga has been widely used for several years in ATLAS, LHCb and a handful of other communities. Ganga provides a simple yet powerful interface for submitting jobs to a variety of computing backends and managing them. The tool helps users configure applications and keep track of their work. With the major release of version 5 in summer 2008, Ganga's main user-friendly features were strengthened. Examples include a new configuration interface, enhanced support for job collections, bulk operations and easier access to subjobs. In addition to the traditional batch and Grid backends such as Condor, LSF, PBS and gLite/EDG, point-to-point job execution via ssh on remote machines is now supported. Ganga is used as an interactive job submission interface for end-users, and also as a job submission component for higher-level tools: for example, GangaRobot uses it to perform automated, end-to-end testing of distributed data analysis. Ganga comes with an extensive test suite covering more than 350 test cases. The development model involves all active developers in release management shifts, which is an important and novel approach for distributed software collaborations. Ganga 5 is a mature, stable and widely used tool with long-term support from the HEP community.

  7. MrGrid: A Portable Grid Based Molecular Replacement Pipeline

    PubMed Central

    Reboul, Cyril F.; Androulakis, Steve G.; Phan, Jennifer M. N.; Whisstock, James C.; Goscinski, Wojtek J.; Abramson, David; Buckle, Ashley M.

    2010-01-01

    Background The crystallographic determination of protein structures can be computationally demanding, and difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. However, the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. To address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, enabling high-throughput MR. Methodology/Principal Findings MrGrid is a portable web-based application written in Java/JSP and Ruby that takes advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed-up factor of 5.69. Conclusions MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success. PMID:20386612
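    The "embarrassingly parallel" structure of an MR parameter sweep can be sketched with a worker pool: each (search model, space group) permutation is an independent task that can be scored concurrently. The `score_trial` function below is a made-up stand-in for a real MR run (which would launch a crystallography program and parse its score); the model and space-group names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Stand-in for a single MR trial; a real pipeline would run an MR program
# here and parse its figure of merit. The fake score is deterministic.
def score_trial(model, space_group):
    return (model, space_group, len(model) + len(space_group))

models = ["homolog_A", "homolog_B"]
space_groups = ["P212121", "C2", "P1"]

# Every permutation is independent, so a pool can evaluate them concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda t: score_trial(*t),
                            product(models, space_groups)))

best = max(results, key=lambda r: r[2])   # keep the highest-scoring trial
```

A grid deployment such as MrGrid replaces the thread pool with job submission to remote nodes, but the fan-out/collect shape of the computation is the same.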

  8. Arc Length Based Grid Distribution For Surface and Volume Grids

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1996-01-01

    Techniques are presented for distributing grid points on parametric surfaces and in volumes according to a specified distribution of arc length. Interpolation techniques are introduced which permit a given distribution of grid points on the edges of a three-dimensional grid block to be propagated through the surface and volume grids. Examples demonstrate how these methods can be used to improve the quality of grids generated by transfinite interpolation.
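    The core arc-length distribution step can be illustrated numerically: sample a parametric curve densely, accumulate chord lengths, then invert the arc-length function to place grid points at equal spacing. This is a simplified sketch of the idea, not the paper's interpolation techniques; the curve and helper names are made up.

```python
import math

def arclength_distribute(curve, n_pts, n_samples=2000):
    """Place n_pts points on a parametric curve at equal arc-length spacing.
    curve maps t in [0, 1] to a point (x, y)."""
    # Sample densely and accumulate chord lengths to approximate s(t).
    ts = [i / n_samples for i in range(n_samples + 1)]
    pts = [curve(t) for t in ts]
    s = [0.0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        s.append(s[-1] + math.hypot(x1 - x0, y1 - y0))
    total = s[-1]
    # Invert s(t) by linear interpolation to find t at each target length.
    out, j = [], 0
    for k in range(n_pts):
        target = total * k / (n_pts - 1)
        while j < n_samples and s[j + 1] < target:
            j += 1
        seg = s[j + 1] - s[j]
        frac = 0.0 if seg == 0 else (target - s[j]) / seg
        t = ts[j] + frac * (ts[j + 1] - ts[j])
        out.append(curve(t))
    return out

# Quarter circle: equal arc length means equal angular spacing.
quarter = lambda t: (math.cos(t * math.pi / 2), math.sin(t * math.pi / 2))
pts = arclength_distribute(quarter, 5)
```

Propagating an edge distribution through a surface or volume grid applies the same one-dimensional redistribution along each family of grid lines.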

  9. Final Report for 'An Abstract Job Handling Grid Service for Dataset Analysis'

    SciTech Connect

    David A Alexander

    2005-07-11

    For Phase I of the Job Handling project, Tech-X has built a Grid service for processing analysis requests, as well as a Graphical User Interface (GUI) client that uses the service. The service is designed to generically support High-Energy Physics (HEP) experimental analysis tasks. It has an extensible, flexible, open architecture and language. The service uses the Solenoidal Tracker At RHIC (STAR) experiment as a working example. STAR is an experiment at the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory (BNL). STAR and other experiments at BNL generate multiple Petabytes of HEP data. The raw data is captured as millions of input files stored in a distributed data catalog. Potentially using thousands of files as input, analysis requests are submitted to a processing environment containing thousands of nodes. The Grid service provides a standard interface to the processing farm. It enables researchers to run large-scale, massively parallel analysis tasks, regardless of the computational resources available in their location.

  10. Space-based Science Operations Grid Prototype

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Welch, Clara L.; Redman, Sandra

    2004-01-01

    Grid technology is an emerging technology that enables widely disparate services to be offered to users economically, in an easy-to-use form, and on a scale not previously available. Under the Grid concept, disparate organizations, generally defined as "virtual organizations", can share services, i.e. the discipline-specific computer applications required to accomplish their scientific and engineering goals and objectives. Grid technology has been enabled by the evolution of increasingly high-speed networking; without that evolution, Grid technology would not have emerged. NASA/Marshall Space Flight Center's (MSFC) Flight Projects Directorate, Ground Systems Department is developing a Space-based Science Operations Grid prototype to provide scientists and engineers the tools necessary to operate space-based science payloads/experiments and to conduct public and educational outreach. In addition, Grid technology can provide new services not currently available to users, including mission voice and video, application sharing, telemetry management and display, payload and experiment commanding, data mining, high-order data processing, discipline-specific application sharing, and data storage, all from a single Grid portal. The Prototype will provide most of these services in a first-step demonstration of integrated Grid and space-based science operations technologies. It will initially be based on the International Space Station science operational services located at the Payload Operations Integration Center at MSFC, but can be applied to many NASA projects, including free-flying satellites and future projects. 
The Prototype will use the Internet2 Abilene Research and Education Network that is currently a 10 Gb backbone network to reach the University of Alabama at Huntsville and several other, as yet unidentified, Space Station based

  11. Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering

    SciTech Connect

    Kim, Dong Sik; Lee, Sanggyun

    2013-06-15

    Purpose: Grid artifacts appear when the antiscatter grid is used in obtaining digital x-ray images. In this paper, grid artifact reduction techniques are studied, especially for direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. To minimize the damage caused by the filters used to suppress the grid artifacts, grids rotated with respect to the sampling direction are employed, and min-max optimization problems are established for finding optimal grid frequencies and angles for given sampling frequencies. The authors then propose grid artifact reduction algorithms based on band-stop as well as low-pass filters. Results: The proposed algorithms are experimentally tested on digital x-ray images obtained from direct detectors with rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.
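    The log-transform step at the heart of homomorphic filtering can be illustrated on synthetic data: a multiplicative grid pattern becomes additive after the logarithm, where a band-stop filter at the (assumed known) grid frequency removes it before exponentiating back. This is a minimal sketch, not the authors' optimized filter design; the image model, grid frequency and bandwidth below are invented for illustration.

```python
import numpy as np

def homomorphic_grid_suppress(img, grid_freq, bw=0.02):
    """Suppress a multiplicative grid pattern:
    log -> frequency-domain band-stop at the grid frequency -> exp."""
    logimg = np.log(img)                          # multiplicative -> additive
    F = np.fft.fft2(logimg)
    fy = np.fft.fftfreq(img.shape[0])[:, None]    # vertical frequencies
    mask = np.abs(np.abs(fy) - grid_freq) > bw    # stop band at +/- grid_freq
    return np.exp(np.real(np.fft.ifft2(F * mask)))

# Synthetic image: smooth "anatomy" modulated by a grid at 0.2 cycles/pixel.
n = 120
y, x = np.mgrid[0:n, 0:n]
anatomy = 1.0 + 0.5 * np.exp(-((x - 60) ** 2 + (y - 60) ** 2) / 800.0)
img = anatomy * (1.0 + 0.3 * np.cos(2 * np.pi * 0.2 * y))
clean = homomorphic_grid_suppress(img, grid_freq=0.2)
```

Because the filter acts on the logarithm, a narrow stop band removes the grid's fundamental regardless of local image brightness, which is the advantage the abstract describes over filtering the raw image directly.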

  12. Space-based Operations Grid Prototype

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Welch, Clara L.

    2003-01-01

    The Space-based Operations Grid is intended to integrate the "high end" network services and compute resources that a remote payload investigator needs. This includes integrating and enhancing existing services such as access to telemetry, payload commanding, payload planning and internet voice distribution, as well as adding services such as video conferencing, collaborative design, modeling or visualization, text messaging, application sharing, and access to existing compute or data grids. Grid technology addresses some of the greatest challenges and opportunities presented by current technology trends: how to take advantage of ever-increasing bandwidth, how to manage virtual organizations, and how to deal with the increasing threats to information technology security. We discuss the pros and cons of using grid technology in space-based operations and share current plans for the prototype. It is hoped that the prototype can incorporate early on many of the existing and future services discussed above, offered to cooperating International Space Station Principal Investigators both nationally and internationally.

  13. Expected-Credibility-Based Job Scheduling for Reliable Volunteer Computing

    NASA Astrophysics Data System (ADS)

    Watanabe, Kan; Fukushi, Masaru; Horiguchi, Susumu

    This paper proposes an expected-credibility-based job scheduling method for volunteer computing (VC) systems that include malicious participants who return erroneous results. Credibility-based voting is a promising approach to guaranteeing the computational correctness of VC systems. However, it relies on a simple round-robin job scheduling method that does not consider the jobs' order of execution, resulting in numerous unnecessary job allocations and degraded VC system performance. To improve performance, the proposed job scheduling method dynamically selects the job to execute next based on two novel metrics: the expected credibility and the expected number of results for each job. Simulations of VCs show that the proposed method can improve VC system performance by up to 11%; it always outperforms the original round-robin method irrespective of unknown parameters such as the population and behavior of saboteurs.
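    The scheduling idea can be sketched under simplifying assumptions: result credibilities combine as 1 - prod(1 - c_i), and the scheduler prefers the unfinished job that a new worker's result would push closest past the acceptance threshold. The paper's actual metrics and voting rules are richer than this toy version; the class and threshold below are illustrative.

```python
# Toy model of expected-credibility scheduling (assumed simplifications:
# independent results, credibility combined as 1 - prod(1 - c_i)).
class Job:
    def __init__(self, jid):
        self.jid = jid
        self.results = []            # credibilities of matching results so far

    def credibility(self):
        p = 1.0
        for c in self.results:
            p *= (1.0 - c)
        return 1.0 - p

def select_job(jobs, worker_cred, threshold=0.999):
    """Pick the unfinished job with the highest expected credibility after
    the new worker's result is added; round-robin would ignore this."""
    open_jobs = [j for j in jobs if j.credibility() < threshold]
    if not open_jobs:
        return None
    expected = lambda j: 1.0 - (1.0 - j.credibility()) * (1.0 - worker_cred)
    return max(open_jobs, key=expected)

jobs = [Job(0), Job(1), Job(2)]
jobs[1].results = [0.9]              # one result from a 90%-credible host
jobs[2].results = [0.99, 0.99]       # past the threshold -> treated as done
next_job = select_job(jobs, worker_cred=0.8)   # favors the nearly done Job 1
```

Concentrating results on nearly finished jobs is what reduces the "unnecessary job allocations" the abstract refers to.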

  14. Technology for a NASA Space-Based Science Operations Grid

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.

    2003-01-01

    This viewgraph representation presents an overview of a proposal to develop a space-based operations grid in support of space-based science experiments. The development of such a grid would provide a dynamic, secure and scalable architecture based on standards and next-generation reusable software and would enable greater science collaboration and productivity through the use of shared resources and distributed computing. The authors propose developing this concept for use on payload experiments carried aboard the International Space Station. Topics covered include: grid definitions, portals, grid development and coordination, grid technology and potential uses of such a grid.

  15. Cartesian-cell based grid generation and adaptive mesh refinement

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1993-01-01

    Viewgraphs on Cartesian-cell based grid generation and adaptive mesh refinement are presented. Topics covered include: grid generation; cell cutting; data structures; flow solver formulation; adaptive mesh refinement; and viscous flow.

  16. Grid based calibration of SWAT hydrological models

    NASA Astrophysics Data System (ADS)

    Gorgan, D.; Bacu, V.; Mihon, D.; Rodila, D.; Abbaspour, K.; Rouholahnejad, E.

    2012-07-01

    The calibration and execution of large hydrological models such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution, and huge input data, require not only long execution times but also substantial computational resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web-based solution that lets environmental specialists calibrate extensive hydrological models and run scenarios, while hiding the complex control of processes and heterogeneous resources across the grid-based high-performance computing infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation covers the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the results obtained demonstrate the benefits brought by the grid parallel and distributed environment as a processing platform. All instances of the SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  17. Grid-based platform for training in Earth Observation

    NASA Astrophysics Data System (ADS)

    Petcu, Dana; Zaharie, Daniela; Panica, Silviu; Frincu, Marc; Neagul, Marian; Gorgan, Dorian; Stefanut, Teodor

    2010-05-01

    found in [4]. The Workload Management System (WMS) provides two types of resource managers: the first will be based on Condor HTC and will use Condor as a job manager for task dispatching and worker nodes (for development purposes), while the second will use GT4 GRAM (for production purposes). The WMS main component, the Grid Task Dispatcher (GTD), is responsible for the interaction with other internal services, such as the composition engine, in order to facilitate access to the processing platform. Its main responsibilities are to receive tasks from the workflow engine or directly from the user interface, to use a task description language (the ClassAd meta-language in the case of Condor HTC) for job units, to submit and check the status of jobs inside the workload management system, and to retrieve job logs for debugging purposes. More details can be found in [4]. A particular component of the platform is eGLE, the eLearning environment. It provides the functionality necessary to create the visual appearance of lessons through the use of visual containers such as tools, patterns and templates. The teacher uses the platform for testing already created lessons, as well as for developing new lesson resources, such as new images and workflows describing graph-based processing. The students execute the lessons or describe and experiment with new workflows or different data. The eGLE database includes several workflow-based lesson descriptions, teaching materials and lesson resources, and selected satellite and spatial data. More details can be found in [5]. A first training event using the platform was organized in September 2009 during the 11th SYNASC symposium (links to the demos, testing interface, and exercises are available on the project site [1]). The eGLE component was presented at the 4th GPC conference in May 2009. Moreover, the functionality of the platform will be presented as a demo in April 2010 at the 5th EGEE User Forum. 
References: [1] GiSHEO consortium, Project site, http

  18. Design of a Grid Service-based Platform for In Silico Protein-Ligand Screenings

    PubMed Central

    Levesque, Marshall J.; Ichikawa, Kohei; Date, Susumu; Haga, Jason H.

    2009-01-01

    Grid computing offers the powerful alternative of sharing resources on a worldwide scale, across different institutions to run computationally intensive, scientific applications without the need for a centralized supercomputer. Much effort has been put into development of software that deploys legacy applications on a grid-based infrastructure and efficiently uses available resources. One field that can benefit greatly from the use of grid resources is that of drug discovery since molecular docking simulations are an integral part of the discovery process. In this paper, we present a scalable, reusable platform to choreograph large virtual screening experiments over a computational grid using the molecular docking simulation software DOCK. Software components are applied on multiple levels to create automated workflows consisting of input data delivery, job scheduling, status query, and collection of output to be displayed in a manageable fashion for further analysis. This was achieved using Opal OP to wrap the DOCK application as a grid service and PERL for data manipulation purposes, alleviating the requirement for extensive knowledge of grid infrastructure. With the platform in place, a screening of the ZINC 2,066,906 compound “druglike” subset database against an enzyme's catalytic site was successfully performed using the MPI version of DOCK 5.4 on the PRAGMA grid testbed. The screening required 11.56 days laboratory time and utilized 200 processors over 7 clusters. PMID:18771812

  19. A Judgement-Based Framework for Analysing Adult Job Choices

    ERIC Educational Resources Information Center

    Athanasou, James A.

    2004-01-01

    The purpose of this paper is to introduce a judgement-based framework for adult job and career choices. This approach is set out as a perceptual-judgemental-reinforcement approach. Job choice is viewed as cognitive acquisition over time and is epitomised by a learning process. Seven testable assumptions are derived from the model. (Contains 1…

  20. Job optimization in ATLAS TAG-based distributed analysis

    NASA Astrophysics Data System (ADS)

    Mambelli, M.; Cranshaw, J.; Gardner, R.; Maeno, T.; Malon, D.; Novak, M.

    2010-04-01

    The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure leverages the Grid to tackle the analysis across large samples by organizing data into a hierarchical structure and exploiting distributed computing to churn through the computations. This includes events at different stages of processing: RAW, ESD (Event Summary Data), AOD (Analysis Object Data), DPD (Derived Physics Data). Event Level Metadata Tags (TAGs) contain information about each event stored using multiple technologies accessible by POOL and various web services. This allows users to apply selection cuts on quantities of interest across the entire sample to compile a subset of events that are appropriate for their analysis. This paper describes new methods for organizing jobs using the TAGs criteria to analyze ATLAS data. It further compares different access patterns to the event data and explores ways to partition the workload for event selection and analysis. Here analysis is defined as a broader set of event processing tasks including event selection and reduction operations ("skimming", "slimming" and "thinning") as well as DPD making. Specifically it compares analysis with direct access to the events (AOD and ESD data) to access mediated by different TAG-based event selections. We then compare different ways of splitting the processing to maximize performance.
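    The TAG-mediated access pattern described above can be sketched in miniature: apply selection cuts to event-level metadata first, then group the surviving event references by the file that holds them, so each job reads only the files it needs. The field names and values below are hypothetical, not the actual ATLAS TAG schema.

```python
# Hypothetical event-level metadata records (not the real TAG schema).
tags = [
    {"event": 1, "file": "AOD.0001", "met": 45.0, "n_jets": 3},
    {"event": 2, "file": "AOD.0001", "met": 12.0, "n_jets": 1},
    {"event": 3, "file": "AOD.0002", "met": 88.0, "n_jets": 5},
    {"event": 4, "file": "AOD.0003", "met": 60.0, "n_jets": 2},
]

def select(tags, met_cut, njet_cut):
    """Apply selection cuts on the metadata, not the event data itself."""
    return [t for t in tags if t["met"] > met_cut and t["n_jets"] >= njet_cut]

def partition_by_file(selected):
    """Group surviving event references by file to define per-file job units."""
    jobs = {}
    for t in selected:
        jobs.setdefault(t["file"], []).append(t["event"])
    return jobs

picked = select(tags, met_cut=40.0, njet_cut=2)
jobs = partition_by_file(picked)
# -> {"AOD.0001": [1], "AOD.0002": [3], "AOD.0003": [4]}
```

Splitting the workload along file boundaries like this is one of the partitioning strategies whose performance the paper compares against direct AOD/ESD access.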

  1. Spatial data grid based on CDN

    NASA Astrophysics Data System (ADS)

    Hu, XiaoGuang; Zhu, Xinyan; Li, Deren

    2008-12-01

    This paper first introduces the spatial data grid and CDN (Content Delivery Network) technology, and then explains the significance of integrating the grid with a CDN. On this basis, it proposes a method of constructing a spatial data grid system that uses a CDN to support massive spatial data online services. Finally, OPNET simulation results show that the scheme can indeed improve system performance and reduce response time to a large extent.

  2. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    DOE PAGES

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-01-01

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.
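    The agent-based flavor of such a simulator can be conveyed with a toy example: each house is a hysteresis-thermostat cooling load updating its own state, and the feeder load at each time step is the sum over the agents. This is a deliberately minimal sketch, not GridLAB-D's actual device models; all parameters and the thermal model are invented.

```python
import random
random.seed(1)

class House:
    """Toy agent: a cooling load driven by a hysteresis thermostat."""
    def __init__(self):
        self.temp = random.uniform(18.0, 24.0)   # indoor temperature, deg C
        self.on = False                          # air-conditioner state

    def step(self, outdoor=30.0, setpoint=21.0, band=1.0, kw=3.0):
        # Hysteresis: switch on above the band, off below it.
        if self.temp > setpoint + band:
            self.on = True
        elif self.temp < setpoint - band:
            self.on = False
        drift = 0.1 * (outdoor - self.temp)      # heat gain from outside
        self.temp += drift - (1.5 if self.on else 0.0)
        return kw if self.on else 0.0            # power drawn this step

houses = [House() for _ in range(100)]
# Feeder load time series emerges from independent agent state updates.
feeder_load = [sum(h.step() for h in houses) for _ in range(50)]
```

Aggregate behavior such as load diversity and synchronization emerges from the agents rather than being prescribed, which is the core argument for the agent-based approach.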

  3. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    SciTech Connect

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-06-23

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  4. A Grid-based solution for management and analysis of microarrays in distributed experiments

    PubMed Central

    Porro, Ivan; Torterolo, Livia; Corradi, Luca; Fato, Marco; Papadimitropoulos, Adam; Scaglione, Silvia; Schenone, Andrea; Viti, Federica

    2007-01-01

    Several systems have been presented in recent years to manage the complexity of large microarray experiments. Although good results have been achieved, most systems fall short in one or more areas. A Grid-based approach can provide a shared, standardized and reliable solution for storage and analysis of biological data, in order to maximize the results of experimental efforts. A Grid framework has therefore been adopted, owing to the need to remotely access large amounts of distributed data as well as to scale computational performance for terabyte datasets. Two different biological studies have been planned in order to highlight the benefits that can emerge from our Grid-based platform. The described environment relies on storage services and computational services provided by the gLite Grid middleware. The Grid environment also exploits the added value of metadata, letting users better classify and search experiments. A state-of-the-art Grid portal has been implemented in order to hide the complexity of the framework from end users and to let them easily access the available services and data. The functional architecture of the portal is described. As a first test of system performance, a gene expression analysis was performed on a dataset of Affymetrix GeneChip® Rat Expression Array RAE230A, from the ArrayExpress database. The analysis comprises three steps: (i) group opening and image set uploading, (ii) normalization, and (iii) model-based gene expression (based on the PM/MM difference model). Two Linux versions (sequential and parallel) of the dChip software have been developed to implement the analysis and have been tested on a cluster. The results show that parallelizing the analysis process and executing parallel jobs on distributed computational resources actually improves performance. Moreover, the Grid environment has been tested both against the possibility of

  5. Fine-grained authorization for job execution in the Grid : design and implementation.

    SciTech Connect

    Keahey, K.; Welch, V.; Lang, S.; Liu, B.; Meder, S.; Mathematics and Computer Science; Univ. of Chicago; Univ. of Houston

    2004-04-25

    In this paper, we describe our work on enabling fine-grained authorization for resource usage and management. We address the need of virtual organizations to enforce their own policies, in addition to those of the resource owners, with regard to both resource consumption and job management. To implement this design, we propose changes and extensions to the Globus Toolkit version 2 resource management mechanism. We describe the prototype and the policy language that we have designed to express fine-grained policies, and present an analysis of our solution.
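    The layered-policy idea can be sketched as a conjunction of predicates: a job request is permitted only if both the VO's policy and the resource owner's policy allow it. The predicates and request fields below are toy stand-ins, far simpler than the paper's policy language.

```python
# Illustrative two-layer authorization check (hypothetical fields/limits).
def allowed(request, vo_policy, site_policy):
    """A request must satisfy the VO policy AND the resource owner's policy."""
    return vo_policy(request) and site_policy(request)

vo_policy   = lambda r: r["role"] in {"production", "analysis"}  # VO rule
site_policy = lambda r: r["cpu_hours"] <= 48                     # owner rule

ok = allowed({"role": "production", "cpu_hours": 24},
             vo_policy, site_policy)
too_long = allowed({"role": "production", "cpu_hours": 500},
                   vo_policy, site_policy)
```

Keeping the two policy layers separate lets the VO tighten or relax its own rules without any change on the resource owner's side, which is the flexibility the paper's design aims at.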

  6. Feature combination analysis in smart grid based using SOM for Sudan national grid

    NASA Astrophysics Data System (ADS)

    Bohari, Z. H.; Yusof, M. A. M.; Jali, M. H.; Sulaima, M. F.; Nasir, M. N. M.

    2015-12-01

    In the investigation of power grid security, cascading failure under multi-contingency situations has been a challenge because of its topological unpredictability and computational expense. Both system analyses and load-ranking methods have their limits. In this work, an integrated approach based on Self-Organizing Maps (SOM) combines spatial-feature (distance)-based clustering with electrical attributes (load) to evaluate the vulnerability and cascading effect of various component sets in the power grid. Using the clustering result from SOM, sets of heavily loaded initial victim nodes are selected to construct attack schemes and assess the subsequent cascading effect of their failures; this SOM-based approach effectively identifies more vulnerable sets of substations than conventional load ranking and other clustering strategies. The robustness of power grids is a central topic in the design of the so-called "smart grid". In this paper we analyze measures of the importance of the nodes in a power grid under cascading failure. With these efforts, we can identify the most vulnerable nodes and protect them, improving the safety of the power grid, and we can also assess whether a given structure is suitable for a power grid.
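    The SOM clustering step can be illustrated with a tiny self-organizing map grouping synthetic substations by position and load. This is a toy one-dimensional map over three features, not the study's actual feature set, map size or training schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=4, iters=3000, lr0=0.5, sigma0=2.0):
    """Train a 1-D SOM: pull the best-matching unit and its lattice
    neighbors toward each sampled point, shrinking the neighborhood."""
    w = data[rng.integers(0, len(data), n_units)].astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))  # best-matching unit
        frac = 1.0 - t / iters                            # decay schedule
        lr, sigma = lr0 * frac, sigma0 * frac + 0.1
        d = np.abs(np.arange(n_units) - bmu)              # lattice distance
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))          # neighborhood kernel
        w += lr * h[:, None] * (x - w)
    return w

# Two synthetic substation groups, features (x, y, load) - invented numbers.
a = rng.normal([0.0, 0.0, 10.0], 0.5, size=(30, 3))
b = rng.normal([50.0, 50.0, 90.0], 0.5, size=(30, 3))
data = np.vstack([a, b])
units = train_som(data)   # unit weights settle near the two groups
```

In the study, clusters of heavily loaded, spatially close substations identified this way become the candidate victim sets for the cascading-failure analysis.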

  7. CaGrid Workflow Toolkit: A taverna based workflow tool for cancer grid

    PubMed Central

    2010-01-01

    Background In the biological and medical domains, the use of web services has made data and computational functionality accessible in a unified manner, helping automate data pipelines that were previously operated manually. Workflow technology is widely used in the orchestration of multiple services to facilitate in-silico research. The Cancer Biomedical Informatics Grid (caBIG) is an information network enabling the sharing of cancer research related resources, and caGrid is its underlying service-based computation infrastructure. caBIG requires that services be composed and orchestrated in a given sequence to realize data pipelines, which are often called scientific workflows. Results caGrid selected Taverna as its workflow execution system of choice due to its integration with web service technology, its support for a wide range of web services, and its plug-in architecture allowing easy integration of third-party extensions. The caGrid Workflow Toolkit (or the toolkit for short), an extension to the Taverna workflow system, is designed and implemented to ease building and running caGrid workflows. It supports users in the various phases of using workflows: service discovery, composition and orchestration, data access, and secure service invocation, which the caGrid community has identified as challenging in a multi-institutional and cross-discipline domain. Conclusions By extending the Taverna Workbench, the caGrid Workflow Toolkit provides a comprehensive solution to compose and coordinate services in caGrid, which would otherwise remain isolated and disconnected from each other. Using it, users can access more than 140 services and are offered a rich set of features including discovery of data and analytical services, query and transfer of data, security protections for service invocations, state management in service interactions, and sharing of workflows, experiences and best practices. The proposed solution is general enough to be

  8. Optimizing Resource Utilization in Grid Batch Systems

    NASA Astrophysics Data System (ADS)

    Gellrich, Andreas

    2012-12-01

    On Grid sites, the requirements that computing tasks (jobs) place on computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to POSIX UID/GID according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then makes it possible to distinguish jobs, provided users request VOMS proxies as planned by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. Batch systems (queuing system and scheduler) at Grid sites can be set up and configured based on these considerations, although scaling limits were observed with the MAUI scheduler. In tests, these limitations could be overcome with a home-made scheduler.
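    The mapping idea can be sketched in a few lines: the VO and VOMS role carried by the proxy determine which UID pool and queue a job lands in, so CPU-bound production jobs and data-intensive analysis jobs can be steered to differently configured nodes. The pool and queue names below are illustrative, not a real site configuration.

```python
# Hypothetical VOMS-attribute-to-queue mapping (names are made up).
def map_job(vo, voms_role):
    """Route a job by its VOMS role: production -> CPU-bound MC queue,
    anything else -> data-intensive analysis queue."""
    if voms_role == "production":
        return {"uid_pool": f"{vo}prd", "queue": f"{vo}-mc"}
    return {"uid_pool": f"{vo}usr", "queue": f"{vo}-analysis"}

job = map_job("atlas", "production")   # lands in the Monte Carlo queue
```

In a real deployment this mapping is realized by the grid-mapfile/LCMAPS layer and the batch system's queue configuration rather than application code.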

  9. Pilot factory - a Condor-based system for scalable Pilot Job generation in the Panda WMS framework

    NASA Astrophysics Data System (ADS)

    Chiu, Po-Hsiang; Potekhin, Maxim

    2010-04-01

    The Panda Workload Management System is designed around the concept of the Pilot Job - a "smart wrapper" for the payload executable that can probe the environment on the remote worker node before pulling down the payload from the server and executing it. This design allows for improved logging and monitoring capabilities as well as flexibility in workload management. In a Grid environment such as the Open Science Grid, Panda Pilot Jobs are submitted to remote sites via mechanisms that ultimately rely on Condor-G. As our experience has shown, when a large number of Panda jobs are simultaneously routed to a particular remote site, the increased load on the cluster head node caused by Pilot Job submission may limit overall scalability. We have developed a Condor-inspired solution to this problem that uses a schedd-based glidein, whose mission is to redirect pilots to the native batch system. Once a glidein schedd is installed and running, it can be used exactly like a local schedd; from the user's perspective, Pilots submitted through it are therefore quite similar to jobs submitted to the local Condor pool.
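    The pilot pattern itself is simple to sketch: probe the worker node first, then ask a server for a payload matching the probed environment and run it. This is the pattern in miniature, not the actual Panda pilot protocol; the `server` lookup table and payload names are stubs invented for illustration.

```python
import platform
import shutil

def probe_environment():
    """Probe the worker node before committing to a payload."""
    return {"os": platform.system(),
            "has_python3": shutil.which("python3") is not None}

def pull_payload(env, server):
    """Return the first payload whose requirements the environment meets.
    A real pilot would contact the Panda server; `server` is a stub here."""
    for requires, payload in server:
        if all(env.get(k) == v for k, v in requires.items()):
            return payload
    return None

server = [({"os": "Linux"}, lambda: "ran-linux-payload"),
          ({},              lambda: "ran-generic-payload")]  # catch-all

env = probe_environment()
payload = pull_payload(env, server)
result = payload() if payload else "no-payload"
```

Late-binding the payload to the probed environment is what gives pilots their logging, monitoring and matchmaking flexibility; the glidein schedd in the paper changes only how pilots reach the worker nodes, not this pattern.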

  10. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever-increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grid cells, or a coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of TINs compared to grid-based methods in the context of large watershed simulations, within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high-performance-computing watershed simulator.

  11. ISS Space-Based Science Operations Grid for the Ground Systems Architecture Workshop (GSAW)

    NASA Technical Reports Server (NTRS)

    Welch, Clara; Bradford, Bob

    2003-01-01

    Contents include the following: What is a grid? Benefits of a grid to space-based science operations. Our approach. Scope of prototype grid. The security question. Short-term objectives. Long-term objectives. Space-based services required for operations. The prototype. Scope of prototype grid. Prototype service layout. Space-based science grid service components.

  12. Grid-based electronic structure calculations: The tensor decomposition approach

    SciTech Connect

    Rakhuba, M.V.; Oseledets, I.V.

    2016-05-01

    We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.
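    The linear scaling claimed above can be illustrated with a minimal sketch: a rank-R separable function f(x_i, y_j, z_k) = Σ_r u_r(x_i) v_r(y_j) w_r(z_k) needs only 3·R·n numbers instead of the n³ of a full 3-D grid. The functions and rank below are hypothetical toy data, not the paper's orbitals.

```python
import math

n, R = 64, 2
# Two separable terms built from decaying exponentials (illustrative only).
xs = [i / n for i in range(n)]
u = [[math.exp(-a * x) for x in xs] for a in (1.0, 2.0)]
v = [[math.exp(-a * y) for y in xs] for a in (1.5, 0.5)]
w = [[math.exp(-a * z) for z in xs] for a in (0.7, 1.1)]

def f(i, j, k):
    """Evaluate the rank-R function at grid point (i, j, k) in O(R) time."""
    return sum(u[r][i] * v[r][j] * w[r][k] for r in range(R))

storage_lowrank = 3 * R * n   # numbers held in the low-rank format
storage_full = n ** 3         # a full 3-D grid would need this many
print(storage_lowrank, storage_full)
```

Doubling the one-dimensional grid size n doubles `storage_lowrank` but multiplies `storage_full` by eight, which is why fine grids such as 8192³ become affordable in the low-rank format.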

  13. Team Primacy Concept (TPC) Based Employee Evaluation and Job Performance

    ERIC Educational Resources Information Center

    Muniute, Eivina I.; Alfred, Mary V.

    2007-01-01

    This qualitative study explored how employees learn from Team Primacy Concept (TPC) based employee evaluation and how they use the feedback in performing their jobs. TPC based evaluation is a form of multirater evaluation, during which the employee's performance is discussed by one's peers in a face-to-face team setting. The study used Kolb's…

  14. Supersampling method for efficient grid-based electronic structure calculations.

    PubMed

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-07

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression, allowing for large grid spacing, is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the sinc filtering function performs best because, as an ideal low-pass filter, it cleanly cuts out the high-frequency region beyond that allowed by a given grid spacing.
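    The ideal low-pass behaviour described above can be demonstrated in one dimension: components above the Nyquist frequency of the target grid spacing are cut out exactly, while representable components pass untouched. This is a hedged sketch of the filtering principle only, not the paper's supersampling scheme; the signal and cutoff are illustrative choices.

```python
import numpy as np

N = 256                              # fine "supersampling" grid
x = np.arange(N) / N
coarse_factor = 8                    # target grid is 8x coarser
cutoff = N // (2 * coarse_factor)    # Nyquist index of the coarse grid

# One mode the coarse grid can represent (k=3) and one it cannot (k=40).
signal = np.sin(2 * np.pi * 3 * x) + 0.5 * np.sin(2 * np.pi * 40 * x)

F = np.fft.rfft(signal)
F[cutoff + 1:] = 0.0                 # ideal (sinc) low-pass: hard frequency cut
filtered = np.fft.irfft(F, n=N)
# 'filtered' now contains only the k=3 mode; the k=40 mode is removed exactly.
```

Sampling `filtered` every `coarse_factor` points then gives alias-free values on the coarse grid, which is the effect that suppresses the egg-box variation.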

  15. SoilGrids250m: Global gridded soil information based on machine learning.

    PubMed

    Hengl, Tomislav; Mendes de Jesus, Jorge; Heuvelink, Gerard B M; Ruiperez Gonzalez, Maria; Kilibarda, Milan; Blagotić, Aleksandar; Shangguan, Wei; Wright, Marvin N; Geng, Xiaoyuan; Bauer-Marschallinger, Bernhard; Guevara, Mario Antonio; Vargas, Rodrigo; MacMillan, Robert A; Batjes, Niels H; Leenaars, Johan G B; Ribeiro, Eloi; Wheeler, Ichsani; Mantel, Stephan; Kempen, Bas

    2017-01-01

    This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods (random forest and gradient boosting and/or multinomial logistic regression) as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) considerable investments in preparing finer-resolution covariate layers and (3) the insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods

  16. SoilGrids250m: Global gridded soil information based on machine learning

    PubMed Central

    Hengl, Tomislav; Mendes de Jesus, Jorge; Heuvelink, Gerard B. M.; Ruiperez Gonzalez, Maria; Kilibarda, Milan; Blagotić, Aleksandar; Shangguan, Wei; Wright, Marvin N.; Geng, Xiaoyuan; Bauer-Marschallinger, Bernhard; Guevara, Mario Antonio; Vargas, Rodrigo; MacMillan, Robert A.; Batjes, Niels H.; Leenaars, Johan G. B.; Ribeiro, Eloi; Wheeler, Ichsani; Mantel, Stephan; Kempen, Bas

    2017-01-01

    This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods (random forest and gradient boosting and/or multinomial logistic regression) as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) considerable investments in preparing finer-resolution covariate layers and (3) the insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of

  17. Deploying web-based visual exploration tools on the grid

    SciTech Connect

    Jankun-Kelly, T.J.; Kreylos, Oliver; Shalf, John; Ma, Kwan-Liu; Hamann, Bernd; Joy, Kenneth; Bethel, E. Wes

    2002-02-01

    We discuss a web-based portal for the exploration, encapsulation, and dissemination of visualization results over the Grid. This portal integrates three components: an interface client for structured visualization exploration, a visualization web application to manage the generation and capture of the visualization results, and a centralized portal application server to access and manage grid resources. We demonstrate the usefulness of the developed system using an example for Adaptive Mesh Refinement (AMR) data visualization.

  18. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    SciTech Connect

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K.

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
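    The cell-by-cell exploration in decreasing likelihood order can be sketched with a best-first search: starting from the peak, always visit the most likely unvisited neighbouring cell and stop when nothing above the threshold remains. This is a hedged toy sketch of the idea only (serial, axis-aligned neighbours, a Gaussian toy log-likelihood); the real Snake code is parallel and more sophisticated.

```python
import heapq

def snake(loglike, start, threshold):
    """Explore integer grid cells in decreasing log-likelihood order."""
    visited = {start}
    heap = [(-loglike(start), start)]   # max-heap via negated values
    out = {}
    while heap:
        negL, cell = heapq.heappop(heap)
        if -negL < threshold:
            break                        # everything left is below threshold
        out[cell] = -negL
        for d in range(len(cell)):       # push axis-aligned neighbours
            for s in (-1, 1):
                nb = tuple(c + s * (i == d) for i, c in enumerate(cell))
                if nb not in visited:
                    visited.add(nb)
                    heapq.heappush(heap, (-loglike(nb), nb))
    return out

# 2-D Gaussian toy log-likelihood; only cells with log L >= -2 are kept.
ll = lambda c: -0.5 * (c[0] ** 2 + c[1] ** 2)
cells = snake(ll, (0, 0), threshold=-2.0)
print(len(cells))
```

Cells with negligible likelihood are never expanded, which is exactly how the method sidesteps the curse of dimensionality relative to a brute-force grid scan.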

  19. Grist : grid-based data mining for astronomy

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden; Nichol, Robert

    2004-01-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

  20. Advances in Distance-Based Hole Cuts on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Pandya, Shishir A.

    2015-01-01

    An automatic and efficient method to determine appropriate hole cuts based on distances to the wall and donor stencil maps for overset grids is presented. A new robust procedure is developed to create a closed surface triangulation representation of each geometric component for accurate determination of the minimum hole. Hole boundaries are then displaced away from the tight grid-spacing regions near solid walls to allow grid overlap to occur away from the walls where cell sizes from neighboring grids are more comparable. The placement of hole boundaries is efficiently determined using a mid-distance rule and Cartesian maps of potential valid donor stencils with minimal user input. Application of this procedure typically results in a spatially-variable offset of the hole boundaries from the minimum hole with only a small number of orphan points remaining. Test cases on complex configurations are presented to demonstrate the new scheme.

  1. Market-Based Indian Grid Integration Study Options: Preprint

    SciTech Connect

    Stoltenberg, B.; Clark, K.; Negi, S. K.

    2012-03-01

    The Indian state of Gujarat is forecasting solar and wind generation expansion from 16% to 32% of installed generation capacity by 2015. Some states in India are already experiencing heavy wind power curtailment. Understanding how to integrate variable generation (VG) into the grid is of great interest to local transmission companies and India's Ministry of New and Renewable Energy. First, this paper describes the nature of a market-based integration study and why this approach, while new to Indian grid operation and planning, is necessary to understand how to operate and expand the grid to best accommodate the expansion of VG. Second, it discusses options in defining a study's scope, such as data granularity, generation modeling, and geographic scope. Finally, the paper explores how Gujarat's method of grid operation and current system reliability will affect how an integration study can be performed.

  2. Computer-Based Job Aiding: Problem Solving at Work.

    DTIC Science & Technology

    1984-01-01

    Keywords: technical literacy, problem solving, computer-based job aiding, computer-based instruction, discourse processes. Abstract fragments: ...although those notions are operationalized in a new way. Information search in technical literacy as problem solving: the dimensions of... In computer-assisted technical literacy, information-seeking strategies employed during an assembly task were analyzed in terms of overall group frequencies

  3. Research on the comparison of extension mechanism of cellular automaton based on hexagon grid and rectangular grid

    NASA Astrophysics Data System (ADS)

    Zhai, Xiaofang; Zhu, Xinyan; Xiao, Zhifeng; Weng, Jie

    2009-10-01

    Historically, a cellular automaton (CA) is a discrete dynamical mathematical structure defined on a spatial grid. Research on cellular automata systems (CAS) has focused on rule sets and initial conditions and has rarely discussed adjacency. Thus, the main focus of our study is the effect of adjacency on CA behavior. This paper compares rectangular grids with hexagonal grids in terms of their characteristics, strengths, and weaknesses. These have great influence on modeling effects and other applications, including the role of the nearest neighborhood in experimental design. Our research shows that rectangular and hexagonal grids have different characteristics and are suited to distinct applications; the regular rectangular or square grid is used more often than the hexagonal grid, but their relative merits have not been widely discussed. The rectangular grid is generally preferred because of its symmetry, especially in orthogonal coordinate systems, and because of the frequent use of rasters in Geographic Information Systems (GIS). However, for complex terrain and uncertain, multidirectional regions, we prefer hexagonal grids and methods, which facilitate and simplify the problem. Hexagonal grids can overcome directional warp and have some unique characteristics. For example, hexagonal grids have a simpler and more symmetric nearest neighborhood, which avoids the ambiguities of rectangular grids. Movement paths and connectivity, together with the most compact arrangement of pixels, give hexagonal grids a clear advantage in modeling and analysis. The selection of an appropriate grid should be based on the requirements and objectives of the application. We use rectangular and hexagonal grids, respectively, to develop a city model. We make use of remote sensing images to acquire the 2002 and 2005 land states of Wuhan. Based on the city's land state in 2002, we use the CA to simulate a reasonable form of the city in 2005. Hereby, these results provide a proof of
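    The neighborhood difference at the heart of the comparison can be made concrete with axial coordinates: a hexagonal cell has exactly six equidistant neighbours, while a square cell has four edge neighbours plus four more distant diagonal ones, which is the source of the rectangular-grid ambiguity. A minimal sketch (the coordinate convention is one common choice, not taken from the paper):

```python
# Axial (q, r) coordinates for a hexagonal grid: six symmetric directions.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
# A square grid splits its neighbourhood into two distance classes.
SQUARE_EDGE = [(1, 0), (-1, 0), (0, 1), (0, -1)]
SQUARE_DIAG = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def hex_neighbors(q, r):
    """All six neighbours of hex cell (q, r), each at the same distance."""
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

print(len(hex_neighbors(0, 0)))             # six equidistant neighbours
print(len(SQUARE_EDGE) + len(SQUARE_DIAG))  # eight, at two distances
```

A CA transition rule on the hexagonal grid can therefore treat all neighbours uniformly, whereas on the square grid it must decide between the 4-cell (von Neumann) and 8-cell (Moore) neighbourhoods.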

  4. Software-Based Challenges of Developing the Future Distribution Grid

    SciTech Connect

    Stewart, Emma; Kiliccote, Sila; McParland, Charles

    2014-06-01

    distribution grid modeling, and measured data sources are a key missing element. Modeling tools need to be calibrated against measured grid data to validate their output under varied conditions such as high renewables penetration and rapidly changing topology. In addition, establishing a standardized data modeling format would enable users to transfer data among tools to take advantage of different analysis features.

  5. Constructing the ASCI computational grid

    SciTech Connect

    BEIRIGER,JUDY I.; BIVENS,HUGH P.; HUMPHREYS,STEVEN L.; JOHNSON,WILBUR R.; RHEA,RONALD E.

    2000-06-01

    The Accelerated Strategic Computing Initiative (ASCI) computational grid is being constructed to interconnect the high performance computing resources of the nuclear weapons complex. The grid will simplify access to the diverse computing, storage, network, and visualization resources, and will enable the coordinated use of shared resources regardless of location. To match existing hardware platforms, required security services, and current simulation practices, the Globus MetaComputing Toolkit was selected to provide core grid services. The ASCI grid extends Globus functionality by operating as an independent grid, incorporating Kerberos-based security, interfacing to Sandia's Cplant™, and extending job monitoring services. To fully meet ASCI's needs, the architecture layers distributed work management and criteria-driven resource selection services on top of Globus. These services simplify the grid interface by allowing users to simply request "run code X anywhere". This paper describes the initial design and prototype of the ASCI grid.

  6. A Cartesian grid-based unified gas kinetic scheme

    NASA Astrophysics Data System (ADS)

    Chen, Songze; Xu, Kun

    2014-12-01

    A Cartesian grid-based unified gas kinetic scheme is developed. In this approach, any oriented boundary in a Cartesian grid is represented by many directional boundary points. The numerical flux is evaluated on each boundary point. Then, a boundary flux interpolation method (BFIM) is constructed to distribute the boundary effect to the flow evolution on regular Cartesian grid points. The BFIM provides a general strategy to implement any kind of boundary condition on Cartesian grid. The newly developed technique is implemented in the unified gas kinetic scheme, where the scheme is reformulated into a finite difference format. Several typical test cases are simulated with different geometries. For example, the thermophoresis phenomenon for a plate with infinitesimal thickness immersed in a rarefied flow environment is calculated under different orientations on the same Cartesian grid. These computational results validate the BFIM in the unified scheme for the capturing of different thermal boundary conditions. The BFIM can be extended to the moving boundary problems as well.

  7. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide a large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  8. A grid service-based active thermochemical table framework.

    SciTech Connect

    von Laszewski, G.; Ruscic, B.; Wagstrom, P.; Krishnan, S.; Amin, K.; Nijsure, S.; Bittner, S.; Pinzon, R.; Hewson, J. C.; Morton, M. L.; Minkoff, M.; Wagner, A.; SNL

    2002-01-01

    In this paper we report our work on the integration of existing scientific applications using Grid Services. We describe a general architecture that provides access to these applications via Web services-based application factories. Furthermore, we demonstrate how such services can interact with each other.

  9. A geometry-based adaptive unstructured grid generation algorithm for complex geological media

    NASA Astrophysics Data System (ADS)

    Bahrainian, Seyed Saied; Dezfuli, Alireza Daneh

    2014-07-01

    In this paper a novel unstructured grid generation algorithm is presented that considers the effect of geological features and well locations on grid resolution. The proposed grid generation algorithm presents a strategy for the definition and construction of an initial grid based on the geological model, geometry adaptation of geological features, and grid resolution control. The algorithm is applied to the seismotectonic map of the Masjed-i-Soleiman reservoir. Comparison of the grid results with the “Triangle” program shows a more suitable permeability contrast. Immiscible two-phase flow solutions are presented for a fractured porous media test case using different grid resolutions. A grid adapted to the fracture geometry gave results identical to those of a fine grid, while requiring 88.2% less CPU time than the solutions obtained on the fine grid.

  10. GRID based Thermal Images Processing for volcanic activity monitoring

    NASA Astrophysics Data System (ADS)

    Mangiagli, S.; Coco, S.; Drago, L.; Laudani, A.,; Lodato, L.; Pollicino, G.; Torrisi, O.

    2009-04-01

    evolution. Clearly, the analysis of this amount of data requires a lot of CPU and storage resources, and this represents a serious limitation that can overwhelm the performance capability of a single workstation. Fortunately, the INGV and the University of Catania are involved in a project to develop a GRID infrastructure (a virtual supercomputer created from a network of independent, geographically dispersed computing clusters which act like a grid) and the software for this GRID. The performance of the VTA can be improved by using the GRID thanks to its kernel, which is designed to analyze each thermal image independently of the others; consequently, the different parts of the same computation job can run on a multiplicity of machines. In particular, the VTA grid version has been conceived by treating the application as a Directed Acyclic Graph (DAG): the analysis task is first subdivided across the largest number of machines available, and another part of the program then aggregates the results. Consequently, porting this software to the GRID environment greatly enhances the VTA's capabilities, allowing us to perform faster and multiple analyses on huge sets of data, proving it a really useful instrument for scientific research.
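    The fan-out/fan-in DAG structure described above can be sketched locally: each image is analysed independently in parallel, and a final node aggregates the per-image results. This is an illustrative sketch only; `analyze` is a hypothetical stand-in for the real per-image thermal analysis, and a thread pool stands in for the distributed GRID workers.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(image):
    """Per-image DAG node: here simply the hottest pixel in the image."""
    return max(max(row) for row in image)

def aggregate(results):
    """Final DAG node: combine the independent per-image results."""
    return max(results)

# Three tiny 2x2 "thermal images" (hypothetical pixel temperatures).
images = [[[20, 35], [30, 41]],
          [[22, 28], [55, 33]],
          [[19, 25], [24, 31]]]

with ThreadPoolExecutor(max_workers=3) as pool:
    per_image = list(pool.map(analyze, images))   # parallel fan-out

print(per_image, aggregate(per_image))            # fan-in
```

Because the per-image tasks share no state, the same structure maps directly onto independent jobs on separate GRID machines.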

  11. Fast Outlier Detection Using a Grid-Based Algorithm.

    PubMed

    Lee, Jihwan; Cho, Nam-Wook

    2016-01-01

    As one of many data mining techniques, outlier detection aims to discover outlying observations that deviate substantially from the remainder of the data. Recently, the Local Outlier Factor (LOF) algorithm has been successfully applied to outlier detection. However, due to its computational complexity, the application of the LOF algorithm to large, high-dimensional data has been limited. The aim of this paper is to propose a grid-based algorithm that reduces the computation time required by the LOF algorithm to determine the k-nearest neighbors. The algorithm divides the data space into a smaller number of regions, called a "grid", and calculates the LOF value of each grid cell. To examine the effectiveness of the proposed method, several experiments incorporating different parameters were conducted. The proposed method demonstrated a significant computation-time reduction with predictable and acceptable trade-off errors. The proposed methodology was then successfully applied to real database transaction logs of the Korea Atomic Energy Research Institute. As a result, we show that for a very large dataset, the grid-LOF can be considered an acceptable approximation of the original LOF. Moreover, it can also be effectively used for real-time outlier detection.
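    The core trick, replacing per-point k-NN searches with per-cell computations, can be sketched in a few lines. Note the score below is a simplified density ratio against neighbouring cells, a hedged stand-in for the paper's actual grid-LOF formula; the data and cell size are hypothetical.

```python
from collections import Counter

def grid_outlier_scores(points, cell_size):
    """Score each occupied cell: >1 means sparser than its surroundings."""
    cells = Counter((int(x // cell_size), int(y // cell_size)) for x, y in points)
    scores = {}
    for (cx, cy), count in cells.items():
        # Average population of the 8 surrounding cells (empty cells count as 0).
        nb = [cells.get((cx + dx, cy + dy), 0)
              for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        avg = sum(nb) / 8.0
        scores[(cx, cy)] = (avg + 1.0) / (count + 1.0)
    return scores

# A dense cluster near the origin plus one isolated point (toy data).
pts = [(0.1 * i, 0.1 * j) for i in range(5) for j in range(5)] + [(9.0, 9.0)]
s = grid_outlier_scores(pts, cell_size=1.0)
print(s[(9, 9)] > s[(0, 0)])   # the isolated cell scores as more outlying
```

The cost is proportional to the number of occupied cells rather than to the squared number of points, which is the source of the speed-up the paper reports.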

  12. Grid-Based Fourier Transform Phase Contrast Imaging

    NASA Astrophysics Data System (ADS)

    Tahir, Sajjad

    Low contrast in x-ray attenuation imaging between different materials of low electron density is a limitation of traditional x-ray radiography. Phase contrast imaging offers the potential to improve the contrast between such materials, but due to the requirements on the spatial coherence of the x-ray beam, practical implementation of such systems with tabletop (i.e. non-synchrotron) sources has been limited. One recently developed phase imaging technique employs multiple fine-pitched gratings. However, the strict manufacturing tolerances and precise alignment requirements have limited the widespread adoption of grating-based techniques. In this work, we have investigated a technique recently demonstrated by Bennett et al. that utilizes a single grid of much coarser pitch. Our system consisted of a low-power 100 µm spot Mo source, a CCD with 22 µm pixel pitch, and either a focused mammography linear grid or a stainless steel woven mesh. Phase is extracted from a single image by windowing and comparing data localized about harmonics of the grid in the Fourier domain. A Matlab code was written to perform the image processing. For the first time, the effects on the diffraction phase contrast and scattering amplitude images of varying grid types and periods, of varying the window function type used to separate the harmonics, and of the window widths were investigated. Using the wire mesh, derivatives of the phase along two orthogonal directions were obtained, and new methods were investigated to form improved phase contrast images.
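    The Fourier-domain windowing step can be illustrated in one dimension: a grid of period P imprints a harmonic at index N/P, and windowing that harmonic and shifting it back to DC recovers the local modulation envelope. This is a hedged sketch of the principle only; the rectangular window, its width, and the toy envelope are illustrative choices, not the system's actual processing.

```python
import numpy as np

N, P = 512, 8
x = np.arange(N)
# A slowly varying "sample" envelope modulated by a grid of period P.
envelope = 1.0 + 0.3 * np.exp(-((x - N / 2) ** 2) / (2 * 60.0 ** 2))
signal = envelope * (1.0 + 0.5 * np.cos(2 * np.pi * x / P))

F = np.fft.fft(signal)
k0 = N // P                        # index of the grid's first harmonic
half = k0 // 2                     # rectangular window half-width (a choice)
win = np.zeros_like(F)
win[k0 - half:k0 + half + 1] = F[k0 - half:k0 + half + 1]

demod = np.fft.ifft(np.roll(win, -k0))   # shift harmonic to DC, then invert
amplitude = 2.0 * np.abs(demod)          # recovered modulation envelope
```

The recovered `amplitude` tracks `0.5 * envelope`, so spatial variations of the grid modulation (and, in 2-D, its phase) become directly accessible.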

  13. The biometric-based module of smart grid system

    NASA Astrophysics Data System (ADS)

    Engel, E.; Kovalev, I. V.; Ermoshkina, A.

    2015-10-01

    Within the Smart Grid concept, a flexible biometric-based module based on Principal Component Analysis (PCA) and a selective neural network is developed. To form the selective neural network, the biometric-based module uses a method with three main stages: preliminary processing of the image, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective neural network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance compared to some existing subspace-based methods.

  14. Performance-based contracting: turning vocational policy into jobs.

    PubMed

    Gates, Lauren B; Klein, Suzanne W; Akabas, Sheila H; Myers, Robert; Schwager, Marian; Kaelin-Kee, Jan

    2004-01-01

    The New York State Office of Mental Health has implemented a 2-year demonstration to determine if performance-based contracting (PBC) improves rates of competitive employment for people with serious persistent mental health conditions, and promotes best practice among providers. This article reports the interim findings from the demonstration. Initial results suggest that PBC is reaching the target population and promoting employment for a significant proportion of participants. It is also stimulating agency re-evaluation of consumer recruitment strategies, job development models, staffing patterns, coordination with support services, methods of post-placement support, and commitment to competitive employment for consumers.

  15. An APEL Tool Based CPU Usage Accounting Infrastructure for Large Scale Computing Grids

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Novales, Cristina Del Cano; Mathieu, Gilles; Casson, John; Rogers, William; Gordon, John

    APEL (Accounting Processor for Event Logs) is the fundamental tool of the CPU usage accounting infrastructure deployed within the WLCG and EGEE Grids. In these Grids, jobs are submitted by users to computing resources via a Grid Resource Broker (e.g. the gLite Workload Management System). As a log processing tool, APEL interprets Grid gatekeeper logs (e.g. Globus) and batch system logs (e.g. PBS, LSF, SGE and Condor) to produce CPU job accounting records identified with Grid identities. These records provide a complete description of the usage of computing resources by users' jobs. APEL publishes accounting records into an accounting record repository at a Grid Operations Centre (GOC) for access from a GUI web tool. The functions of log file parsing, record generation and publication are implemented by the APEL Parser, APEL Core, and APEL Publisher components, respectively. Within the distributed accounting infrastructure, accounting records are transported from APEL Publishers at Grid sites to either a regionalised accounting system or the central one, by choice, via a common ActiveMQ message broker network. This provides an open transport layer for other accounting systems to publish relevant accounting data to a central accounting repository via the unified interface provided by an APEL Publisher, and will also give regional/National Grid Initiative (NGI) Grids flexibility in their choice of accounting system. The robust and secure delivery of accounting record messages at the NGI level, and between NGI accounting instances and the central one, is achieved by using configurable APEL Publishers and an ActiveMQ message broker network.
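    The parser/core pipeline can be sketched as: parse a batch-system log line into usage fields, then join it with a gatekeeper-derived identity mapping to produce a per-job accounting record carrying a Grid identity. This is an illustrative sketch only; the toy log format, field names, and DN below are hypothetical and do not reflect the real APEL record schema or the native PBS/LSF/SGE/Condor log formats.

```python
def parse_batch_line(line):
    """Parse a toy 'jobid user queue cputime_s walltime_s' log line."""
    jobid, user, queue, cpu, wall = line.split()
    return {"JobID": jobid, "LocalUser": user, "Queue": queue,
            "CpuDuration": int(cpu), "WallDuration": int(wall)}

# Hypothetical local-account -> Grid identity mapping (from gatekeeper logs).
GRID_IDENTITY = {"cmsprd001": "/DC=org/DC=example/CN=cms-production"}

def to_accounting_record(line):
    """Join batch usage with the Grid identity, as the APEL Core step does."""
    rec = parse_batch_line(line)
    rec["GlobalUserName"] = GRID_IDENTITY.get(rec["LocalUser"], "unknown")
    return rec

rec = to_accounting_record("42.batch cmsprd001 prod 3600 4100")
print(rec["GlobalUserName"], rec["CpuDuration"])
```

A publisher component would then ship such records over the message broker network to the regional or central repository.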

  16. Multilayer neural network models based on grid methods

    NASA Astrophysics Data System (ADS)

    Lazovskaya, T.; Tarkhov, D.

    2016-11-01

The article discusses the building of hybrid models that relate classical numerical methods for solving ordinary and partial differential equations to the universal neural network approach developed by D. Tarkhov and A. Vasilyev. Different ways of constructing multilayer neural network structures based on grid methods are considered. A technique for building a continuous approximation using a simple modification of classical schemes is presented. The introduction of non-linear relationships into the classical models, with and without posterior learning, is investigated. Numerical experiments are conducted.
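
The idea of building multilayer structures from classical schemes can be illustrated by unrolling explicit Euler steps into "layers", each with a tunable weight where learning could enter. This is a sketch of the general idea only, not the authors' specific construction:

```python
# Sketch: each explicit-Euler step for y' = f(y) acts like one layer of a
# network; the per-layer weights are where trainable parameters could be
# introduced. With all weights at 1.0 this is just the classical scheme.

def euler_layers(f, y0, t0, t1, n_layers, weights=None):
    """Apply n_layers Euler steps; weights (default 1.0) scale each update."""
    h = (t1 - t0) / n_layers
    y = y0
    for k in range(n_layers):
        w = 1.0 if weights is None else weights[k]
        y = y + w * h * f(y)   # one 'layer' of the unrolled scheme
    return y

# y' = y with y(0) = 1, so y(1) should approach e
approx = euler_layers(lambda y: y, 1.0, 0.0, 1.0, 1000)
```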

  17. Design and Implementation of Real-Time Off-Grid Detection Tool Based on FNET/GridEye

    SciTech Connect

    Guo, Jiahui; Zhang, Ye; Liu, Yilu; Young II, Marcus Aaron; Irminger, Philip; Dimitrovski, Aleksandar D; Willging, Patrick

    2014-01-01

Real-time situational awareness tools are of critical importance to power system operators, especially during emergencies. The availability of electric power has become a linchpin of most post-disaster response efforts, as it is the primary dependency for public and private sector services as well as individuals. Knowledge of the scope and extent of the facilities impacted, as well as the duration of their dependence on backup power, enables emergency response officials to plan for contingencies and provide a better overall response. Based on real-time data acquired by Frequency Disturbance Recorders (FDRs) deployed in the North American power grid, a real-time detection method is proposed. This method monitors critical electrical loads and detects the transition of these loads from an on-grid state, where the loads are fed by the power grid, to an off-grid state, where the loads are fed by an Uninterruptible Power Supply (UPS) or a backup generation system. The details of the proposed detection algorithm are presented, and case studies and off-grid detection scenarios are provided to verify its effectiveness and robustness. The algorithm has already been implemented on the Grid Solutions Framework (GSF) and has effectively detected several off-grid situations.
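
The transition logic described above can be sketched as a threshold test on the deviation between the wide-area grid frequency and the locally measured frequency: a load on a backup generator or UPS drifts away from the FDR-measured grid frequency. The threshold and window length below are illustrative assumptions:

```python
# Hedged sketch of off-grid detection from frequency measurements.
# Threshold and persistence window are illustrative, not the paper's values.

def detect_off_grid(grid_freq, local_freq, threshold_hz=0.05, window=3):
    """Return True once |local - grid| exceeds the threshold for
    `window` consecutive samples."""
    run = 0
    for g, l in zip(grid_freq, local_freq):
        run = run + 1 if abs(l - g) > threshold_hz else 0
        if run >= window:
            return True
    return False

grid  = [60.00, 60.00, 59.99, 60.00, 60.01, 60.00]
local = [60.00, 60.00, 59.80, 59.78, 59.82, 59.79]  # drifts after sample 2
```

Requiring several consecutive out-of-band samples rejects momentary measurement glitches at the cost of a short detection delay.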

  18. A grid-based approach for simulating stream temperature

    NASA Astrophysics Data System (ADS)

    Yearsley, John

    2012-03-01

    Applications of grid-based systems are widespread in many areas of environmental analysis. In this study, the concept is adapted to the modeling of water temperature by integrating a macroscale hydrologic model, variable infiltration capacity (VIC), with a computationally efficient and accurate water temperature model. The hydrologic model has been applied to many river basins at scales from 0.0625° to 1.0°. The water temperature model, which uses a semi-Lagrangian numerical scheme to solve the one-dimensional, time-dependent equations for thermal energy balance in advective river systems, has been applied and tested on segmented river systems in the Pacific Northwest. The state-space structure of the water temperature model described in previous work is extended to include propagation of uncertainty. Model results focus on proof of concept by comparing statistics from a study of a test basin with results from other studies that have used either process models or statistical models to estimate water temperature. The results from this study compared favorably with those of selected case studies using data-driven statistical models. The results for deterministic process models of water temperature were generally better than the grid-based method, particularly for those models developed from site-specific, data-intensive studies. Biases in the results from the grid-based system are attributed to heterogeneity in hydraulic characteristics and the method of estimating headwater temperatures.
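
The semi-Lagrangian scheme at the core of the temperature model can be illustrated in one dimension: each grid point takes its new value from the upstream departure point of the water parcel, found by tracing back along the flow. This sketch assumes uniform velocity and linear interpolation, a much-reduced version of the scheme described above:

```python
# 1-D semi-Lagrangian advection sketch: trace each grid point back along
# the (uniform) velocity, then linearly interpolate the old field there.
# Boundary handling is a simple clamp; purely illustrative.

def semi_lagrangian_step(T, u, dx, dt):
    n = len(T)
    out = []
    for i in range(n):
        x_dep = i * dx - u * dt           # departure point of the parcel
        j = int(x_dep // dx)
        frac = x_dep / dx - j
        j0 = max(0, min(n - 1, j))        # clamp to the domain
        j1 = max(0, min(n - 1, j + 1))
        out.append((1 - frac) * T[j0] + frac * T[j1])
    return out

T = [10.0] * 5 + [20.0] * 5            # a warm front in the downstream reach
T1 = semi_lagrangian_step(T, u=1.0, dx=1.0, dt=1.0)   # front moves one cell
```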

  19. Invulnerability of power grids based on maximum flow theory

    NASA Astrophysics Data System (ADS)

    Fan, Wenli; Huang, Shaowei; Mei, Shengwei

    2016-11-01

The invulnerability analysis against cascading failures is of great significance in evaluating the reliability of power systems. In this paper, we propose a novel cascading failure model based on maximum flow theory to analyze the invulnerability of power grids. In the model, initial node loads are built on the feasible flows of nodes, with a tunable parameter γ used to control the initial node load distribution. The simulation results show that both the invulnerability against cascades and the tolerance parameter threshold αT are greatly affected by the node load distribution. As γ grows, the invulnerability shows distinct patterns of change under different attack strategies and different tolerance parameters α. These results are useful in power grid planning and cascading failure prevention.
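
The flavor of a load-redistribution cascade with a tolerance parameter α can be sketched as follows. Note that this toy model simply redistributes a failed node's load to its neighbors; the paper's model instead derives initial loads from node feasible flows via maximum flow theory:

```python
# Toy cascade sketch (NOT the paper's exact model): each node gets an
# initial load, capacity = alpha * load; an attacked node's load is
# redistributed evenly to its surviving neighbors, and overloads cascade.

def cascade(adj, load, alpha, attacked):
    cap = {n: alpha * load[n] for n in adj}
    load = dict(load)                    # work on a copy
    failed = {attacked}
    frontier = [attacked]
    while frontier:
        nxt = []
        for f in frontier:
            alive = [n for n in adj[f] if n not in failed]
            if not alive:
                continue
            share = load[f] / len(alive)
            for n in alive:
                load[n] += share
                if load[n] > cap[n]:     # tolerance exceeded: node fails too
                    failed.add(n)
                    nxt.append(n)
        frontier = nxt
    return failed

# star network: hub 0 connected to leaves 1..4
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
load = {0: 4.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}
```

With a generous tolerance (α = 3) the attack on the hub stays contained; with α = 1.5 the redistributed load overloads every leaf and the whole star collapses.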

  20. New method adaptive to geospatial information acquisition and share based on grid

    NASA Astrophysics Data System (ADS)

    Fu, Yingchun; Yuan, Xiuxiao

    2005-11-01

It is difficult and time-consuming to acquire and share multi-source geospatial information in a grid computing environment, especially for data with different geo-reference benchmarks. Although middleware for data format transformation has been applied in many grid applications and GIS software systems, it remains difficult to perform on-demand spatial data assembly across different geo-reference benchmarks because of the computational complexity of rigorous coordinate transformation models. To address the problem, an efficient hierarchical quadtree structure referred to as multi-level grids is designed and coded to express multi-scale global geo-space. A geospatial object located in a certain cell of the multi-level grids may be expressed as an increment value that is relative to the cell's central point and is constant across different geo-reference benchmarks. A mediator responsible for geo-reference transformation with multi-level grids has been developed and aligned with a grid service. With the help of the mediator, maps or query result sets from individual sources with different geo-references can be merged into a uniform composite result. Instead of requiring complex data pre-processing prior to spatial integration, the introduced method can be integrated with grid-enabled services.
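
The increment-relative-to-cell-centre idea can be sketched with a simple quadtree encoder. The quadrant-digit coding over a global lon/lat box is an assumption made for illustration, not the paper's actual coding scheme:

```python
# Sketch of a multi-level grid address: a point becomes a quadtree cell
# code plus a small increment relative to the cell centre. The digit
# convention (0..3 per level over a lon/lat box) is illustrative.

def encode(lon, lat, level, box=(-180.0, 180.0, -90.0, 90.0)):
    """Return (quadtree code string, (d_lon, d_lat) offset from cell centre)."""
    w, e, s, n = box
    code = ""
    for _ in range(level):
        mx, my = (w + e) / 2, (s + n) / 2
        q = (2 if lat >= my else 0) + (1 if lon >= mx else 0)
        code += str(q)
        w, e = (mx, e) if lon >= mx else (w, mx)   # shrink box to the quadrant
        s, n = (my, n) if lat >= my else (s, my)
    cx, cy = (w + e) / 2, (s + n) / 2
    return code, (lon - cx, lat - cy)

code, (dx, dy) = encode(121.5, 31.2, 8)   # a level-8 cell address
```

The offset (dx, dy) is bounded by half the cell size, which is the small, benchmark-independent increment the abstract refers to.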

  1. Jobs, Jobs, Jobs!

    ERIC Educational Resources Information Center

    Jacobson, Linda

    2011-01-01

    Teaching is not the safe career bet that it once was. The thinking used to be: New students will always be entering the public schools, and older teachers will always be retiring, so new teachers will always be needed. But teaching jobs aren't secure enough to stand up to the "Great Recession," as this drawn-out downturn has been called. Across…

  2. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework; it implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes excessive energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve their efficiency. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
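
A genetic algorithm for job scheduling can be sketched as follows: chromosomes assign each job to one of two clusters, and fitness is the makespan computed from a runtime list standing in for the paper's estimation module. All numbers and GA parameters are illustrative:

```python
# Minimal GA sketch for job scheduling. The hard-coded runtimes play the
# role of the estimation module's predictions; everything else is generic.
import random

runtimes = [5, 9, 3, 7, 6, 4]          # estimated job runtimes

def makespan(assign):
    """Finish time of the slower of the two clusters for an assignment."""
    loads = [0, 0]
    for job, cluster in enumerate(assign):
        loads[cluster] += runtimes[job]
    return max(loads)

def evolve(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in runtimes] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                      # elitist selection
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(runtimes))   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                  # bit-flip mutation
                i = rng.randrange(len(runtimes))
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return min(pop, key=makespan)

best = evolve()
```

With total work 34, a perfectly balanced split gives makespan 17; the GA converges to (or very near) that optimum on this tiny instance.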

  3. Competency-based certification project. Phase I: Job analysis.

    PubMed

    Gessaroli, M E; Poliquin, M

    1994-08-01

    The Canadian Association of Medical Radiation Technologists (C.A.M.R.T.) is transforming its existing certification process into a competency-based process, consistent with the knowledge and skills required by entry-level radiography, radiation therapy and nuclear medicine technology practitioners. The project concurs with the change in focus advocated by the Conjoint Committee on Allied Medical Education Accreditation. The Committee supports new accreditation requirements that, among other things, place more emphasis on competency-based learning outcomes. Following is the first of three papers prepared by the C.A.M.R.T. to explain the project and the strategy for its implementation, focusing respectively on each phase. This paper discusses Phase One: the job analysis.

  4. Agent-based modeling supporting the migration of registry systems to grid based architectures.

    PubMed

    Cryer, Martin E; Frey, Lewis

    2009-03-01

    With the increasing age and cost of operation of the existing NCI SEER platform core technologies, such essential resources in the fight against cancer as these will eventually have to be migrated to Grid based systems. In order to model this migration, a simulation is proposed based upon an agent modeling technology. This modeling technique allows for simulation of complex and distributed services provided by a large scale Grid computing platform such as the caBIG(™) project's caGRID. In order to investigate such a migration to a Grid based platform technology, this paper proposes using agent-based modeling simulations to predict the performance of current and Grid configurations of the NCI SEER system integrated with the existing translational opportunities afforded by caGRID. The model illustrates how the use of Grid technology can potentially improve system response time as systems under test are scaled. In modeling SEER nodes accessing multiple registry silos, we show that the performance of SEER applications re-implemented in a Grid native manner exhibits a nearly constant user response time with increasing numbers of distributed registry silos, compared with the current application architecture which exhibits a linear increase in response time for increasing numbers of silos.

  5. Jobs to Manufacturing Careers: Work-Based Courses. Work-Based Learning in Action

    ERIC Educational Resources Information Center

    Kobes, Deborah

    2016-01-01

    This case study, one of a series of publications exploring effective and inclusive models of work-based learning, finds that work-based courses bring college to the production line by using the job as a learning lab. Work-based courses are an innovative way to give incumbent workers access to community college credits and degrees. They are…

  6. Classroom-Based Interventions and Teachers' Perceived Job Stressors and Confidence: Evidence from a Randomized Trial in Head Start Settings

    ERIC Educational Resources Information Center

    Zhai, Fuhua; Raver, C. Cybele; Li-Grining, Christine

    2011-01-01

    Preschool teachers' job stressors have received increasing attention but have been understudied in the literature. We investigated the impacts of a classroom-based intervention, the Chicago School Readiness Project (CSRP), on teachers' perceived job stressors and confidence, as indexed by their perceptions of job control, job resources, job…

  7. Organizational and Environmental Predictors of Job Satisfaction in Community-based HIV/AIDS Service Organizations.

    ERIC Educational Resources Information Center

    Gimbel, Ronald W.; Lehrman, Sue; Strosberg, Martin A.; Ziac, Veronica; Freedman, Jay; Savicki, Karen; Tackley, Lisa

    2002-01-01

    Using variables measuring organizational characteristics and environmental influences, this study analyzed job satisfaction in community-based HIV/AIDS organizations. Organizational characteristics were found to predict job satisfaction among employees with varying intensity based on position within the organization. Environmental influences had…

  8. A windows-based job safety analysis program for mine safety management

    SciTech Connect

    Chakraborty, P.R.; Poukhovski, D.A.; Bise, C.J.

    1996-12-31

    Job Safety Analysis (JSA) is a process used to determine hazards of and safe procedures for each step of a job. With JSA, the most important steps needed to properly perform a job are first identified. Thus, a specific job or work assignment can be separated into a series of relatively simple steps; the hazards associated with each step are then identified. Finally, solutions can be developed to control each hazard. A Windows-based Job Safety Analysis program (WIN-JSA) was developed at Penn State to assist the safety officials at a mine location in creating new JSAs and regularly reviewing the existing JSAs. The program is an integrated collection of four databases that contain information regarding jobs, job steps, hazards associated with each job step, and recommendations for overcoming the hazards, respectively. This Windows-based personal-computer (PC) program allows the user to access these databases to build a new job configuration (essentially, a new JSA), modify an existing JSA, and print hard copies. It is designed to be used by safety and training supervisors who possess little or no previous computer experience. Therefore, the screen views are designed to be self-explanatory, and the print-outs simulate the commonly used JSA format. Overall, the PC-based approach of creating and maintaining JSAs provides flexibility, reduces paperwork, and can be successfully integrated into existing JSA programs to increase their effectiveness.

  9. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

Autonomous mobile robots have been widely studied, not only as advanced platforms for industrial and daily-life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on a floor with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors mounted on the underside of the robot in an equilateral-triangle layout. The sensor apparatus, together with the proposed supporting algorithm, detects a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization drift of inertial positioning. The proposed method is analyzed theoretically in terms of its error bound, and has been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
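
The correction step can be sketched as snapping the dead-reckoned coordinate to the nearest grid line when a crossing event fires: since the robot is known to be on a line at that instant, the corresponding coordinate must be a multiple of the tile pitch. The pitch value is an illustrative assumption:

```python
# Sketch of grid-based drift correction. When a floor-line crossing is
# detected, the dead-reckoned coordinate is snapped to the nearest grid
# line, bounding the inertial drift by half a tile. Pitch is illustrative.

def snap_to_grid(estimate, pitch=0.5):
    """Snap a 1-D coordinate to the nearest multiple of the grid pitch."""
    return round(estimate / pitch) * pitch

# dead reckoning says we crossed a horizontal line at y = 1.47 m;
# lines are every 0.5 m, so the crossing must actually be at y = 1.5 m
corrected_y = snap_to_grid(1.47)
drift = corrected_y - 1.47       # the accumulated inertial error just removed
```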

  10. A grid-based coulomb collision model for PIC codes

    SciTech Connect

    Jones, M.E.; Lemons, D.S.; Mason, R.J.; Thomas, V.A.; Winske, D.

    1996-01-01

A new method is presented to model the intermediate regime between collisionless and Coulomb-collision-dominated plasmas in particle-in-cell codes. Collisional processes between particles of different species are treated through the concept of a grid-based "collision field," which can be particularly efficient for multi-dimensional applications. In this method, particles are scattered using a force which is determined from the moments of the distribution functions accumulated on the grid. The form of the force is chosen to reproduce the multi-fluid transport equations through the second (energy) moment. Collisions between particles of the same species require a separate treatment. For this, a Monte Carlo-like scattering method based on the Langevin equation is used. The details of both methods are presented, and their implementation in a new hybrid (particle ion, massless fluid electron) algorithm is described. Aspects of the collision model are illustrated through several one- and two-dimensional test problems as well as examples involving laser-produced colliding plasmas.
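
The same-species Langevin scattering can be sketched in one velocity dimension: each velocity relaxes toward the local mean and receives a random kick sized so that the spread equilibrates at an imposed temperature. Coefficients and normalization are illustrative, not the paper's:

```python
# Langevin-style scattering sketch: drag toward the local mean velocity
# plus a Gaussian kick whose size balances the drag at equilibrium
# (fluctuation-dissipation). All coefficients are illustrative.
import random

def langevin_step(velocities, nu_dt, temperature, rng):
    mean = sum(velocities) / len(velocities)
    sigma = (2.0 * nu_dt * temperature) ** 0.5
    return [v + nu_dt * (mean - v) + rng.gauss(0.0, sigma)
            for v in velocities]

rng = random.Random(0)
v = [rng.gauss(0.0, 2.0) for _ in range(5000)]   # start too 'hot' (var = 4)
for _ in range(200):
    v = langevin_step(v, 0.05, 1.0, rng)
# the velocity spread relaxes toward the imposed temperature (variance ~ 1)
```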

  11. The relationships among nurses' job characteristics and attitudes toward web-based continuing learning.

    PubMed

    Chiu, Yen-Lin; Tsai, Chin-Chung; Fan Chiang, Chih-Yun

    2013-04-01

The purpose of this study was to explore the relationships between job characteristics (job demands, job control and social support) and nurses' attitudes toward web-based continuing learning. A total of 221 in-service nurses from hospitals in Taiwan were surveyed. The Attitudes toward Web-based Continuing Learning Survey (AWCL) provided the outcome variables, and the Chinese-version Job Characteristics Questionnaire (C-JCQ) was administered to assess the predictors of nurses' attitudes toward web-based continuing learning. To examine the relationships among these variables, hierarchical regression was conducted. The results of the regression analysis revealed that job control and social support were positively associated with nurses' attitudes toward web-based continuing learning. However, the relationship of job demands to such learning was not significant. Moreover, a significant job demands×job control interaction was found, but the job demands×social support interaction had no significant relationship with attitudes toward web-based continuing learning.

  12. Modeling earthquake activity using a memristor-based cellular grid

    NASA Astrophysics Data System (ADS)

    Vourkas, Ioannis; Sirakoulis, Georgios Ch.

    2013-04-01

    Earthquakes are absolutely among the most devastating natural phenomena because of their immediate and long-term severe consequences. Earthquake activity modeling, especially in areas known to experience frequent large earthquakes, could lead to improvements in infrastructure development that will prevent possible loss of lives and property damage. An earthquake process is inherently a nonlinear complex system and lately scientists have become interested in finding possible analogues of earthquake dynamics. The majority of the models developed so far were based on a mass-spring model of either one or two dimensions. An early approach towards the reordering and the improvement of existing models presenting the capacitor-inductor (LC) analogue, where the LC circuit resembles a mass-spring system and simulates earthquake activity, was also published recently. Electromagnetic oscillation occurs when energy is transferred between the capacitor and the inductor. This energy transformation is similar to the mechanical oscillation that takes place in the mass-spring system. A few years ago memristor-based oscillators were used as learning circuits exposed to a train of voltage pulses that mimic environment changes. The mathematical foundation of the memristor (memory resistor), as the fourth fundamental passive element, has been expounded by Leon Chua and later extended to a more broad class of memristors, known as memristive devices and systems. This class of two-terminal passive circuit elements with memory performs both information processing and storing of computational data on the same physical platform. Importantly, the states of these devices adjust to input signals and provide analog capabilities unavailable in standard circuit elements, resulting in adaptive circuitry and providing analog parallel computation. In this work, a memristor-based cellular grid is used to model earthquake activity. 
An LC contour along with a memristor is used to model seismic activity.
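
The LC analogy can be made concrete with a minimal oscillator simulation: energy shuttles between capacitor and inductor exactly as between spring and mass. The memristive coupling of the cellular grid is omitted here for brevity, and component values are illustrative:

```python
# Minimal LC oscillator sketch: q' = i, i' = -q/(L*C). A semi-implicit
# (symplectic) Euler step is used so the oscillation energy stays bounded.
# Component values are illustrative; the memristive grid is not modeled.

def lc_step(q, i, L, C, dt):
    """One symplectic Euler step for charge q and current i."""
    i = i - dt * q / (L * C)
    q = q + dt * i
    return q, i

L_h, C_f, dt = 1.0, 1.0, 1e-3   # inductance (H), capacitance (F), step (s)
q, i = 1.0, 0.0                 # all energy starts on the capacitor
for _ in range(10000):
    q, i = lc_step(q, i, L_h, C_f, dt)
energy = 0.5 * q * q / C_f + 0.5 * L_h * i * i   # capacitor + inductor energy
```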

  13. Grid regulation services for energy storage devices based on grid frequency

    SciTech Connect

    Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K

    2014-04-15

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).
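
The control rule described in the abstract can be sketched as a proportional response to the frequency deviation, clipped to the charger's limits: charge faster when grid frequency is above its average value, discharge when below. The gain and power limits below are illustrative assumptions, not values from the patent:

```python
# Hedged sketch of frequency-responsive charging. Gain and charger power
# limits are illustrative; sign convention: positive = charging (kW).

def regulation_power(freq_hz, avg_hz=60.0, gain_kw_per_hz=100.0,
                     p_min=-6.6, p_max=6.6):
    """Proportional charge/discharge command, clipped to charger limits."""
    p = gain_kw_per_hz * (freq_hz - avg_hz)
    return max(p_min, min(p_max, p))
```

At nominal frequency the command is zero; a large under-frequency event saturates at the maximum discharge rate, which is what lets a fleet of such devices stabilize the grid.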

  14. Grid regulation services for energy storage devices based on grid frequency

    SciTech Connect

    Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K

    2013-07-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).

  15. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  16. Transaction-Based Controls for Building-Grid Integration: VOLTTRON™

    SciTech Connect

    Akyol, Bora A.; Haack, Jereme N.; Hernandez, George; Katipamula, Srinivas; Widergren, Steven E.

    2015-07-01

The U.S. Department of Energy’s (DOE’s) Building Technologies Office (BTO) is supporting the development of a “transactional network” concept that supports energy, operational, and financial transactions between building systems (e.g., rooftop units (RTUs)) and the electric power grid, using applications, or “agents,” that reside on the equipment, on local building controllers, or in the Cloud. The transactional network vision is delivered using a real-time, scalable reference platform called VOLTTRON that supports the needs of the changing energy system. VOLTTRON is an agent execution platform and an innovative distributed control and sensing software platform that supports modern control strategies, including agent-based and transaction-based controls. It enables mobile and stationary software agents to perform information gathering, processing, and control actions.

  17. A personality trait-based interactionist model of job performance.

    PubMed

    Tett, Robert P; Burnett, Dawn D

    2003-06-01

    Evidence for situational specificity of personality-job performance relations calls for better understanding of how personality is expressed as valued work behavior. On the basis of an interactionist principle of trait activation (R. P. Tett & H. A. Guterman, 2000), a model is proposed that distinguishes among 5 situational features relevant to trait expression (job demands, distracters, constraints, releasers, and facilitators), operating at task, social, and organizational levels. Trait-expressive work behavior is distinguished from (valued) job performance in clarifying the conditions favoring personality use in selection efforts. The model frames linkages between situational taxonomies (e.g., J. L. Holland's [1985] RIASEC model) and the Big Five and promotes useful discussion of critical issues, including situational specificity, personality-oriented job analysis, team building, and work motivation.

  18. DICOM image communication in globus-based medical grids.

    PubMed

    Vossberg, Michal; Tolxdorff, Thomas; Krefting, Dagmar

    2008-03-01

    Grid computing, the collaboration of distributed resources across institutional borders, is an emerging technology to meet the rising demand on computing power and storage capacity in fields such as high-energy physics, climate modeling, or more recently, life sciences. A secure, reliable, and highly efficient data transport plays an integral role in such grid environments and even more so in medical grids. Unfortunately, many grid middleware distributions, such as the well-known Globus Toolkit, lack the integration of the world-wide medical image communication standard Digital Imaging and Communication in Medicine (DICOM). Currently, the DICOM protocol first needs to be converted to the file transfer protocol (FTP) that is offered by the grid middleware. This effectively reduces most of the advantages and security an integrated network of DICOM devices offers. In this paper, a solution is proposed that adapts the DICOM protocol to the Globus grid security infrastructure and utilizes routers to transparently route traffic to and from DICOM systems. Thus, all legacy DICOM devices can be seamlessly integrated into the grid without modifications. A prototype of the grid routers with the most important DICOM functionality has been developed and successfully tested in the MediGRID test bed, the German grid project for life sciences.

  19. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    SciTech Connect

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.; Pebay, Philippe Pierre; Gentile, Ann C.; Thompson, David C.; Roe, Diana C.; De Sapio, Vincent; Brandt, James M.

    2010-08-01

The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
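
One of the relationships mentioned above, jobs that shared compute nodes, can be sketched with plain dictionaries standing in for the semantic-graph machinery; the job names and node lists are invented:

```python
# Sketch of one edge type from the abstract: connect jobs that ran on at
# least one common compute node, weighting the edge by how many nodes
# they shared. A dict of weighted pairs stands in for a real graph store.
from itertools import combinations

jobs = {
    "job1": {"n01", "n02", "n03"},
    "job2": {"n02", "n03"},
    "job3": {"n07"},
}

def shared_node_edges(jobs):
    edges = {}
    for a, b in combinations(sorted(jobs), 2):
        shared = jobs[a] & jobs[b]
        if shared:                       # only create an edge when they overlap
            edges[(a, b)] = len(shared)
    return edges

edges = shared_node_edges(jobs)
```

The same pattern extends to the other edge types (temporal proximity, shared users) by swapping the overlap test.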

  20. On the applications of algebraic grid generation methods based on transfinite interpolation

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung Lee

    1989-01-01

    Algebraic grid generation methods based on transfinite interpolation called the two-boundary and four-boundary methods are applied for generating grids with highly complex boundaries. These methods yield grid point distributions that allow for accurate application to regions of sharp gradients in the physical domain or time-dependent problems with small length scale phenomena. Algebraic grids are derived using the two-boundary and four-boundary methods for applications in both two- and three-dimensional domains. Grids are developed for distinctly different geometrical problems and the two-boundary and four-boundary methods are demonstrated to be applicable to a wide class of geometries.
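
Transfinite interpolation itself is compact enough to sketch directly: interior points are a boolean sum of blends of the four boundary curves minus a bilinear corner correction. Here the boundary curves bound the unit square, so the resulting grid is uniform; curved boundaries would be handled identically:

```python
# 2-D transfinite interpolation sketch (the four-boundary form): blend the
# bottom/top curves in t and the left/right curves in s, then subtract the
# doubly-counted bilinear corner term.

def tfi(bottom, top, left, right, ni, nj):
    """bottom/top: f(s)->(x,y); left/right: f(t)->(x,y); s,t in [0,1]."""
    grid = []
    for j in range(nj):
        t = j / (nj - 1)
        row = []
        for i in range(ni):
            s = i / (ni - 1)
            pt = []
            for k in (0, 1):                       # x then y component
                edge = ((1 - t) * bottom(s)[k] + t * top(s)[k]
                        + (1 - s) * left(t)[k] + s * right(t)[k])
                corner = ((1 - s) * (1 - t) * bottom(0)[k]
                          + s * (1 - t) * bottom(1)[k]
                          + (1 - s) * t * top(0)[k]
                          + s * t * top(1)[k])
                pt.append(edge - corner)           # boolean sum
            row.append(tuple(pt))
        grid.append(row)
    return grid

square = tfi(lambda s: (s, 0.0), lambda s: (s, 1.0),
             lambda t: (0.0, t), lambda t: (1.0, t), 5, 5)
```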

  1. Risk Aware Overbooking for Commercial Grids

    NASA Astrophysics Data System (ADS)

    Birkenheuer, Georg; Brinkmann, André; Karl, Holger

The commercial exploitation of the emerging Grid and Cloud markets requires SLAs for selling computing run time. Job traces show that users have only a limited ability to estimate the resource needs of their applications. This opens the possibility of applying overbooking during negotiation, but overbooking increases the risk of SLA violations. This work presents an overbooking approach with an integrated risk assessment model. Simulations of this model, based on real-world job traces, show that overbooking offers significant opportunities for Grid and Cloud providers.
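
The risk-assessment flavor of the approach can be sketched with a toy rule: estimate, from historical ratios of actual to requested runtime, how often a job would spill into an overbooked slot, and overbook only when that risk stays under a cap. Both the numbers and the rule are illustrative, not the authors' model:

```python
# Toy sketch of risk-aware overbooking, exploiting the observation that
# users over-request. History and thresholds are illustrative.

history = [0.4, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0, 1.0]  # actual/requested

def violation_risk(history, usable_fraction):
    """Fraction of past jobs that used more than `usable_fraction` of request."""
    return sum(r > usable_fraction for r in history) / len(history)

def accept_overbooking(history, usable_fraction, max_risk=0.25):
    """Overbook only if the estimated SLA-violation risk is acceptable."""
    return violation_risk(history, usable_fraction) <= max_risk

# overlapping the next SLA with the last 30% of the reserved slot
risk = violation_risk(history, 0.7)
```

Under this history, overbooking the last 30% of a slot is too risky (half of past jobs ran that long), while overbooking the final 5% clears the cap.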

  2. Analyzing data flows of WLCG jobs at batch job level

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-05-01

With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses and firewall configurations, as well as the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at the batch job level, a new tool has been developed and put into operation at the GridKa Tier 1 center for monitoring continuous data streams and characteristics of WLCG jobs and pilots. Long-term measurements and data collection are in progress. These measurements have already proven useful for analyzing misbehavior and various issues, so we aim for an automated, real-time approach to anomaly detection. As a prerequisite, prototypes for standard workflows have to be examined. Based on measurements spanning several months, different features of HEP jobs are evaluated with regard to their effectiveness for data mining approaches to identify these common workflows. The paper introduces the measurement approach and statistics, as well as the general concept and first results in classifying different HEP job workflows derived from the measurements at GridKa.

  3. The Construction of an Ontology-Based Ubiquitous Learning Grid

    ERIC Educational Resources Information Center

    Liao, Ching-Jung; Chou, Chien-Chih; Yang, Jin-Tan David

    2009-01-01

    The purpose of this study is to incorporate adaptive ontology into a ubiquitous learning grid to achieve a seamless learning environment. The ubiquitous learning grid uses a ubiquitous computing environment to infer and determine the most adaptive learning contents and procedures at any time, in any place and with any device. To achieve the goal, an…

  4. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and to expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  5. Interviewing for the Principal's Job: A Behavior-Based Approach

    ERIC Educational Resources Information Center

    Clement, Mary C.

    2009-01-01

    The stakes are high when one decides to leave a tenured teaching position or an assistant principalship to interview for a principal's position. However, the stakes are high for the future employer as well. The school district needs to know that the applicant is ready for a job that is very complex. As a new principal, the applicant will be…

  6. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    ERIC Educational Resources Information Center

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  7. Grid Work

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Pointwise Inc.'s Gridgen software is a system for the generation of 3D (three-dimensional) multiple-block, structured grids. Gridgen is a visually oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces, and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.

  8. Development and pilot trial of a web-based job placement information network.

    PubMed

    Chan, Eliza W C; Tam, S F

    2005-01-01

    The purpose of this project was to develop and pilot a web-based job placement information network aimed at enhancing the work trial and job placement opportunities of people with disabilities (PWD). Efficient use of information technology in vocational rehabilitation has been suggested to help improve PWD employment opportunities and thus enable them to contribute as responsible citizens to society. In this preliminary study, a web-based employer network was developed to explore Hong Kong employers' needs and intentions in employing PWD. The results indicated that Hong Kong employers generally agreed to arrange work trials for PWD whose work abilities match job requirements. They also expressed that they would offer permanent job placements to those PWD who showed satisfactory performance in work trials. The present study showed that using an information network could expedite communications between employers and job placement services, and thus improve job placement service outcomes. It is hoped that a job placement databank can be developed by accumulating responses from potential employers.

  9. The 2004 knowledge base parametric grid data software suite.

    SciTech Connect

    Wilkening, Lisa K.; Simons, Randall W.; Ballard, Sandy; Jensen, Lee A.; Chang, Marcus C.; Hipp, James Richard

    2004-08-01

    One of the most important types of data in the National Nuclear Security Administration (NNSA) Ground-Based Nuclear Explosion Monitoring Research and Engineering (GNEM R&E) Knowledge Base (KB) is parametric grid (PG) data. PG data can be used to improve signal detection, signal association, and event discrimination, but so far their greatest use has been for improving event location by providing ground-truth-based corrections to travel-time base models. In this presentation we discuss the latest versions of the complete suite of Knowledge Base PG tools developed by NNSA to create, access, manage, and view PG data. The primary PG population tool is the Knowledge Base calibration integration tool (KBCIT). KBCIT is an interactive computer application to produce interpolated calibration-based information that can be used to improve monitoring performance by improving precision of model predictions and by providing proper characterizations of uncertainty. It is used to analyze raw data and produce kriged correction surfaces that can be included in the Knowledge Base. KBCIT not only produces the surfaces but also records all steps in the analysis for later review and possible revision. New features in KBCIT include a new variogram autofit algorithm; the storage of database identifiers with a surface; the ability to merge surfaces; and improved surface-smoothing algorithms. The Parametric Grid Library (PGL) provides the interface to access the data and models stored in a PGL file database. The PGL represents the core software library used by all the GNEM R&E tools that read or write PGL data (e.g., KBCIT and LocOO). The library provides data representations and software models to support accurate and efficient seismic phase association and event location. Recent improvements include conversion of the flat-file database (FDB) to an Oracle database representation; automatic access of station/phase tagged models from the FDB during location; modification of the core

  10. Skill-based job descriptions for sterile processing technicians--a total quality approach.

    PubMed

    Doyle, F F; Marriott, M A

    1994-05-01

    Rochester General Hospital in Rochester, NY, included as part of its total quality management effort the task of revising job descriptions for its sterile processing technicians as a way to decrease turnover and increase job satisfaction, teamwork and quality output. The department's quality team developed "skill banding," a tool that combines skill-based pay with large salary ranges that span job classifications normally covered by several separate salary ranges. They defined the necessary competencies needed to move through five skill bands and worked with the rest of the department to fine-tune the details. The process has only recently been implemented, but department employees are enthusiastic about it.

  11. OPNET/Simulink Based Testbed for Disturbance Detection in the Smart Grid

    SciTech Connect

    Sadi, Mohammad A. H.; Dasgupta, Dipankar; Ali, Mohammad Hassan; Abercrombie, Robert K

    2015-01-01

    The important backbone of the smart grid is the cyber/information infrastructure, which is primarily used to communicate with different grid components. A smart grid is a complex cyber-physical system containing numerous and varied sources, devices, controllers and loads. The smart grid is therefore vulnerable to grid-related disturbances, and for such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink- and OPNET-based co-simulation platform for carrying out cyber-intrusions into the cyber network of modern power systems and the smart grid. The IEEE 30-bus power system model is used to demonstrate the effectiveness of the simulated testbed. The experiments were performed by disturbing the circuit breakers' reclosing time through a cyber-attack. Different disturbance situations in the test system are considered, and the results indicate the effectiveness of the proposed co-simulation scheme.

  12. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors they are able to achieve, their reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, there are still some difficulties to be addressed when using reconfigurable platforms as accelerators: the need for an in-depth application study to identify potential acceleration, the lack of tools for deploying computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  13. Effects of an ergonomics-based job stress management program on job strain, psychological distress, and blood cortisol among employees of a national private bank in Denpasar Bali.

    PubMed

    Purnawati, Susy; Kawakami, Norito; Shimazu, Akihito; Sutjana, Dewa Putu; Adiputra, Nyoman

    2016-08-06

    The present work describes a newly developed ergonomics-based job stress management program, Ergo-JSI (Ergonomics-based Job Stress Intervention), including a pilot study to ascertain the effects of the program on job strain, psychological distress, and blood cortisol levels among bank employees in Indonesia. A single-group, pre- and post-test experimental study was conducted in a sample of employees of a national bank in Denpasar, Bali, Indonesia. The outcomes of the study focused on reductions in the job strain index and psychological distress, measured by the Indonesian version of the Brief Job Stress Questionnaire (BJSQ), and improvement in blood cortisol levels following the study. A total of 25 male employees, with an average age of 39, received an eight-week intervention with the Ergo-JSI. Compared to baseline, the job strain index decreased by 46% (p<0.05), and psychological distress decreased by 28% (p<0.05). These changes were accompanied by a 24% reduction in blood cortisol levels (p<0.05). The newly developed Ergo-JSI program may hence be effective for decreasing job strain, psychological distress, and blood cortisol among employees in Indonesia.

  14. Grid-based Model of The Volga Basin

    NASA Astrophysics Data System (ADS)

    Tate, E.; Georgievsky, M.; Shalygin, A.; Yezhov, A.

    The Volga is the largest river in Europe and is of great significance for the economy of Russia. The Volga basin, of about 1.4 million km2, displays a wide range of topography, hydrometeorology and water resource problems. Its cascade of 12 large reservoirs controls the river flow. The Volga contributes about 80% of the total water inflow to the Caspian Sea and thus forms the main influence on Sea level fluctuations. Variability in climate and climate change give uncertainty to the current and future availability and distribution of water resources in the Volga basin. This Volga model was part of a larger study that aimed to develop a realistic and consistent methodology, including the facility to take into account the effects of climate change scenarios for the year 2050, indicating possible changes in future river inflows to the Caspian Sea. The methodology involved examining flows and water demands on a 0.5 by 0.5 grid. This choice was a compromise between what is needed to represent spatial variability and the availability of suitable data. The modelling approach was based on work aimed at examining water resources availability on a world-wide scale (Meigh et al., 1998). At a preliminary stage the main direction of flow for each cell is determined, assuming that all the flow from one cell flows into one of the adjoining cells. Based on these flow directions, the order in which the cells must be processed is determined so that the flows from upstream cells have always been calculated before processing the cell into which they flow. The processing order also takes into account the artificial transfers between cells. Surface runoff is generated for each cell by using a rainfall-runoff model; the model chosen was the probability-distributed model (PDM) developed by Moore (1985). The flows are then routed through the linked cells to estimate total runoff for each cell. The effects of lakes and wetlands, water abstractions, return flows, artificial water
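
    The cell-ordering step described above (each cell processed only after every cell draining into it) is in effect a topological sort over the flow-direction graph. A minimal illustrative sketch, with invented cell identifiers, assuming one downstream neighbour per cell:

```python
from collections import deque

def processing_order(flow_to):
    """Order grid cells so each is processed after all cells draining into it.

    flow_to: mapping cell -> downstream cell (or None for an outlet cell,
    e.g. one discharging to the Caspian Sea).
    """
    indegree = {cell: 0 for cell in flow_to}
    for down in flow_to.values():
        if down is not None:
            indegree[down] += 1
    # Start from headwater cells with no upstream contributors.
    queue = deque(c for c, d in indegree.items() if d == 0)
    order = []
    while queue:
        cell = queue.popleft()
        order.append(cell)
        down = flow_to[cell]
        if down is not None:
            indegree[down] -= 1
            if indegree[down] == 0:
                queue.append(down)
    return order
```

For two headwater cells "A" and "B" both draining into "C", which drains into outlet "D", the order guarantees "C" comes after both contributors and "D" comes last.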

  15. SARS Grid--an AG-based disease management and collaborative platform.

    PubMed

    Hung, Shu-Hui; Hung, Tsung-Chieh; Juang, Jer-Nan

    2006-01-01

    This paper describes the development of the NCHC's Severe Acute Respiratory Syndrome (SARS) Grid project, an Access Grid (AG)-based disease management and collaborative platform that allowed SARS patients' medical data to be dynamically shared and discussed between hospitals and doctors using AG's video teleconferencing (VTC) capabilities. During the height of the SARS epidemic in Asia, SARS Grid and the SARShope website significantly curbed the spread of SARS by helping doctors manage the in-hospital and in-home care of quarantined SARS patients through medical data exchange and the monitoring of patients' symptoms. Now that the SARS epidemic has ended, the primary function of the SARS Grid project is that of a web-based informatics tool to increase public awareness of SARS and other epidemic diseases. Additionally, the SARS Grid project can be viewed and further studied as an outstanding model of epidemic disease prevention and/or containment.

  16. Grid-based medical image workflow and archiving for research and enterprise PACS applications

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Dixit, Manasee; Chen, Vincent; Chervenak, Ann; Nelson, Marvin D.; Kesselmann, Carl

    2006-03-01

    PACS provides a consistent model to communicate and store images, with recent additions for fault tolerance and disaster reliability. However, PACS still lacks fine-grained user-based authentication and authorization, flexible data distribution, and semantic associations between images and their embedded information. These are critical components for future enterprise operations in dynamic medical research and health care environments. Here we introduce a flexible Grid-based model of a PACS in order to add these methods, and describe its implementation in the Children's Oncology Group (COG) Grid. The combination of existing standards for medical images (DICOM) with the abstraction to files and meta-catalog information in the Grid domain provides new flexibility beyond traditional PACS design. We conclude that Grid technology provides a reliable and efficient distributed informatics infrastructure which is well suited to medical informatics as described in this work. Grid technology will provide new opportunities for PACS deployment and subsequently new medical image applications.

  17. VIM-based dynamic sparse grid approach to partial differential equations.

    PubMed

    Mei, Shu-Li

    2014-01-01

    Combining the variational iteration method (VIM) with sparse grid theory, a dynamic sparse grid approach for nonlinear PDEs is proposed in this paper. In this method, a multilevel interpolation operator is first constructed based on sparse grid theory. The operator is built from a linear combination of the basis functions and is independent of their specific form. Second, by means of the precise integration method (PIM), the VIM is developed to solve the nonlinear system of ODEs obtained from the discretization of the PDEs. In addition, a scheme for dynamically choosing both the inner and the external grid points is proposed; this differs from the traditional interval wavelet collocation method, in which the choice of grid points is not dynamic. The numerical experiments show that our method is better than the traditional wavelet collocation method, especially in solving PDEs with Neumann boundary conditions.
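
    The dynamic choice of grid points can be illustrated in one dimension: on each dyadic level, a new point is kept only if its hierarchical surplus (the difference between the function value and the linear interpolation from its coarser neighbours) is significant. This is a generic adaptive sparse-grid sketch, not the paper's VIM/PIM scheme:

```python
def adaptive_grid_points(f, levels=6, eps=1e-3):
    """Select grid points on [0, 1] level by level, keeping a new point
    only if its hierarchical surplus exceeds eps.
    """
    kept = {0.0: f(0.0), 1.0: f(1.0)}  # boundary (external) points
    for level in range(1, levels + 1):
        step = 1.0 / (2 ** level)
        for k in range(1, 2 ** level, 2):  # new (odd-index) points
            x = k * step
            # Hierarchical surplus: value minus linear interpolation from
            # the two level-(l-1) neighbours.
            surplus = f(x) - 0.5 * (f(x - step) + f(x + step))
            if abs(surplus) > eps:
                kept[x] = f(x)
    return kept
```

A linear function has zero surplus everywhere, so only the two boundary points survive; a curved function keeps points down to the level where its surplus falls below the tolerance.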

  18. Parallel and Grid-Based Data Mining - Algorithms, Models and Systems for High-Performance KDD

    NASA Astrophysics Data System (ADS)

    Congiusta, Antonio; Talia, Domenico; Trunfio, Paolo

    Data mining is often a compute-intensive and time-consuming process. For this reason, several data mining systems have been implemented on parallel computing platforms to achieve high performance in the analysis of large data sets. Moreover, when large data repositories are coupled with geographical distribution of data, users and systems, more sophisticated technologies are needed to implement high-performance distributed KDD systems. Since computational Grids emerged as privileged platforms for distributed computing, a growing number of Grid-based KDD systems have been proposed. In this chapter we first discuss different ways to exploit parallelism in the main data mining techniques and algorithms, then we discuss Grid-based KDD systems. Finally, we introduce the Knowledge Grid, an environment which makes use of standard Grid middleware to support the development of parallel and distributed knowledge discovery applications.

  19. CMS Configuration Editor: GUI based application for user analysis job

    NASA Astrophysics Data System (ADS)

    de Cosa, A.

    2011-12-01

    We present the user interface and the software architecture of the Configuration Editor for the CMS experiment. The analysis workflow is organized modularly within the CMS framework, which flexibly organizes user analysis code. The Python scripting language is adopted to define the job configuration that drives the analysis workflow. Developing analysis jobs that manage the configuration of many required modules can be a challenging task for users, especially newcomers. For this reason a graphical tool has been conceived for editing and inspecting configuration files. A set of common analysis tools defined in the CMS Physics Analysis Toolkit (PAT) can be steered and configured using the Config Editor. A user-defined analysis workflow can be produced starting from a standard configuration file, applying and configuring PAT tools according to the specific user requirements. CMS users can adopt the Config Editor to create their analyses, visualizing the effects of their actions in real time. They can visualize the structure of their configuration, look at the modules included in the workflow, inspect the dependencies existing among the modules and check the data flow. They can see the values to which parameters are set and change them as required by their analysis task. Integrating the common tools into the GUI required adopting an object-oriented structure in the Python definition of the PAT tools and defining a layer of abstraction from which all PAT tools inherit.
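
    Schematically, a layer of abstraction from which all tools inherit might look as follows; the class and parameter names here are invented for illustration and are not the actual CMS PAT API:

```python
class ConfigTool:
    """Hypothetical base class: a GUI can list parameters(), edit them,
    and call apply() on any tool without knowing its concrete type."""

    def __init__(self, **params):
        self.params = params

    def parameters(self):
        return dict(self.params)

    def apply(self, config):
        raise NotImplementedError

class AddJetCollection(ConfigTool):
    """Invented example tool: registers a jet collection in the config."""

    def apply(self, config):
        config.setdefault("jets", []).append(self.params.get("label", "jets"))
        return config

# A GUI would introspect tool.parameters(), let the user edit them,
# then apply the tool to the configuration.
config = {}
tool = AddJetCollection(label="ak5PFJets")
tool.apply(config)
```

Because every tool exposes the same `parameters()`/`apply()` interface, an editor can render and modify any tool generically, which is the point of the abstraction layer.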

  20. One-fifth of nonelderly Californians do not have access to job-based health insurance coverage.

    PubMed

    Lavarreda, Shana Alex; Cabezas, Livier

    2010-11-01

    Lack of job-based health insurance does not affect just workers, but entire families who depend on job-based coverage for their health care. This policy brief shows that in 2007 one-fifth of all Californians ages 0-64 who lived in households where at least one family member was employed did not have access to job-based coverage. Among adults with no access to job-based coverage through their own or a spouse's job, nearly two-thirds remained uninsured. In contrast, the majority of children with no access to health insurance through a parent obtained public health insurance, highlighting the importance of such programs. Low-income, Latino and small business employees were more likely to have no access to job-based insurance. Provisions enacted under national health care reform (the Patient Protection and Affordable Care Act of 2010) will aid some of these populations in accessing health insurance coverage.

  1. Grid-based visual aid for enhanced microscopy screening in diagnostic cytopathology

    NASA Astrophysics Data System (ADS)

    Riziotis, Christos; Tsiambas, Evangelos

    2016-10-01

    A grid acting as a spatial reference and calibration aid, fabricated on glass cover slips by laser micromachining and attached to the carrier microscope slide, is proposed as a visual aid for improving the microscopy diagnostic procedure in the screening of cytological slides. A set of borderline and abnormal PAP test cases, classified according to the Bethesda 2014 revised terminology, was analyzed by both conventional and grid-based screening procedures. Statistical analysis showed that the introduced grid-based microscopy led to an improved diagnosis, identifying an increased number of abnormal cells in a shorter period of time, especially with regard to the number of pre- or neoplastic/cancerous cells.

  2. Head and neck 192Ir HDR-brachytherapy dosimetry using a grid-based Boltzmann solver

    PubMed Central

    Wolf, Sabine; Kóvacs, George

    2013-01-01

    Purpose To compare dosimetry for head and neck cancer patients, calculated with the TG-43 formalism and a commercially available grid-based Boltzmann solver. Material and methods This study included 3D dosimetry of 49 consecutive brachytherapy head and neck cancer patients, computed by a grid-based Boltzmann solver that takes tissue inhomogeneities into account, as well as by the TG-43 formalism. 3D treatment planning was carried out using computed tomography. Results The dosimetric indices D90 and V100 for the target volume were about 3% lower (median value) for the grid-based Boltzmann solver relative to the TG-43-based computation (p < 0.01). The V150 dose parameter showed a 1.6% increase from the grid-based Boltzmann solver to TG-43 (p < 0.01). Conclusions Dose differences to the target volume between the results of a grid-based Boltzmann solver and the TG-43 formalism were found for high-dose-rate head and neck brachytherapy patients. Differences in D90 of the CTV were small (mean 2.63 Gy for the grid-based Boltzmann solver vs. 2.71 Gy for TG-43). In our clinical practice, prescription doses remain unchanged for high-dose-rate head and neck brachytherapy for the time being. PMID:24474973

  3. Application of remote debugging techniques in user-centric job monitoring

    NASA Astrophysics Data System (ADS)

    dos Santos, T.; Mättig, P.; Wulff, N.; Harenberg, T.; Volkmer, F.; Beermann, T.; Kalinin, S.; Ahrens, R.

    2012-06-01

    With the Job Execution Monitor, a user-centric job monitoring software developed at the University of Wuppertal and integrated into the job brokerage systems of the WLCG, job progress and grid worker node health can be supervised in real time. Imminent error conditions can thus be detected early by the submitter, and countermeasures can be taken. Grid site admins can access aggregated data of all monitored jobs to infer the site status and to detect job misbehaviour. To remove the last "blind spot" from this monitoring, a remote debugging technique based on the GNU C compiler suite was developed and integrated into the software; its design concept and architecture are described in this paper and its application discussed.

  4. Direct care worker's perceptions of job satisfaction following implementation of work-based learning.

    PubMed

    Lopez, Cynthia; White, Diana L; Carder, Paula C

    2014-02-01

    The purpose of this study was to understand the impact of a work-based learning program on the work lives of Direct Care Workers (DCWs) at assisted living (AL) residences. The research questions were addressed using focus group data collected as part of a larger evaluation of a work-based learning (WBL) program called Jobs to Careers. The theoretical perspective of symbolic interactionism was used to frame the qualitative data analysis. Results indicated that the WBL program impacted DCWs' job satisfaction through the program curriculum and design and through three primary categories: relational aspects of work, worker identity, and finding time. This article presents a conceptual model for understanding how these categories are interrelated and the implications for WBL programs. Job satisfaction is an important topic that has been linked to quality of care and reduced turnover in long-term care settings.

  5. Micro-grid platform based on NODE.JS architecture, implemented in electrical network instrumentation

    NASA Astrophysics Data System (ADS)

    Duque, M.; Cando, E.; Aguinaga, A.; Llulluna, F.; Jara, N.; Moreno, T.

    2016-05-01

    In this document, I propose a theory about the impact of micro-grid-based systems in non-industrialized countries, aimed at improving energy exploitation through alternative methods of clean and renewable energy generation, together with the creation of an app to manage the behavior of the micro-grids based on the NodeJS, Django and IOJS technologies. Micro-grids allow energy flow to be managed optimally by injecting electricity directly into small urban cells of the electric network in a low-cost, readily available way. Unlike conventional systems, micro-grids can communicate with each other to carry energy, at the right moment, to places with higher demand. The system does not require energy storage, so costs are lower than for conventional systems such as fuel cells or solar panels; and even though micro-grids are independent systems, they are not isolated. The impact of this analysis is an improvement of the electrical network without requiring more control than an intelligent network (smart grid); this can lead to up to a 20% increase in energy use within a given network, which suggests that other sources of energy generation are available. For today's needs, methods should be standardized so as to support future technologies, and the best options are smart grids and micro-grids.

  6. Smart Energy Management and Control for Fuel Cell Based Micro-Grid Connected Neighborhoods

    SciTech Connect

    Dr. Mohammad S. Alam

    2006-03-15

    Fuel cell power generation promises to be an efficient, pollution-free, reliable power source in both large-scale and small-scale, remote applications. DOE formed the Solid State Energy Conversion Alliance with the intention of breaking one of the last barriers remaining for cost-effective fuel cell power generation. The Alliance’s goal is to produce a core solid-state fuel cell module at a cost of no more than $400 per kilowatt, ready for commercial application by 2010. With their inherently high, 60-70% conversion efficiencies, significantly reduced carbon dioxide emissions, and negligible emissions of other pollutants, fuel cells will be the obvious choice for a broad variety of commercial and residential applications when their cost effectiveness is improved. In a research program funded by the Department of Energy, the research team has been investigating smart fuel cell-operated residential micro-grid communities. This research has focused on using smart control systems in conjunction with fuel cell power plants, with the goal to reduce energy consumption, reduce demand peaks and still meet the energy requirements of any household in a micro-grid community environment. In Phases I and II, a Smart Energy Management and Control (SEMaC) system was developed and extended to a micro-grid community. In addition, an optimal configuration was determined for a single fuel cell power plant supplying power to a ten-home micro-grid community. In Phase III, the plan is to expand this work to fuel cell based micro-grid connected neighborhoods (mini-grids). The economic implications of hydrogen cogeneration will be investigated. These efforts are consistent with DOE’s mission to decentralize domestic electric power generation and to accelerate the onset of the hydrogen economy. A major challenge facing the routine implementation and use of a fuel cell based mini-grid is the varying electrical demand of the individual micro-grids, and, therefore, analyzing these issues is vital. Efforts are needed to determine

  7. The Particle Physics Data Grid. Final Report

    SciTech Connect

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
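
    The kind of bookkeeping such a tool performs (release a job only when all of its parent jobs have finished) can be sketched as a toy model; this is not the Condor DAGMan implementation:

```python
def run_dag(parents, execute):
    """Run interdependent jobs in dependency order.

    parents: mapping job -> set of jobs that must finish first.
    execute: callable that runs a single job.
    Returns the completion order.
    """
    done, order = set(), []
    pending = set(parents)
    while pending:
        # A job is ready once all of its parents have completed.
        ready = [j for j in pending if parents[j] <= done]
        if not ready:
            raise ValueError("cycle or unsatisfiable dependency")
        for job in sorted(ready):  # deterministic order for the sketch
            execute(job)
            done.add(job)
            order.append(job)
            pending.discard(job)
    return order
```

The `ValueError` branch corresponds to the recoverability concern in the report: a real tool must detect stalled or cyclic graphs rather than wait forever.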

  8. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure for the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, as does the Grid-oriented technology that is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and the extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues they introduce (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling the OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one.
    In these use cases we give special attention to issues such as: the relations between computational grid and

  9. MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler

    NASA Astrophysics Data System (ADS)

    Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre

This paper presents a simulator for a decentralized, modular grid scheduler named MaGate. MaGate's design emphasizes scheduler interoperability by providing intelligent scheduling that serves the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions, with continuously arriving grid jobs. Received jobs are either allocated on local resources or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on the GridSim toolkit and the Alea simulator, and abstracts the features and behaviors of complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of the behaviors of different collaborative policies among a community of MaGates is provided. The results support the use of the proposed approach as a functional, ready-to-use grid scheduler simulator.
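The allocate-locally-or-delegate decision described above can be sketched in a few lines. The class names, fields, and the naive "most free CPUs" delegation policy below are illustrative assumptions, not MaGate's actual API.

```python
# Hypothetical sketch of a MaGate-style local-vs-delegate scheduling decision.
# All names and the delegation policy are invented for illustration.

class Job:
    def __init__(self, job_id, cpus_needed):
        self.job_id = job_id
        self.cpus_needed = cpus_needed

class SchedulerNode:
    def __init__(self, name, free_cpus):
        self.name = name
        self.free_cpus = free_cpus
        self.neighbours = []       # other scheduler instances we can delegate to
        self.local_queue = []

    def submit(self, job):
        """Allocate locally if resources allow, else delegate to the
        neighbour with the most free CPUs (a deliberately naive policy)."""
        if job.cpus_needed <= self.free_cpus:
            self.free_cpus -= job.cpus_needed
            self.local_queue.append(job.job_id)
            return self.name
        candidates = [n for n in self.neighbours if n.free_cpus >= job.cpus_needed]
        if not candidates:
            return None            # a real scheduler would queue and retry
        best = max(candidates, key=lambda n: n.free_cpus)
        return best.submit(job)

site_a = SchedulerNode("site-A", free_cpus=2)
site_b = SchedulerNode("site-B", free_cpus=8)
site_a.neighbours = [site_b]
placed_at = site_a.submit(Job("job-1", cpus_needed=4))  # too big for A, goes to B
```

A community of such nodes, each applying its own policy, is what the simulator's collaborative-policy comparison would exercise.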

  10. Academic Job Placements in Library and Information Science Field: A Case Study Performed on ALISE Web-Based Postings

    ERIC Educational Resources Information Center

    Abouserie, Hossam Eldin Mohamed Refaat

    2010-01-01

The study investigated and analyzed the state of academic web-based job announcements in the Library and Information Science field. The purpose of the study was to gain an in-depth understanding of the main characteristics and trends of the academic job market in the Library and Information Science field. The study focused on web-based version announcement as it was…

  11. Effects of the job stress education for supervisors on psychological distress and job performance among their immediate subordinates: a supervisor-based randomized controlled trial.

    PubMed

    Takao, Soshi; Tsutsumi, Akizumi; Nishiuchi, Kyoko; Mineyama, Sachiko; Kawakami, Norito

    2006-11-01

    As job stress is now one of the biggest health-related problems in the workplace, several education programs for supervisors have been conducted to reduce job stress. We conducted a supervisor-based randomized controlled trial to evaluate the effects of an education program on their subordinates' psychological distress and job performance. The subjects were 301 employees (46 supervisors and 255 subordinates) in a Japanese sake brewery. First, we randomly allocated supervisors to the education group (24 supervisors) and the waiting-list group (22 supervisors). Then, for the allocated supervisors we introduced a single-session, 60-min education program according to the guidelines for employee mental health promotion along with training that provided consulting skills combined with role-playing exercises. We conducted pre- and post-intervention (after 3 months) surveys for all subordinates to examine psychological distress and job performance. We defined the intervention group as those subordinates whose immediate supervisors received the education, and the control group was defined as those subordinates whose supervisors did not. To evaluate the effects, we employed a repeated measures analysis of variance (ANOVA). Overall, the intervention effects (time x group) were not significant for psychological distress or job performance among both male (p=0.456 and 0.252) and female (p=0.714 and 0.106) subordinates. However, young male subordinates engaged in white-collar occupations showed significant intervention effects for psychological distress (p=0.012) and job performance (p=0.029). In conclusion, our study indicated a possible beneficial effect of supervisor education on the psychological distress and job performance of subordinates. This effect may vary according to specific groups.

  12. Organizational Culture's Role in the Relationship between Power Bases and Job Stress

    ERIC Educational Resources Information Center

    Erkutlu, Hakan; Chafra, Jamel; Bumin, Birol

    2011-01-01

    The purpose of this research is to examine the moderating role of organizational culture in the relationship between leader's power bases and subordinate's job stress. Totally 622 lecturers and their superiors (deans) from 13 state universities chosen by random method in Ankara, Istanbul, Izmir, Antalya, Samsun, Erzurum and Gaziantep in 2008-2009…

  13. Community Based Organizations. The Challenges of the Job Training Partnership Act.

    ERIC Educational Resources Information Center

    Brown, Larry

    The advent of the Job Training Partnership Act (JTPA) has not been favorable to community-based organizations (CBOs) serving unemployed young people. The overall decline in the amount of money available for employment training is one reason for the reduction in services, but it is not the sole reason. The transition to the new act itself is also…

  14. Data Base for a Job Opportunity Vocational Agricultural Program Planning Model.

    ERIC Educational Resources Information Center

    Baggett, Connie D.; And Others

    A job opportunity-based curriculum planning model was developed for high school vocational agriculture programs. Three objectives were to identify boundaries of the geographical area within which past program graduates obtained entry-level position, title and description of position, and areas of high school specialization; number and titles of…

  15. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    PubMed

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Because of this heterogeneity, a smart grid system has potential security threats in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.
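The abstract does not give the exact aggregation formula, so the sketch below is a hypothetical route-based score in the same spirit: per-node CVSS-like scores averaged along an attack route, plus a per-hop penalty standing in for the network (connectivity) component. The weights, topology, and scores are all invented.

```python
# Illustrative route-based vulnerability scoring (NOT the paper's AVQS formula).
# Node scores are CVSS-style base scores on a 0-10 scale.

def route_vulnerability(route, node_score, link_weight=0.3):
    """Combine per-node vulnerability scores along an attack route with a
    penalty per traversed network link, capped at the CVSS maximum of 10."""
    if not route:
        return 0.0
    node_part = sum(node_score[n] for n in route) / len(route)
    network_part = link_weight * (len(route) - 1)   # each hop adds exposure
    return min(10.0, node_part + network_part)

# Toy AMI-style topology: smart meter -> data concentrator -> head-end system
scores = {"meter": 6.5, "concentrator": 4.0, "headend": 7.8}
route_score = route_vulnerability(["meter", "concentrator", "headend"], scores)
```

The point of the sketch is the structural difference from plain CVSS: the same three nodes scored in isolation would ignore the `network_part` term that the route contributes.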

  16. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    PubMed Central

    Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Because of this heterogeneity, a smart grid system has potential security threats in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification. PMID:25152923

  17. A grid-based pseudo-cache solution for MISD biomedical problems with high confidentiality and efficiency.

    PubMed

    Dai, Yuan-Shun; Palakal, Mathew; Hartanto, Shielly; Wang, Xiaolong; Guo, Yanming

    2006-01-01

The complexity of most biomedical/bioinformatics problems requires efficient solutions using collaborative/parallel computing. One promising solution is to implement Grid computing, in an emerging new field called BioGrid. However, one of the most stringent requirements in such a Grid-based solution is data privacy. This paper presents a novel solution, called the Grid-Based Pseudo-Cache (GBPC) solution, that provides confidentiality when using the Grid to efficiently solve MISD (multiple instruction, single data) biomedical problems. It is proved to have equal or better performance than the traditional MIMD solution. Our theories are validated in practice via case studies, and data dependence is also addressed.

  18. Operational flash flood forecasting platform based on grid technology

    NASA Astrophysics Data System (ADS)

    Thierion, V.; Ayral, P.-A.; Angelini, V.; Sauvagnargues-Lesage, S.; Nativi, S.; Payrastre, O.

    2009-04-01

Flash flood events in the south of France, such as those of 8 and 9 September 2002 in the Grand Delta territory, caused major economic and human damage. Following this catastrophic hydrological situation, a reform of the flood warning services was initiated (enacted in 2006). This political reform transformed the 52 existing flood warning services (SAC) into 22 flood forecasting services (SPC), assigning them more hydrologically consistent territories and a new, effective hydrological forecasting mission. Furthermore, a national central service (SCHAPI) was created to ease this transformation and support the local services in their new objectives. New functioning requirements have been identified: - SPC and SCHAPI carry the responsibility to clearly disseminate to public organisms, civil protection actors and the population the crucial hydrological information needed to better anticipate potentially dramatic flood events, - an effective hydrological forecasting mission for these flood forecasting services seems essential, particularly for the flash flood phenomenon. Thus, model improvement and optimization was one of the most critical requirements. Initially dedicated to supporting forecasters in their monitoring mission, through the analysis of measuring stations and rainfall radar images, hydrological models have to become more efficient in their capacity to anticipate the hydrological situation. Understanding the natural phenomena occurring during flash floods is a main focus of present hydrological research. Rather than trying to explain such complex processes, the presented research tries to address the well-known need of these services for computational power and data storage capacity. In recent years, Grid technology has appeared as a technological revolution in high-performance computing (HPC), allowing large-scale resource sharing, use of computational power, and collaboration across networks. Nowadays, the EGEE (Enabling Grids for E-science in Europe) project represents the most important

  19. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

This was an exploratory study to enhance our understanding of problems involved in developing large-scale applications in a heterogeneous distributed environment. It is likely that the large-scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems, and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.
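The translation step described above, regridding plus unit conversion, can be illustrated with a minimal 1-D sketch. The grids, values, and the Fahrenheit-to-Kelvin conversion below are stand-in examples, not the coupled climate codes themselves.

```python
# Minimal coupler translation sketch: resample a field from one module's 1-D
# grid onto another's, then convert units. Grids and values are invented.

def regrid_linear(src_x, src_v, dst_x):
    """Linearly interpolate values src_v sampled at ascending src_x onto dst_x,
    clamping to the end values outside the source range."""
    out = []
    for x in dst_x:
        if x <= src_x[0]:
            out.append(src_v[0]); continue
        if x >= src_x[-1]:
            out.append(src_v[-1]); continue
        j = next(i for i in range(len(src_x) - 1) if src_x[i] <= x < src_x[i + 1])
        t = (x - src_x[j]) / (src_x[j + 1] - src_x[j])
        out.append(src_v[j] * (1 - t) + src_v[j + 1] * t)
    return out

def fahrenheit_to_kelvin(v):
    return (v - 32.0) * 5.0 / 9.0 + 273.15

ocean_x = [0.0, 1.0, 2.0, 3.0]        # "ocean model" grid points
sst_f = [50.0, 59.0, 68.0, 77.0]      # sea-surface temperature in Fahrenheit
atmos_x = [0.5, 1.5, 2.5]             # "atmosphere model" grid points
sst_k = [fahrenheit_to_kelvin(v) for v in regrid_linear(ocean_x, sst_f, atmos_x)]
```

Real couplers do this in 2-D or 3-D with conservative remapping, but the two concerns the study names, grid mismatch and unit mismatch, are both visible here.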

  20. gLExec: gluing grid computing to the Unix world

    NASA Astrophysics Data System (ADS)

    Groep, D.; Koeroo, O.; Venekamp, G.

    2008-07-01

The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both at the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.
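The translation from grid identity to Unix account can be sketched as a pool-account mapping with sticky leases, so a returning DN keeps the same uid and file ownership stays consistent. The table layout, account names, and DNs below are invented for illustration and do not reflect gLExec/LCMAPS internals.

```python
# Toy grid-identity -> Unix pool-account mapping with sticky leases.
# Data structures and names are illustrative, not gLExec's implementation.

POOL_ACCOUNTS = {"cms": ["cms001", "cms002"], "atlas": ["atlas001"]}
LEASES = {}  # X.509 subject DN -> leased Unix account name

def map_credential(subject_dn, vo):
    """Map an X.509 subject DN plus VO membership to a local pool account."""
    if not subject_dn.startswith("/"):
        raise ValueError("malformed subject DN")
    if subject_dn in LEASES:
        return LEASES[subject_dn]           # sticky lease: same DN, same account
    try:
        account = POOL_ACCOUNTS[vo].pop(0)  # take the next free pool account
    except (KeyError, IndexError):
        raise PermissionError("no pool account available for VO " + repr(vo))
    LEASES[subject_dn] = account
    return account

uid_alice = map_credential("/DC=org/DC=grid/CN=Alice", "cms")
uid_alice_again = map_credential("/DC=org/DC=grid/CN=Alice", "cms")
uid_bob = map_credential("/DC=org/DC=grid/CN=Bob", "cms")
```

In a pilot-job scenario this mapping would run on the worker node itself, which is precisely the "fabric boundary pushed down into the resource" point the abstract makes.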

  1. Job Stress, Stress Related to Performance-Based Accreditation, Locus of Control, Age, and Gender As Related to Job Satisfaction and Burnout in Teachers and Principals.

    ERIC Educational Resources Information Center

    Hipps, Elizabeth Smith; Halpin, Glennelle

    The purpose of the study described here was to: (1) determine the amount of variance in burnout and job satisfaction in public school teachers and principals which could be accounted for by stress related to the state's performance-based accreditation standards; (2) examine the relationship between stress related to state standards and the age and…

  2. Design and implementation of GRID-based PACS in a hospital with multiple imaging departments

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Jin, Jin; Sun, Jianyong; Zhang, Jianguo

    2008-03-01

In an enterprise healthcare environment there are usually multiple clinical departments providing imaging-enabled healthcare services, such as radiology, oncology, pathology, and cardiology. The picture archiving and communication system (PACS) is now required not only to support radiology-based image display, workflow and data flow management, but also to offer more specialized image processing and management tools for other departments providing imaging-guided diagnosis and therapy; there is also an urgent demand to integrate the multiple PACSs to provide patient-oriented imaging services for enterprise collaborative healthcare. In this paper, we give the design method and implementation strategy of a grid-based PACS (Grid-PACS) for a hospital with multiple imaging departments or centers. The Grid-PACS functions as middleware between the traditional PACS archiving servers and workstations or image viewing clients, providing DICOM image communication and WADO services to end users. Images can be stored in multiple distributed archiving servers but managed in a centralized manner. The Grid-PACS has automatic image backup and disaster recovery services and can provide the best image retrieval path to image requesters based on optimal algorithms. The designed Grid-PACS has been implemented in Shanghai Huadong Hospital and has been running smoothly for two years.
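As an illustration of the WADO service such middleware exposes, the helper below builds a WADO-URI retrieval URL for a single DICOM object, following the parameter convention of DICOM PS3.18 (`requestType`, `studyUID`, `seriesUID`, `objectUID`, `contentType`). The host name and UIDs are placeholders.

```python
# Build a WADO-URI retrieval URL (DICOM PS3.18 style). Host/UIDs are dummies.
from urllib.parse import urlencode

def wado_uri(base, study_uid, series_uid, object_uid,
             content_type="application/dicom"):
    """Return a WADO-URI URL requesting one DICOM object from a WADO server."""
    params = {
        "requestType": "WADO",      # fixed value identifying a WADO-URI request
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type,
    }
    return base + "?" + urlencode(params)

url = wado_uri("http://pacs.example.org/wado",
               "1.2.840.1", "1.2.840.1.1", "1.2.840.1.1.1")
```

A Grid-PACS front end would answer such a request by locating the object on whichever distributed archive holds it and streaming it back, hiding the storage topology from the client.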

  3. Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY

    NASA Astrophysics Data System (ADS)

    Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan

    2014-06-01

The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite middleware. In this framework, a monitoring system is designed for the H1 Experiment to identify and recognize within the GRID the most suitable resources for execution of CPU-time-consuming Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computing Elements (CEs), Storage Elements (SEs), WMS servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE, and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues, submitted through various WMSs as well as directly to the CREAM-CEs. Real H1 MC production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed, and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in the H1 MC production. The monitoring system allows problems at the GRID sites to be identified and reacted to promptly (for example by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl with the insertion of a few shell scripts. In addition to the test monitoring system, we use information from real production jobs to monitor the availability and quality of the GRID resources. The monitoring tools register the number of job resubmissions and the percentage of failed and finished jobs relative to all jobs on the CEs, and determine the average values of waiting and running time for the
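The probe-job principle described above, submit short test jobs, measure waiting and run time, keep only healthy resources, can be sketched as a ranking function over probe outcomes. The data layout and thresholds are assumptions; the real system drives middleware commands rather than in-memory dictionaries.

```python
# Rank Computing Elements from short probe-job outcomes. Each outcome is a
# (waiting_seconds, running_seconds, succeeded) tuple; all data is invented.

def rank_compute_elements(probe_results, max_wait=600.0):
    """probe_results: {ce_name: [(waited_s, ran_s, ok), ...]}.
    Keep CEs whose probes all succeeded and whose mean waiting time is within
    the budget; return them best (lowest mean wait) first."""
    usable = []
    for ce, runs in probe_results.items():
        if not runs or not all(ok for _, _, ok in runs):
            continue                       # a failed probe disqualifies the CE
        mean_wait = sum(w for w, _, _ in runs) / len(runs)
        if mean_wait <= max_wait:
            usable.append((mean_wait, ce))
    return [ce for _, ce in sorted(usable)]

probes = {
    "ce1.desy.de": [(30.0, 120.0, True), (45.0, 118.0, True)],
    "ce2.example": [(900.0, 110.0, True)],   # queue too slow: over the budget
    "ce3.example": [(20.0, 130.0, False)],   # probe job failed
}
good_ces = rank_compute_elements(probes)
```

The surviving list is what would feed the automatically generated configuration files mentioned in the abstract.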

  4. Professional confidence and job satisfaction: an examination of counselors' perceptions in faith-based and non-faith-based drug treatment programs.

    PubMed

    Chu, Doris C; Sung, Hung-En

    2014-08-01

    Understanding substance abuse counselors' professional confidence and job satisfaction is important since such confidence and satisfaction can affect the way counselors go about their jobs. Analyzing data derived from a random sample of 110 counselors from faith-based and non-faith-based treatment programs, this study examines counselors' professional confidence and job satisfaction in both faith-based and non-faith-based programs. The multivariate analyses indicate years of experience and being a certified counselor were the only significant predictors of professional confidence. There was no significant difference in perceived job satisfaction and confidence between counselors in faith-based and non-faith-based programs. A majority of counselors in both groups expressed a high level of satisfaction with their job. Job experience in drug counseling and prior experience as an abuser were perceived by counselors as important components to facilitate counseling skills. Policy implications are discussed.

  5. A Correlational Study of Telework Frequency, Information Communication Technology, and Job Satisfaction of Home-Based Teleworkers

    ERIC Educational Resources Information Center

    Webster-Trotman, Shana P.

    2010-01-01

    In 2008, 33.7 million Americans teleworked from home. The Telework Enhancement Act (S. 707) and the Telework Improvements Act (H.R. 1722) of 2009 were designed to increase the number of teleworkers. The research problem addressed was the lack of understanding of factors that influence home-based teleworkers' job satisfaction. Job dissatisfaction…

  6. A Computer-Based, Interactive Videodisc Job Aid and Expert System for Electron Beam Lithography Integration and Diagnostic Procedures.

    ERIC Educational Resources Information Center

    Stevenson, Kimberly

    This master's thesis describes the development of an expert system and interactive videodisc computer-based instructional job aid used for assisting in the integration of electron beam lithography devices. Comparable to all comprehensive training, expert system and job aid development require a criterion-referenced systems approach treatment to…

  7. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR THE JOB CORPS UNDER TITLE I OF THE WORKFORCE INVESTMENT ACT Program Activities and Center Operations...

  8. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 4 2012-04-01 2012-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) THE JOB CORPS UNDER TITLE I OF THE WORKFORCE INVESTMENT ACT Program Activities and...

  9. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 4 2014-04-01 2014-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) THE JOB CORPS UNDER TITLE I OF THE WORKFORCE INVESTMENT ACT Program Activities and...

  10. Creating Better Child Care Jobs: Model Work Standards for Teaching Staff in Center-Based Child Care.

    ERIC Educational Resources Information Center

    Center for the Child Care Workforce, Washington, DC.

    This document presents model work standards articulating components of the child care center-based work environment that enable teachers to do their jobs well. These standards establish criteria to assess child care work environments and identify areas to improve in order to assure good jobs for adults and good care for children. The standards are…

  11. Community and job satisfactions: an argument for reciprocal influence based on the principle of stimulus generalization

    SciTech Connect

    Gavin, J.; Montgomery, J.C.

    1982-10-01

The principle of stimulus generalization provided the underlying argument for a test of hypotheses regarding the association of community and job satisfactions and a critique of related theory and research. Two-stage least squares (2SLS) analysis made possible the examination of reciprocal causation, a notion inherent in the theoretical argument. Data were obtained from 276 employees of a Western U.S. coal mine as part of a work attitudes survey. The 2SLS analysis indicated a significant impact of community satisfaction on job satisfaction and a borderline-significant effect of job satisfaction on community satisfaction. Theory-based correlational comparisons were made on groups of employees residing in four distinct communities, high- and low-tenure groups, males and females, and different levels in the mine's hierarchy. The pattern of correlations was generally consistent with predictions, but significance tests for differences yielded equivocal support. When considered in the context of previous studies, the data upheld a reciprocal causal model and the explanatory principle of stimulus generalization for understanding the relation of community and job satisfactions. Sample characteristics necessitate cautious interpretation, and the model per se might best be viewed as a heuristic framework for more definitive research.
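The 2SLS idea used here, instrument an endogenous regressor, then run OLS on its fitted values, can be shown compactly on simulated data (the numbers below are synthetic, not the mine survey). With a common shock in both equations, plain OLS is biased, while 2SLS recovers the true slope.

```python
# Two-stage least squares for one endogenous regressor, on simulated data.
import numpy as np

def two_sls(y, x, z):
    """Stage 1: regress x on [1, z] and take fitted values x_hat.
    Stage 2: regress y on [1, x_hat]. Returns (intercept, slope)."""
    n = len(y)
    Z = np.column_stack([np.ones(n), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # stage-1 fitted values
    X_hat = np.column_stack([np.ones(n), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # stage-2 coefficients

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)          # instrument: drives x, unrelated to the shock
u = rng.normal(size=n)          # common shock: makes x endogenous in y
x = 1.0 + 2.0 * z + u
y = 0.5 + 1.5 * x + u           # true slope is 1.5; OLS would overestimate it
b0, b1 = two_sls(y, x, z)
```

Reciprocal causation, as in the paper, is handled the same way, by estimating each equation with instruments excluded from the other.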

  12. Prediction of the position of an animal based on populations of grid and place cells: a comparative simulation study.

    PubMed

    Guanella, Alexis; Verschure, Paul F M J

    2007-09-01

    The grid cells of the rodent medial entorhinal cortex (MEC) show activity patterns correlated with the animal's position. Unlike hippocampal place cells that are activated at only one specific location in the environment, MEC grid cells increase firing frequency at multiple regions in space, or subfields, that are arranged in regular triangular grids. It has been recently shown that a conjunction of MEC grid cells can lead to unique spatial representations. However, it remains unclear what the key properties of the grids are that allow for an accurate reconstruction of the position of the animal and what the comparison with hippocampal place cells is. Here we use a theoretical approach based on data from electrophysiological recordings of the MEC to simulate the neural activity of grid cells. Our simulations account for the accurate reproduction of grid cell mean firing rates, based on only three grid parameters, that is grid phase, spacing and orientation. The analysis of the key properties of the grids first reveals that for an accurate position reconstruction, it is necessary to combine cells with different grid spacings (which are found at different dorsoventral locations of the MEC) or orientations. Second, the relationship between grid spacing and subfield size observed in physiological data is optimal to predict the animal's position. Third, the regular triangular tessellating patterns of grid cells lead to the best position reconstruction results when compared with all other regular tessellations of two-dimensional space. Finally, the comparison of grid cells with place cells shows that populations of MEC grid cells can better predict the animal's position than equally-sized populations of hippocampal place cells with similar but unique spatial fields. Taken together, our results suggest that the MEC provides highly compact representations of the animal's position, which may be subsequently integrated by the place cells of the hippocampus.
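A grid cell's firing map parameterized by exactly the three quantities named above, phase, spacing, and orientation, is commonly idealized as a sum of three plane waves 60 degrees apart. The sketch below uses that textbook model, not the paper's fitted tuning curves; the peak rate is an arbitrary choice.

```python
# Idealized grid-cell firing map: sum of three cosines with wave vectors
# 60 degrees apart, parameterized by phase, spacing, and orientation.
import math

def grid_rate(pos, phase=(0.0, 0.0), spacing=0.5, orientation=0.0,
              peak_rate=10.0):
    """Firing rate (Hz) at 2-D position pos (metres). The sum of the three
    cosines ranges over [-1.5, 3], which is rescaled to [0, peak_rate]."""
    k = 4.0 * math.pi / (math.sqrt(3.0) * spacing)  # wave number for the spacing
    dx, dy = pos[0] - phase[0], pos[1] - phase[1]
    total = 0.0
    for j in range(3):
        theta = orientation + j * math.pi / 3.0     # three axes, 60 deg apart
        total += math.cos(k * (dx * math.cos(theta) + dy * math.sin(theta)))
    return peak_rate * (total + 1.5) / 4.5

r_at_vertex = grid_rate((0.0, 0.0))   # at a grid vertex all three cosines peak
r_elsewhere = grid_rate((0.17, 0.11))
```

Position decoding then amounts to inverting a population of such maps with different spacings or orientations, which is why the abstract finds single-module populations insufficient.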

  13. Creating Motivating Job Aids.

    ERIC Educational Resources Information Center

    Tilaro, Angie; Rossett, Allison

    1993-01-01

    Explains how to create job aids that employees will be motivated to use, based on a review of pertinent literature and interviews with professionals. Topics addressed include linking motivation with job aids; Keller's ARCS (Attention, Relevance, Confidence, Satisfaction) model of motivation; and design strategies for job aids based on Keller's…

  14. Design of a nonlinear backstepping control strategy of grid interconnected wind power system based PMSG

    NASA Astrophysics Data System (ADS)

    Errami, Y.; Obbadi, A.; Sahnoun, S.; Benhmida, M.; Ouassaid, M.; Maaroufi, M.

    2016-07-01

This paper presents a nonlinear backstepping control for a Wind Power Generation System (WPGS) based on a Permanent Magnet Synchronous Generator (PMSG) connected to the utility grid. The block diagram of the WPGS with the PMSG and the grid-side back-to-back converter is established in the dq frame of axes. The control scheme emphasises regulation of the dc-link voltage and control of the power factor under changing wind speed. In addition, the proposed control strategy provides a Maximum Power Point Tracking (MPPT) technique and pitch control. The stability of the regulators is assured by employing Lyapunov analysis. The proposed control strategy has been validated by MATLAB simulations under varying wind velocity and grid fault conditions. In addition, a comparison of simulation results based on the proposed backstepping strategy and conventional vector control is provided.
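The backstepping recipe, design a virtual control for the first state, then back up through the chain with a Lyapunov function, can be shown on the simplest possible plant. The double integrator below is a stand-in illustration only; the paper's PMSG/converter model is far richer.

```python
# Generic integrator backstepping for x1' = x2, x2' = u (NOT the PMSG model).
# With alpha = -k1*x1, z2 = x2 - alpha, and V = (x1^2 + z2^2)/2, the law below
# gives V' = -k1*x1^2 - k2*z2^2 <= 0, so the origin is asymptotically stable.

def backstepping_u(x1, x2, k1=2.0, k2=2.0):
    z2 = x2 + k1 * x1          # error w.r.t. the virtual control alpha = -k1*x1
    alpha_dot = -k1 * x2       # time derivative of the virtual control
    return -x1 - k2 * z2 + alpha_dot

# Forward-Euler simulation from a nonzero initial condition.
x1, x2, dt = 1.0, -0.5, 1e-3
for _ in range(20000):         # 20 s of simulated time
    u = backstepping_u(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
```

The same two-step construction, virtual control plus Lyapunov-certified correction, is what the paper applies per control loop (dc-link voltage, currents) on the full model.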

  15. A methodology toward manufacturing grid-based virtual enterprise operation platform

    NASA Astrophysics Data System (ADS)

    Tan, Wenan; Xu, Yicheng; Xu, Wei; Xu, Lida; Zhao, Xianhua; Wang, Li; Fu, Liuliu

    2010-08-01

Virtual enterprises (VEs) have become one of the main types of organisations in the manufacturing sector, through which consortium companies organise their manufacturing activities. To be competitive, a VE relies on the complementary core competences of its members through resource sharing and agile manufacturing capacity. The manufacturing grid (M-Grid) is a platform on which production resources can be shared. In this article, an M-Grid-based VE operation platform (MGVEOP) is presented, as it enables the sharing of production resources among geographically distributed enterprises. The performance management system of the MGVEOP is based on the balanced scorecard and has the capacity for self-learning. The study shows that the MGVEOP can make a semi-automated process possible for a VE, and that the proposed MGVEOP is efficient and agile.

  16. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.
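The "comparisons with complex-variable computations" used above to verify sensitivity derivatives refer to complex-step differentiation: perturb the input by i*h and read the derivative from the imaginary part, with no subtractive cancellation, so h can be tiny. A minimal scalar illustration (obviously not the flow solver itself):

```python
# Complex-step differentiation: f'(x) ~= Im(f(x + i*h)) / h for analytic f.
import cmath
import math

def complex_step_derivative(f, x, h=1e-30):
    """Derivative of a real-analytic f at real x via the complex-step formula.
    Unlike finite differences, there is no subtraction, so h can be ~1e-30."""
    return f(complex(x, h)).imag / h

f = lambda x: cmath.exp(x) * cmath.sin(x)   # analytic test function
x0 = 0.7
d_cs = complex_step_derivative(f, x0)
d_exact = math.exp(x0) * (math.sin(x0) + math.cos(x0))
```

Agreement to near machine precision between such derivatives and the adjoint output is what "consistency of sensitivity derivatives" means in practice.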

  17. Model Predictive Control of A Matrix-Converter Based Solid State Transformer for Utility Grid Interaction

    SciTech Connect

    Xue, Yaosuo

    2016-01-01

The matrix converter solid state transformer (MC-SST), formed from the back-to-back connection of two three-to-single-phase matrix converters, is studied for use in the interconnection of two ac grids. The matrix converter topology provides lightweight, low-volume, single-stage bidirectional ac-ac power conversion without the need for a dc link. Thus, the lifetime limitations of dc-bus storage capacitors are avoided. However, space vector modulation of this type of MC-SST requires computing vectors for each of the two MCs, which must be carefully coordinated to avoid commutation failure. An additional controller is also required to control power exchange between the two ac grids. In this paper, model predictive control (MPC) is proposed for an MC-SST connecting two different ac power grids. The proposed MPC predicts the circuit variables based on the discrete model of the MC-SST system, and the cost function is formulated so that the optimal switch vector for the next sample period is selected, thereby generating the required grid currents for the SST. Simulation and experimental studies are carried out to demonstrate the effectiveness and simplicity of the proposed MPC for such MC-SST-based grid interfacing systems.
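The predict-then-select loop described above is the essence of finite-control-set MPC: enumerate the admissible switch vectors, predict the next current sample from a discrete circuit model, and keep the vector minimizing a tracking cost. The toy below uses a first-order RL model and a handful of voltage levels standing in for switch vectors; both are simplifications of the MC-SST.

```python
# One step of finite-control-set MPC on a toy RL load (NOT the MC-SST model).

def fcs_mpc_step(i_now, i_ref, v_candidates, R=0.5, L=1e-3, Ts=50e-6):
    """Predict i[k+1] = i[k] + Ts/L * (v - R*i[k]) for each candidate voltage
    and return (best_voltage, predicted_current) under a quadratic cost."""
    best = None
    for v in v_candidates:
        i_pred = i_now + (Ts / L) * (v - R * i_now)
        cost = (i_ref - i_pred) ** 2        # current-tracking cost term
        if best is None or cost < best[0]:
            best = (cost, v, i_pred)
    return best[1], best[2]

# Discrete output levels standing in for the converter's switch vectors.
vectors = [-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0]
v_opt, i_next = fcs_mpc_step(i_now=4.0, i_ref=10.0, v_candidates=vectors)
```

For the real MC-SST the candidate set is the converter's switch-vector table and the cost adds terms for the second grid's current and commutation constraints, but the enumerate-predict-select structure is the same.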

  18. An adaptive grid for graph-based segmentation in retinal OCT

    PubMed Central

    Lang, Andrew; Carass, Aaron; Calabresi, Peter A.; Ying, Howard S.; Prince, Jerry L.

    2016-01-01

Graph-based methods for retinal layer segmentation have proven to be popular due to their efficiency and accuracy. These methods build a graph with nodes at each voxel location and use edges connecting nodes to encode the hard constraints of each layer’s thickness and smoothness. In this work, we explore deforming the regular voxel grid to allow adjacent vertices in the graph to more closely follow the natural curvature of the retina. This deformed grid is constructed by fixing node locations based on a regression model of each layer’s thickness relative to the overall retina thickness, thus generating a subject-specific grid. Graph vertices are not at voxel locations, which allows for control over the resolution that the graph represents. By incorporating soft constraints between adjacent nodes, segmentation on this grid will favor smoothly varying surfaces consistent with the shape of the retina. Our final segmentation method then follows our previous work. Boundary probabilities are estimated using a random forest classifier followed by an optimal graph search algorithm on the new adaptive grid to produce a final segmentation. Our method is shown to produce a more consistent segmentation with an overall accuracy of 3.38 μm across all boundaries. PMID:27773959
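The optimal graph search step can be illustrated with a single-boundary dynamic-programming version: find the minimum-cost row path across columns subject to a hard shift constraint between adjacent columns. This is a simplification of the multi-surface search, and the synthetic cost image is an assumption for illustration:

```python
import numpy as np

def optimal_boundary(cost, max_shift=1):
    """Minimum-cost row path across columns, with |row shift| <= max_shift."""
    n_rows, n_cols = cost.shape
    dp = cost[:, 0].copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    for c in range(1, n_cols):
        new = np.full(n_rows, np.inf)
        for r in range(n_rows):
            lo, hi = max(0, r - max_shift), min(n_rows, r + max_shift + 1)
            best = lo + int(np.argmin(dp[lo:hi]))   # best feasible predecessor
            new[r] = dp[best] + cost[r, c]
            back[r, c] = best
        dp = new
    # Trace the minimum-cost path back from the last column
    path = [int(np.argmin(dp))]
    for c in range(n_cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]

# Plant a smooth boundary in a synthetic (1 - probability) cost image
true_boundary = [5, 5, 6, 7, 7, 8, 8, 7, 6, 6]
cost = np.ones((12, len(true_boundary)))
for c, r in enumerate(true_boundary):
    cost[r, c] = 0.0
```

Cost images derived from boundary probabilities (e.g. 1 minus a classifier's output) slot directly into the same search.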

  19. Grid-based asynchronous migration of execution context in Java virtual machines

    SciTech Connect

    von Laszewski, G.; Shudo, K.; Muraoka, Y.

    2000-06-15

Previous research efforts for building thread migration systems have concentrated on the development of frameworks dealing with a small local environment controlled by a single user. Computational Grids provide the opportunity to utilize a large-scale environment spanning different organizational boundaries. Using this class of large-scale computational resources as part of a thread migration system provides a significant challenge previously not addressed by this community. In this paper the authors present a framework that integrates Grid services to enhance the functionality of a thread migration system. To accommodate future Grid services, the design of the framework is both flexible and extensible. Currently, the thread migration system contains Grid services for authentication, registration, lookup, and automatic software installation. In the context of distributed applications executed on a Grid-based infrastructure, the asynchronous migration of an execution context can help solve problems such as remote execution, load balancing, and the development of mobile agents. The prototype is based on the migration of Java threads, allowing asynchronous and heterogeneous migration of the execution context of the running code.

  20. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ruuth, S. J.

    2016-05-01

Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces; however, the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.
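The core closest-point idea is compact enough to sketch for the unit circle: extend a surface function to the embedding grid by sampling it at each grid point's closest point on the surface, after which the ordinary Cartesian Laplacian evaluated near the surface approximates the surface Laplacian (for u = cos θ on the unit circle, Δ_S u = −cos θ). The grid spacing and domain below are arbitrary choices, not the paper's setup:

```python
import numpy as np

h = 0.02
x = np.arange(-2, 2 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")

# Closest point of (X, Y) on the unit circle has angle atan2(Y, X), so the
# closest-point extension of u(theta) = cos(theta) is simply:
theta = np.arctan2(Y, X)
U = np.cos(theta)

# Standard 5-point Laplacian on the Cartesian grid
lap = (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
       np.roll(U, 1, 1) + np.roll(U, -1, 1) - 4 * U) / h ** 2
```

At the grid point nearest (1, 0), where the surface Laplacian of cos θ equals −1, the grid Laplacian of the extension reproduces that value to O(h²).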

  1. A Cosmic Dust Sensor Based on an Array of Grid Electrodes

    NASA Astrophysics Data System (ADS)

    Li, Y. W.; Bugiel, S.; Strack, H.; Srama, R.

    2014-04-01

We describe a low-mass, high-sensitivity cosmic dust trajectory sensor using an array of grid segments [1]. The sensor determines the particle velocity vector and the particle mass. An impact target is used for the detection of the impact plasma of high-speed particles like interplanetary dust grains or high-speed ejecta. Slower particles are measured by three planes of grid electrodes using charge induction. In contrast to conventional dust trajectory sensors based on wire electrodes, grid electrodes provide a robust and sensitive design with a trajectory resolution of a few degrees. Coulomb simulations and laboratory tests were performed in order to verify the instrument design. The signal shapes are used to derive the particle plane intersection points and the exact particle trajectory. The accuracy of the instrument for the incident angle depends on the particle charge, the position of the intersection point and the signal-to-noise ratio of the charge sensitive amplifier (CSA). This grid-electrode design has some advantages over conventional trajectory sensors using individual wire electrodes: the grid segment electrodes show higher amplitudes (close to 100% induced charge) and the overall number of measurement channels can be reduced. This allows a compact instrument with low power and mass requirements.

  2. Are health workers motivated by income? Job motivation of Cambodian primary health workers implementing performance-based financing.

    PubMed

    Khim, Keovathanak

    2016-01-01

Background: Financial incentives are widely used in performance-based financing (PBF) schemes, but their contribution to health workers' incomes and job motivation is poorly understood. Cambodia undertook health sector reform from the middle of 2009 and PBF was employed as a part of the reform process. Objective: This study examines job motivation for primary health workers (PHWs) under PBF reform in Cambodia and assesses the relationship between job motivation and income. Design: A cross-sectional self-administered survey was conducted on 266 PHWs, from 54 health centers in the 15 districts involved in the reform. The health workers were asked to report all sources of income from public sector jobs and provide answers to 20 items related to job motivation. Factor analysis was conducted to identify the latent variables of job motivation. Factors associated with motivation were identified through multivariable regression. Results: PHWs reported multiple sources of income and an average total income of US$190 per month. Financial incentives under the PBF scheme account for 42% of the average total income. PHWs had an index motivation score of 4.9 (on a scale from one to six), suggesting they had generally high job motivation that was related to a sense of community service, respect, and job benefits. Regression analysis indicated that income and the perception of a fair distribution of incentives were both statistically significant in association with higher job motivation scores. Conclusions: Financial incentives used in the reform formed a significant part of health workers' income and influenced their job motivation. Improving job motivation requires fixing payment mechanisms and increasing the size of incentives. PBF is more likely to succeed when income, training needs, and the desire for a sense of community service are addressed and institutionalized within the health system.

  3. Are health workers motivated by income? Job motivation of Cambodian primary health workers implementing performance-based financing

    PubMed Central

    Khim, Keovathanak

    2016-01-01

Background: Financial incentives are widely used in performance-based financing (PBF) schemes, but their contribution to health workers’ incomes and job motivation is poorly understood. Cambodia undertook health sector reform from the middle of 2009 and PBF was employed as a part of the reform process. Objective: This study examines job motivation for primary health workers (PHWs) under PBF reform in Cambodia and assesses the relationship between job motivation and income. Design: A cross-sectional self-administered survey was conducted on 266 PHWs, from 54 health centers in the 15 districts involved in the reform. The health workers were asked to report all sources of income from public sector jobs and provide answers to 20 items related to job motivation. Factor analysis was conducted to identify the latent variables of job motivation. Factors associated with motivation were identified through multivariable regression. Results: PHWs reported multiple sources of income and an average total income of US$190 per month. Financial incentives under the PBF scheme account for 42% of the average total income. PHWs had an index motivation score of 4.9 (on a scale from one to six), suggesting they had generally high job motivation that was related to a sense of community service, respect, and job benefits. Regression analysis indicated that income and the perception of a fair distribution of incentives were both statistically significant in association with higher job motivation scores. Conclusions: Financial incentives used in the reform formed a significant part of health workers’ income and influenced their job motivation. Improving job motivation requires fixing payment mechanisms and increasing the size of incentives. PBF is more likely to succeed when income, training needs, and the desire for a sense of community service are addressed and institutionalized within the health system. PMID:27319575

  4. Experience with Remote Job Execution

    SciTech Connect

    Lynch, Vickie E; Cobb, John W; Green, Mark L; Kohl, James Arthur; Miller, Stephen D; Ren, Shelly; Smith, Bradford C; Vazhkudai, Sudharshan S

    2008-01-01

The Neutron Science Portal at Oak Ridge National Laboratory submits jobs to the TeraGrid for remote job execution. The TeraGrid is a network of high performance computers supported by the US National Science Foundation. There are eleven partner facilities with over a petaflop of peak computing performance and sixty petabytes of long-term storage. Globus is installed on a local machine and used for job submission. The graphical user interface is produced by Java code that reads an XML file. After submission, the status of the job is displayed in a Job Information Service window, which queries Globus for the status. The output folder produced in the scratch directory of the TeraGrid machine is returned to the portal with the globus-url-copy command, which uses the GridFTP servers on the TeraGrid machines. This folder is copied from the stage-in directory of the community account to the user's results directory, where the output can be plotted using the portal's visualization services. The primary problem with remote job execution is diagnosing execution problems. We run daily tests that submit multiple remote jobs from the portal. When these jobs fail on a computer, it is difficult to diagnose the problem from the Globus output. Successes and problems will be presented.

  5. Grid generation strategies for turbomachinery configurations

    NASA Technical Reports Server (NTRS)

    Lee, K. D.; Henderson, T. L.

    1991-01-01

Turbomachinery flow fields involve unique grid generation issues due to their geometrical and physical characteristics. Several strategic approaches are discussed for generating quality grids. The grid quality is further enhanced through blending and adapting. Grid blending smooths the grids locally through averaging and diffusion operators. Grid adaptation redistributes the grid points based on a grid quality assessment. These methods are demonstrated with several examples.
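Grid blending by averaging can be sketched as Laplacian smoothing of node coordinates: each interior node is repeatedly moved toward the mean of its four neighbours, a discrete diffusion operator. This is a generic illustration; the perturbation, relaxation factor, and iteration count are arbitrary assumptions:

```python
import numpy as np

def blend(points, iterations=50, alpha=0.5):
    # points has shape (ni, nj, 2); boundary nodes stay fixed while each
    # interior node relaxes toward the average of its four neighbours.
    p = points.copy()
    for _ in range(iterations):
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                      p[1:-1, :-2] + p[1:-1, 2:])
        p[1:-1, 1:-1] = (1 - alpha) * p[1:-1, 1:-1] + alpha * avg
    return p

# A uniform structured grid with noisy interior nodes
n = 11
u = np.linspace(0, 1, n)
grid = np.stack(np.meshgrid(u, u, indexing="ij"), axis=-1)
rng = np.random.default_rng(0)
noisy = grid.copy()
noisy[1:-1, 1:-1] += 0.02 * rng.standard_normal((n - 2, n - 2, 2))
smoothed = blend(noisy)
```

With the boundary held fixed, the interior relaxes toward the smooth grid implied by the boundary, which is the local-averaging behaviour the abstract describes.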

  6. Elliptic Curve Cryptography-Based Authentication with Identity Protection for Smart Grids.

    PubMed

    Zhang, Liping; Tang, Shanyu; Luo, He

    2016-01-01

In a smart grid, the power service provider enables the expected power generation amount to be measured according to current power consumption, thus stabilizing the power system. However, the data transmitted over smart grids are not protected, and then suffer from several types of security threats and attacks. Thus, a robust and efficient authentication protocol should be provided to strengthen the security of smart grid networks. As the Supervisory Control and Data Acquisition system provides security protection between the control center and substations in most smart grid environments, we focus on how to secure the communications between the substations and smart appliances. Existing security approaches fail to address the performance-security balance. In this study, we suggest a mitigation authentication protocol based on Elliptic Curve Cryptography with privacy protection by using a tamper-resistant device at the smart appliance side to achieve a delicate balance between performance and security of smart grids. The proposed protocol provides some attractive features such as identity protection, mutual authentication and key agreement. Finally, we demonstrate the completeness of the proposed protocol using the Gong-Needham-Yahalom logic.
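The elliptic-curve arithmetic such protocols build on can be illustrated with a toy curve over a small prime field. The curve, field size, and scalars below are illustrative only and far too small for real use; actual deployments would use a standardised curve such as NIST P-256:

```python
# Toy curve y^2 = x^3 + 2x + 3 over F_97 (non-singular: 4*2^3 + 27*3^2 != 0 mod 97)
P = 97
A, B = 2, 3
G = (0, 10)          # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)

def add(p1, p2):
    # Group law; None represents the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def mul(k, pt):
    # Double-and-add scalar multiplication.
    result = None
    while k:
        if k & 1:
            result = add(result, pt)
        pt, k = add(pt, pt), k >> 1
    return result
```

Commutativity of scalar multiplication, a·(b·G) = b·(a·G), is what lets two parties derive the same shared secret in ECC-based key agreement.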

  7. Cygrid: A fast Cython-powered convolution-based gridding module for Python

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

Context. Data gridding is a common task in astronomy and many other science disciplines. It refers to the resampling of irregularly sampled data to a regular grid. Aims: We present cygrid, a library module for the general purpose programming language Python. Cygrid can be used to resample data to any collection of target coordinates, although its typical application involves FITS maps or data cubes. The FITS world coordinate system standard is supported. Methods: The regridding algorithm is based on the convolution of the original samples with a kernel of arbitrary shape. We introduce a lookup table scheme that allows us to parallelize the gridding and combine it with the HEALPix tessellation of the sphere for fast neighbor searches. Results: We show that for n input data points, cygrid's runtime scales between O(n) and O(n log n) and analyze the performance gain that is achieved using multiple CPU cores. We also compare the gridding speed with other techniques, such as nearest-neighbor, and linear and cubic spline interpolation. Conclusions: Cygrid is a very fast and versatile gridding library that significantly outperforms other third-party Python modules, such as the linear and cubic spline interpolation provided by SciPy. https://github.com/bwinkel/cygrid
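Convolution-based gridding itself fits in a few lines: spread each irregular sample onto the regular grid with a kernel, accumulate data and weights, and normalise. This is a naive O(n × grid) sketch without cygrid's lookup-table and HEALPix acceleration; the Gaussian kernel width and grid are assumptions:

```python
import numpy as np

def grid_samples(xs, ys, values, grid_x, grid_y, sigma):
    """Resample irregular samples onto a regular grid with a Gaussian kernel."""
    data = np.zeros((len(grid_y), len(grid_x)))
    weights = np.zeros_like(data)
    for x, y, v in zip(xs, ys, values):
        w = np.exp(-((grid_x[None, :] - x) ** 2 +
                     (grid_y[:, None] - y) ** 2) / (2.0 * sigma ** 2))
        data += w * v          # kernel-weighted accumulation
        weights += w
    with np.errstate(invalid="ignore"):
        return np.where(weights > 0, data / weights, np.nan)

# Irregular samples of a constant field should grid to that constant
rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
gridded = grid_samples(xs, ys, np.full(200, 5.0),
                       np.linspace(0, 1, 16), np.linspace(0, 1, 16), sigma=0.1)
```

Because the output is a weighted mean, gridding a constant field returns the constant wherever the accumulated weight is nonzero — a convenient sanity check for any gridder.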

  8. Elliptic Curve Cryptography-Based Authentication with Identity Protection for Smart Grids

    PubMed Central

    Zhang, Liping; Tang, Shanyu; Luo, He

    2016-01-01

In a smart grid, the power service provider enables the expected power generation amount to be measured according to current power consumption, thus stabilizing the power system. However, the data transmitted over smart grids are not protected, and then suffer from several types of security threats and attacks. Thus, a robust and efficient authentication protocol should be provided to strengthen the security of smart grid networks. As the Supervisory Control and Data Acquisition system provides security protection between the control center and substations in most smart grid environments, we focus on how to secure the communications between the substations and smart appliances. Existing security approaches fail to address the performance-security balance. In this study, we suggest a mitigation authentication protocol based on Elliptic Curve Cryptography with privacy protection by using a tamper-resistant device at the smart appliance side to achieve a delicate balance between performance and security of smart grids. The proposed protocol provides some attractive features such as identity protection, mutual authentication and key agreement. Finally, we demonstrate the completeness of the proposed protocol using the Gong-Needham-Yahalom logic. PMID:27007951

  9. Information Security Risk Assessment of Smart Grid Based on Absorbing Markov Chain and SPA

    NASA Astrophysics Data System (ADS)

    Jianye, Zhang; Qinshun, Zeng; Yiyang, Song; Cunbin, Li

    2014-12-01

To assess and prevent smart grid information security risks more effectively, this paper provides a quantitative risk-index calculation method based on an absorbing Markov chain, overcoming two deficiencies of earlier work: links between system components were not taken into consideration, and studies were mostly limited to static evaluation. The method avoids the shortcomings of traditional expert scoring, with its significant subjective factors, and also considers the links between information system components, which makes the risk index system closer to reality. Then, a smart grid information security risk assessment model was established on the basis of set pair analysis (SPA) improved by the Markov chain. Using the identity, discrepancy, and contradiction of the connection degree to dynamically reflect the trend of smart grid information security risk, and combining this with the Markov chain to calculate the connection degree of the next period, the model implements smart grid information security risk assessment comprehensively and dynamically. Finally, this paper shows that the established model is scientific, effective, and feasible for dynamically evaluating smart grid information security risks.
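The absorbing-Markov-chain machinery such a risk index builds on is standard and small enough to sketch: with the transition matrix in canonical form [[Q, R], [0, I]], the fundamental matrix N = (I − Q)⁻¹ gives expected visits to transient states, N·1 the expected steps to absorption, and N·R the absorption probabilities. The transition probabilities below are made up for illustration:

```python
import numpy as np

# Two transient (risk) states and one absorbing state, canonical form
Q = np.array([[0.5, 0.3],      # transitions among transient states
              [0.2, 0.4]])
R = np.array([[0.2],           # transitions into the absorbing state
              [0.4]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits
t = N @ np.ones(2)                 # expected steps before absorption
B = N @ R                          # absorption probabilities
```

With a single absorbing state every row of B must equal 1, a useful consistency check on the transition matrix.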

  10. The design and implementation of a remote sensing image processing system based on grid middleware

    NASA Astrophysics Data System (ADS)

    Zhong, Liang; Ma, Hongchao; Xu, Honggen; Ding, Yi

    2009-10-01

In this article, a remote sensing image processing system is established under the Condor environment to address the significant scientific problem of processing and distributing massive earth-observation data quantitatively, intelligently and with high efficiency. This system covers remote task submission, Grid middleware for mass image processing, and the quick distribution of remote-sensing images. The application of this Grid-based system supports a clear conclusion: it proves to be an effective way to solve the present problems of fast processing, quick distribution and sharing of massive remote-sensing images.

  11. Reducing the dimensionality of grid based methods for electron-atom scattering calculations below ionization threshold

    NASA Astrophysics Data System (ADS)

    Benda, Jakub; Houfek, Karel

    2017-04-01

For total energies below the ionization threshold it is possible to dramatically reduce the computational burden of the solution of the electron-atom scattering problem based on grid methods combined with exterior complex scaling. As in the R-matrix method, the problem can be split into an inner and an outer problem, where the outer problem considers only the energetically accessible asymptotic channels. The (N + 1)-electron inner problem is coupled to the one-electron outer problems for every channel, resulting in a matrix that scales only linearly with the size of the outer grid.

  12. Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh

    2001-01-01

    A parallel version of an unstructured-grid based Navier-Stokes solver, USM3Dns, previously developed for efficient operation on a variety of parallel computers, has been enhanced to incorporate upgrades made to the serial version. The resultant parallel code has been extensively tested on a variety of problems of aerospace interest and on two sets of parallel computers to understand and document its characteristics. An innovative grid renumbering construct and use of non-blocking communication are shown to produce superlinear computing performance. Preliminary results from parallelization of a recently introduced "porous surface" boundary condition are also presented.

  13. An unstructured grid, three-dimensional model based on the shallow water equations

    USGS Publications Warehouse

    Casulli, V.; Walters, R.A.

    2000-01-01

A semi-implicit finite difference model based on the three-dimensional shallow water equations is modified to use unstructured grids. There are obvious advantages in using unstructured grids in problems with a complicated geometry. In this development, the concept of unstructured orthogonal grids is introduced and applied to this model. The governing differential equations are discretized by means of a semi-implicit algorithm that is robust, stable and very efficient. The resulting model is relatively simple, conserves mass, can fit complicated boundaries and yet is sufficiently flexible to permit local mesh refinements in areas of interest. Moreover, the simulation of the flooding and drying is included in a natural and straightforward manner. These features are illustrated by a test case for studies of convergence rates and by examples of flooding on a river plain and flow in a shallow estuary. Copyright © 2000 John Wiley & Sons, Ltd.

  14. Implementation of fuzzy-sliding mode based control of a grid connected photovoltaic system.

    PubMed

    Menadi, Abdelkrim; Abdeddaim, Sabrina; Ghamri, Ahmed; Betka, Achour

    2015-09-01

The present work describes optimal operation of a small-scale photovoltaic system connected to a micro-grid, based on both sliding mode and fuzzy logic control. Real-time implementation is done through a dSPACE 1104 single board, controlling a boost chopper on the PV array side and a voltage source inverter (VSI) on the grid side. The sliding mode controller permanently tracks the maximum power of the PV array regardless of atmospheric condition variations, while the fuzzy logic controller (FLC) regulates the DC-link voltage and ensures, via current control of the VSI, a quasi-total transfer of the extracted PV power to the grid under unity power factor operation. Simulation results, obtained with the Matlab-Simulink package, were confirmed experimentally, showing the effectiveness of the proposed control techniques.

  15. Air Pollution Monitoring and Mining Based on Sensor Grid in London

    PubMed Central

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-01-01

In this paper, we present a distributed infrastructure based on wireless sensor networks and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large-scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environments. The main informatics challenges with respect to constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and the distributed data mining algorithm as the solutions to address the challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining result to examine the effectiveness of the algorithm. PMID:27879895

  16. Study on the model of distributed remote sensing data processing based on agent grid

    NASA Astrophysics Data System (ADS)

    Zhang, Xining; Li, Deren; Li, Jingliang

    2006-10-01

The growth of high-resolution remote sensing data within Digital Earth, and the distribution of these data among heterogeneous remote sites, have brought challenges to processing remote sensing data effectively. Traditional models of distributed computing are inadequate to support such complex applications. Agent technology provides a new method for understanding the features of distributed systems and solving distributed application problems. This paper proposes a model for distributed remote sensing data processing based on an agent grid. This model makes use of the grid to discover, compose, utilize and deploy agents, distributed image data, and image-processing algorithms. An "Agents Group" mode is used in the model to manage all the agents distributed in the grid; a group consists of one or more agents and accomplishes automatic and dynamic configuration of distributed image data resources, efficiently supporting on-demand image processing in a distributed environment. The model, framework and implementation of a prototype are reported in this paper.

  17. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.

  18. Air Pollution Monitoring and Mining Based on Sensor Grid in London.

    PubMed

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-06-01

In this paper, we present a distributed infrastructure based on wireless sensor networks and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large-scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environments. The main informatics challenges with respect to constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and the distributed data mining algorithm as the solutions to address the challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining result to examine the effectiveness of the algorithm.

  19. An unstructured grid, three-dimensional model based on the shallow water equations

    NASA Astrophysics Data System (ADS)

    Casulli, Vincenzo; Walters, Roy A.

    2000-02-01

A semi-implicit finite difference model based on the three-dimensional shallow water equations is modified to use unstructured grids. There are obvious advantages in using unstructured grids in problems with a complicated geometry. In this development, the concept of unstructured orthogonal grids is introduced and applied to this model. The governing differential equations are discretized by means of a semi-implicit algorithm that is robust, stable and very efficient. The resulting model is relatively simple, conserves mass, can fit complicated boundaries and yet is sufficiently flexible to permit local mesh refinements in areas of interest. Moreover, the simulation of the flooding and drying is included in a natural and straightforward manner. These features are illustrated by a test case for studies of convergence rates and by examples of flooding on a river plain and flow in a shallow estuary.

  20. AstroGrid-D: Grid technology for astronomical science

    NASA Astrophysics Data System (ADS)

    Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve

    2011-02-01

We present status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines with a set of commands as well as software interfaces. It allows simple use of computer and storage facilities and the scheduling and monitoring of compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in Astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). These examples show how grid execution improves, e.g., the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 focuses on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.

  1. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  2. A Cycle-Based Data Aggregation Scheme for Grid-Based Wireless Sensor Networks

    PubMed Central

    Chiang, Yung-Kuei; Wang, Neng-Chung; Hsieh, Chih-Hung

    2014-01-01

    In a wireless sensor network (WSN), a great number of sensor nodes are deployed to gather sensed data. These sensor nodes are typically powered by batteries so their energy is restricted. Sensor nodes mainly consume energy in data transmission, especially over a long distance. Since the location of the base station (BS) is remote, the energy consumed by each node to directly transmit its data to the BS is considerable and the node will die very soon. A well-designed routing protocol is thus essential to reduce the energy consumption. In this paper, we propose a Cycle-Based Data Aggregation Scheme (CBDAS) for grid-based WSNs. In CBDAS, the whole sensor field is divided into a grid of cells, each with a head. We prolong the network lifetime by linking all cell heads together to form a cyclic chain so that the gathered data can move in two directions. For data gathering in each round, the gathered data moves from node to node along the chain, getting aggregated. Finally, a designated cell head, the cycle leader, directly transmits to the BS. CBDAS performs data aggregation at every cell head so as to substantially reduce the amount of data that must be transmitted to the BS. Only cell heads need disseminate data so that the number of data transmissions is greatly diminished. Sensor nodes of each cell take turns as the cell head, and all cell heads on the cyclic chain also take turns being cycle leader. The energy depletion is evenly distributed so that the nodes' lifetime is extended. As a result, the lifetime of the whole sensor network is extended. Simulation results show that CBDAS outperforms protocols like Direct, PEGASIS, and PBDAS. PMID:24828579
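The chain-aggregation pattern can be sketched compactly: cell heads form a cyclic chain, the gathered data flows hop by hop along the two arcs of the cycle that meet at the leader, aggregating as it goes, and only the cycle leader makes the expensive long-range transmission to the base station. Function and variable names below are ours, not from the paper, and the data values are illustrative:

```python
def cbdas_round(head_data, leader):
    """One aggregation round; returns (aggregate, long-range transmissions)."""
    n = len(head_data)
    # Cell-head indices ordered around the cycle, starting after the leader
    ring = [(leader + 1 + i) % n for i in range(n - 1)]
    # The two directions of the cyclic chain meet at the leader
    arc_a, arc_b = ring[: (n - 1) // 2], ring[(n - 1) // 2:]
    agg = head_data[leader]
    agg += sum(head_data[i] for i in arc_a)   # aggregate along one arc
    agg += sum(head_data[i] for i in arc_b)   # and along the other
    return agg, 1          # only the cycle leader transmits to the BS
```

Every non-leader head sends one short hop to its chain neighbour, so the per-round cost is n − 1 short transmissions plus a single long one, instead of n long transmissions under direct reporting.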

  3. Simulating Runoff from a Grid Based Mercury Model: Flow Comparisons

    EPA Science Inventory

    Several mercury cycling models, including general mass balance approaches, mixed-batch reactors in streams or lakes, or regional process-based models, exist to assess the ecological exposure risks associated with anthropogenically increased atmospheric mercury (Hg) deposition, so...

  4. An In-depth Study of Grid-based Asteroseismic Analysis

    NASA Astrophysics Data System (ADS)

    Gai, Ning; Basu, Sarbani; Chaplin, William J.; Elsworth, Yvonne

    2011-04-01

    NASA's Kepler mission is providing basic asteroseismic data for hundreds of stars. One of the more common ways of determining stellar characteristics from these data is by the so-called grid-based modeling. We have made a detailed study of grid-based analysis techniques to study the errors (and error correlations) involved. As had been reported earlier, we find that it is relatively easy to get very precise values of stellar radii using grid-based techniques. However, we find that there are small, but significant, biases that can result because of the grid of models used. The biases can be minimized if metallicity is known. Masses cannot be determined as precisely as the radii and suffer from larger systematic effects. We also find that the errors in mass and radius are correlated. A positive consequence of this correlation is that log g can be determined both precisely and accurately with almost no systematic biases. Radii and log g can be determined with almost no model dependence to within 5% for realistic estimates of errors in asteroseismic and conventional observations. Errors in mass can be somewhat higher unless accurate metallicity estimates are available. Age estimates of individual stars are the most model dependent. The errors are larger, too. However, we find that for star clusters, it is possible to get a relatively precise age if one assumes that all stars in a given cluster have the same age.
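
    Grid-based modeling of this kind amounts to a weighted search over a precomputed stellar-model grid. A minimal chi-square version, with an entirely hypothetical toy grid and observable names, might look like:

```python
def chi_square(model, observed, sigma):
    # Misfit between a grid model's predicted observables and the data.
    return sum((model[k] - observed[k]) ** 2 / sigma[k] ** 2 for k in observed)

def best_grid_match(grid, observed, sigma):
    # Pick the grid model minimizing chi-square; error bars would follow
    # from likelihood-weighting the whole grid rather than taking the minimum.
    return min(grid, key=lambda m: chi_square(m, observed, sigma))

# Toy grid: each entry carries parameters (mass, radius in solar units) and
# predicted observables (dnu = large frequency separation, teff).
toy_grid = [
    {"mass": 0.9, "radius": 0.9, "dnu": 155.0, "teff": 5500.0},
    {"mass": 1.0, "radius": 1.0, "dnu": 135.0, "teff": 5777.0},
    {"mass": 1.2, "radius": 1.3, "dnu": 100.0, "teff": 6100.0},
]
```

    The mass-radius error correlation the paper reports would show up here as the set of grid models with nearly equal chi-square.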

  5. An adaptive grid/Navier-Stokes methodology for the calculation of nozzle afterbody base flows with a supersonic freestream

    NASA Technical Reports Server (NTRS)

    Williams, Morgan; Lim, Dennis; Ungewitter, Ronald

    1993-01-01

    This paper describes an adaptive grid method for base flows in a supersonic freestream. The method is based on the direct finite-difference statement of the equidistribution principle. The weighting factor is a combination of the Mach number, density, and velocity first-derivative gradients in the radial direction. Two key ideas of the method are to smooth the weighting factor by using a type of implicit smoothing and to allow boundary points to move in the grid adaptation process. An AGARD nozzle afterbody base flow configuration is used to demonstrate the performance of the adaptive grid methodology. Computed base pressures are compared to experimental data. The adapted grid solutions offer a dramatic improvement in base pressure prediction compared to solutions computed on a nonadapted grid. A total-variation-diminishing (TVD) Navier-Stokes scheme is used to solve the governing flow equations.
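
    The equidistribution principle places grid points so that every cell carries the same integral of a weight function. A one-dimensional sketch, as an illustration only (the paper works with the full finite-difference statement and a smoothed radial-gradient weight):

```python
import bisect

def equidistribute(weight, a, b, n_cells, n_fine=1000):
    """Place n_cells+1 points in [a, b] so each cell holds an (approximately)
    equal integral of the positive weight function."""
    xs = [a + (b - a) * i / n_fine for i in range(n_fine + 1)]
    # Cumulative trapezoid integral of the weight on a fine sampling.
    cum = [0.0]
    for i in range(n_fine):
        w0, w1 = weight(xs[i]), weight(xs[i + 1])
        cum.append(cum[-1] + 0.5 * (w0 + w1) * (xs[i + 1] - xs[i]))
    grid = [a]
    for k in range(1, n_cells):
        target = cum[-1] * k / n_cells
        j = bisect.bisect_left(cum, target)
        # Linear interpolation between the bracketing fine samples.
        t = (target - cum[j - 1]) / (cum[j] - cum[j - 1])
        grid.append(xs[j - 1] + t * (xs[j] - xs[j - 1]))
    grid.append(b)
    return grid
```

    A uniform weight reproduces a uniform grid; a weight concentrated near a flow feature clusters points there, which is the mechanism the adaptive method exploits.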

  6. Grid occupancy estimation for environment perception based on belief functions and PCR6

    NASA Astrophysics Data System (ADS)

    Moras, Julien; Dezert, Jean; Pannetier, Benjamin

    2015-05-01

    In this contribution, we propose to improve the grid map occupancy estimation method developed so far based on belief function modeling and the classical Dempster's rule of combination. A grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role for the security (obstacle avoidance) of next generations of terrestrial vehicles, as well as for future autonomous navigation systems. In a grid map, the occupancy of each cell representing a small piece of the surrounding area of the robot must be estimated at first from sensor measurements (typically LIDAR, or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment where the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework thanks to an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of available information is low and the sources of information appear conflicting. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination by the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache) Theory. As an illustrative scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
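
    For reference, the classical Dempster combination that the paper seeks to replace can be written in a few lines (masses as dicts over focal sets; this sketch is generic, not the paper's grid implementation). PCR6 differs in what it does with the conflicting mass: rather than renormalizing, it redistributes each conflicting product back to the focal sets involved, in proportion to their masses.

```python
def dempster(m1, m2):
    """Dempster's rule of combination; masses are dicts: frozenset -> mass."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # Normalize the non-conflicting mass (the step PCR6 replaces).
    return {A: v / (1.0 - conflict) for A, v in combined.items()}
```

    For an occupancy cell with frame {occupied, free}, two sources that mostly disagree produce a large conflict term, and it is exactly in that regime that the normalization step is known to behave poorly.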

  7. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    PubMed

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
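
    The agreement statistic reported above is Cohen's kappa, which corrects the observed agreement for the agreement expected by chance. A minimal sketch of the computation (not the authors' code):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters labeling the same items."""
    n = len(coder_a)
    # Observed agreement: fraction of items with identical codes.
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (po - pe) / (1 - pe)
```

    Applied to 4-digit versus 1-digit SOC codes, the broader categories raise the observed agreement faster than the chance agreement, which is why the paper's kappa improves from 0.45 to 0.64.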

  8. The improved robustness of multigrid elliptic solvers based on multiple semicoarsened grids

    NASA Technical Reports Server (NTRS)

    Naik, Naomi H.; Vanrosendale, John

    1991-01-01

    Multigrid convergence rates degenerate on problems with stretched grids or anisotropic operators, unless one uses line or plane relaxation. For 3-D problems, only plane relaxation suffices, in general. While line and plane relaxation algorithms are efficient on sequential machines, they are quite awkward and inefficient on parallel machines. A new multigrid algorithm is presented based on the use of multiple coarse grids, that eliminates the need for line or plane relaxation in anisotropic problems. This algorithm was developed and the standard multigrid theory was extended to establish rapid convergence for this class of algorithms. The new algorithm uses only point relaxation, allowing easy and efficient parallel implementation, yet achieves robustness and convergence rates comparable to line and plane relaxation multigrid algorithms. The algorithm described is a variant of Mulder's multigrid algorithm for hyperbolic problems. The latter uses multiple coarse grids to achieve robustness, but is unsuitable for elliptic problems, since its V-cycle convergence rate goes to one as the number of levels increases. The new algorithm combines the contributions from the multiple coarse grid via a local switch, based on the strength of the discrete operator in each coordinate direction.

  9. Navigation in Grid Space with the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a navigational tool for computational grids. The navigational process is based on measuring the grid characteristics with the NAS Grid Benchmarks (NGB) and using the measurements to assign tasks of a grid application to the grid machines. The tool allows the user to explore the grid space and to navigate the execution of a grid application to minimize its turnaround time. We introduce the notion of gridscape as a user view of the grid and show how it can be measured by NGB. Then we demonstrate how the gridscape can be used with two different schedulers to navigate a grid application through a rudimentary grid.

  10. Analysis of the Multi Strategy Goal Programming for Micro-Grid Based on Dynamic ant Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Qiu, J. P.; Niu, D. X.

    The micro grid is one of the key technologies of future energy supply. Taking the economic planning, reliability, and environmental protection of the micro grid as a basis, we analyze multi-strategy objective programming problems for a micro grid containing wind power, solar power, battery storage, and a micro gas turbine. We establish mathematical models of each generator's output characteristics and energy dissipation, and convert the multi-objective micro-grid planning function under different operating strategies into a single-objective model based on the AHP method. Example analysis shows that a dynamic ant mixed genetic algorithm can obtain the optimal power output of this model.
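
    The AHP step that collapses the multiple objectives into one can be sketched as extracting the principal eigenvector of a pairwise comparison matrix and using it as a weight vector. The power-iteration implementation and the comparison matrix below are illustrative assumptions, not taken from the paper:

```python
def ahp_weights(pairwise, iters=100):
    # Principal eigenvector of the pairwise comparison matrix, computed by
    # power iteration and normalized to sum to one.
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

def scalarize(objectives, weights):
    # Weighted sum turning the objective vector (cost, reliability, ...)
    # into the single objective handed to the optimizer.
    return sum(f * w for f, w in zip(objectives, weights))
```

    With a consistent 2x2 matrix saying "objective 1 is twice as important as objective 2", the weights come out as (2/3, 1/3).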

  11. Grid Computing

    NASA Astrophysics Data System (ADS)

    Foster, Ian

    2001-08-01

    The term "Grid Computing" refers to the use, for computational purposes, of emerging distributed Grid infrastructures: that is, network and middleware services designed to provide on-demand and high-performance access to all important computational resources within an organization or community. Grid computing promises to enable both evolutionary and revolutionary changes in the practice of computational science and engineering based on new application modalities such as high-speed distributed analysis of large datasets, collaborative engineering and visualization, desktop access to computation via "science portals," rapid parameter studies and Monte Carlo simulations that use all available resources within an organization, and online analysis of data from scientific instruments. In this article, I examine the status of Grid computing circa 2000, briefly reviewing some relevant history, outlining major current Grid research and development activities, and pointing out likely directions for future work. I also present a number of case studies, selected to illustrate the potential of Grid computing in various areas of science.

  12. Classroom-based Interventions and Teachers' Perceived Job Stressors and Confidence: Evidence from a Randomized Trial in Head Start Settings.

    PubMed

    Zhai, Fuhua; Raver, C Cybele; Li-Grining, Christine

    2011-09-01

    Preschool teachers' job stressors have received increasing attention but have been understudied in the literature. We investigated the impacts of a classroom-based intervention, the Chicago School Readiness Project (CSRP), on teachers' perceived job stressors and confidence, as indexed by their perceptions of job control, job resources, job demands, and confidence in behavior management. Using a clustered randomized controlled trial (RCT) design, the CSRP provided multifaceted services to the treatment group, including teacher training and mental health consultation, which were accompanied by stress-reduction services and workshops. Overall, 90 teachers in 35 classrooms at 18 Head Start sites participated in the study. After adjusting for teacher and classroom factors and site fixed effects, we found that the CSRP had significant effects on the improvement of teachers' perceived job control and work-related resources. We also found that the CSRP decreased teachers' confidence in behavior management and had no statistically significant effects on job demands. Overall, we did not find significant moderation effects of teacher race/ethnicity, education, teaching experience, or teacher type. The implications for research and policy are discussed.

  13. Developing physical exposure-based back injury risk models applicable to manual handling jobs in distribution centers.

    PubMed

    Lavender, Steven A; Marras, William S; Ferguson, Sue A; Splittstoesser, Riley E; Yang, Gang

    2012-01-01

    Using our ultrasound-based "Moment Monitor," exposures to biomechanical low back disorder risk factors were quantified in 195 volunteers who worked in 50 different distribution center jobs. Low back injury rates, determined from a retrospective examination of each company's Occupational Safety and Health Administration (OSHA) 300 records over the 3-year period immediately prior to data collection, were used to classify each job's back injury risk level. The analyses focused on the factors differentiating the high-risk jobs (those having had 12 or more back injuries/200,000 hr of exposure) from the low-risk jobs (those defined as having no back injuries in the preceding 3 years). Univariate analyses indicated that measures of load moment exposure and force application could distinguish between high (n = 15) and low (n = 15) back injury risk distribution center jobs. A three-factor multiple logistic regression model capable of predicting high-risk jobs with very good sensitivity (87%) and specificity (73%) indicated that risk could be assessed using the mean across the sampled lifts of the peak forward and or lateral bending dynamic load moments that occurred during each lift, the mean of the peak push/pull forces across the sampled lifts, and the mean duration of the non-load exposure periods. A surrogate model, one that does not require the Moment Monitor equipment to assess a job's back injury risk, was identified although with some compromise in model sensitivity relative to the original model.
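
    A three-factor logistic model of this kind scores a job's risk from its exposure summary, and thresholding the probability yields the sensitivity and specificity figures quoted. The coefficients and feature names below are placeholders, not the published model:

```python
import math

def high_risk_probability(features, coefficients, intercept):
    # Logistic model: P(high-risk job) from, e.g., (mean peak bending load
    # moment, mean peak push/pull force, mean non-load exposure duration).
    z = intercept + sum(c * x for c, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-z))

def sensitivity_specificity(y_true, y_pred):
    # Fraction of high-risk jobs caught, and of low-risk jobs cleared.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)
```

    Classifying each job by whether its predicted probability exceeds 0.5 and tabulating against the OSHA-derived labels reproduces the style of evaluation the abstract reports (87% sensitivity, 73% specificity).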

  14. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    PubMed Central

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed, mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable in terms of the installation of a grid-connected PV system in fluctuating climatic conditions. PMID:27035575
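
    The three named parameters correspond to the standard single-diode PV equation; writing it out explicitly (the equation form and the module constants here are standard-model assumptions, not values taken from the paper):

```python
import math

BOLTZMANN = 1.380649e-23   # J/K
CHARGE = 1.602176634e-19   # C

def pv_current(v, i_l, i_o, n, series_cells=36, temp_k=298.15):
    # Single-diode model: I = IL - Io * (exp(V / (n * Ns * Vt)) - 1),
    # with Vt = kT/q the thermal voltage and Ns cells in series.
    vt = BOLTZMANN * temp_k / CHARGE
    return i_l - i_o * math.expm1(v / (n * series_cells * vt))
```

    Sweeping v from zero to the open-circuit voltage traces the I-V curve the abstract compares against experiment; at v = 0 the model returns the photocurrent IL.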

  15. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results.

    PubMed

    Humada, Ali M; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M; Ahmed, Mushtaq N

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed, mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable in terms of the installation of a grid-connected PV system in fluctuating climatic conditions.

  16. Efficient calibration of a distributed pde-based hydrological model using grid coarsening

    NASA Astrophysics Data System (ADS)

    von Gunten, D.; Wöhling, T.; Haslauer, C.; Merchán, D.; Causapé, J.; Cirpka, O. A.

    2014-11-01

    Partial-differential-equation based integrated hydrological models are now regularly used at catchment scale. They rely on the shallow water equations for surface flow and on the Richards' equations for subsurface flow, allowing a spatially explicit representation of properties and states. However, these models usually come at high computational costs, which limit their accessibility to state-of-the-art methods of parameter estimation and uncertainty quantification, because these methods require a large number of model evaluations. In this study, we present an efficient model calibration strategy, based on a hierarchy of grid resolutions, each of them resolving the same zonation of subsurface and land-surface units. We first analyze which model outputs show the highest similarities between the original model and two differently coarsened grids. Then we calibrate the coarser models by comparing these similar outputs to the measurements. We finish the calibration using the fully resolved model, taking the result of the preliminary calibration as starting point. We apply the proposed approach to the well monitored Lerma catchment in North-East Spain, using the model HydroGeoSphere. The original model grid with 80,000 finite elements was complemented with two other model variants with approximately 16,000 and 10,000 elements, respectively. Comparing the model results for these different grids, we observe differences in peak discharge, evapotranspiration, and near-surface saturation. Hydraulic heads and low flow, however, are very similar for all tested parameter sets, which allows the use of these variables to calibrate our model. The calibration results are satisfactory and the duration of the calibration has been greatly decreased by using different model grid resolutions.

  17. A current sensor based on the giant magnetoresistance effect: design and potential smart grid applications.

    PubMed

    Ouyang, Yong; He, Jinliang; Hu, Jun; Wang, Shan X

    2012-11-09

    Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A−1, linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C−1 with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.
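
    From the figures quoted, converting a bridge output voltage to a current reading, with the stated thermal coefficient applied as a first-order gain correction, can be sketched as follows (helper names and the offset handling are hypothetical):

```python
SENSITIVITY = 0.028       # V/A  (the reported 28 mV/A sensitivity)
TEMP_COEFF = 0.000335     # fractional amplitude change per deg C (0.0335 %/degC)

def measured_current(v_out, temp_c=25.0, ref_temp_c=25.0, offset_v=0.0):
    # First-order gain correction for temperature drift, then divide the
    # offset-corrected bridge voltage by the calibrated sensitivity.
    gain = 1.0 + TEMP_COEFF * (temp_c - ref_temp_c)
    return (v_out - offset_v) / (SENSITIVITY * gain)
```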

  18. A Current Sensor Based on the Giant Magnetoresistance Effect: Design and Potential Smart Grid Applications

    PubMed Central

    Ouyang, Yong; He, Jinliang; Hu, Jun; Wang, Shan X.

    2012-01-01

    Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A−1, linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C−1 with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids. PMID:23202221

  19. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  20. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  1. Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations

    SciTech Connect

    Tesfa, Teklu K.; Leung, Lai-Yung R.; Huang, Maoyi; Li, Hongyi; Voisin, Nathalie; Wigmosta, Mark S.

    2014-03-27

    This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., abilities to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5° and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the “reference”, statistical skill metrics are calculated and compared across simulations at 0.25°, 0.5° and 1° spatial resolutions of each modeling approach at basin and topographic region levels. Results suggest significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to improved scalability of the rain driven saturated surface runoff component, particularly during winter. Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications to surface heat fluxes in coupled land
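
    The skill metric quoted above, the annual-average relative error of a coarse-resolution simulation against the finest grid, reduces to a short computation (variable names hypothetical):

```python
def annual_avg_relative_error(coarse_runoff, reference_runoff):
    # Relative error of the coarse-resolution annual mean against the
    # finest-resolution reference mean, as a fraction (x100 for percent).
    mean_coarse = sum(coarse_runoff) / len(coarse_runoff)
    mean_ref = sum(reference_runoff) / len(reference_runoff)
    return abs(mean_coarse - mean_ref) / mean_ref
```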

  2. Enabling Campus Grids with Open Science Grid Technology

    NASA Astrophysics Data System (ADS)

    Weitzel, Derek; Bockelman, Brian; Fraser, Dan; Pordes, Ruth; Swanson, David

    2011-12-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  3. Refinements and practical implementation of a power based loss of grid detection algorithm for embedded generators

    NASA Astrophysics Data System (ADS)

    Barrett, James

    The incorporation of small, privately owned generation operating in parallel with, and supplying power to, the distribution network is becoming more widespread. This method of operation does however have problems associated with it. In particular, a loss of the connection to the main utility supply which leaves a portion of the utility load connected to the embedded generator will result in a power island. This situation presents possible dangers to utility personnel and the public, complications for smooth system operation and probable plant damage should the two systems be reconnected out-of-synchronism. Loss of Grid (or Islanding), as this situation is known, is the subject of this thesis. The work begins by detailing the requirements for operation of generation embedded in the utility supply, with particular attention drawn to the requirements for a loss of grid protection scheme. The mathematical basis for a new loss of grid protection algorithm is developed and the inclusion of the algorithm in an integrated generator protection scheme described. A detailed description is given of the implementation of the new algorithm in microprocessor-based relay hardware to allow practical tests on small embedded generation facilities, including an in-house multiple generator test facility. The results obtained from the practical tests are compared with those obtained from simulation studies carried out in previous work and the differences are discussed. The theoretical algorithm's performance is enhanced, guided by the simulation results, with simple filtering and pattern recognition techniques. This provides stability during severe load fluctuations under parallel operation and system fault conditions, and improved performance under normal operating conditions and for loss of grid detection. In addition to operating for a loss of grid connection, the algorithm will respond to load fluctuations which occur within a power island.

  4. A Comprehensive WSN-Based Approach to Efficiently Manage a Smart Grid

    PubMed Central

    Martinez-Sandoval, Ruben; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan; Flynn, David

    2014-01-01

    The Smart Grid (SG) is conceived as the evolution of the current electrical grid representing a big leap in terms of efficiency, reliability and flexibility compared to today's electrical network. To achieve this goal, the Wireless Sensor Networks (WSNs) are considered by the scientific/engineering community to be one of the most suitable technologies for SG applications due to their low-cost, collaborative and long-standing nature. However, the SG has posed significant challenges to utility operators—mainly very harsh radio propagation conditions and the lack of appropriate systems to empower WSN devices—making most widespread commercial solutions inadequate. In this context, and as a main contribution, we have designed a comprehensive ad-hoc WSN-based solution for the Smart Grid (SENSED-SG) that focuses on specific implementations of the MAC, the network and the application layers to attain maximum performance and to successfully deal with any arising hurdles. Our approach has been exhaustively evaluated by computer simulations and mathematical analysis, as well as validation within real test-beds deployed in controlled environments. In particular, these test-beds cover two of the main scenarios found in an SG: on one hand, an indoor electrical substation environment, implemented in a High Voltage AC/DC laboratory, and, on the other hand, an outdoor case, deployed in the Transmission and Distribution segment of a power grid. The results obtained show that SENSED-SG performs better and is more suitable for the Smart Grid than the popular ZigBee WSN approach. PMID:25310468

  5. A comprehensive WSN-based approach to efficiently manage a Smart Grid.

    PubMed

    Martinez-Sandoval, Ruben; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan; Flynn, David

    2014-10-10

    The Smart Grid (SG) is conceived as the evolution of the current electrical grid representing a big leap in terms of efficiency, reliability and flexibility compared to today's electrical network. To achieve this goal, the Wireless Sensor Networks (WSNs) are considered by the scientific/engineering community to be one of the most suitable technologies for SG applications due to their low-cost, collaborative and long-standing nature. However, the SG has posed significant challenges to utility operators-mainly very harsh radio propagation conditions and the lack of appropriate systems to empower WSN devices-making most widespread commercial solutions inadequate. In this context, and as a main contribution, we have designed a comprehensive ad-hoc WSN-based solution for the Smart Grid (SENSED-SG) that focuses on specific implementations of the MAC, the network and the application layers to attain maximum performance and to successfully deal with any arising hurdles. Our approach has been exhaustively evaluated by computer simulations and mathematical analysis, as well as validation within real test-beds deployed in controlled environments. In particular, these test-beds cover two of the main scenarios found in an SG: on one hand, an indoor electrical substation environment, implemented in a High Voltage AC/DC laboratory, and, on the other hand, an outdoor case, deployed in the Transmission and Distribution segment of a power grid. The results obtained show that SENSED-SG performs better and is more suitable for the Smart Grid than the popular ZigBee WSN approach.

  6. Creating analytically divergence-free velocity fields from grid-based data

    NASA Astrophysics Data System (ADS)

    Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.

    2016-10-01

We present a method, based on B-splines, to calculate a C2 continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories that result in more accurate identification of Lagrangian coherent structures.
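The construction rests on the vector-calculus identity div(curl A) = 0. A toy numerical check of that identity, using a hand-picked analytic potential rather than the paper's B-spline fit:

```python
from math import sin, cos

def curl_v(x, y, z):
    # Hand-derived curl of the toy potential A = (y*z, z*x**2, sin(x)*y)
    v1 = sin(x) - x * x      # dA3/dy - dA2/dz
    v2 = y - y * cos(x)      # dA1/dz - dA3/dx
    v3 = 2 * x * z - z       # dA2/dx - dA1/dy
    return v1, v2, v3

def divergence(f, x, y, z, h=1e-5):
    # Central finite differences of each velocity component
    dvx = (f(x + h, y, z)[0] - f(x - h, y, z)[0]) / (2 * h)
    dvy = (f(x, y + h, z)[1] - f(x, y - h, z)[1]) / (2 * h)
    dvz = (f(x, y, z + h)[2] - f(x, y, z - h)[2]) / (2 * h)
    return dvx + dvy + dvz

# Divergence of a curl vanishes identically, so this is ~0 up to round-off
print(abs(divergence(curl_v, 0.7, -1.3, 2.1)))
```

The paper's contribution is obtaining a potential A that reproduces the *given grid data* when curled; the divergence-free property itself comes for free from the identity above.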

  7. Robust optimization based energy dispatch in smart grids considering demand uncertainty

    NASA Astrophysics Data System (ADS)

    Nassourou, M.; Puig, V.; Blesa, J.

    2017-01-01

In this study we discuss the application of robust optimization to the problem of economic energy dispatch in smart grids. Robust optimization based MPC strategies for tackling uncertain load demands are developed. Unexpected additive disturbances are modelled by defining an affine dependence between the control inputs and the uncertain load demands. The developed strategies were applied to a hybrid power system connected to an electrical power grid. Furthermore, to demonstrate the superiority of the standard Economic MPC over MPC tracking, a comparison (e.g., average daily cost) between standard MPC tracking, standard Economic MPC, and the integration of both in one-layer and two-layer approaches was carried out. The goal of this research is to design a controller based on Economic MPC strategies that tackles uncertainties in order to minimise economic costs and guarantee service reliability of the system.
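The robust dispatch idea can be illustrated far more simply than the paper's MPC formulation: dispatch generation against the worst-case demand, i.e. the forecast plus an uncertainty margin. The merit-order sketch below is a hypothetical stand-in (generator data and the fixed margin are invented), not the authors' controller:

```python
# Merit-order dispatch toy: generators are (name, capacity_MW, cost_per_MWh),
# dispatched cheapest-first against the forecast demand plus an uncertainty
# margin (the "robust" part, reduced here to a fixed worst-case bound).
def dispatch(generators, demand, margin):
    target = demand + margin          # hedge against the worst-case load
    plan, total_cost = {}, 0.0
    for name, cap, cost in sorted(generators, key=lambda g: g[2]):
        take = min(cap, target)
        if take > 0:
            plan[name] = take
            total_cost += take * cost
            target -= take
    if target > 1e-9:
        raise ValueError("insufficient capacity for worst-case demand")
    return plan, total_cost

gens = [("coal", 400, 30.0), ("gas", 300, 55.0), ("diesel", 150, 120.0)]
plan, cost = dispatch(gens, demand=500, margin=50)
print(plan)   # {'coal': 400, 'gas': 150}
print(cost)   # 400*30 + 150*55 = 20250.0
```

An MPC formulation replaces this single greedy step with a constrained optimization repeated over a receding horizon, but the role of the uncertainty margin is the same.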

  8. Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris

    2012-01-01

    A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.
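The complex-variable verification the abstract mentions is commonly done with the complex-step derivative: for an analytic function, perturbing the input by a tiny imaginary amount yields the sensitivity with no subtractive cancellation. A minimal sketch on a toy function (not the RANS adjoint code itself):

```python
import cmath

def f(x):
    # Toy analytic objective standing in for a flow functional
    return cmath.exp(x) * cmath.sin(x)

def complex_step_derivative(f, x, h=1e-30):
    # Im(f(x + i*h)) / h ~ f'(x); h can be tiny since nothing is subtracted
    return (f(x + 1j * h)).imag / h

x = 0.8
exact = cmath.exp(x) * (cmath.sin(x) + cmath.cos(x))   # analytic f'(x)
approx = complex_step_derivative(f, x)
print(abs(approx - exact.real))  # agreement to machine precision
```

Because the complex-step result is accurate to round-off, it serves as an independent reference against which discrete adjoint sensitivities can be verified digit by digit.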

  9. A computational-grid based system for continental drainage network extraction using SRTM digital elevation models

    NASA Technical Reports Server (NTRS)

    Curkendall, David W.; Fielding, Eric J.; Pohl, Josef M.; Cheng, Tsan-Huei

    2003-01-01

We describe a new effort for the computation of elevation derivatives using the Shuttle Radar Topography Mission (SRTM) results. Jet Propulsion Laboratory's (JPL) SRTM has produced a near global database of highly accurate elevation data. The scope of this database enables computing precise stream drainage maps and other derivatives on continental scales. We describe a computing architecture for this computationally very complex task based on NASA's Information Power Grid (IPG), a distributed high performance computing network based on the GLOBUS infrastructure. The SRTM data characteristics and unique problems they present are discussed. A new algorithm for organizing the conventional extraction algorithms [1] into a cooperating parallel grid is presented as an essential component to adapt to the IPG computing structure. Preliminary results are presented for a Southern California test area, established for comparing SRTM and its results against those produced using the USGS National Elevation Data (NED) model.

  10. PLL Based Energy Efficient PV System with Fuzzy Logic Based Power Tracker for Smart Grid Applications.

    PubMed

    Rohini, G; Jamuna, V

    2016-01-01

    This work aims at improving the dynamic performance of the available photovoltaic (PV) system and maximizing the power obtained from it by the use of cascaded converters with intelligent control techniques. Fuzzy logic based maximum power point technique is embedded on the first conversion stage to obtain the maximum power from the available PV array. The cascading of second converter is needed to maintain the terminal voltage at grid potential. The soft-switching region of three-stage converter is increased with the proposed phase-locked loop based control strategy. The proposed strategy leads to reduction in the ripple content, rating of components, and switching losses. The PV array is mathematically modeled and the system is simulated and the results are analyzed. The performance of the system is compared with the existing maximum power point tracking algorithms. The authors have endeavored to accomplish maximum power and improved reliability for the same insolation of the PV system. Hardware results of the system are also discussed to prove the validity of the simulation results.
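The fuzzy-logic tracker itself is beyond a short sketch, but the baseline it competes with, perturb and observe (P&O), fits in a few lines. The PV curve below is a toy concave stand-in with a known maximum (240 W at 30 V), not a real panel model:

```python
# Toy PV power curve; a real panel's P(V) comes from its I-V characteristic.
def pv_power(v):
    return max(0.0, 240.0 - (v - 30.0) ** 2)

def perturb_and_observe(v0, step=0.5, iters=200):
    # Nudge the operating voltage; if power drops, reverse the direction.
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mp, p_mp = perturb_and_observe(20.0)
print(v_mp, p_mp)   # converges near the 30 V maximum power point
```

P&O oscillates around the maximum with an amplitude set by the fixed step; fuzzy-logic trackers like the one in the paper adapt the step size, which is where the improved dynamic performance comes from.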

  11. PLL Based Energy Efficient PV System with Fuzzy Logic Based Power Tracker for Smart Grid Applications

    PubMed Central

    Rohini, G.; Jamuna, V.

    2016-01-01

    This work aims at improving the dynamic performance of the available photovoltaic (PV) system and maximizing the power obtained from it by the use of cascaded converters with intelligent control techniques. Fuzzy logic based maximum power point technique is embedded on the first conversion stage to obtain the maximum power from the available PV array. The cascading of second converter is needed to maintain the terminal voltage at grid potential. The soft-switching region of three-stage converter is increased with the proposed phase-locked loop based control strategy. The proposed strategy leads to reduction in the ripple content, rating of components, and switching losses. The PV array is mathematically modeled and the system is simulated and the results are analyzed. The performance of the system is compared with the existing maximum power point tracking algorithms. The authors have endeavored to accomplish maximum power and improved reliability for the same insolation of the PV system. Hardware results of the system are also discussed to prove the validity of the simulation results. PMID:27294189

  12. An Analysis for an Internet Grid to Support Space Based Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert; McNair, Ann R. (Technical Monitor)

    2002-01-01

Currently, and in the past, dedicated communication circuits and "network services" with very stringent performance requirements have been used to support manned and unmanned mission critical ground operations at GSFC, JSC, MSFC, KSC and other NASA facilities. Because of the evolution of network technology, it is time to investigate other approaches to providing mission services for space ground and flight operations. In various scientific disciplines, effort is under way to develop network/computing grids. These grids, consisting of networks and computing equipment, are enabling lower cost science. Specifically, earthquake research is headed in this direction. With a standard for network and computing interfaces using a grid, a researcher would not be required to develop and engineer NASA/DoD specific interfaces with the attendant increased cost. Use of the Internet Protocol (IP), the CCSDS packet specification, Reed-Solomon coding for satellite error correction, etc., can be adopted/standardized to provide these interfaces. Generally, most interfaces are developed at least to some degree end to end. This study would investigate the feasibility of using existing standards and protocols necessary to implement a SpaceOps Grid. New interface definitions, or adoption/modification of existing ones, are required for the various space operational services: voice (both space based and ground), video, telemetry, commanding and planning may play a role to some undefined level. Security will be a separate focus in the study, since security is such a large issue in using public networks. This SpaceOps Grid would be transparent to users. It would be analogous to the Ethernet protocol's ease of use, in that a researcher would plug in their experiment or instrument at one end and would be connected to the appropriate host or server without further intervention. Free flyers would be in this category as well. 
They would be launched and would transmit without any further intervention with the researcher or

  13. Effects of a Peer Assessment System Based on a Grid-Based Knowledge Classification Approach on Computer Skills Training

    ERIC Educational Resources Information Center

    Hsu, Ting-Chia

    2016-01-01

    In this study, a peer assessment system using the grid-based knowledge classification approach was developed to improve students' performance during computer skills training. To evaluate the effectiveness of the proposed approach, an experiment was conducted in a computer skills certification course. The participants were divided into three…

  14. Design and implementation of a web-based data grid management system for enterprise PACS backup and disaster recovery

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng; Ma, Kevin; Talini, Elisa; Documet, Jorge; Lee, Jasper; Liu, Brent

    2007-03-01

A cross-continental Data Grid infrastructure has been developed at the Image Processing and Informatics (IPI) research laboratory as a fault-tolerant image data backup and disaster recovery solution for Enterprise PACS. The Data Grid stores multiple copies of the imaging studies as well as the metadata, such as patient and study information, in geographically distributed computers and storage devices involving three different continents: America, Asia and Europe. This effectively prevents loss of image data and accelerates data recovery in the case of disaster. However, the lack of a centralized management system makes the administration of the current Data Grid difficult. Three major challenges exist in current Data Grid management: 1. No single user interface to access and administrate each geographically separate component; 2. No graphical user interface available, resulting in command-line-based administration; 3. No single sign-on access to the Data Grid; administrators have to log into every Grid component with different corresponding user names/passwords. In this paper we present a prototype of a unique web-based access interface for both Data Grid administrators and users. The interface has been designed to be user-friendly; it provides the necessary instruments to constantly monitor the current status of the Data Grid components and their contents from any location, contributing to longer system up-time.

  15. Novel grid-based optical Braille conversion: from scanning to wording

    NASA Astrophysics Data System (ADS)

    Yoosefi Babadi, Majid; Jafari, Shahram

    2011-12-01

Grid-based optical Braille conversion (GOBCO) is explained in this article. The grid-fitting technique involves processing scanned images taken from old hard-copy Braille manuscripts, recognising and converting them into English ASCII text documents inside a computer. The resulting words are verified against the relevant dictionary to provide the final output. The algorithms employed in this article can be easily modified to be implemented on other visual pattern recognition systems and text extraction applications. This technique has several advantages, including the simplicity of the algorithm, high speed of execution, the ability to help visually impaired persons and blind people to work with fax machines and the like, and the ability to help sighted people with no prior knowledge of Braille to understand hard-copy Braille manuscripts.
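Once the grid-fitting stage has located which of the six dot positions are raised in each cell, the wording stage reduces to a table lookup. A minimal sketch, mapping only a handful of letters (the full table, contractions, and the dictionary check of the paper are omitted); dots are numbered in the standard way, 1-2-3 down the left column and 4-5-6 down the right:

```python
# Partial dot-pattern table for illustration (standard Braille letters a-e).
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode(cells):
    # Each cell is the set of raised dot positions detected by grid fitting.
    return "".join(BRAILLE.get(frozenset(c), "?") for c in cells)

print(decode([{1, 4}, {1}, {1, 2}]))  # "cab"
```

Unknown patterns map to "?", which is where a dictionary-based verification pass, as in the paper, can repair recognition errors.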

  16. A Global “Natural” Grid Model Based on the Morse Complex

    NASA Astrophysics Data System (ADS)

    Wang, Hongbin; Zhao, Xuesheng; Zhu, Xinying; Li, Jiebiao

    2016-11-01

In the exploration and interpretation of extensive or global natural phenomena such as environmental monitoring, climatic analysis, hydrological analysis, meteorological services, simulation of sea level rise, etc., knowledge about the shape properties of the earth surface and terrain features is urgently needed. However, traditional discrete global grids (DGG) cannot directly provide it and are confronted with the challenge of rapid data volume growth as modern earth surveying technology develops. In this paper, a global "natural" grid (GNG) model based on the Morse complex is proposed, and a relatively comprehensive theoretical comparison with traditional DGG models is presented in detail, along with some issues to be resolved in the future. Finally, the experimental and analysis results indicate that this distinct GNG model built from DGG is significant both for the advance of geospatial data acquisition technology and for the interpretation of extensive or global natural phenomena.

  17. Price Response Can Make the Grid Robust: An Agent-based Discussion

    SciTech Connect

    Roop, Joseph M.; Fathelrahman, Eihab M.; Widergren, Steven E.

    2005-11-07

    There is considerable agreement that a more price responsive system would make for a more robust grid. This raises the issue of how the end-user can be induced to accept a system that relies more heavily on price signals than the current system. From a modeling perspective, how should the software ‘agent’ representing the consumer of electricity be modeled so that this agent exhibits some price responsiveness in a realistic manner? To address these issues, we construct an agent-based approach that is realistic in the sense that it can transition from the current system behavior to one that is more price responsive. Evidence from programs around the country suggests that there are ways to implement such a program that could add robustness to the grid.
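The kind of price-responsive end-user agent discussed above can be sketched with a constant-elasticity demand curve; all parameter values below are invented for illustration and are not drawn from the cited programs:

```python
# Constant-elasticity demand agent: consumption scales as
# (price / reference_price) ** (-elasticity). An elasticity of 0 recovers
# today's fully price-insensitive consumer; larger values mean more response.
def responsive_demand(base_kw, price, ref_price, elasticity=0.3):
    return base_kw * (price / ref_price) ** (-elasticity)

base = 10.0   # kW consumed at the reference price
print(responsive_demand(base, price=0.10, ref_price=0.10))  # 10.0, no change
print(responsive_demand(base, price=0.40, ref_price=0.10))  # falls at 4x price
```

Raising the elasticity parameter gradually, as an agent-based simulation can, models exactly the transition the paper describes from the current unresponsive system toward a price-responsive one.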

  18. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data points. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
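The curse-of-dimensionality argument can be made concrete by counting interior points. A full tensor grid at refinement level n has (2^n - 1)^d interior points, while the standard sparse grid keeps only the hierarchical levels l with |l|_1 <= n + d - 1, each contributing prod(2^(l_i - 1)) points:

```python
from itertools import product

def full_grid_points(n, d):
    # Interior points of the full tensor-product grid at level n
    return (2 ** n - 1) ** d

def sparse_grid_points(n, d):
    # Sum hierarchical point counts over admissible level multi-indices
    total = 0
    for levels in product(range(1, n + 1), repeat=d):
        if sum(levels) <= n + d - 1:
            pts = 1
            for l in levels:
                pts *= 2 ** (l - 1)
            total += pts
    return total

for d in (2, 3, 4):
    print(d, full_grid_points(5, d), sparse_grid_points(5, d))
```

For d = 1 both counts coincide, and as d grows the sparse count falls further and further behind the exponential full-grid count, which is what makes the paper's linear-in-n discretization feasible in higher-dimensional embedding spaces.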

  19. Evaluation of a Positive Youth Development Program Based on the Repertory Grid Test

    PubMed Central

    Shek, Daniel T. L.

    2012-01-01

    The repertory grid test, based on personal construct psychology, was used to evaluate the effectiveness of Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong. One hundred and four program participants (n = 104) were randomly invited to complete a repertory grid based on personal construct theory in order to provide both quantitative and qualitative data for measuring self-identity changes after joining the program. Findings generally showed that the participants perceived that they understood themselves better and had stronger resilience after joining the program. Participants also saw themselves as closer to their ideal selves and other positive role figures (but farther away from a loser) after joining the program. This study provides additional support for the effectiveness of the Tier 1 Program of Project P.A.T.H.S. in the Chinese context. This study also shows that the repertory grid test is a useful evaluation method to measure self-identity changes in participants in positive youth development programs. PMID:22593680

  20. A goal-directed spatial navigation model using forward trajectory planning based on grid cells.

    PubMed

    Erdem, Uğur M; Hasselmo, Michael

    2012-03-01

    A goal-directed navigation model is proposed based on forward linear look-ahead probe of trajectories in a network of head direction cells, grid cells, place cells and prefrontal cortex (PFC) cells. The model allows selection of new goal-directed trajectories. In a novel environment, the virtual rat incrementally creates a map composed of place cells and PFC cells by random exploration. After exploration, the rat retrieves memory of the goal location, picks its next movement direction by forward linear look-ahead probe of trajectories in several candidate directions while stationary in one location, and finds the one activating PFC cells with the highest reward signal. Each probe direction involves activation of a static pattern of head direction cells to drive an interference model of grid cells to update their phases in a specific direction. The updating of grid cell spiking drives place cells along the probed look-ahead trajectory similar to the forward replay during waking seen in place cell recordings. Directions are probed until the look-ahead trajectory activates the reward signal and the corresponding direction is used to guide goal-finding behavior. We report simulation results in several mazes with and without barriers. Navigation with barriers requires a PFC map topology based on the temporal vicinity of visited place cells and a reward signal diffusion process. The interaction of the forward linear look-ahead trajectory probes with the reward diffusion allows discovery of never-before experienced shortcuts towards a goal location.

  1. Integration of an MPP System into the INFN-GRID

    NASA Astrophysics Data System (ADS)

    Costa, A.; Calanducci, A. S.; Becciani, U.

    2005-12-01

    We are going to present the middleware changes we have made to integrate an IBM-SP parallel computer into the INFN-GRID and the results of the application runs made on the IBM-SP to test its operation within the grid. The IBM-SP is an 8-processor 1.1 GHz machine using the AIX 5.2 operating system. Its hardware architecture represents a major challenge for integration into the grid infrastructure because it does not support the LCFGng (Local ConFiGuration system Next Generation) facilities. In order to obtain the goal without the advantages of the LCFGng server (RPM based), we properly tuned and compiled the middleware on the IBM-SP: in particular, we installed the Grid Services toolkit and a scheduler for job execution and monitoring. The testing phase was successfully passed by submitting a set of MPI jobs through the grid onto the IBM-SP. Specifically the tests were made by using MARA, a public code for the analysis of light curve sequences, that was made accessible through the Astrocomp portal, a web based interface for astrophysical parallel codes. The IBM-SP integration into the INFN-GRID did not require us to stop production on the system. It can be considered as a demonstration case for the integration of machines using different operating systems.

  2. Predicting the level of job satisfaction based on hardiness and its components among nurses with tension headache

    PubMed Central

    Mahdavi, A; Nikmanesh, E; AghaeI, M; Kamran, F; Zahra Tavakoli, Z; Khaki Seddigh, F

    2015-01-01

Nurses are the most significant part of human resources in a sanitary and health system. Job satisfaction results in the enhancement of organizational productivity, employee commitment to the organization and ensuring his/her physical and mental health. The present research was conducted with the aim of predicting the level of job satisfaction based on hardiness and its components among the nurses with tension headache. The research method was correlational. The population consisted of all the nurses with tension headache who referred to the relevant specialists in Tehran. The sample size consisted of 50 individuals who were chosen by using the convenience sampling method and were measured and investigated by using the research tools of “Job Satisfaction Test” of Davis, Lofkvist and Weiss and “Personal Views Survey” of Kobasa. The data analysis was carried out by using the Pearson Correlation Coefficient and the Regression Analysis. The research findings demonstrated that the correlation coefficient obtained between “hardiness” and “job satisfaction” was 0.506, and this coefficient was significant at the 0.01 level. Moreover, it was specified that the sense of commitment and challenge were stronger predictors for job satisfaction of nurses with tension headache among the components of hardiness, and, about 16% of the variance of “job satisfaction” could be explained by the two components (sense of commitment and challenge). PMID:28316713

  3. Predicting the level of job satisfaction based on hardiness and its components among nurses with tension headache.

    PubMed

    Mahdavi, A; Nikmanesh, E; AghaeI, M; Kamran, F; Zahra Tavakoli, Z; Khaki Seddigh, F

    2015-01-01

Nurses are the most significant part of human resources in a sanitary and health system. Job satisfaction results in the enhancement of organizational productivity, employee commitment to the organization and ensuring his/her physical and mental health. The present research was conducted with the aim of predicting the level of job satisfaction based on hardiness and its components among the nurses with tension headache. The research method was correlational. The population consisted of all the nurses with tension headache who referred to the relevant specialists in Tehran. The sample size consisted of 50 individuals who were chosen by using the convenience sampling method and were measured and investigated by using the research tools of "Job Satisfaction Test" of Davis, Lofkvist and Weiss and "Personal Views Survey" of Kobasa. The data analysis was carried out by using the Pearson Correlation Coefficient and the Regression Analysis. The research findings demonstrated that the correlation coefficient obtained between "hardiness" and "job satisfaction" was 0.506, and this coefficient was significant at the 0.01 level. Moreover, it was specified that the sense of commitment and challenge were stronger predictors for job satisfaction of nurses with tension headache among the components of hardiness, and, about 16% of the variance of "job satisfaction" could be explained by the two components (sense of commitment and challenge).

  4. A New Wall Function Model for RANS Equations Based on Overlapping Grids

    NASA Astrophysics Data System (ADS)

    Lampropoulos, Nikolaos; Papadimitriou, Dimitrios; Zervogiannis, Thomas

    2013-04-01

    This paper presents a new numerical method for the modeling of turbulent flows based on a new wall model for computing Reynolds-Averaged-Navier-Stokes (RANS) equations with the Spalart-Allmaras (SA) turbulence model. The basic objective is the reduction of the total central processing unit (CPU) cost of the numerical simulation without harming the accuracy of the results. The main idea of this study is based on the use of two overlapping computational grids covering the two distinct regions of the flow (i.e., the boundary layer and the outer region), and the implementation of appropriate (different) numerical schemes in each case. The seamless cooperation of the grids in the iterative algorithm is achieved by defining an alternative wall function concept. The unstructured grid (UG) covering the outer region consists of mixed type elements (i.e., quadrilaterals and triangles), with relatively small degrees of anisotropy, on which the full set of Navier-Stokes (NS) along with the turbulent model (TM) equations are relaxed. The inner structured grid (SG), which aims at resolving the boundary layer, is a body-fitted mesh with high element density in the normal to the wall direction. The slow relaxation of the governing equations on anisotropic SGs is alleviated by using the Tridiagonal Matrix Algorithm (TDMA) and a block Lower Upper Method (LU). These prove to be quite suitable for the relaxation of the discretized equations on SGs, which consist of banded arrays in tensor form. The application of the proposed algorithm in a couple of benchmark cases proves its superiority over the High-Reynolds SA model with standard wall functions when both methods are compared with the (more costly) Low-Reynolds SA turbulence model and experimental results.

  5. SoilGrids1km — Global Soil Information Based on Automated Mapping

    PubMed Central

    Hengl, Tomislav; de Jesus, Jorge Mendes; MacMillan, Robert A.; Batjes, Niels H.; Heuvelink, Gerard B. M.; Ribeiro, Eloi; Samuel-Rosa, Alessandro; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Gonzalez, Maria Ruiperez

    2014-01-01

    Background Soils are widely recognized as a non-renewable natural resource and as biophysical carbon sinks. As such, there is a growing requirement for global soil information. Although several global soil information systems already exist, these tend to suffer from inconsistencies and limited spatial detail. Methodology/Principal Findings We present SoilGrids1km — a global 3D soil information system at 1 km resolution — containing spatial predictions for a selection of soil properties (at six standard depths): soil organic carbon (g kg−1), soil pH, sand, silt and clay fractions (%), bulk density (kg m−3), cation-exchange capacity (cmol+/kg), coarse fragments (%), soil organic carbon stock (t ha−1), depth to bedrock (cm), World Reference Base soil groups, and USDA Soil Taxonomy suborders. Our predictions are based on global spatial prediction models which we fitted, per soil variable, using a compilation of major international soil profile databases (ca. 110,000 soil profiles), and a selection of ca. 75 global environmental covariates representing soil forming factors. Results of regression modeling indicate that the most useful covariates for modeling soils at the global scale are climatic and biomass indices (based on MODIS images), lithology, and taxonomic mapping units derived from conventional soil survey (Harmonized World Soil Database). Prediction accuracies assessed using 5–fold cross-validation were between 23–51%. Conclusions/Significance SoilGrids1km provide an initial set of examples of soil spatial data for input into global models at a resolution and consistency not previously available. Some of the main limitations of the current version of SoilGrids1km are: (1) weak relationships between soil properties/classes and explanatory variables due to scale mismatches, (2) difficulty to obtain covariates that capture soil forming factors, (3) low sampling density and spatial clustering of soil profile locations. However, as the SoilGrids

  6. Supercontinuum optimization for dual-soliton based light sources using genetic algorithms in a grid platform.

    PubMed

    Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A

    2014-09-22

    We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
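A bare-bones genetic algorithm over the same three knobs the paper optimizes (input wavelength, temporal width, peak power) looks as follows. The fitness here is a made-up quadratic distance to a target triple, standing in for the expensive supercontinuum simulation that the Grid platform would evaluate in parallel; bounds and targets are invented for illustration:

```python
import random

random.seed(1)
BOUNDS = [(800.0, 1600.0), (50.0, 500.0), (0.1, 10.0)]   # nm, fs, kW (illustrative)
TARGET = (1300.0, 120.0, 2.5)                            # hypothetical optimum

def fitness(ind):
    # Normalized squared distance to the target; lower is better.
    return sum(((x - t) / (hi - lo)) ** 2
               for x, t, (lo, hi) in zip(ind, TARGET, BOUNDS))

def mutate(ind):
    # Gaussian perturbation of each gene, clamped to its bounds.
    return tuple(min(hi, max(lo, x + random.gauss(0, 0.05 * (hi - lo))))
                 for x, (lo, hi) in zip(ind, BOUNDS))

pop = [tuple(random.uniform(lo, hi) for lo, hi in BOUNDS) for _ in range(30)]
initial_best = min(fitness(ind) for ind in pop)
for _ in range(40):                       # elitist evolution loop
    pop.sort(key=fitness)
    survivors = pop[:15]                  # keep the best half unchanged
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(15)]
pop.sort(key=fitness)
best = pop[0]
print(fitness(best))
```

Because every individual's fitness can be evaluated independently, the population maps naturally onto distributed Grid jobs, which is the parallelism the paper exploits.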

  7. Observation-based gridded runoff estimates for Europe (E-RUN version 1.1)

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2016-07-01

River runoff is an essential climate variable as it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, there are to date no pan-European observation-based runoff estimates available. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid in Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are subsequently tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 12) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950-December 2015) on a 0.5° × 0.5° grid. The performance of the newly derived runoff estimates is assessed using cross-validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring. The newly derived data are made publicly available at doi:10.1594/PANGAEA.861371.

  8. Grid-based steered thermodynamic integration accelerates the calculation of binding free energies.

    PubMed

    Fowler, Philip W; Jha, Shantenu; Coveney, Peter V

    2005-08-15

The calculation of binding free energies is important in many condensed matter problems. Although formally exact computational methods have the potential to complement, add to, and even compete with experimental approaches, they are difficult to use and extremely time consuming. We describe a Grid-based approach for the calculation of relative binding free energies, which we call Steered Thermodynamic Integration calculations using Molecular Dynamics (STIMD), and its application to Src homology 2 (SH2) protein cell signalling domains. We show that the time taken to compute free energy differences using thermodynamic integration can be significantly reduced: potentially from weeks or months to days of wall-clock time. Performing such accelerated calculations requires the ability both to run several parallel simulations concurrently on a computational Grid and to control them in real time. We describe how the RealityGrid computational steering system, in conjunction with a scalable classical MD code, can be used to dramatically reduce the time to achieve a result. This is necessary to improve the adoption of this technique and further allows more detailed investigations into the accuracy and precision of thermodynamic integration. Initial results for the Src SH2 system are presented and compared to a reported experimental value. Finally, we discuss the significance of our approach.
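Thermodynamic integration itself can be illustrated on a system with a closed-form answer. Morphing a harmonic well of stiffness k0 into one of stiffness k1 through U(lambda) = (1-lambda)U0 + lambda*U1 gives an analytic ensemble average <dU/dlambda>, so the quadrature over lambda (the step an MD code performs by sampling) can be checked against the exact result dF = (kT/2) ln(k1/k0). Units are arbitrary and this toy replaces the paper's SH2 simulations entirely:

```python
import math

kT, k0, k1 = 1.0, 1.0, 4.0

def mean_dU_dlam(lam):
    # <U1 - U0> at coupling lam; for a harmonic well <x^2> = kT / k(lam)
    k_lam = (1 - lam) * k0 + lam * k1
    return 0.5 * (k1 - k0) * kT / k_lam

# Trapezoidal quadrature over lambda in [0, 1]; in STIMD each quadrature
# point would be an independent, steerable MD simulation on the Grid.
n = 1000
dF = sum(0.5 * (mean_dU_dlam(i / n) + mean_dU_dlam((i + 1) / n)) / n
         for i in range(n))

exact = 0.5 * kT * math.log(k1 / k0)
print(dF, exact)   # both ~0.6931
```

The independence of the lambda points is what lets the Grid approach run them concurrently instead of sequentially, which is the source of the weeks-to-days speed-up.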

  9. A grid-based model for integration of distributed medical databases.

    PubMed

    Luo, Yongxing; Jiang, Lijun; Zhuang, Tian-ge

    2009-12-01

    Grid has emerged recently as an integration infrastructure for sharing and coordinated use of diverse resources in dynamic, distributed environments. In this paper, we present a prototype system for the integration of heterogeneous medical databases based on Grid technology, which can provide a uniform access interface and an efficient query mechanism for different medical databases. After presenting the architecture of the prototype system, which employs corresponding Grid services and middleware technologies, we analyse in detail its basic functional components, including OGSA-DAI, the metadata model, transaction management, and query processing, which cooperate with each other to enable uniform access to, and seamless integration of, the underlying heterogeneous medical databases. We then test the effectiveness and performance of the system through a query instance, analyse the experimental results, and discuss some issues relating to practical medical applications. Although the prototype system has so far been implemented and tested only in a simulated hospital information environment, the underlying principles are applicable to practical applications.

  10. Grid-based methods for biochemical ab initio quantum chemical applications

    SciTech Connect

    Colvin, M.E.; Nelson, J.S.; Mori, E.

    1997-01-01

    Ab initio quantum chemical methods are seeing increased application in a large variety of real-world problems, including biomedical applications ranging from drug design to the understanding of environmental mutagens. The vast majority of these quantum chemical methods are "spectral", that is, they describe the charge distribution around the nuclear framework in terms of a fixed analytic basis set. Despite the additional complexity they bring, methods involving grid representations of the electron or solvent charge can provide more efficient schemes for evaluating spectral operators, inexpensive methods for calculating electron correlation, and methods for treating the electrostatic energy of solvation in polar solvents. The advantage of mixed or "pseudospectral" methods is that they allow individual non-linear operators in the partial differential equations, such as coulomb operators, to be calculated in the most appropriate regime. Moreover, these molecular grids can be used to integrate empirical functionals of the electron density. These so-called density functional theory (DFT) methods are an extremely promising alternative to conventional post-Hartree-Fock quantum chemical methods. The introduction of a grid at the molecular solvent-accessible surface allows a very sophisticated treatment of a polarizable continuum solvent model (PCM). Where most PCM approaches use a truncated expansion of the solute's electric multipole expansion, e.g. net charge (Born model) or dipole moment (Onsager model), such a grid-based boundary-element method (BEM) yields a nearly exact treatment of the solute's electric field. This report describes the use of both DFT and BEM methods in several biomedical chemical applications.
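    As a concrete instance of the truncated-expansion solvent models the abstract contrasts with the BEM, the Born model keeps only the solute's net charge: ΔG_solv = -(1 - 1/ε)·q²/(2a). The sketch below uses standard textbook values (the Gaussian-units conversion constant and ion parameters), which are not taken from this report.

```python
# Born model sketch: solvation free energy of a net charge q (in
# elementary charges) in a spherical cavity of radius a (Angstrom)
# embedded in a continuum of dielectric constant eps.

COULOMB_KCAL = 332.06  # kcal*Angstrom/(mol*e^2), standard conversion

def born_solvation_energy(q, a, eps):
    """Delta G_solv = -(1 - 1/eps) * q^2 / (2a), in kcal/mol."""
    return -COULOMB_KCAL * (1.0 - 1.0 / eps) * q * q / (2.0 * a)

# A monovalent ion of 2 Angstrom radius in water (eps ~ 78.4):
dG = born_solvation_energy(q=1.0, a=2.0, eps=78.4)
```

A BEM treatment replaces this single-term formula with surface charges on a grid over the solvent-accessible surface, capturing the full electric field rather than just the monopole.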

  11. High-Efficiency Food Production in a Renewable Energy Based Micro-Grid Power System

    NASA Technical Reports Server (NTRS)

    Bubenheim, David; Meiners, Dennis

    2016-01-01

    Controlled Environment Agriculture (CEA) systems can be used to produce high-quality, desirable food year round, and the fresh produce can positively contribute to the health and well-being of residents in communities with difficult supply logistics. While CEA has many positive outcomes for a remote community, the associated high electric demands have prohibited widespread implementation in what is typically an already fully subscribed power generation and distribution system. Recent advances in CEA technologies as well as renewable power generation, storage, and micro-grid management are increasing system efficiency and expanding the possibilities for enhancing community-supporting infrastructure without increasing demands for outside-supplied fuels. We will present examples of how new lighting, nutrient delivery, and energy management and control systems can enable significant increases in food production efficiency while maintaining high yields in CEA. Examples will be presented from Alaskan communities where initial incorporation of renewable power generation, energy storage, and grid management techniques has already reduced diesel fuel consumption for electric generation by more than 40% and expanded grid capacity. We will discuss how renewable power generation, efficient grid management to extract maximum community service per kW, and novel energy storage approaches can expand food production, water supply, waste treatment, sanitation, and other community support services without traditional increases in consumable fuels supplied from outside the community. These capabilities offer communities a range of choices for enhancing local services. The examples represent a synergy of technology advancement efforts to develop sustainable community support systems for future space-based human habitats and practical implementation of infrastructure components to increase efficiency and enhance health and well-being in remote communities today and tomorrow.

  12. Creative Engineering Based Education with Autonomous Robots Considering Job Search Support

    NASA Astrophysics Data System (ADS)

    Takezawa, Satoshi; Nagamatsu, Masao; Takashima, Akihiko; Nakamura, Kaeko; Ohtake, Hideo; Yoshida, Kanou

    The Robotics Course in our Mechanical Systems Engineering Department offers “Robotics Exercise Lessons” as one of its Problem-Solution Based Specialized Subjects. This is intended to motivate students' learning and to help them acquire fundamental knowledge and skills in mechanical engineering and improve their understanding of Robotics Basic Theory. Our current curriculum was established to accomplish this objective based on two pieces of research in 2005: an evaluation questionnaire on the education of our Mechanical Systems Engineering Department for graduates and a survey on the kind of human resources which companies are seeking and their expectations for our department. This paper reports the academic results and reflections of job search support in recent years as inherited and developed from the previous curriculum.

  13. Incentive-compatible demand-side management for smart grids based on review strategies

    NASA Astrophysics Data System (ADS)

    Xu, Jie; van der Schaar, Mihaela

    2015-12-01

    Demand-side load management is able to significantly improve the energy efficiency of smart grids. Since the electricity production cost depends on the aggregate energy usage of multiple consumers, an important incentive problem emerges: self-interested consumers want to increase their own utilities by consuming more than the socially optimal amount of energy during peak hours since the increased cost is shared among the entire set of consumers. To incentivize self-interested consumers to take the socially optimal scheduling actions, we design a new class of protocols based on review strategies. These strategies work as follows: first, a review stage takes place in which a statistical test is performed based on the daily prices of the previous billing cycle to determine whether or not the other consumers schedule their electricity loads in a socially optimal way. If the test fails, the consumers trigger a punishment phase in which, for a certain time, they adjust their energy scheduling in such a way that everybody in the consumer set is punished due to an increased price. Using a carefully designed protocol based on such review strategies, consumers then have incentives to take the socially optimal load scheduling to avoid entering this punishment phase. We rigorously characterize the impact of deploying protocols based on review strategies on the system's as well as the users' performance and determine the optimal design (optimal billing cycle, punishment length, etc.) for various smart grid deployment scenarios. Even though this paper considers a simplified smart grid model, our analysis provides important and useful insights for designing incentive-compatible demand-side management schemes based on aggregate energy usage information in a variety of practical scenarios.
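    The review-strategy protocol described above can be pictured as a small state machine: at the end of each billing cycle a statistical test on the daily prices decides whether to enter a punishment phase of fixed length. In the sketch below, a simple mean-price threshold and all numbers are illustrative placeholders for the paper's carefully designed test and optimal parameters.

```python
# Review-strategy sketch: a consumer reviews daily prices each billing
# cycle; if the test indicates over-consumption by others, it enters a
# punishment phase for a fixed number of cycles.

PRICE_THRESHOLD = 1.10   # illustrative: test fails above this mean price
PUNISH_CYCLES = 2        # illustrative punishment length

class ReviewStrategy:
    def __init__(self):
        self.punish_left = 0

    def review(self, daily_prices):
        """Run the end-of-cycle test; return the mode for the next cycle."""
        if self.punish_left > 0:
            self.punish_left -= 1
            return "punish"
        mean_price = sum(daily_prices) / len(daily_prices)
        if mean_price > PRICE_THRESHOLD:   # test failed: someone deviated
            self.punish_left = PUNISH_CYCLES - 1
            return "punish"
        return "cooperate"

s = ReviewStrategy()
modes = [s.review(cycle) for cycle in (
    [1.00, 1.05, 1.02],  # socially optimal usage -> cooperate
    [1.30, 1.25, 1.20],  # deviation raises prices -> start punishing
    [1.00, 1.00, 1.00],  # still inside the punishment phase
    [1.00, 1.00, 1.00],  # punishment over -> cooperate again
)]
```

The deterrent works exactly because the punishment phase raises everyone's price: a consumer contemplating over-consumption knows the test will eventually trigger it.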

  14. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND...-based learning opportunities? Yes, a center operator may authorize a student to participate in...

  15. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 4 2013-04-01 2013-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND... than work-based learning opportunities? Yes, a center operator may authorize a student to...

  16. The Differences in Teachers' and Principals' General Job Stress and Stress Related to Performance-Based Accreditation.

    ERIC Educational Resources Information Center

    Hipps, Elizabeth Smith; Halpin, Glennelle

    Whether different amounts of general job stress and stress related to the Alabama Performance-Based Accreditation Standards were experienced by teachers and principals was studied in a sample of 65 principals and 242 teachers from 9 Alabama school systems. All subjects completed the Alabama Performance-Based Accreditation Standards Stress Measure,…

  17. A Study of ATLAS Grid Performance for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    Panitkin, Sergey; Fine, Valery; Wenaus, Torre

    2012-12-01

    In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining data archived by the PanDA workload management system.
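    The timing properties mentioned (wait time, run time) derive from per-job timestamps of the kind archived by a workload management system. A toy sketch with invented records follows; the field names are hypothetical and are not PanDA's actual schema.

```python
# Sketch of mining archived job records for timing properties:
# wait time = start - submit, run time = end - start.

jobs = [  # hypothetical archive records (timestamps in seconds)
    {"submit": 0,  "start": 120, "end": 3720},
    {"submit": 60, "start": 600, "end": 2400},
    {"submit": 90, "start": 150, "end": 7350},
]

wait_times = [j["start"] - j["submit"] for j in jobs]
run_times = [j["end"] - j["start"] for j in jobs]

mean_wait = sum(wait_times) / len(wait_times)
mean_run = sum(run_times) / len(run_times)
```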

  18. Development of a fully automated CFD system for three-dimensional flow simulations based on hybrid prismatic-tetrahedral grids

    SciTech Connect

    Berg, J.W. van der; Maseland, J.E.J.; Oskam, B.

    1996-12-31

    In this paper an assessment of CFD methods based on the underlying grid type is made. It is safe to say that emerging CFD methods based on hybrid body-fitted grids of tetrahedral and prismatic cells using unstructured data storage schemes have the potential to satisfy the basic requirements of problem-turnaround-time and accuracy for complex geometries. The CFD system described in this paper is based on the hybrid prismatic-tetrahedral grid approach. In an analysis it is shown that the cells in the prismatic layer have to satisfy a central symmetry property in order to obtain a second-order accurate approximation of the viscous terms in the Reynolds-averaged Navier-Stokes equations. Prismatic grid generation is demonstrated for the ONERA M6 wing-alone configuration and the AS28G wing/body configuration.

  19. CDF GlideinWMS usage in grid computing of high energy physics

    SciTech Connect

    Zvada, Marian; Benjamin, Doug; Sfiligoi, Igor; /Fermilab

    2010-01-01

    Many members of large science collaborations already have specialized grids available to advance their research, but the need for more computing resources for data analysis has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment relies increasingly on glidein-based computing pools for data reconstruction, Monte Carlo production, and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor has a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting the Grid computing by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), which is an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for its data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and the setup used, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10000 running jobs at a time.

  20. CDF GlideinWMS usage in Grid computing of high energy physics

    NASA Astrophysics Data System (ADS)

    Zvada, Marian; Benjamin, Doug; Sfiligoi, Igor

    2010-04-01

    Many members of large science collaborations already have specialized grids available to advance their research, but the need for more computing resources for data analysis has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment relies increasingly on glidein-based computing pools for data reconstruction, Monte Carlo production, and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor has a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting the Grid computing by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), which is an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for its data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and the setup used, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10000 running jobs at a time.

  1. A robust multi-grid pressure-based algorithm for multi-fluid flow at all speeds

    NASA Astrophysics Data System (ADS)

    Darwish, M.; Moukalled, F.; Sekar, B.

    2003-04-01

    This paper reports on the implementation and testing, within a full non-linear multi-grid environment, of a new pressure-based algorithm for the prediction of multi-fluid flow at all speeds. The algorithm is part of the mass conservation-based algorithms (MCBA) group in which the pressure correction equation is derived from overall mass conservation. The performance of the new method is assessed by solving a series of two-dimensional two-fluid flow test problems varying from turbulent low Mach number to supersonic flows, and from very low to high fluid density ratios. Solutions are generated for several grid sizes using the single grid (SG), the prolongation grid (PG), and the full non-linear multi-grid (FMG) methods. The main outcomes of this study are: (i) a clear demonstration of the ability of the FMG method to tackle the added non-linearity of multi-fluid flows, which is manifested through the performance jump observed when using the non-linear multi-grid approach as compared to the SG and PG methods; (ii) the extension of the FMG method to predict turbulent multi-fluid flows at all speeds. The convergence history plots and CPU-times presented indicate that the FMG method is far more efficient than the PG method and accelerates the convergence rate over the SG method, for the problems solved and the grids used, by a factor reaching a value as high as 15.

  2. QoS Differential Scheduling in Cognitive-Radio-Based Smart Grid Networks: An Adaptive Dynamic Programming Approach.

    PubMed

    Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun

    2016-02-01

    As the next-generation power grid, smart grid will be integrated with a variety of novel communication technologies to support the explosive data traffic and the diverse requirements of quality of service (QoS). Cognitive radio (CR), which has the favorable ability to improve the spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in the CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee the differential QoS, the SGUs are assigned to have different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channels allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. By the online network training, the HDP can learn from the activities of primary users and SGUs, and adjust the scheduling decision to achieve the purpose of transmission delay minimization. Simulation results illustrate that the proposed priority policy ensures the low transmission delay of high priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing the differential QoS in smart grid.
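    Setting aside how the priorities are learned, the scheduling decision itself reduces to allocating the currently free channels among waiting smart grid users (SGUs). A greedy highest-priority-first allocation is a simple stand-in for the HDP-learned policy; all names and priority values below are illustrative.

```python
# Greedy stand-in for the learned scheduling policy: free channels go
# to the waiting smart grid users (SGUs) with the highest priority.

def allocate(channels_free, waiting_sgus):
    """waiting_sgus: list of (name, priority); higher priority first."""
    ranked = sorted(waiting_sgus, key=lambda u: u[1], reverse=True)
    served = [name for name, _ in ranked[:channels_free]]
    deferred = [name for name, _ in ranked[channels_free:]]
    return served, deferred

# Emergency data outranks routine metering for the two free channels:
served, deferred = allocate(2, [
    ("meter_17", 1), ("emergency_3", 5), ("meter_02", 1), ("control_9", 3),
])
```

The ADP machinery in the paper goes further by adapting the allocation to primary-user activity over time, which a fixed greedy rule cannot do.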

  3. An Adaptive Integration Model of Vector Polyline to DEM Data Based on Spherical Degeneration Quadtree Grids

    NASA Astrophysics Data System (ADS)

    Zhao, X. S.; Wang, J. J.; Yuan, Z. Y.; Gao, Y.

    2013-10-01

    Traditional geometry-based approaches can maintain the characteristics of vector data. However, complex interpolation calculations limit their application to high-resolution, multi-source spatial data integration at spherical scale in digital earth systems. To overcome this deficiency, an adaptive integration model of vector polylines and spherical DEM is presented. Firstly, the Degenerate Quadtree Grid (DQG), one of the partition models for global discrete grids, is selected as the basic framework for the adaptive integration model. Secondly, a novel shift algorithm is put forward based on DQG proximity search. The main idea of the shift algorithm is that a vector node in a DQG cell moves to the cell corner-point when the displayed area of the cell is smaller than or equal to a pixel of the screen, in order to find a new vector polyline approximating the original one; this avoids large numbers of interpolation calculations and achieves seamless integration. Detailed operation steps are elaborated and the complexity of the algorithm is analyzed. Thirdly, a prototype system has been developed using the VC++ language and the OpenGL 3D API. ASTER GDEM data and DCW road data sets of Jiangxi province in China are selected to evaluate the performance. The result shows that the time consumption of the shift algorithm decreased by about 76% compared with that of the geometry-based approach. The mean shift error has been analysed along different dimensions. In the end, conclusions and future work on the integration of vector data and DEM based on discrete global grids are also given.
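    The heart of the shift algorithm is a snap test: once a grid cell's displayed size falls to at most one pixel, a vector node inside it can be moved to the nearest cell corner with no visible error, so no interpolation against the DEM is needed. A minimal planar sketch follows; the real algorithm operates on spherical DQG cells, so everything below is a simplification.

```python
# Planar sketch of the shift (snap) step: if a cell's displayed size
# is <= 1 pixel, move each polyline node in it to the nearest corner.

def snap_node(x, y, cell_size, pixels_per_unit):
    """Return the node, snapped to the nearest cell corner if the cell
    is displayed at <= 1 pixel, otherwise unchanged."""
    if cell_size * pixels_per_unit > 1.0:
        return (x, y)  # cell still visibly large: keep exact geometry
    cx = round(x / cell_size) * cell_size
    cy = round(y / cell_size) * cell_size
    return (cx, cy)

# Cell of 0.5 units shown at 1 px per unit -> 0.5 px on screen: snap.
snapped = snap_node(1.3, 2.6, 0.5, 1.0)
# Same cell shown at 10 px per unit -> 5 px on screen: keep the node.
kept = snap_node(1.3, 2.6, 0.5, 10.0)
```

Because the snapped polyline passes exactly through grid corners, its heights coincide with DEM samples, which is what makes the integration seamless without interpolation.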

  4. New gridded daily climatology of Finland: Permutation-based uncertainty estimates and temporal trends in climate

    NASA Astrophysics Data System (ADS)

    Aalto, Juha; Pirinen, Pentti; Jylhä, Kirsti

    2016-04-01

    Long-term time series of key climate variables with a relevant spatiotemporal resolution are essential for environmental science. Moreover, such spatially continuous data, based on weather observations, are commonly used in, e.g., downscaling and bias correcting of climate model simulations. Here we conducted a comprehensive spatial interpolation scheme where seven climate variables (daily mean, maximum, and minimum surface air temperatures, daily precipitation sum, relative humidity, sea level air pressure, and snow depth) were interpolated over Finland at the spatial resolution of 10 × 10 km². More precisely, (1) we produced daily gridded time series (FMI_ClimGrid) of the variables covering the period of 1961-2010, with a special focus on evaluation and permutation-based uncertainty estimates, and (2) we investigated temporal trends in the climate variables based on the gridded data. National climate station observations were supplemented by records from the surrounding countries, and kriging interpolation was applied to account for topography and water bodies. For daily precipitation sum and snow depth, a two-stage interpolation with a binary classifier was deployed for an accurate delineation of areas with no precipitation or snow. A robust cross-validation indicated a good agreement between the observed and interpolated values especially for the temperature variables and air pressure, although the effect of seasons was evident. Permutation-based analysis suggested increased uncertainty toward northern areas, thus identifying regions with suboptimal station density. Finally, several variables had a statistically significant trend indicating a clear but locally varying signal of climate change during the last five decades.
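    The two-stage scheme for precipitation can be pictured as: first a binary wet/dry decision from nearby stations, then an amount interpolated only where the cell is classified wet. In the sketch below, a majority vote and inverse-distance weighting stand in for the paper's classifier and kriging, and all station data are invented.

```python
# Two-stage interpolation sketch for precipitation: stage 1 classifies
# the target cell wet/dry from nearby stations; stage 2 interpolates
# an amount (inverse-distance weighting) only if the cell is wet.

def interpolate_precip(target, stations):
    """stations: list of ((x, y), precip_mm). Returns estimated mm."""
    # Stage 1: binary classifier (majority of stations wet?)
    wet_votes = sum(1 for _, p in stations if p > 0.0)
    if wet_votes * 2 <= len(stations):
        return 0.0  # classified dry: exact zero, no smeared drizzle
    # Stage 2: inverse-distance weighting (target not at a station)
    tx, ty = target
    num = den = 0.0
    for (x, y), p in stations:
        w = 1.0 / ((x - tx) ** 2 + (y - ty) ** 2)
        num += w * p
        den += w
    return num / den

dry = interpolate_precip((0.0, 0.0),
                         [((1, 0), 0.0), ((0, 1), 0.0), ((1, 1), 4.0)])
wet = interpolate_precip((0.0, 0.0),
                         [((1, 0), 2.0), ((0, 1), 4.0), ((1, 1), 0.0)])
```

The point of stage 1 is the hard zero: naive interpolation alone would spread light drizzle over genuinely dry areas.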

  5. The CMS integration grid testbed

    SciTech Connect

    Graham, Gregory E.

    2004-08-26

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent-based MonALISA. Domain-specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two-month span in the Fall of 2002, over 1 million official CMS GEANT-based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.

  6. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU-accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on the GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on an E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting shared-memory-based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved a 2x speedup on the GT9800 and 18x on the Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on the GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture benefit fluid dynamics computing significantly.

  7. Three hybridization models based on local search scheme for job shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Balbi Fraga, Tatiana

    2015-05-01

    This work presents three different hybridization models based on the general schema of Local Search Heuristics, named Hybrid Successive Application, Hybrid Neighborhood, and Hybrid Improved Neighborhood. Although similar approaches may already have been presented in the literature in other contexts, in this work these models are applied to analyse solutions of the job shop scheduling problem, using the heuristics Taboo Search and Particle Swarm Optimization. In addition, we investigate some aspects that must be considered in order to achieve better solutions than those obtained by the original heuristics. The results demonstrate that the algorithms derived from these three hybrid models are more robust than the original algorithms and able to obtain better results than those found by Taboo Search alone.
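    The simplest of the three models, Hybrid Successive Application, just chains the heuristics: run one to a local optimum, hand its solution to the other, and repeat while the cost keeps improving. The sketch below uses two toy local-search moves on a single-machine sequence as stand-ins for both the job shop problem and for Taboo Search/PSO; everything here is illustrative.

```python
# Hybrid Successive Application sketch: alternate two local-search
# heuristics, feeding each one's local optimum to the other, until
# neither improves. Toy objective: total completion time of a
# single-machine job sequence (stand-in for the job shop makespan).

def cost(seq):
    """Sum of completion times of jobs with the given durations."""
    t = total = 0
    for d in seq:
        t += d
        total += t
    return total

def swap_descent(seq):
    """Heuristic A: descent over adjacent-swap moves."""
    seq, improved = list(seq), True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if cost(cand) < cost(seq):
                seq, improved = cand, True
    return seq

def insertion_descent(seq):
    """Heuristic B: descent over single-job reinsertion moves."""
    seq, improved = list(seq), True
    while improved:
        improved = False
        for i in range(len(seq)):
            for j in range(len(seq)):
                cand = seq[:i] + seq[i + 1:]
                cand = cand[:j] + [seq[i]] + cand[j:]
                if cost(cand) < cost(seq):
                    seq, improved = cand, True
    return seq

def hybrid_successive(seq):
    """Apply A then B repeatedly while the cost keeps dropping."""
    best = list(seq)
    while True:
        cand = insertion_descent(swap_descent(best))
        if cost(cand) >= cost(best):
            return best
        best = cand

solution = hybrid_successive([5, 3, 8, 1, 4])
```

The hybrid's value comes from the two heuristics having different neighborhoods: a local optimum of one move type is usually not a local optimum of the other.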

  8. A MPPT Algorithm Based PV System Connected to Single Phase Voltage Controlled Grid

    NASA Astrophysics Data System (ADS)

    Sreekanth, G.; Narender Reddy, N.; Durga Prasad, A.; Nagendrababu, V.

    2012-10-01

    Future ancillary services provided by photovoltaic (PV) systems could facilitate their penetration in power systems. In addition, low-power PV systems can be designed to improve power quality. This paper presents a single-phase PV system that provides grid voltage support and compensation of harmonic distortion at the point of common coupling thanks to a repetitive controller. The power provided by the PV panels is controlled by a Maximum Power Point Tracking algorithm based on the incremental conductance method, specifically modified to control the phase of the PV inverter voltage. Simulation and experimental results validate the presented solution.
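    The incremental conductance method rests on the identity dP/dV = I + V·dI/dV, which is zero at the maximum power point: the operating point is stepped up while it is positive and down while it is negative. The sketch below shows only this classic voltage-stepping core on an invented I-V curve; the paper's variant steers the inverter voltage phase instead.

```python
# Incremental conductance MPPT sketch: step the operating voltage
# toward the point where dP/dV = I + V*dI/dV = 0.

def pv_current(v):
    """Illustrative PV I-V curve (not a physical module model)."""
    return max(0.0, 5.0 - 0.05 * v * v)   # amps

def mppt_step(v, dv=0.5):
    """One incremental conductance update of the operating voltage."""
    i = pv_current(v)
    di = pv_current(v + 1e-3) - i          # finite-difference dI
    dp_dv = i + v * di / 1e-3              # dP/dV = I + V*dI/dV
    if dp_dv > 0:
        return v + dv                       # left of MPP: raise voltage
    if dp_dv < 0:
        return v - dv                       # right of MPP: lower voltage
    return v                                # at the MPP

v = 2.0
for _ in range(50):
    v = mppt_step(v)
# v now oscillates around the true MPP (about 5.77 for this curve).
```

The fixed step makes the tracker hunt around the optimum; practical implementations shrink the step or add hysteresis near dP/dV ≈ 0.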

  9. Power-based control with integral action for wind turbines connected to the grid

    NASA Astrophysics Data System (ADS)

    Peña, R. R.; Fernández, R. D.; Mantz, R. J.; Battaiotto, P. E.

    2015-10-01

    In this paper, a power shaping control with integral action is employed to control active and reactive powers of wind turbines connected to the grid. As it is well known, power shaping allows finding a Lyapunov function which ensures stability. In contrast to other passivity-based control theories, the power shaping controller design allows to use easily measurable variables, such as voltages and currents which simplify the physical interpretation and, therefore, the controller synthesis. The strategy proposed is evaluated in the context of severe operating conditions, such as abrupt changes in the wind speed and voltage drops.

  10. Web-based interactive visualization in a Grid-enabled neuroimaging application using HTML5.

    PubMed

    Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar

    2012-01-01

    Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To allow certain interaction in the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web browser plugins. The possibilities of web-based visualization are presented along the FreeSurfer-pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging.

  11. Efficient Dynamic Replication Algorithm Using Agent for Data Grid

    PubMed Central

    Vashisht, Priyanka; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    In data grids, scientific and business applications produce huge volumes of data which need to be transferred among the distributed and heterogeneous nodes of the grid. Data replication provides a solution for managing data files efficiently in large grids: replication enhances data availability, which reduces the overall access time of a file. In this paper an agent-based dynamic replication algorithm for data grids, named EDRA, is proposed and implemented. EDRA performs dynamic replication over a hierarchical structure, which is taken into account in the selection of the best replica. The decision for selecting the best replica is based on scheduling parameters: the bandwidth, load gauge, and computing capacity of the node. Scheduling in the data grid helps in reducing the data access time, and load is distributed evenly across the nodes of the data grid by considering these scheduling parameters. EDRA is implemented using the data grid simulator OptorSim, with the European Data Grid CMS test bed topology used in this experiment. The simulation results are obtained by comparing BHR, LRU, No Replication, and EDRA, and show the efficiency of the EDRA algorithm in terms of mean job execution time, network usage, and storage usage per node. PMID:25028680
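    Best-replica selection from the stated scheduling parameters (bandwidth, load gauge, computing capacity) can be sketched as a weighted score over the candidate nodes holding a replica. The weights and node values below are illustrative and are not EDRA's actual weighting.

```python
# Sketch of best-replica selection: score each node holding a replica
# by bandwidth, load gauge, and computing capacity; pick the best.

# Illustrative weights (EDRA's actual weighting is not reproduced):
W_BANDWIDTH, W_LOAD, W_CAPACITY = 0.5, 0.3, 0.2

def score(node):
    """Higher is better: high bandwidth/capacity, low load gauge."""
    return (W_BANDWIDTH * node["bandwidth"]
            + W_CAPACITY * node["capacity"]
            - W_LOAD * node["load"])

replicas = [  # hypothetical nodes holding the requested file
    {"name": "site_A", "bandwidth": 0.9, "load": 0.8, "capacity": 0.6},
    {"name": "site_B", "bandwidth": 0.7, "load": 0.2, "capacity": 0.5},
    {"name": "site_C", "bandwidth": 0.4, "load": 0.1, "capacity": 0.9},
]

best = max(replicas, key=score)
```

Penalizing the load gauge is what spreads requests across nodes: a fast but busy node loses to a slightly slower idle one, which is how even load distribution emerges from the same parameters.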

  12. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the effectively received traffic data in a metre data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform fixed-parameter (sensing time and transmission time) algorithms, and the power cost is reduced efficiently.
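    The sensing-time/transmission-time tradeoff has a classic shape: longer sensing lowers the false-alarm probability (fewer lost transmission opportunities), but leaves less of the frame for data. A toy grid search over the sensing time is sketched below; the exponential false-alarm model and all numbers are invented stand-ins for the paper's formulation.

```python
# Toy sensing/transmission tradeoff: within a frame of length T, a
# sensing time ts leaves (T - ts) for data, while the false-alarm
# probability Pf falls with ts. Effective normalized throughput:
#   (1 - Pf(ts)) * (T - ts) / T.
import math

T = 100.0  # frame length (ms), illustrative

def p_false_alarm(ts):
    """Invented monotone model: more sensing -> fewer false alarms."""
    return math.exp(-0.5 * ts)

def throughput(ts):
    return (1.0 - p_false_alarm(ts)) * (T - ts) / T

# Grid search over candidate sensing times (0.5 .. 20 ms):
candidates = [0.5 * k for k in range(1, 41)]
best_ts = max(candidates, key=throughput)
```

The optimum is interior: too little sensing wastes slots on false alarms, too much sensing starves the data phase, which is exactly why the two times must be optimised jointly.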

  13. Predicting Teacher Job Satisfaction Based on Principals' Instructional Supervision Behaviours: A Study of Turkish Teachers

    ERIC Educational Resources Information Center

    Ilgan, Abdurrahman; Parylo, Oksana; Sungu, Hilmi

    2015-01-01

    This quantitative research examined instructional supervision behaviours of school principals as a predictor of teacher job satisfaction through the analysis of Turkish teachers' perceptions of principals' instructional supervision behaviours. There was a statistically significant difference found between the teachers' job satisfaction level and…

  14. Job Designs: A Community Based Program for Students with Emotional and Behavioral Disorders.

    ERIC Educational Resources Information Center

    Lehman, Constance

    1992-01-01

    The Job Designs Project, a 3-year federally funded project, provides students (ages 16-22) at an Oregon residential treatment center for youth with emotional and behavioral disorders with supported paid employment in the community. The project has provided supported employment services to 36 students working in such positions as restaurant bus…

  15. Job Satisfaction.

    DTIC Science & Technology

    1979-07-01

    well include an "overall, global or unidimensional component" (p 184) but that additional specific factors were also evident, i.e. "job satisfaction is...between a person’s life style and organisational structure. They hypothesised that job satisfaction may be adversely affected if there is any significant...between job satisfaction and an independent life style, and, thirdly, that "job satisfaction is maximised when the individual places a high value

  16. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has a significant advantage over the traditional Yee-algorithm in that it is based on unstructured and irregular grids: structures that contain curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary (or dual) grid is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high-performance computers in a highly efficient manner.
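
    The update structure the abstract describes, leapfrog Faraday/Ampere updates expressed as sparse matrix-vector products, can be illustrated in one dimension, where the discrete curl reduces to a bidiagonal difference matrix. The grid, time step, and boundary treatment below are simplified assumptions, not the paper's unstructured-grid scheme:

```python
import numpy as np

# 1-D Yee-style leapfrog written as matrix-vector products, mirroring the
# abstract's point that the update is a series of (sparse) matvecs.
n = 200
dx, dt = 1.0, 0.5          # Courant number 0.5 satisfies the 1-D CFL limit
# Discrete curl: (C @ e)[i] = (e[i+1] - e[i]) / dx
C = (np.eye(n, n + 1, 1) - np.eye(n, n + 1)) / dx

e = np.zeros(n + 1)        # E lives on primal edges (nodes here)
h = np.zeros(n)            # H lives on dual edges (cell centres here)
e[n // 2] = 1.0            # initial impulse

for _ in range(100):
    h -= dt * (C @ e)                  # Faraday's law: update H from curl E
    e[1:-1] += dt * (C.T @ h)[1:-1]    # Ampere's law: update E from curl H
    # endpoints held at 0 (perfect electric conductor ends)
```

    Note that the transpose relation between the two curl operators is what makes the scheme energy-stable and divergence-preserving; in the paper's unstructured setting the same role is played by the dual-grid face/edge pairing.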

  17. 75 FR 24990 - Proposed Information Collection for the Evaluation of the Community-Based Job Training Grants...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-06

    ..., Room N-5641, 200 Constitution Avenue, NW., Washington, DC 20210, Attention: Garrett Groves, Telephone..., mechanical, or other technological collection techniques or other forms of information technology, e.g...: 1205-0NEW. Record Keeping: N/A. Affected Public: Community-Based Job Training Grantees....

  18. Faculty in Faith-Based Institutions: Participation in Decision-Making and Its Impact on Job Satisfaction

    ERIC Educational Resources Information Center

    Metheny, Glen A.; West, G. Bud; Winston, Bruce E.; Wood, J. Andy

    2015-01-01

    This study examined full-time faculty in Christian, faith-based colleges and universities and investigated the type of impact their participation in the decision-making process had on job satisfaction. Previous studies have examined relationships among faculty at state universities and community colleges, yet little research has been examined in…

  19. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research
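
    The MMDM-based area adjustment discussed above can be sketched as follows: the nominal trapping-grid area is inflated by a boundary strip whose width is derived from the mean maximum distance moved, and density is abundance divided by that effective area. Treating the strip width as half the MMDM, and all of the numbers, are illustrative assumptions rather than the study's data:

```python
# Sketch of a boundary-strip area adjustment for grid-based density
# estimation; strip width = MMDM/2 and all values are hypothetical.

def mmdm(max_move_distances):
    """Mean of each animal's maximum observed recapture distance (m)."""
    return sum(max_move_distances) / len(max_move_distances)

def effective_area_ha(grid_side_m, strip_m):
    side = grid_side_m + 2 * strip_m      # add the strip on every side
    return side * side / 10_000           # m^2 -> hectares

max_moves = [30.0, 50.0, 40.0]            # hypothetical per-animal maxima
w = mmdm(max_moves) / 2                   # half-MMDM strip width (m)
n_hat = 25                                # hypothetical abundance estimate
d_hat = n_hat / effective_area_ha(180, w) # animals per hectare
```

    The "theoretical assumptions of uncertain validity" the abstract mentions live exactly in this step: the strip width is a behavioural quantity being used as a geometric one.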

  20. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization formation algorithm is proposed. It restrains bad performers effectively by considering the global experience of the evaluator, and it evaluates the direct trust relation between two grid nodes accurately by rationally consulting the previous trust value. It also improves the reputation evaluation process of the PathTrust model by taking the inter-organizational trust relationship into account and combining it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes the required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake-transaction attacks and bad-mouthing attacks effectively. It provides a clear advantage in the design of a VO infrastructure.
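
    The weighted combination of direct, recommended, and inter-organizational trust that the abstract describes might look like the sketch below; the weights and the exponential-smoothing update that "consults the previous trust value" are assumptions, not the paper's exact formulas:

```python
# Hedged sketch of weighted trust aggregation; weights and smoothing
# factor are assumed values, not the published algorithm.

def combined_trust(direct, recommended, inter_org, w=(0.5, 0.3, 0.2)):
    assert abs(sum(w) - 1.0) < 1e-9
    return w[0] * direct + w[1] * recommended + w[2] * inter_org

def update_direct(prev, observed, alpha=0.3):
    """Blend the new observation with the previous trust value, so a
    single bad (or faked) transaction cannot swing the score."""
    return (1 - alpha) * prev + alpha * observed

t = update_direct(prev=0.8, observed=0.2)   # one bad transaction
print(combined_trust(t, recommended=0.6, inter_org=0.7))
```

    Damping the direct-trust update and capping the weight on recommendations is the standard way such schemes blunt bad-mouthing and collusion, which is the robustness property the abstract claims.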

  1. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems

    PubMed Central

    Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using a smart grid load management application based on the available generation. The algorithm aims to maximize the system's energy production and meet the load demand with minimum cost and highest reliability. The system is formed by a photovoltaic array, wind turbines, storage batteries, and a diesel generator as a backup energy source. Demand profile shaping, one of the smart grid applications, is introduced using load shifting based on load priority. Particle swarm optimization is used to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from an iterative optimization technique to assess the adequacy of the proposed algorithm. The study is performed for some remote areas in Saudi Arabia and can be extended to similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000

  2. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.

    PubMed

    Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using a smart grid load management application based on the available generation. The algorithm aims to maximize the system's energy production and meet the load demand with minimum cost and highest reliability. The system is formed by a photovoltaic array, wind turbines, storage batteries, and a diesel generator as a backup energy source. Demand profile shaping, one of the smart grid applications, is introduced using load shifting based on load priority. Particle swarm optimization is used to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from an iterative optimization technique to assess the adequacy of the proposed algorithm. The study is performed for some remote areas in Saudi Arabia and can be extended to similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.
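
    The particle swarm optimisation loop at the core of both records above can be sketched as follows. The toy cost function (capital cost plus a penalty for unmet demand) stands in for the paper's cost/reliability model, and every coefficient, bound, and PSO parameter here is an assumption:

```python
import random

random.seed(1)

def cost(x):
    # Toy stand-in for the paper's objective: capital cost of
    # (PV, wind, battery) sizes plus a penalty for unmet 100 kW demand.
    pv, wind, batt = x
    supply = 0.9 * pv + 1.1 * wind + 0.3 * batt
    shortfall = max(0.0, 100.0 - supply)
    capital = 2.0 * pv + 3.0 * wind + 1.0 * batt
    return capital + 50.0 * shortfall

dim, n, iters = 3, 20, 60
pos = [[random.uniform(0, 120) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)

for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]                       # inertia
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
            pos[i][d] = min(120.0, max(0.0, pos[i][d] + vel[i][d]))
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)
```

    An iterative (exhaustive) search over a size grid, the baseline the paper compares against, would evaluate the same cost function at every grid point instead of letting the swarm concentrate evaluations near the optimum.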

  3. A brief comparison between grid based real space algorithms and spectrum algorithms for electronic structure calculations

    SciTech Connect

    Wang, Lin-Wang

    2006-12-01

    Quantum mechanical ab initio calculation constitutes the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC, to better serve these communities, it is very useful to have a prediction of the future trends of ab initio calculations in these areas. Such a prediction can help us decide what future computer architectures will be most useful for these communities, and what should be emphasized in future supercomputer procurements. As the size of the computer and the size of the simulated physical systems increase, there is a renewed interest in using the real space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation because of its limited communication requirement, compared with the spectrum method, where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N^3) scaling methods. These O(N) methods are usually based on orbitals localized in real space, which can be described more naturally by a real space basis. In this report, the author compares the real space methods versus the traditional plane wave (PW) spectrum methods, for their technical pros and cons, and their possible future trends. For the real space methods, the author focuses on the regular-grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in materials science simulation. As for chemical science, the predominant method is still the Gaussian basis method, and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon.
The author focuses on the density functional theory (DFT), which is the

  4. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
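
    The default time integrator named in the abstract, the 3-stage/3rd-order strong stability preserving Runge-Kutta scheme, is the standard Shu-Osher form; it can be demonstrated on the scalar model problem u' = -u:

```python
import math

def ssp_rk3_step(f, u, dt):
    """One step of the 3-stage, 3rd-order SSP Runge-Kutta scheme
    (Shu-Osher form): each stage is a convex blend of Euler steps."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))

def integrate(u0, t_end, steps):
    u, dt = u0, t_end / steps
    for _ in range(steps):
        u = ssp_rk3_step(lambda v: -v, u, dt)
    return u

# Third-order accuracy: with 100 steps over [0, 1] the error against
# exp(-1) is tiny.
err = abs(integrate(1.0, 1.0, 100) - math.exp(-1.0))
```

    The convex-combination structure is what makes the scheme strong-stability-preserving: any stability bound that holds for a forward-Euler step is inherited by the full stage sequence.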

  5. Model atmospheres for M (sub)dwarf stars. 1: The base model grid

    NASA Technical Reports Server (NTRS)

    Allard, France; Hauschildt, Peter H.

    1995-01-01

    We have calculated a grid of more than 700 model atmospheres valid for a wide range of parameters encompassing the coolest known M dwarfs, M subdwarfs, and brown dwarf candidates: 1500 K ≤ T_eff ≤ 4000 K, 3.5 ≤ log g ≤ 5.5, and -4.0 ≤ [M/H] ≤ +0.5. Our equation of state includes 105 molecules and up to 27 ionization stages of 39 elements. In the calculations of the base grid of model atmospheres presented here, we include over 300 molecular bands of four molecules (TiO, VO, CaH, FeH) in the JOLA approximation, the water opacity of Ludwig (1971), collision-induced opacities, b-f and f-f atomic processes, as well as about 2 million spectral lines selected from a list with more than 42 million atomic and 24 million molecular (H2, CH, NH, OH, MgH, SiH, C2, CN, CO, SiO) lines. High-resolution synthetic spectra are obtained using an opacity sampling method. The model atmospheres and spectra are calculated with the generalized stellar atmosphere code PHOENIX, assuming LTE, plane-parallel geometry, energy (radiative plus convective) conservation, and hydrostatic equilibrium. The model spectra give close agreement with observations of M dwarfs across a wide spectral range from the blue to the near-IR, with one notable exception: the fit to the water bands. We discuss several practical applications of our model grid, e.g., broadband colors derived from the synthetic spectra. In light of current efforts to identify genuine brown dwarfs, we also show how low-resolution spectra of cool dwarfs vary with surface gravity, and how the high-resolution line profile of the Li I resonance doublet depends on the Li abundance.

  6. Climate Simulations based on a different-grid nested and coupled model

    NASA Astrophysics Data System (ADS)

    Li, Dan; Ji, Jinjun; Li, Yinpeng

    2002-05-01

    An atmosphere-vegetation interaction model (AVIM) has been coupled with a nine-layer General Circulation Model (GCM) of the Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (IAP/LASG), which is rhomboidally truncated at zonal wave number 15, to simulate global climatic mean states. AVIM is a model with two-way feedback between land surface processes and eco-physiological processes on land. As the first step toward coupling land with atmosphere completely, the physiological processes are fixed and only the physical part (generally named the SVAT (soil-vegetation-atmosphere-transfer) scheme) of AVIM is nested into the IAP/LASG L9R15 GCM. The ocean part of the GCM is prescribed, and its monthly sea surface temperature (SST) is the climatic mean value. Given the low resolution of the GCM, i.e., each grid cell spanning 7.5° in longitude and 4.5° in latitude, the vegetation is given a higher resolution of 1.5° by 1.5° to nest and couple the fine land grid cells with the coarse atmosphere grid cells. The coupled model has been integrated for 15 years, and the mean of its last ten years of output was chosen for analysis. Compared with observed data and the NCEP reanalysis, the coupled model simulates the main characteristics of the global atmospheric circulation and the fields of temperature and moisture. In particular, the simulated precipitation and surface air temperature are sound. The work creates a solid base for coupling climate models with the biosphere.
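
    The fine-to-coarse nesting the abstract describes, with each 7.5° x 4.5° atmospheric cell covering a 5 x 3 block of 1.5° land cells, amounts to a block average when surface fields are passed up to the GCM. The synthetic field below is purely illustrative:

```python
# Block-average a 1.5-degree land field onto a 7.5 x 4.5 degree GCM grid:
# 5 fine cells per coarse cell in longitude, 3 in latitude.

NLON_F, NLAT_F = 240, 120            # 1.5-degree global land grid
RLON, RLAT = 5, 3                    # fine cells per coarse cell

# synthetic surface-flux field, indexed [lat][lon]
fine = [[(i + j) % 7 for j in range(NLON_F)] for i in range(NLAT_F)]

coarse = [[sum(fine[i * RLAT + di][j * RLON + dj]
               for di in range(RLAT) for dj in range(RLON)) / (RLAT * RLON)
           for j in range(NLON_F // RLON)]
          for i in range(NLAT_F // RLAT)]
```

    Equal-weight block averaging conserves the global mean of the field, which is the property a flux-coupling scheme must preserve; a real implementation would additionally weight by cell area, which varies with latitude.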

  7. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu

    2014-08-31

    Serial Windows-based programs are widely used in power utilities. For applications that require high-volume simulations, the single-CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low-cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of HPC computers are still Linux-based, and many HPC applications have been custom developed external to the core simulation engine without consideration for ease of use. This has created a technical gap in applying HPC-based tools to today's power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of a generic HPC platform for running Windows-based power grid programs in a Linux-based HPC environment. Preliminary results show that the runtime can be reduced from weeks to hours, improving work efficiency.

  8. Thread Group Multithreading: Accelerating the Computation of an Agent-Based Power System Modeling and Simulation Tool -- GridLAB-D

    SciTech Connect

    Jin, Shuangshuang; Chassin, David P.

    2014-01-06

    GridLAB-D™ is an open-source, next-generation, agent-based smart-grid simulator that provides unprecedented capability to model the performance of smart grid technologies. Over the past few years, GridLAB-D has been used to conduct important analyses of smart grid concepts, but it is still quite limited by its computational performance. In order to break through this performance bottleneck and meet the need for large-scale power grid simulations, we developed a thread-group mechanism to implement highly granular multithreaded computation in GridLAB-D. We achieve close-to-linear speedups of the multithreaded version over the single-threaded version of the same code running on general-purpose multi-core commodity hardware, for a simple benchmark house model. The multithreaded code shows favorable scalability and resource utilization, and much shorter execution times for large-scale power grid simulations.
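
    The thread-group idea, partitioning independent objects (such as house models) into groups and updating each group on its own worker, can be sketched as below. The trivial "house model" is a stand-in of my own, and in CPython this illustrates only the partitioning scheme, not the speedup the paper measures:

```python
from concurrent.futures import ThreadPoolExecutor

def update_house(temp):
    # Stand-in for a house model's per-timestep update:
    # relax indoor temperature toward a 20 C setpoint.
    return temp + 0.1 * (20.0 - temp)

def update_group(group):
    # Houses within a group are independent, so a worker can sweep
    # its whole group without synchronisation.
    return [update_house(t) for t in group]

houses = [float(t) for t in range(10, 30)]          # 20 houses
n_groups = 4
groups = [houses[i::n_groups] for i in range(n_groups)]

with ThreadPoolExecutor(max_workers=n_groups) as pool:
    results = list(pool.map(update_group, groups))

updated = [t for group in results for t in group]   # flatten
```

    The key design point mirrored here is the granularity choice: one worker per group of objects, rather than one per object, keeps scheduling overhead low relative to the work each worker does.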

  9. Differential Evolution Based IDWNN Controller for Fault Ride-Through of Grid-Connected Doubly Fed Induction Wind Generators.

    PubMed

    Manonmani, N; Subbiah, V; Sivakumar, L

    2015-01-01

    The key objective of wind turbine development is to ensure that output power is continuously increased. Wind turbines (WTs) are required to supply the necessary reactive power to the grid during and after a fault to support the grid voltage. In this context, this paper introduces a novel heuristic-based controller module employing differential evolution and a neural network architecture to improve the low-voltage ride-through capability of grid-connected wind turbines equipped with doubly fed induction generators (DFIGs). Traditional crowbar-based systems were applied to protect the rotor-side converter during grid faults. This traditional controller does not satisfy the desired requirement, since the DFIG, while the crowbar is connected, acts like a squirrel-cage machine and absorbs reactive power from the grid. This limitation is addressed here by introducing heuristic controllers that remove the crowbar and ensure that wind turbines supply the necessary reactive power to the grid during faults. The controller is designed to enhance the DFIG converter operation during grid faults and handles ride-through without any additional hardware modules. The paper introduces a double wavelet neural network controller that is tuned using differential evolution. To validate the proposed controller module, a simulation case study is carried out of a wind farm with 1.5 MW wind turbines connected to a 25 kV distribution system exporting power to a 120 kV grid through a 30 km, 25 kV feeder.

  10. Differential Evolution Based IDWNN Controller for Fault Ride-Through of Grid-Connected Doubly Fed Induction Wind Generators

    PubMed Central

    Manonmani, N.; Subbiah, V.; Sivakumar, L.

    2015-01-01

    The key objective of wind turbine development is to ensure that output power is continuously increased. Wind turbines (WTs) are required to supply the necessary reactive power to the grid during and after a fault to support the grid voltage. In this context, this paper introduces a novel heuristic-based controller module employing differential evolution and a neural network architecture to improve the low-voltage ride-through capability of grid-connected wind turbines equipped with doubly fed induction generators (DFIGs). Traditional crowbar-based systems were applied to protect the rotor-side converter during grid faults. This traditional controller does not satisfy the desired requirement, since the DFIG, while the crowbar is connected, acts like a squirrel-cage machine and absorbs reactive power from the grid. This limitation is addressed here by introducing heuristic controllers that remove the crowbar and ensure that wind turbines supply the necessary reactive power to the grid during faults. The controller is designed to enhance the DFIG converter operation during grid faults and handles ride-through without any additional hardware modules. The paper introduces a double wavelet neural network controller that is tuned using differential evolution. To validate the proposed controller module, a simulation case study is carried out of a wind farm with 1.5 MW wind turbines connected to a 25 kV distribution system exporting power to a 120 kV grid through a 30 km, 25 kV feeder. PMID:26516636
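
    The differential evolution search used in both records above to tune the controller can be sketched in its bare DE/rand/1/bin form. The sphere objective below stands in for the wind-farm simulation that scores each candidate gain vector, and F, CR, and the population size are assumed values:

```python
import random

random.seed(7)

F, CR, NP, GENS, DIM = 0.8, 0.9, 15, 80, 2   # assumed DE parameters

def objective(x):
    # Toy objective (sphere function): stands in for simulating the
    # controller with gains x and scoring the ride-through response.
    return sum(v * v for v in x)

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]
for _ in range(GENS):
    for i in range(NP):
        # mutation: a + F * (b - c) from three distinct other members
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        # binomial crossover with the current member (jrand step omitted
        # for brevity), then greedy selection
        trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                 for d in range(DIM)]
        if objective(trial) <= objective(pop[i]):
            pop[i] = trial

best = min(pop, key=objective)
```

    Because each fitness evaluation in the real study is a full wind-farm simulation, the small population and greedy one-to-one selection of DE are a good fit: the budget of simulations stays modest and every accepted move is an improvement.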

  11. 3D inversion based on multi-grid approach of magnetotelluric data from Northern Scandinavia

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Smirnov, M.; Korja, T. J.; Egbert, G. D.

    2012-12-01

    In this work we investigate the geoelectrical structure of the cratonic margin of the Fennoscandian Shield by means of magnetotelluric (MT) measurements carried out in Northern Norway and Sweden during the summers of 2011-2012. The project Magnetotellurics in the Scandes (MaSca) focuses on the investigation of the crust, upper mantle, and lithospheric structure in the transition zone from a stable Precambrian cratonic interior to a passive continental margin beneath the Caledonian Orogen and the Scandes Mountains in western Fennoscandia. Recent MT profiles in the central and southern Scandes indicated a large contrast in resistivity between the Caledonides and the Precambrian basement. These profiles revealed the alum shales as a highly conductive layer between the resistive Precambrian basement and the overlying Caledonian nappes. Additional measurements in the Northern Scandes were required. Altogether, data from 60 synchronous long-period (LMT) and about 200 broadband (BMT) sites were acquired. The array stretches from Lofoten and Bodø (Norway) in the west to Kiruna and Skellefteå (Sweden) in the east, covering an area of 500 x 500 km. LMT sites were occupied for about two months, while most of the BMT sites were measured during one day. We have used a new multi-grid approach for 3D electromagnetic (EM) inversion and modelling. Our approach is based on the OcTree discretization, where the spatial domain is represented by rectangular cells, each of which may be subdivided (recursively) into eight sub-cells. In this simplified implementation the grid is refined only in the horizontal direction, uniformly in each vertical layer. Using multi-grid we can maintain high grid resolution near the surface (for instance, to deal with galvanic distortions) and lower resolution at greater depth, as the EM fields decay in the Earth according to the diffusion equation. We also benefit in computational cost, as the number of unknowns decreases. The multi-grid forward

  12. Power system voltage stability and agent based distribution automation in smart grid

    NASA Astrophysics Data System (ADS)

    Nguyen, Cuong Phuc

    2011-12-01

    Our interconnected electric power system is presently facing many challenges that it was not originally designed and engineered to handle. Increased inter-area power transfers, aging infrastructure, and old technologies have caused many problems, including voltage instability, widespread blackouts, and slow control response. These problems have created an urgent need to transform the present electric power system into the highly stable, reliable, efficient, and self-healing electric power system of the future, which has been termed the "smart grid". This dissertation begins with an investigation of voltage stability in bulk transmission networks. A new continuation power flow tool for studying the impacts of generator merit-order-based dispatch on inter-area transfer capability and static voltage stability is presented. The load demands are represented by lumped load models on the transmission system. While this representation is acceptable in traditional power system analysis, it may not be valid in the future smart grid, where the distribution system will be integrated with intelligent and fast control capabilities to mitigate voltage problems before they propagate into the entire system. Therefore, before analyzing the operation of the whole smart grid, it is important to understand the distribution system first. The second part of this dissertation presents a new platform for studying and testing emerging technologies in advanced Distribution Automation (DA) within smart grids. Due to its key benefits over the traditional centralized approach, namely flexible deployment, scalability, and avoidance of a single point of failure, a new distributed approach is employed to design and develop all elements of the platform. A multi-agent system (MAS), which has the three key characteristics of autonomy, local view, and decentralization, is selected to implement the advanced DA functions. The intelligent agents utilize a communication network for cooperation and

  13. FermiGrid - experience and future plans

    SciTech Connect

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Timm, S.; Yocum, D.; /Fermilab

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems.

  14. Occupational stressors and hypertension: a multi-method study using observer-based job analysis and self-reports in urban transit operators.

    PubMed

    Greiner, Birgit A; Krause, Niklas; Ragland, David; Fisher, June M

    2004-09-01

    This multi-method study aimed to disentangle objective and subjective components of job stressors and determine the role of each for hypertension risk. Because research on job stressors and hypertension has been exclusively based on self-reports of stressors, the tendency of some individuals to use denial and repressive coping might be responsible for the inconclusive results in previous studies. Stressor measures with different degrees of objectivity were contrasted, including (1) an observer-based measure of stressors (job barriers, time pressure) obtained from experts, (2) self-reported frequency and appraised intensity of job problems and time pressures averaged per workplace (group level), (3) self-reported frequency of job problems and time pressures at the individual level, and (4) self-reported appraised intensity of job problems and time pressures at the individual level. The sample consisted of 274 transit operators working on 27 different transit lines and four different vehicle types. Objective stressors (job barriers and time pressure) were each significantly associated with hypertension (casual blood pressure readings and/or currently taking anti-hypertensive medication) after adjustment for age, gender and seniority. Self-reported stressors at the individual level were positively but not significantly associated with hypertension. At the group level, only appraisal of job problems significantly predicted hypertension. In a composite regression model, both observer-based job barriers and self-reported intensity of job problems were independently and significantly associated with hypertension. Associations between self-reported job problems (individual level) and hypertension were dependent on the level of objective stressors. When observer-based stressor level was low, the association between self-reported frequency of stressors and hypertension was high. When the observer-based stressor level was high the association was inverse; this might be

  15. Use of job aids to improve facility-based postnatal counseling and care in rural Benin.

    PubMed

    Jennings, L; Yebadokpo, A; Affo, J; Agbogbe, M

    2015-03-01

    This study examined the effect of a job aids-focused intervention on quality of facility-based postnatal counseling, and whether increased communication improved in-hospital newborn care and maternal knowledge of home practices and danger signs requiring urgent care. Ensuring mothers and newborns receive essential postnatal services, including health counseling, is integral to their survival. Yet, quality of clinic-based postnatal services is often low, and evidence on effective improvement strategies is scarce. Using a pre-post randomized design, data were drawn from direct observations and interviews with 411 mother-newborn pairs. Multi-level regression models with difference-in-differences analyses estimated the intervention's relative effect, adjusting for changes in the comparison arm. The mean percent of recommended messages provided to recently-delivered women significantly improved in the intervention arm as compared to the control (difference-in-differences [∆i - ∆c] +30.9, 95 % confidence interval (CI) 19.3, 42.5), and the proportion of newborns thermally protected within the first hour (∆i - ∆c +33.7, 95 % CI 19.0, 48.4) and delayed for bathing (∆i - ∆c +23.9, 95 % CI 9.4, 38.4) significantly increased. No significant changes were observed in early breastfeeding (∆i - ∆c +6.8, 95 % CI -2.8, 16.4) which was nearly universal. Omitting traditional umbilical cord substances rose slightly, but was insignificant (∆i - ∆c +8.5, 95 % CI -2.8, 19.9). The proportion of mothers with correct knowledge of maternal (∆i - ∆c +27.8, 95 % CI 11.0, 44.6) and newborn (∆i - ∆c +40.3, 95 % CI 22.2, 58.4) danger signs grew substantially, as did awareness of several home-care practices (∆i - ∆c +26.0, 95 % CI 7.7, 44.3). Counseling job aids can improve the quality of postnatal services. However, achieving reduction goals in maternal and neonatal mortality will likely require more comprehensive approaches to link enhanced facility services with

  16. Simulation of single grid-based phase-contrast x-ray imaging (g-PCXI)

    NASA Astrophysics Data System (ADS)

    Lim, H. W.; Lee, H. W.; Cho, H. S.; Je, U. K.; Park, C. K.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Lee, D. Y.; Park, Y. O.; Woo, T. H.; Lee, S. H.; Chung, W. H.; Kim, J. W.; Kim, J. G.

    2017-04-01

    Single grid-based phase-contrast x-ray imaging (g-PCXI), recently proposed by Wen et al. to retrieve absorption, scattering, and phase-gradient images from the raw image of the examined object, seems a practical method for phase-contrast imaging with great simplicity and minimal requirements on the setup alignment. In this work, we developed a useful simulation platform for g-PCXI and performed a simulation to demonstrate its viability. We also established a table-top setup for g-PCXI, which consists of a focused-linear grid (200-lines/in strip density), an x-ray tube (100-μm focal spot size), and a flat-panel detector (48-μm pixel size), and performed a preliminary experiment with some samples to verify the simulation platform. We successfully obtained phase-contrast x-ray images of much enhanced contrast from both the simulation and the experiment, and the simulated contrast was similar to the experimental contrast, which demonstrates the performance of the developed simulation platform. We expect that the simulation platform will be useful for designing an optimal g-PCXI system.

  17. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
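
As a minimal illustration of why the discretization method matters for small kernels, the sketch below (in Python, with made-up parameters) contrasts the cell-center and cell-integration constructions of a 1-D Gaussian kernel; the integrated kernel stays normalized even when σ is small relative to the cell size:

```python
import math

def cell_center_kernel(sigma, radius):
    # Sample the Gaussian density at each cell center (the naive method).
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return [norm * math.exp(-(i * i) / (2.0 * sigma * sigma))
            for i in range(-radius, radius + 1)]

def cell_integrated_kernel(sigma, radius):
    # Integrate the Gaussian density over each unit cell [i-0.5, i+0.5]
    # via the error function (exact for a 1-D Gaussian).
    def cdf(x):
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    return [cdf(i + 0.5) - cdf(i - 0.5) for i in range(-radius, radius + 1)]

sigma = 0.2  # kernel small relative to the cell: where the methods diverge
print(sum(cell_center_kernel(sigma, 3)))      # badly normalized (about 2)
print(sum(cell_integrated_kernel(sigma, 3)))  # ~1: mass is conserved
```

Repeated convolution compounds any normalization error, which is why the paper's small-σ cases need the goal-seeking correction.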

  18. Parallel level-set methods on adaptive tree-based grids

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic

    2016-10-01

    We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.
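
The freedom from time-step restrictions that the abstract mentions can be sketched in one dimension; the linear interpolation and clamped boundaries below are simplifications for illustration, not the paper's adaptive quadtree/octree scheme:

```python
# 1-D semi-Lagrangian step for a level-set function: trace each grid point
# back along the velocity and interpolate there. Linear interpolation and
# clamped boundaries are simplifications; the paper uses adaptive
# quadtree/octree grids with a scalable global interpolation scheme.
def semi_lagrangian_step(phi, u, dt, dx):
    n = len(phi)
    out = []
    for i in range(n):
        x = i * dx - u * dt                 # departure point; no CFL limit
        j = int(x // dx)
        t = (x - j * dx) / dx
        j0 = min(max(j, 0), n - 1)          # clamp at the domain boundary
        j1 = min(max(j + 1, 0), n - 1)
        out.append((1 - t) * phi[j0] + t * phi[j1])
    return out

dx, dt, u = 1.0, 2.5, 1.0                   # CFL number 2.5: still stable
phi = [abs(i - 5) - 2.0 for i in range(11)] # signed distance to [3, 7]
phi1 = semi_lagrangian_step(phi, u, dt, dx)
print(phi1[7])                              # -1.5: interface moved right by 2.5
```

Because the departure point may lie many cells away, the parallel version needs the global interpolation scheme the authors describe.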

  19. Optimized Equivalent Staggered-grid FD Method for Elastic Wave Modeling Based on Plane Wave Solutions

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Huang, Jianping; Li, Zhenchun; Liao, Wenyuan; Qu, Luping; Li, Qingyang; Liu, Peijun

    2016-12-01

    In the finite difference (FD) method, numerical dispersion is the dominant factor influencing the accuracy of seismic modeling. Various optimized FD schemes for scalar wave modeling have been proposed to reduce grid dispersion, while optimized time-space domain FD schemes for elastic wave modeling have not been fully investigated yet. In this paper, an optimized FD scheme with Equivalent Staggered Grid (ESG) for elastic modeling has been developed. We start from the constant P- and S-wave speed elastic wave equations and then deduce analytical plane wave solutions in the wavenumber domain with the eigenvalue decomposition method. Based on the elastic plane wave solutions, three new time-space domain dispersion relations of ESG elastic modeling are obtained, which are represented by three equations corresponding to the P-, S- and converted-wave terms in the elastic equations, respectively. By using these new relations, we can study the dispersion errors of different spatial FD terms independently. The dispersion analysis showed that different spatial FD terms have different errors. It is therefore suggested that different FD coefficients be used to approximate the three spatial derivative terms. In addition, the relative dispersion error in L2-norm is minimized by optimizing FD coefficients using Newton's method. Synthetic examples have demonstrated that these new optimized FD schemes have superior accuracy for elastic wave modeling compared to Taylor-series expansion and optimized space domain FD schemes.
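
For context, the Taylor-series staggered-grid coefficients that the optimized scheme is benchmarked against can be computed exactly; `staggered_coeffs` below is a hypothetical helper (not the authors' code) that solves the usual odd-moment conditions over rationals:

```python
from fractions import Fraction

# Exact Taylor-series coefficients for a staggered-grid first derivative,
# the baseline against which time-space optimized coefficients are compared.
# Solves sum_m c_m (2m-1)^(2k-1) = delta_{k,1}, k = 1..M, over rationals.
def staggered_coeffs(M):
    A = [[Fraction((2 * m - 1) ** (2 * k - 1)) for m in range(1, M + 1)]
         for k in range(1, M + 1)]
    b = [Fraction(int(k == 1)) for k in range(1, M + 1)]
    for col in range(M):                      # Gaussian elimination
        piv = next(r for r in range(col, M) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(M):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[r] / A[r][r] for r in range(M)]

print(staggered_coeffs(2))   # fourth order: [9/8, -1/24]
```

The optimized schemes of the paper replace these Taylor coefficients with values tuned to minimize dispersion over a band of wavenumbers rather than at the origin only.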

  20. A Generalized Grid-Based Fast Multipole Method for Integrating Helmholtz Kernels.

    PubMed

    Parkkinen, Pauli; Losilla, Sergio A; Solala, Eelis; Toivanen, Elias A; Xu, Wen-Hua; Sundholm, Dage

    2017-02-14

    A grid-based fast multipole method (GB-FMM) for optimizing three-dimensional (3D) numerical molecular orbitals in the bubbles and cube double basis has been developed and implemented. The present GB-FMM method is a generalization of our recently published GB-FMM approach for numerically calculating electrostatic potentials and two-electron interaction energies. The orbital optimization is performed by integrating the Helmholtz kernel in the double basis. The steep part of the functions in the vicinity of the nuclei is represented by one-center bubbles functions, whereas the remaining cube part is expanded on an equidistant 3D grid. The integration of the bubbles part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kind, analogously to the numerical inward and outward integration approach for calculating two-electron interaction potentials in atomic structure calculations. The expressions and algorithms for massively parallel calculations on general purpose graphics processing units (GPGPU) are described. The accuracy and correctness of the implementation have been checked by performing Hartree-Fock self-consistent-field (HF-SCF) calculations on H2, H2O, and CO. Our calculations show that an accuracy of 10⁻⁴ to 10⁻⁷ Eh can be reached in HF-SCF calculations on general molecules.
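
The one-center expansion idea can be illustrated numerically; the sketch below uses closed forms for l ≤ 2 only and an assumed Yukawa-type convention exp(-κR)/R for the Helmholtz kernel, with illustrative parameters, checking the truncated expansion against the kernel evaluated directly:

```python
import math

# Closed forms of the modified spherical Bessel functions for l <= 2 and a
# numerical check of the one-center expansion of a Yukawa-type Helmholtz
# kernel exp(-k*R)/R (an assumed convention; the production code sums far
# more terms and works in the bubbles-and-cube basis).
def i_l(l, x):
    s, c = math.sinh(x), math.cosh(x)
    return [s / x, c / x - s / x**2,
            ((x * x + 3) * s - 3 * x * c) / x**3][l]

def k_l(l, x):
    e = math.exp(-x)
    return [e / x, e * (1 / x + 1 / x**2),
            e * (1 / x + 3 / x**2 + 3 / x**3)][l]

def p_l(l, t):  # Legendre polynomials
    return [1.0, t, 0.5 * (3 * t * t - 1)][l]

def kernel_expansion(kappa, r_small, r_big, cos_gamma, lmax=2):
    return kappa * sum((2 * l + 1) * i_l(l, kappa * r_small)
                       * k_l(l, kappa * r_big) * p_l(l, cos_gamma)
                       for l in range(lmax + 1))

kappa, a, b = 1.0, 0.3, 2.0                      # well-separated radii
direct = math.exp(-kappa * (b - a)) / (b - a)    # collinear points, gamma = 0
print(direct, kernel_expansion(kappa, a, b, 1.0))  # agree to about 1%
```

The rapid convergence for well-separated radii (r< much smaller than r>) is what makes the inward/outward integration strategy efficient.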

  1. Branch-based centralized data collection for smart grids using wireless sensor networks.

    PubMed

    Kim, Kwangsoo; Jin, Seong-il

    2015-05-21

    A smart grid is one of the most important applications in smart cities. In a smart grid, a smart meter acts as a sensor node in a sensor network, and a central device collects power usage from every smart meter. This paper focuses on a centralized data collection problem of how to collect every power usage from every meter without collisions in an environment in which the time synchronization among smart meters is not guaranteed. To solve the problem, we divide a tree that a sensor network constructs into several branches. A conflict-free query schedule is generated based on the branches. Each power usage is collected according to the schedule. The proposed method has important features: shortening query processing time and avoiding collisions between a query and query responses. We evaluate this method using the ns-2 simulator. The experimental results show that this method can achieve both collision avoidance and fast query processing at the same time. The success rate of data collection at a sink node executing this method is 100%. Its running time is about 35 percent faster than that of the round-robin method, and its memory size is reduced to about 10% of that of the depth-first search method.
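
A toy version of the branch-splitting idea, with an invented tree and the simplifying assumption that branches are queried strictly one after another:

```python
# Invented topology: the sink's child subtrees are the "branches", and the
# sink queries one branch at a time so responses from different subtrees
# never collide. This sketches the idea only; real smart-meter scheduling
# must also respect radio interference within a branch.
def branches(tree, root):
    out = []
    for child in tree.get(root, []):
        branch, stack = [], [child]
        while stack:
            node = stack.pop()
            branch.append(node)
            stack.extend(tree.get(node, []))
        out.append(branch)
    return out

def schedule(tree, root):
    plan = []
    for branch in branches(tree, root):   # serialize branch by branch
        plan.extend(branch)
    return plan

tree = {"sink": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
print(schedule(tree, "sink"))   # every meter appears exactly once
```

Grouping by branch is what removes the need for time synchronization among meters: only one subtree answers at a time.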

  2. Grid-cell-based crop water accounting for the famine early warning system

    USGS Publications Warehouse

    Verdin, J.; Klaver, R.

    2002-01-01

    Rainfall monitoring is a regular activity of food security analysts for sub-Saharan Africa due to the potentially disastrous impact of drought. Crop water accounting schemes are used to track rainfall timing and amounts relative to phenological requirements, to infer water limitation impacts on yield. Unfortunately, many rain gauge reports are available only after significant delays, and the gauge locations leave large gaps in coverage. As an alternative, a grid-cell-based formulation for the water requirement satisfaction index (WRSI) was tested for maize in Southern Africa. Grids of input variables were obtained from remote sensing estimates of rainfall, meteorological models, and digital soil maps. The spatial WRSI was computed for the 1996-97 and 1997-98 growing seasons. Maize yields were estimated by regression and compared with a limited number of reports from the field for the 1996-97 season in Zimbabwe. Agreement at a useful level (r = 0.80) was observed. This is comparable to results from traditional analysis with station data. The findings demonstrate the complementary role that remote sensing, modelling, and geospatial analysis can play in an era when field data collection in sub-Saharan Africa is suffering an unfortunate decline. Published in 2002 by John Wiley & Sons, Ltd.
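
A highly simplified per-cell WRSI sketch, assuming dekadal time steps and illustrative (not FEWS NET) rainfall, PET, and crop-coefficient values:

```python
# Hypothetical inputs: dekadal rainfall and reference evapotranspiration
# (PET) in mm, and maize crop coefficients (Kc). WRSI is sketched as the
# seasonal ratio of water supplied to crop water demand; the operational
# index also tracks soil moisture storage, which is omitted here.
def wrsi(rain, pet, kc):
    demand = [p * k for p, k in zip(pet, kc)]
    supply = [min(r, d) for r, d in zip(rain, demand)]  # cap at demand
    total = sum(demand)
    return 100.0 * sum(supply) / total if total else 100.0

rain = [30, 10, 0, 45, 20]        # mm per dekad (illustrative)
pet = [40, 40, 45, 45, 40]        # mm per dekad (illustrative)
kc = [0.3, 0.7, 1.0, 1.0, 0.6]    # crop coefficients (illustrative)
print(round(wrsi(rain, pet, kc), 1))   # a water-limited season, WRSI ~ 56
```

Running this per grid cell over rainfall and PET grids, rather than at gauge locations, is the paper's core move.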

  3. Adaptive Hierarchical Voltage Control of a DFIG-Based Wind Power Plant for a Grid Fault

    SciTech Connect

    Kim, Jinho; Muljadi, Eduard; Park, Jung-Wook; Kang, Yong Cheol

    2016-11-01

    This paper proposes an adaptive hierarchical voltage control scheme for a doubly-fed induction generator (DFIG)-based wind power plant (WPP) that can secure a larger reserve of reactive power (Q) in the WPP against a grid fault. To achieve this, each DFIG controller employs an adaptive reactive power to voltage (Q-V) characteristic. The proposed adaptive Q-V characteristic is modified over time depending on the available Q capability of a DFIG and on the distance from the DFIG to the point of common coupling (PCC). The proposed characteristic secures more Q reserve in the WPP than a fixed one. Furthermore, it allows DFIGs to promptly inject Q up to the limit, thereby improving the PCC voltage support. To avert an overvoltage after the fault clearance, washout filters are implemented in the WPP and DFIG controllers; they prevent a surplus Q injection after the fault clearance by eliminating the values accumulated in the proportional-integral controllers of both controllers during the fault. Test results demonstrate that the scheme can improve the voltage support capability during the fault and suppress the transient overvoltage after the fault clearance under various system and fault conditions; it therefore helps ensure grid resilience by supporting voltage stability.
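
The adaptive Q-V idea can be caricatured with a droop whose gain scales with the available Q headroom; the linear form, the names, and all numbers below are illustrative assumptions, not the paper's control law:

```python
# Hypothetical linear Q-V droop whose gain adapts to the DFIG's remaining
# reactive-power headroom, echoing the paper's idea that units with more Q
# reserve should contribute more voltage support. The form, names and
# numbers are illustrative assumptions, not the published control law.
def q_command(v_meas, v_ref, q_available, q_rated, base_gain=2.0):
    gain = base_gain * (q_available / q_rated)   # adapt droop to headroom
    q = gain * (v_ref - v_meas) * q_rated
    return max(-q_available, min(q_available, q))  # respect the Q limit

# Undervoltage of 0.05 pu with 0.3 pu of Q headroom:
print(q_command(v_meas=0.95, v_ref=1.0, q_available=0.3, q_rated=1.0))
```

During a deep fault the command saturates at the headroom, which is the "inject up to the Q limit" behavior the abstract describes.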

  4. Lambda Station: On-demand flow based routing for data intensive Grid applications over multitopology networks

    SciTech Connect

    Bobyshev, A.; Crawford, M.; DeMar, P.; Grigaliunas, V.; Grigoriev, M.; Moibenko, A.; Petravick, D.; Rechenmacher, R.; Newman, H.; Bunn, J.; Van Lingen, F.; Nae, D.; Ravot, S.; Steenberg, C.; Su, X.; Thomas, M.; Xia, Y.; /Caltech

    2006-08-01

    Lambda Station is an ongoing project of Fermi National Accelerator Laboratory and the California Institute of Technology. The goal of this project is to design, develop and deploy network services for path selection, admission control and flow-based forwarding of traffic among data-intensive Grid applications such as those used in High Energy Physics and other communities. Lambda Station deals with the last-mile problem in local area networks, connecting production clusters through a rich array of wide area networks. Selective forwarding of traffic is controlled dynamically at the demand of applications. This paper introduces the motivation of this project, its design principles and current status. Integration of the Lambda Station client API with essential Grid middleware such as the dCache/SRM Storage Resource Manager is also described. Finally, the results of applying Lambda Station services to development and production clusters at Fermilab and Caltech over advanced networks such as DOE's UltraScience Net and NSF's UltraLight are covered.

  5. Optimized equivalent staggered-grid FD method for elastic wave modelling based on plane wave solutions

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Huang, Jianping; Li, Zhenchun; Liao, Wenyuan; Qu, Luping; Li, Qingyang; Liu, Peijun

    2017-02-01

    In the finite-difference (FD) method, numerical dispersion is the dominant factor influencing the accuracy of seismic modelling. Various optimized FD schemes for scalar wave modelling have been proposed to reduce grid dispersion, while optimized time-space domain FD schemes for elastic wave modelling have not been fully investigated yet. In this paper, an optimized FD scheme with Equivalent Staggered Grid (ESG) for elastic modelling has been developed. We start from the constant P- and S-wave speed elastic wave equations and then deduce analytical plane wave solutions in the wavenumber domain with the eigenvalue decomposition method. Based on the elastic plane wave solutions, three new time-space domain dispersion relations of ESG elastic modelling are obtained, which are represented by three equations corresponding to the P-, S- and converted-wave terms in the elastic equations, respectively. By using these new relations, we can study the dispersion errors of different spatial FD terms independently. The dispersion analysis showed that different spatial FD terms have different errors. It is therefore suggested that different FD coefficients be used to approximate the three spatial derivative terms. In addition, the relative dispersion error in L2-norm is minimized by optimizing FD coefficients using Newton's method. Synthetic examples have demonstrated that these new optimized FD schemes have superior accuracy for elastic wave modelling compared to Taylor-series expansion and optimized space domain FD schemes.

  6. The visibility-based tapered gridded estimator (TGE) for the redshifted 21-cm power spectrum

    NASA Astrophysics Data System (ADS)

    Choudhuri, Samir; Bharadwaj, Somnath; Chatterjee, Suman; Ali, Sk. Saiyad; Roy, Nirupam; Ghosh, Abhik

    2016-12-01

    We present an improved visibility-based tapered gridded estimator (TGE) for the power spectrum of the diffuse sky signal. The visibilities are gridded to reduce the total computation time for the calculation, and tapered through a convolution to suppress the contribution from the outer regions of the telescope's field of view. The TGE also internally estimates the noise bias, and subtracts this out to give an unbiased estimate of the power spectrum. An earlier version of the 2D TGE for the angular power spectrum Cℓ is improved and then extended to obtain the 3D TGE for the power spectrum P(k) of the 21-cm brightness temperature fluctuations. Analytic formulas are also presented for predicting the variance of the binned power spectrum. The estimator and its variance predictions are validated using simulations of 150-MHz Giant Metrewave Radio Telescope (GMRT) observations. We find that the estimator accurately recovers the input model for the 1D spherical power spectrum P(k) and the 2D cylindrical power spectrum P(k⊥, k∥), and that the predicted variance is in reasonably good agreement with the simulations.
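
The noise-bias subtraction at the heart of the estimator can be shown in miniature: squaring a sum of visibilities and removing the self-terms cancels uncorrelated noise on average. All quantities below are simulated toy values, not the TGE's actual gridding or tapering:

```python
import random

# Toy demonstration of the estimator's noise-bias subtraction: squaring a
# sum of N visibilities and removing the N self-terms leaves only cross
# products, so uncorrelated per-visibility noise averages out of the power
# estimate. All values are simulated; this is not the TGE gridding itself.
def unbiased_power(vis):
    n = len(vis)
    cross = abs(sum(vis)) ** 2 - sum(abs(v) ** 2 for v in vis)
    return cross / (n * (n - 1))

random.seed(1)
signal = 2.0 + 1.0j   # common sky contribution (arbitrary units)
vis = [signal + complex(random.gauss(0, 1), random.gauss(0, 1))
       for _ in range(2000)]
print(unbiased_power(vis))   # close to |signal|**2 = 5, despite unit noise
```

Keeping only cross-correlations is exactly why the estimator needs no separate model of the noise power.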

  7. Web-based visualization of gridded datasets using OceanBrowser

    NASA Astrophysics Data System (ADS)

    Barth, Alexander; Watelet, Sylvain; Troupin, Charles; Beckers, Jean-Marie

    2015-04-01

    OceanBrowser is a web-based visualization tool for gridded oceanographic data sets. Those data sets are typically four-dimensional (longitude, latitude, depth and time). OceanBrowser allows one to visualize horizontal sections at a given depth and time to examine the horizontal distribution of a given variable. It also offers the possibility to display the results on an arbitrary vertical section. To study the evolution of the variable in time, the horizontal and vertical sections can also be animated. Vertical sections can be generated using a fixed distance from the coast or a fixed ocean depth. The user can customize the plot by changing the color-map, the range of the color-bar, the type of the plot (linearly interpolated color, simple contours, filled contours) and download the current view as a simple image or as a Keyhole Markup Language (KML) file for visualization in applications such as Google Earth. The data products can also be accessed as NetCDF files and through OPeNDAP. Third-party layers from a web map service can also be integrated. OceanBrowser is used in the frame of the SeaDataNet project (http://gher-diva.phys.ulg.ac.be/web-vis/) and EMODNET Chemistry (http://oceanbrowser.net/emodnet/) to distribute gridded data sets interpolated from in situ observations using DIVA (Data-Interpolating Variational Analysis).
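
The basic slicing operations behind such a viewer can be sketched on a tiny made-up 4-D (time, depth, lat, lon) field:

```python
# A made-up 4-D field ordered (time, depth, lat, lon); the two helpers
# extract the horizontal and vertical sections an OceanBrowser-style viewer
# displays. Real data would come from NetCDF/OPeNDAP instead of literals.
def horizontal_section(field, t, k):
    return field[t][k]                      # 2-D lat x lon slab at one depth

def vertical_section(field, t, path):
    # path: (lat_index, lon_index) pairs along an arbitrary track;
    # result is a depth x track 2-D section.
    return [[field[t][k][i][j] for (i, j) in path]
            for k in range(len(field[t]))]

# 2 times x 2 depths x 2 lats x 3 lons of an arbitrary variable
field = [[[[10, 11, 12], [13, 14, 15]],
          [[20, 21, 22], [23, 24, 25]]],
         [[[30, 31, 32], [33, 34, 35]],
          [[40, 41, 42], [43, 44, 45]]]]
print(horizontal_section(field, 0, 1))      # [[20, 21, 22], [23, 24, 25]]
print(vertical_section(field, 1, [(0, 0), (1, 2)]))   # [[30, 35], [40, 45]]
```

Animating either section over the time index gives the temporal views the tool offers.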

  8. Branch-Based Centralized Data Collection for Smart Grids Using Wireless Sensor Networks

    PubMed Central

    Kim, Kwangsoo; Jin, Seong-il

    2015-01-01

    A smart grid is one of the most important applications in smart cities. In a smart grid, a smart meter acts as a sensor node in a sensor network, and a central device collects power usage from every smart meter. This paper focuses on a centralized data collection problem of how to collect every power usage from every meter without collisions in an environment in which the time synchronization among smart meters is not guaranteed. To solve the problem, we divide a tree that a sensor network constructs into several branches. A conflict-free query schedule is generated based on the branches. Each power usage is collected according to the schedule. The proposed method has important features: shortening query processing time and avoiding collisions between a query and query responses. We evaluate this method using the ns-2 simulator. The experimental results show that this method can achieve both collision avoidance and fast query processing at the same time. The success rate of data collection at a sink node executing this method is 100%. Its running time is about 35 percent faster than that of the round-robin method, and its memory size is reduced to about 10% of that of the depth-first search method. PMID:26007734

  9. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    NASA Technical Reports Server (NTRS)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  10. Calcium-based multi-element chemistry for grid-scale electrochemical energy storage

    NASA Astrophysics Data System (ADS)

    Ouchi, Takanari; Kim, Hojong; Spatocco, Brian L.; Sadoway, Donald R.

    2016-03-01

    Calcium is an attractive material for the negative electrode in a rechargeable battery due to its low electronegativity (high cell voltage), double valence, earth abundance and low cost; however, the use of calcium has historically eluded researchers due to its high melting temperature, high reactivity and unfavorably high solubility in molten salts. Here we demonstrate a long-cycle-life calcium-metal-based rechargeable battery for grid-scale energy storage. By deploying a multi-cation binary electrolyte in concert with an alloyed negative electrode, calcium solubility in the electrolyte is suppressed and operating temperature is reduced. These chemical mitigation strategies also engage another element in energy storage reactions resulting in a multi-element battery. These initial results demonstrate how the synergistic effects of deploying multiple chemical mitigation strategies coupled with the relaxation of the requirement of a single itinerant ion can unlock calcium-based chemistries and produce a battery with enhanced performance.

  11. NaradaBrokering as Middleware Fabric for Grid-based Remote Visualization Services

    NASA Astrophysics Data System (ADS)

    Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.

    2003-12-01

    Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. The simplicity of these approaches is offset by problems such as single points of failure, scaling and availability. Furthermore, as the complexity, scale and scope of the services hosted on this paradigm increase, this approach becomes increasingly unsuitable. We propose a scheme built on top of a distributed brokering infrastructure, NaradaBrokering, which comprises a distributed network of broker nodes. These broker nodes are organized in a cluster-based architecture that can scale to very large sizes. The broker network is resilient to broker failures and efficiently routes interactions to entities that expressed an interest in them. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, those that mediate access to specialized datasets and finally those that manage the execution of specified tasks. There could be multiple instances of each of these services, and the system ensures that load for a given service is distributed efficiently over these service instances. Among the features provided in our approach are efficient discovery of services and asynchronous interactions between services and service requestors (which could themselves be other services). Entities need not be online during the execution of the service request. The system also ensures that entities can be notified about task executions, partial results and failures that might have taken place during service execution. The system also facilitates specification of task overrides, distribution of execution results to alternate devices (which were not used to originally request service execution) and to multiple users.
These RVS services could of course be either OGSA (Open Grid Services Architecture) based Grid services or traditional

  12. Planning for distributed workflows: constraint-based coscheduling of computational jobs and data placement in distributed environments

    NASA Astrophysics Data System (ADS)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2015-05-01

    When running data-intensive applications on distributed computational resources, long I/O overheads may be observed as access to remotely stored data is performed. Latencies and bandwidth can become the major limiting factor for the overall computation performance and can reduce the CPU/WallTime ratio due to excessive I/O wait. Reusing the knowledge of our previous research, we propose a constraint-programming-based planner that schedules computational jobs and data placements (transfers) in a distributed environment in order to optimize resource utilization and reduce the overall processing completion time. The optimization is achieved by ensuring that none of the resources (network links, data storages and CPUs) is oversaturated at any moment of time and either (a) that the data is pre-placed at the site where the job runs or (b) that the jobs are scheduled where the data is already present. Such an approach eliminates the idle CPU cycles occurring while a job waits for I/O from a remote site and would have wide application in the community. Our planner was evaluated and simulated based on data extracted from log files of the batch and data management systems of the STAR experiment. The results of the evaluation and an estimation of performance improvements are discussed in this paper.
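
A greatly reduced greedy caricature of the two placement options the planner weighs (run where the data is, or pre-place the data); site names, capacities, and costs are invented, and the real planner solves this jointly with constraint programming rather than greedily:

```python
# Invented sites, datasets and costs; a greedy caricature of the planner's
# choice between running a job where its data resides and pre-placing the
# data at a free site (the real planner solves this with constraint
# programming over link, storage and CPU capacities).
def plan(jobs, data_at, cpu_slots, transfer_cost=5):
    schedule = []
    for job, dataset in jobs:
        home = data_at[dataset]
        if cpu_slots.get(home, 0) > 0:
            cpu_slots[home] -= 1                 # run where the data is
            schedule.append((job, home, 0))
        else:
            site = max(cpu_slots, key=cpu_slots.get)
            cpu_slots[site] -= 1                 # pre-place data, pay transfer
            schedule.append((job, site, transfer_cost))
    return schedule

print(plan([("j1", "d1"), ("j2", "d1"), ("j3", "d2")],
           {"d1": "siteA", "d2": "siteB"},
           {"siteA": 1, "siteB": 2}))
```

Either branch keeps CPUs busy instead of idling on remote I/O, which is the effect the paper quantifies.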

  13. Comparison between staggered grid finite-volume and edge-based finite-element modelling of geophysical electromagnetic data on unstructured grids

    NASA Astrophysics Data System (ADS)

    Jahandari, Hormoz; Ansari, SeyedMasoud; Farquharson, Colin G.

    2017-03-01

    This study compares two finite-element (FE) and three finite-volume (FV) schemes which use unstructured tetrahedral grids for the modelling of electromagnetic (EM) data. All these schemes belong to a group of differential methods where the electric field is defined along the edges of the elements. The FE and FV schemes are based on both the EM-field and the potential formulations of Maxwell's equations. The EM-field FE scheme uses edge-based (vector) basis functions while the potential FE scheme uses vector and scalar basis functions. All the FV schemes use staggered tetrahedral-Voronoï grids. Three examples are used for comparisons in terms of accuracy and in terms of the computation resources required by generic iterative and direct solvers for solving the problems. Two of these examples represent survey scenarios with electric and magnetic sources and the results are compared with those from the literature while the third example is a comparison against analytical solutions for an electric dipole source. Exactly the same mesh is used for all examples to allow for direct comparison of the various schemes. The results show that while the FE and FV schemes are comparable in terms of accuracy and computation resources, the FE schemes are slightly more accurate but also more expensive than the FV schemes.

  14. The DACUM Job Analysis Process.

    ERIC Educational Resources Information Center

    Dofasco, Inc., Hamilton (Ontario).

    This document explains the DACUM (Developing A Curriculum) process for analyzing task-based jobs to: identify where standard operating procedures are required; identify duplicated low value added tasks; develop performance standards; create job descriptions; and identify the elements that must be included in job-specific training programs. The…

  15. Computer-Based Video Instruction to Teach Young Adults with Moderate Intellectual Disabilities to Perform Multiple Step, Job Tasks in a Generalized Setting

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Ortega-Hurndon, Fanny

    2007-01-01

    This study evaluated the effectiveness of computer-based video instruction (CBVI) to teach three young adults with moderate intellectual disabilities to perform complex, multiple step, job tasks in a generalized setting. A multiple probe design across three job tasks and replicated across three students was used to evaluate the effectiveness of…

  16. Informatic infrastructure for Climatological and Oceanographic data based on THREDDS technology in a Grid environment

    NASA Astrophysics Data System (ADS)

    Tronconi, C.; Forneris, V.; Santoleri, R.

    2009-04-01

    CNR-ISAC-GOS is responsible for the Mediterranean Sea satellite operational system in the framework of the MOON Partnership. This Observing System acquires satellite data and produces Near Real Time, Delayed Time and Re-analysis Ocean Colour and Sea Surface Temperature products covering the Mediterranean and Black Seas and regional basins. In the framework of several projects (MERSEA, PRIMI, Adricosm Star, SeaDataNet, MyOcean, ECOOP), GOS is producing Climatological/Satellite datasets based on optimal interpolation and a specific regional algorithm for chlorophyll, updated in Near Real Time and in Delayed mode. GOS has built: • an informatic infrastructure for data repository and delivery based on THREDDS technology; the datasets are generated in NETCDF format, compliant with both the CF convention and the international satellite-oceanographic specification as prescribed by GHRSST (for SST), and all data produced are made available to users through a THREDDS server catalog; • a LAS, installed in order to exploit the potential of NETCDF data and the OPENDAP URL, which provides flexible access to geo-referenced scientific data; • a Grid environment based on Globus Technologies (GT4) connecting more than one institute; in particular, exploiting the CNR and ESA clusters makes it possible to reprocess 12 years of chlorophyll data in less than one month (estimated processing time on a single-core PC: 9 months). In the poster we will give an overview of: • the features of the THREDDS catalogs, pointing out the powerful characteristics of this new middleware that has replaced the "old" OPENDAP server; • the importance of adopting a common format (such as NETCDF) for data exchange; • the tools (e.g. LAS) connected with THREDDS and the NETCDF format; • the Grid infrastructure at ISAC. We will also present specific basin-scale High Resolution products and Ultra High Resolution regional/coastal products available in these catalogs.

  17. An Analysis of Job Attitudes of Junior Enlisted Personnel Members Assigned to the Consolidated Base Personnel Office (CBPO)

    DTIC Science & Technology

    1986-04-01

    the CBPO group had a mean score lower than the data base target group contained a variable relating to additional duty interference with primary job...possess the DAFSC 732X0 or did not work in the CBPO. Criteria The criteria used for selecting the target group within the Personnel career area for...weaknesses within the target group : and 4. To make recommendations for changes based upon the results and analyses. The present report addresses each of

  18. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
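    The cubic refinement step can be sketched in a few lines. The following is an illustrative sketch only, not the OVERFLOW implementation: it inserts a parametric-cubic (Catmull-Rom) midpoint between each pair of nodes along one curvilinear grid line, with a clamped one-sided stencil at the ends; the actual code additionally applies one-sided biasing based on curvature and stretching ratio.

```python
# Illustrative sketch (not the OVERFLOW code): refine a 1-D curvilinear
# grid line by inserting midpoints with a parametric cubic (Catmull-Rom)
# interpolant, one simple form of cubic grid refinement.

def catmull_rom_mid(p0, p1, p2, p3):
    """Point at t = 0.5 on the Catmull-Rom segment between p1 and p2."""
    # Catmull-Rom evaluated at t = 0.5 reduces to this weighted average.
    return tuple(
        (-a + 9.0 * b + 9.0 * c - d) / 16.0
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def refine_line(points):
    """Insert one cubic-interpolated midpoint between each pair of nodes."""
    n = len(points)
    out = []
    for i in range(n - 1):
        out.append(points[i])
        p0 = points[max(i - 1, 0)]        # clamp ends (one-sided stencil)
        p3 = points[min(i + 2, n - 1)]
        out.append(catmull_rom_mid(p0, points[i], points[i + 1], p3))
    out.append(points[-1])
    return out

line = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.4), (3.0, 1.0)]
fine = refine_line(line)
print(len(fine))  # 2*4 - 1 = 7 nodes after one refinement pass
```

    For collinear input points the interpolated midpoints fall exactly on the line, so the refinement is exact for linear data.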

  19. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  20. Trust Management in an Agent-Based Grid Resource Brokering System-Preliminary Considerations

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Paprzycki, M.; Lirkov, I.

    2007-10-01

    It has been suggested that utilization of autonomous software agents in computational Grids may deliver the needed functionality to speed up Grid adoption. In our recent work we have outlined an approach in which agent teams facilitate Grid resource brokering and management. One of the interesting questions is how to manage trust in such a system. The aim of this paper is to outline our proposed solution.

  1. Ab Initio potential grid based docking: From High Performance Computing to In Silico Screening

    NASA Astrophysics Data System (ADS)

    de Jonge, Marc R.; Vinkers, H. Maarten; van Lenthe, Joop H.; Daeyaert, Frits; Bush, Ian J.; van Dam, Huub J. J.; Sherwood, Paul; Guest, Martyn F.

    2007-09-01

    We present a new and completely parallel method for protein-ligand docking. The potential of the docking target structure is obtained directly from the electron density derived through an ab initio computation. A large subregion of the crystal structure of Isocitrate Lyase was selected as the docking target. To allow the full ab initio treatment of this region, special care was taken to assign optimal basis functions. The electrostatic potential is tested by docking a small charged molecule (succinate) into the binding site. The ab initio grid yields a superior result by producing the best binding orientation and position, and by recognizing it as the best. In contrast, the same docking procedure using a classical point-charge based potential produces a number of additional incorrect binding poses and does not recognize the correct pose as the best solution.

  2. A grid-based implementation of XDS-I as a part of a metropolitan EHR in Shanghai

    NASA Astrophysics Data System (ADS)

    Zhang, Jianguo; Zhang, Chenghao; Sun, Jianyong, Sr.; Yang, Yuanyuan; Jin, Jin; Yu, Fenghai; He, Zhenyu; Zheng, Xichuang; Qin, Huanrong; Feng, Jie; Zhang, Guozheng

    2007-03-01

    A number of hospitals in Shanghai are piloting the development of an EHR solution based on a grid concept with a service-oriented architecture (SOA). The first phase of the project targets the diagnostic imaging domain and allows seamless sharing of images and reports across multiple hospitals. The EHR solution is fully aligned with the IHE XDS-I integration profile and consists of the components of the XDS-I Registry, Repository, Source and Consumer actors. Using SOA, the solution employs ebXML over secured HTTP for all transactions within the grid, while communication with the PACS and RIS uses DICOM and HL7 v3.x. The solution was installed in three hospitals and one data center in Shanghai and tested for performance of data publication, user query and image retrieval. The results are extremely positive and demonstrate that an EHR solution based on SOA with a grid concept can scale effectively to serve a regional implementation.

  3. Long Range Debye-Hückel Correction for Computation of Grid-based Electrostatic Forces Between Biomacromolecules

    SciTech Connect

    Mereghetti, Paolo; Martinez, M.; Wade, Rebecca C.

    2014-06-17

    Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme.
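    The correction replaces grid truncation with an analytic screened-Coulomb term beyond the grid edge. The sketch below is a minimal illustration of that Debye-Hückel form, not the SDA implementation; the function names and the 1:1-electrolyte assumption are ours.

```python
# Minimal sketch (not the SDA code) of the idea behind a Debye-Hueckel
# correction: outside the precomputed potential grid, the molecule's
# electrostatic potential is approximated by a screened Coulomb term
# instead of being truncated to zero.
import math

E_CHARGE = 1.602176634e-19   # elementary charge [C]
EPS0     = 8.8541878128e-12  # vacuum permittivity [F/m]
KB       = 1.380649e-23      # Boltzmann constant [J/K]
NA       = 6.02214076e23     # Avogadro constant [1/mol]

def inverse_debye_length(ionic_strength_molar, eps_r=78.5, temp=298.0):
    """kappa [1/m] for a 1:1 electrolyte of given ionic strength [mol/L]."""
    i_si = ionic_strength_molar * 1000.0  # convert to mol/m^3
    kappa_sq = 2.0 * E_CHARGE**2 * NA * i_si / (EPS0 * eps_r * KB * temp)
    return math.sqrt(kappa_sq)

def dh_potential(q, r, kappa, eps_r=78.5):
    """Screened Coulomb potential [V] of net charge q [C] at distance r [m]."""
    return q * math.exp(-kappa * r) / (4.0 * math.pi * EPS0 * eps_r * r)

kappa = inverse_debye_length(0.15)   # ~150 mM, near-physiological
print(1.0 / kappa)                   # Debye length, roughly 0.8 nm
```

    At 150 mM ionic strength the screening length is under a nanometre, which is why a long-range analytic tail matters for grid-based potentials of large solutes.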

  4. The impact of job crafting on job demands, job resources, and well-being.

    PubMed

    Tims, Maria; Bakker, Arnold B; Derks, Daantje

    2013-04-01

    This longitudinal study examined whether employees can impact their own well-being by crafting their job demands and resources. Based on the job demands-resources model, we hypothesized that employee job crafting would have an impact on work engagement, job satisfaction, and burnout through changes in job demands and job resources. Data was collected in a chemical plant at three time points with one month in between the measurement waves (N = 288). The results of structural equation modeling showed that employees who crafted their job resources in the first month of the study showed an increase in their structural and social resources over the course of the study (2 months). This increase in job resources was positively related to employee well-being (increased engagement and job satisfaction, and decreased burnout). Crafting job demands did not result in a change in job demands, but results revealed direct effects of crafting challenging demands on increases in well-being. We conclude that employee job crafting has a positive impact on well-being and that employees therefore should be offered opportunities to craft their own jobs.

  5. An objective decision model of power grid environmental protection based on environmental influence index and energy-saving and emission-reducing index

    NASA Astrophysics Data System (ADS)

    Feng, Jun-shu; Jin, Yan-ming; Hao, Wei-hua

    2017-01-01

    Based on modelling the environmental influence index of power transmission and transformation projects and the energy-saving and emission-reducing index of the source-grid-load of the power system, this paper establishes an objective decision model of power grid environmental protection, with the constraints that power grid environmental protection objectives be legal and economical, and considering both positive and negative influences of the grid on the environment over the whole grid life cycle. This model can be used to guide the programming work of power grid environmental protection. A numerical simulation of Jiangsu province's power grid environmental protection objective decision model has been run, and the results show that the maximum goal of energy-saving and emission-reducing benefits is reached first as investment increases, followed by the minimum goal of environmental influence.

  6. Active Job Monitoring in Pilots

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-12-01

    Recent developments in high energy physics (HEP) including multi-core jobs and multi-core pilots require data centres to gain a deep understanding of the system to monitor, design, and upgrade computing clusters. Networking is a critical component. In particular, the increased usage of data federations, for example in diskless computing centres or as a fallback solution, relies on WAN connectivity and availability. The specific demands of different experiments and communities, but also the need for identification of misbehaving batch jobs, require active monitoring. Existing monitoring tools are not capable of measuring fine-grained information at batch job level. This complicates network-aware scheduling and optimisations. In addition, pilots add another layer of abstraction. They behave like batch systems themselves by managing and executing payloads of jobs internally. The number of real jobs being executed is unknown, as the original batch system has no access to internal information about the scheduling process inside the pilots. Therefore, the comparability of jobs and pilots for predicting run-time behaviour or network performance cannot be ensured. Hence, identifying the actual payload is important. At the GridKa Tier 1 centre a specific tool is in use that allows the monitoring of network traffic information at batch job level. This contribution presents the current monitoring approach and discusses recent efforts and the importance of identifying pilots and their substructures inside the batch system. It also shows how to determine monitoring data of specific jobs from identified pilots. Finally, the approach is evaluated.

  7. Using Grid Benchmarks for Dynamic Scheduling of Grid Applications

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert

    2003-01-01

    Navigation, or dynamic scheduling, of applications on computational grids can be improved through the use of an application-specific characterization of grid resources. Current grid information systems provide a description of the resources, but do not contain any application-specific information. We define a GridScape as the dynamic state of the grid resources. We measure the dynamic performance of these resources using grid benchmarks. Then we use the GridScape for automatic assignment of the tasks of a grid application to grid resources. The scalability of the system is achieved by limiting the navigation overhead to a few percent of the application resource requirements. Our task submission and assignment protocol guarantees that the navigation system does not cause grid congestion. On a synthetic data mining application we demonstrate that GridScape-based task assignment reduces the application turnaround time.
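    Benchmark-driven assignment of this kind can be illustrated with a toy scheduler: each resource carries a benchmark-derived rate, and tasks go to the resource with the earliest estimated completion time. This is a hypothetical sketch, not the paper's actual protocol; the names and the greedy rule are illustrative.

```python
# Hypothetical sketch of benchmark-driven task assignment: each resource
# carries a measured rate (work units/s); tasks are placed greedily on the
# resource with the earliest estimated completion time.
import heapq

def assign_tasks(task_sizes, resource_rates):
    """Greedy earliest-completion-time assignment.

    task_sizes     : list of work amounts, one per task
    resource_rates : dict name -> measured rate (work/s) from benchmarks
    Returns dict name -> list of task indices assigned to that resource.
    """
    # Priority queue of (estimated_finish_time, resource_name).
    heap = [(0.0, name) for name in resource_rates]
    heapq.heapify(heap)
    plan = {name: [] for name in resource_rates}
    # Largest tasks first improves balance for greedy list scheduling.
    for idx in sorted(range(len(task_sizes)), key=lambda i: -task_sizes[i]):
        finish, name = heapq.heappop(heap)
        plan[name].append(idx)
        heapq.heappush(heap,
                       (finish + task_sizes[idx] / resource_rates[name], name))
    return plan

plan = assign_tasks([4.0, 1.0, 3.0, 2.0], {"fast": 2.0, "slow": 1.0})
print(plan)
```

    The faster resource naturally absorbs more work, which is the effect an application-specific resource characterization is meant to capture.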

  8. glideinWMS - A generic pilot-based Workload Management System

    SciTech Connect

    Sfiligoi, Igor; /Fermilab

    2007-09-01

    The Grid resources are distributed among hundreds of independent Grid sites, requiring a higher level Workload Management System (WMS) to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share and just in time resource matching. GlideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool, with the Condor daemons (startds) being started by pilot jobs, and real jobs being vanilla, standard or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains the structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.

  9. Analysis and Validation of Grid dem Generation Based on Gaussian Markov Random Field

    NASA Astrophysics Data System (ADS)

    Aguilar, F. J.; Aguilar, M. A.; Blanco, J. L.; Nemmaoui, A.; García Lorca, A. M.

    2016-06-01

    Digital Elevation Models (DEMs) are considered one of the most relevant kinds of geospatial data to carry out land-cover and land-use classification. This work deals with the application of a mathematical framework based on a Gaussian Markov Random Field (GMRF) to interpolate grid DEMs from scattered elevation data. The performance of the GMRF interpolation model was tested on a set of LiDAR data (0.87 points/m2) provided by the Spanish Government (PNOA Programme) over a complex working area mainly covered by greenhouses in Almería, Spain. The original LiDAR data was decimated by randomly removing different fractions of the original points (from 10% up to 99% of points removed). In every case, the remaining points (scattered observed points) were used to obtain a 1 m grid spacing GMRF-interpolated Digital Surface Model (DSM) whose accuracy was assessed by means of the set of previously extracted checkpoints. The GMRF accuracy results were compared with those provided by the widely known Triangulation with Linear Interpolation (TLI). Finally, the GMRF method was applied to a real-world case consisting of filling the LiDAR-derived DSM gaps after manually filtering out non-ground points to obtain a Digital Terrain Model (DTM). Regarding accuracy, both GMRF and TLI produced visually pleasing and similar results in terms of vertical accuracy. As an added bonus, the GMRF mathematical framework makes it possible both to retrieve the estimated uncertainty for every interpolated elevation point (the DEM uncertainty) and to include break lines or terrain discontinuities between adjacent cells to produce higher quality DTMs.
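    The conditional-mean computation behind GMRF interpolation can be sketched as follows, assuming a simple thin-membrane (discrete Laplacian) prior. This is an illustration of the general GMRF machinery, not the authors' model or code, and a real DEM would use a sparse solver rather than a dense one.

```python
# Illustrative sketch (not the paper's implementation) of GMRF-based grid
# interpolation: a thin-membrane (discrete Laplacian) prior couples each
# cell to its 4 neighbours; observed cells are tied to their measurements,
# and the interpolated grid is the conditional mean, i.e. the solution of
# a linear system in the precision matrix.
import numpy as np

def gmrf_interpolate(nrows, ncols, obs, lam=1.0, tau=100.0):
    """obs: dict (row, col) -> elevation. Returns filled (nrows, ncols) grid."""
    n = nrows * ncols
    idx = lambda r, c: r * ncols + c
    Q = np.zeros((n, n))   # precision matrix (dense here; sparse in practice)
    b = np.zeros(n)
    for r in range(nrows):
        for c in range(ncols):
            i = idx(r, c)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrows and 0 <= cc < ncols:
                    Q[i, i] += lam
                    Q[i, idx(rr, cc)] -= lam   # smoothness coupling
            if (r, c) in obs:
                Q[i, i] += tau                 # data term (high precision)
                b[i] += tau * obs[(r, c)]
    return np.linalg.solve(Q + 1e-9 * np.eye(n), b).reshape(nrows, ncols)

# Fill a 5x5 grid from four corner observations.
obs = {(0, 0): 0.0, (0, 4): 4.0, (4, 0): 4.0, (4, 4): 8.0}
z = gmrf_interpolate(5, 5, obs)
```

    Because the posterior is Gaussian, the same precision matrix also yields the per-cell interpolation uncertainty (its inverse's diagonal), which is the "added bonus" the abstract refers to.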

  10. Department 1824 Job Card System: A new web-based business tool

    SciTech Connect

    Brangan, J.R.

    1998-02-01

    The Analytical Chemistry Department uses a system of job cards to control and monitor the work through the organization. In the past, many different systems were developed to allow each laboratory to monitor its individual work and report data. Unfortunately, these systems were separate and unique, which caused difficulty in ascertaining any overall picture of the Department's workload. To overcome these shortcomings, a new Job Card System was developed on Lotus Notes/Domino™ for tracking the work through the laboratory. This application is groupware/database software and is located on the Sandia Intranet, which allows users of any type of computer running a network browser to access the system. Security is provided through the use of logons and passwords for users who must add and/or modify information on the system. Customers may view the jobs in process by entering the system as an anonymous user. An overall view of the work in the department can be obtained by selecting from a variety of on-screen reports. This enables the analysts, customers, customer contacts, and the Department Manager to quickly evaluate the work in process, the resources required, and the availability of equipment. On-line approval of the work and e-mail messaging of completed jobs have been provided to streamline the review and approval cycle. This paper provides a guide for the use of the Job Card System and information on maintenance of the system.

  11. Adapting a commercial power system simulator for smart grid based system study and vulnerability assessment

    NASA Astrophysics Data System (ADS)

    Navaratne, Uditha Sudheera

    The smart grid is the future of the power grid. Smart meters and the associated network play a major role in the distribution system of the smart grid. Advanced Metering Infrastructure (AMI) can enhance the reliability of the grid, generate efficient energy management opportunities and enable many innovations around the future smart grid. These innovations involve intense research not only on the AMI network itself but also on the influence an AMI network can have upon the rest of the power grid. This research describes a smart meter testbed with hardware in the loop that can facilitate future research on AMI networks. The smart meters in the testbed were developed such that their functionality can be customized to simulate any given scenario, such as integrating new hardware components into a smart meter or developing new encryption algorithms in firmware. These smart meters were integrated into the power system simulator to simulate the power flow variation in the power grid under different AMI activities. Each smart meter in the network also provides a communication interface to the home area network. This research delivers a testbed for emulating AMI activities and monitoring their effect on the smart grid.

  12. Safe Grid

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)

    2003-01-01

    The biggest users of GRID technologies come from the science and technology communities. These consist of government, industry and academia (national and international). The NASA GRID is moving into a higher technology readiness level (TRL) today, and as a joint effort among these leaders within government, academia, and industry, the NASA GRID plans to extend availability to enable scientists and engineers across geographical boundaries to collaborate on solving important problems facing the world in the 21st century. In order to enable NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, GRID penetration across different firewalls is a concern for center security personnel. This is the reason why some IPG resources have been separated from the NASA center network. Also, because of center network security and ITAR concerns, the NASA IPG resource owner may not have full control over who can gain access remotely from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different GRID applications, the SAFE "micro security domain" provides a large number of professionally managed "micro firewalls" that can allow NASA centers to accept remote IPG access without the worry of damaging other center resources.
The SAFE policy-driven, capability-based federated security mechanism can enable joint organizational and resource owner approved remote

  13. Software Based Barriers To Integration Of Renewables To The Future Distribution Grid

    SciTech Connect

    Stewart, Emma; Kiliccote, Sila

    2014-06-01

    The future distribution grid has complex analysis needs, which may not be met with existing processes and tools. In addition, a growing number of measured and grid-model data sources is becoming available. For these sources to be useful they must be accurate and interpreted correctly. Data accuracy is a key barrier to the growth of the future distribution grid. A key goal for California, and the United States, is increasing renewable penetration on the distribution grid. To increase this penetration, measured and modeled representations of generation must be accurate and validated, giving distribution planners and operators confidence in their performance. This study reviews the current state of these software and modeling barriers and opportunities for the future distribution grid.

  14. ReSS: A Resource Selection Service for the Open Science Grid

    SciTech Connect

    Garzoglio, Gabriele; Levshina, Tanya; Mhashilkar, Parag; Timm, Steve; /Fermilab

    2008-01-01

    The Open Science Grid offers access to hundreds of computing and storage resources via standard Grid interfaces. Before the deployment of an automated resource selection system, users had to submit jobs directly to these resources. They would manually select a resource and specify all relevant attributes in the job description prior to submitting the job. The necessity of human intervention in resource selection and attribute specification hinders automated job management components from accessing OSG resources and is inconvenient for users. The Resource Selection Service (ReSS) project addresses these shortcomings. The system integrates Condor technology, for the core matchmaking service, with the gLite CEMon component, for gathering and publishing resource information in the Glue Schema format. Each of these components communicates over secure protocols via web services interfaces. The system is currently used in production on OSG by the DZero Experiment, the Engagement Virtual Organization, and the Dark Energy. It is also the resource selection service for the Fermilab Campus Grid, FermiGrid. ReSS is considered a lightweight solution to push-based workload management. This paper describes the architecture, performance, and typical usage of the system.
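    The core matchmaking idea ReSS reuses from Condor can be illustrated with a toy example: a job ad states requirements over resource attributes, and the matchmaker returns the resources whose ads satisfy them. The attribute names below are made up, not actual Glue Schema fields, and real ClassAds are declarative expressions rather than Python lambdas.

```python
# Toy sketch of ClassAd-style matchmaking: a job's requirements are
# predicates over resource-ad attributes; eligible resources satisfy all.
def matches(requirements, resource_ad):
    """requirements: list of (attribute, predicate) pairs."""
    return all(attr in resource_ad and pred(resource_ad[attr])
               for attr, pred in requirements)

# Hypothetical resource ads (attribute names are illustrative only).
resources = [
    {"name": "siteA", "free_cpus": 120, "memory_mb": 2048, "vo": "dzero"},
    {"name": "siteB", "free_cpus": 4,   "memory_mb": 8192, "vo": "engage"},
]
# A job ad requiring at least 10 free CPUs, 1 GB memory, and VO access.
job_req = [
    ("free_cpus", lambda v: v >= 10),
    ("memory_mb", lambda v: v >= 1024),
    ("vo",        lambda v: v == "dzero"),
]
eligible = [r["name"] for r in resources if matches(job_req, r)]
print(eligible)  # ['siteA']
```

    In the real system a rank expression would then order the eligible resources; here only the boolean requirements step is shown.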

  15. A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies

    SciTech Connect

    Eça, L.; Hoekstra, M.

    2014-04-01

    This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well-known Grid Convergence Index. Examples of application of the procedure are included. - Highlights: • Estimation of the numerical uncertainty of any integral or local flow quantity. • Least-squares fits to power series expansions to handle noisy data. • Excellent results obtained for manufactured solutions. • Consistent results obtained for practical CFD calculations. • Reduces to the well-known Grid Convergence Index for well-behaved data sets.
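    The least-squares fitting idea can be sketched with a single one-term expansion, phi_i = phi_0 + alpha * h_i^p, and a brute-force scan over the observed order p. This is a simplified illustration only, not the paper's full procedure, which uses four expansion types and a safety factor.

```python
# Simplified sketch of the fitting idea: fit phi_i = phi0 + alpha * h_i^p
# to solutions on systematically refined grids in the least-squares sense,
# scanning the observed order p; phi0 estimates the grid-converged value
# and alpha * h^p the discretization error.
import math

def fit_order(h, phi, p_grid=None):
    """Return (phi0, alpha, p, rms) minimizing sum (phi0 + alpha*h^p - phi)^2."""
    if p_grid is None:
        p_grid = [0.05 * k for k in range(1, 81)]   # scan p in (0, 4]
    best = None
    n = len(h)
    for p in p_grid:
        x = [hi ** p for hi in h]
        # Linear least squares in (phi0, alpha) for fixed p.
        sx, sxx = sum(x), sum(xi * xi for xi in x)
        sy, sxy = sum(phi), sum(xi * yi for xi, yi in zip(x, phi))
        det = n * sxx - sx * sx
        if abs(det) < 1e-30:
            continue
        alpha = (n * sxy - sx * sy) / det
        phi0 = (sy - alpha * sx) / n
        rms = math.sqrt(sum((phi0 + alpha * xi - yi) ** 2
                            for xi, yi in zip(x, phi)) / n)
        if best is None or rms < best[3]:
            best = (phi0, alpha, p, rms)
    return best

# Manufactured data: phi = 1.0 + 0.5 * h^2 (second-order convergence).
h = [0.4, 0.2, 0.1, 0.05]
phi = [1.0 + 0.5 * hi ** 2 for hi in h]
phi0, alpha, p, rms = fit_order(h, phi)
```

    For this manufactured second-order data set the fit recovers the exact solution phi0 = 1, the coefficient alpha = 0.5, and the observed order p = 2; the standard deviation of the fit (rms here) is what the paper's procedure uses to pick among competing expansions.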

  16. Old and Unemployable? How Age‐Based Stereotypes Affect Willingness to Hire Job Candidates

    PubMed Central

    Swift, Hannah J.; Drury, Lisbeth

    2016-01-01

    Across the world, people are required, or want, to work until an increasingly old age. But how might prospective employers view job applicants who have skills and qualities that they associate with older adults? This article draws on social role theory, age stereotypes and research on hiring biases, and reports three studies using age‐diverse North American participants. These studies reveal that: (1) positive older age stereotype characteristics are viewed less favorably as criteria for job hire, (2) even when the job role is low‐status, a younger stereotype profile tends to be preferred, and (3) an older stereotype profile is only considered hirable when the role is explicitly cast as subordinate to that of a candidate with a younger age profile. Implications for age‐positive selection procedures and ways to reduce the impact of implicit age biases are discussed. PMID:27635102

  17. Using a representative sample of workers for constructing the SUMEX French general population based job-exposure matrix

    PubMed Central

    Gueguen, A; Goldberg, M; Bonenfant, S; Martin, J

    2004-01-01

    Background: Job-exposure matrices (JEMs) applicable to the general population are usually constructed by using only the expertise of specialists. Aims: To construct a population based JEM for chemical agents from data based on a sample of French workers for surveillance purposes. Methods: The SUMEX job-exposure matrix was constructed from data collected via a cross-sectional survey of a sample of French workers representative of the main economic sectors through the SUMER-94 survey: 1205 occupational physicians questioned 48 156 workers, and inventoried exposure to 102 chemicals. The companies' economic activities and the workers' occupations were coded according to the official French nomenclatures. A segmentation method was used to construct job groups that were homogeneous for exposure prevalence to chemical agents. The matrix was constructed in two stages: consolidation of occupations according to exposure prevalence; and establishment of exposure indices based on individual data from all the subjects in the sample. Results: An agent specific matrix could be constructed for 80 of the chemicals. The quality of the classification obtained for each was variable: globally, the performance of the method was better for less specific, and therefore easier to assess, agents, and for exposures specific to certain occupations. Conclusions: Software has been developed to enable the SUMEX matrix to be used by occupational physicians and other prevention professionals responsible for surveillance of the health of the workforce in France. PMID:15208374

  18. DISTRIBUTED GRID-CONNECTED PHOTOVOLTAIC POWER SYSTEM EMISSION OFFSET ASSESSMENT: STATISTICAL TEST OF SIMULATED- AND MEASURED-BASED DATA

    EPA Science Inventory

    This study assessed the pollutant emission offset potential of distributed grid-connected photovoltaic (PV) power systems. Computer-simulated performance results were utilized for 211 PV systems located across the U.S. The PV systems' monthly electrical energy outputs were based ...

  19. LEOPARD: A grid-based dispersion relation solver for arbitrary gyrotropic distributions

    NASA Astrophysics Data System (ADS)

    Astfalk, Patrick; Jenko, Frank

    2017-01-01

    Particle velocity distributions measured in collisionless space plasmas often show strong deviations from idealized model distributions. Despite this observational evidence, linear wave analysis in space plasma environments such as the solar wind or Earth's magnetosphere is still mainly carried out using dispersion relation solvers based on Maxwellians or other parametric models. To enable a more realistic analysis, we present the new grid-based kinetic dispersion relation solver LEOPARD (Linear Electromagnetic Oscillations in Plasmas with Arbitrary Rotationally-symmetric Distributions) which no longer requires prescribed model distributions but allows for arbitrary gyrotropic distribution functions. In this work, we discuss the underlying numerical scheme of the code and we show a few exemplary benchmarks. Furthermore, we demonstrate a first application of LEOPARD to ion distribution data obtained from hybrid simulations. In particular, we show that in the saturation stage of the parallel fire hose instability, the deformation of the initial bi-Maxwellian distribution invalidates the use of standard dispersion relation solvers. A linear solver based on bi-Maxwellians predicts further growth even after saturation, while LEOPARD correctly indicates vanishing growth rates. We also discuss how this complies with former studies on the validity of quasilinear theory for the resonant fire hose. In the end, we briefly comment on the role of LEOPARD in directly analyzing spacecraft data, and we refer to an upcoming paper which demonstrates a first application of that kind.

  20. Comparisons of the Anelastic and Unified Modes Based on the Lorenz and Charney-Phillips Vertical Grids

    NASA Astrophysics Data System (ADS)

    Konor, Celal; Arakawa, Akio

    2010-05-01

    The anelastic and unified models based on the Lorenz and Charney-Phillips vertical grids are compared in view of nonhydrostatic simulation of buoyant bubbles. It is widely accepted that small-scale nonacoustic motions such as convection and turbulence are basically anelastic. The recently proposed unified system (Arakawa and Konor, 2009) unifies the anelastic and quasi-hydrostatic systems by including quasi-hydrostatic compressibility and, therefore, it can be used for simulating a wide range of motion from turbulence to planetary scales. There are two basic grids for the vertical discretization of governing equations. The most commonly used vertical grid is the Lorenz grid (L-grid), on which the thermodynamic variables and the horizontal momentum are staggered from the vertical momentum. The other is the less commonly used Charney-Phillips grid (CP-grid), on which the thermodynamic variables and the vertical momentum are staggered from the horizontal momentum. The existence of a computational mode with the L-grid in the vertical structure of temperature is well known. It should also be pointed out that, when the L-grid is used in a non-hydrostatic model, the buoyancy force cannot properly respond to the dynamically generated noise in the vertical velocity field. With the unified system of equations, however, we find that the dynamical generation of noise tends to be suppressed. This can be interpreted as a result of including the quasi-hydrostatic compressibility. Even when the motion is basically nonhydrostatic, the generated noise tends to be quasi-stationary and, therefore, quasi-hydrostatic. Although the original intention of including the quasi-hydrostatic compressibility in the unified system is to improve simulation of planetary waves, the results presented here indicate that the unified system can also better control small-scale computational noise without generating vertically propagating acoustic waves. In this presentation, we show results from

  1. Securing smart grid technology

    NASA Astrophysics Data System (ADS)

    Chaitanya Krishna, E.; Kosaleswara Reddy, T.; Reddy, M. YogaTeja; Reddy G. M., Sreerama; Madhusudhan, E.; AlMuhteb, Sulaiman

    2013-03-01

    In developing countries electrical energy is very important for all-round improvement, saving thousands of dollars that can be invested in other sectors for development. The existing hierarchical, centrally controlled grid of the 20th century is not sufficient for growing power needs. To produce and deliver effective power supply for industry and people, we need smarter electrical grids that address the challenges of the existing power grid. The smart grid can be considered a modern electric power grid infrastructure for enhanced efficiency and reliability through automated control, high-power converters, a modern communications infrastructure along with modern IT services, sensing and metering technologies, and modern energy management techniques based on the optimization of demand, energy and network availability, and so on. The main objective of this paper is to provide a contemporary look at the current state of the art in smart grid communications as well as critical issues in smart grid technologies, primarily information and communication technology (ICT) issues such as security and efficiency at the communications layer. In this paper we propose a new model for security in smart grid technology that contains a Security Module (SM) along with DEM, which will enhance security in the grid. It is expected that this paper will provide a better understanding of the technologies, potential advantages and research challenges of the smart grid and provoke interest among the research community to further explore this promising research area.

  2. Smart grid initialization reduces the computational complexity of multi-objective image registration based on a dual-dynamic transformation model to account for large anatomical differences

    NASA Astrophysics Data System (ADS)

    Bosman, Peter A. N.; Alderliesten, Tanja

    2016-03-01

    We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model, with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume, with and without a multi-resolution scheme, and find a substantial benefit of using smart grid initialization.
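    The image-feature-based placement idea can be illustrated with a minimal, hypothetical sketch (not the paper's algorithm; the image, weighting, and sampler below are illustrative assumptions): draw control points with probability proportional to the local image-gradient magnitude, so that structures expected to deform receive a denser grid than flat regions.

    ```python
    import random

    def feature_based_grid(image, n_points, rng):
        """Place control points with probability proportional to local
        gradient magnitude (a simple proxy for image structure)."""
        h, w = len(image), len(image[0])
        coords, weights = [], []
        for y in range(h - 1):
            for x in range(w - 1):
                gx = image[y][x + 1] - image[y][x]   # forward differences
                gy = image[y + 1][x] - image[y][x]
                coords.append((x, y))
                weights.append((gx * gx + gy * gy) ** 0.5 + 1e-6)
        return rng.choices(coords, weights=weights, k=n_points)

    # Toy image: flat background with one bright structure on the right half.
    img = [[0.0] * 4 + [10.0] * 4 for _ in range(8)]
    rng = random.Random(0)
    points = feature_based_grid(img, n_points=20, rng=rng)
    # Nearly all points should land on the high-gradient boundary column x == 3.
    print(sum(1 for x, _ in points if x == 3))
    ```

    With a regular grid, the same 20 points would be spread uniformly; weighting by gradient concentrates them where the transformation needs flexibility.
    
    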

  3. Development of Smart Grid for Community and Cyber based Landslide Hazard Monitoring and Early Warning System

    NASA Astrophysics Data System (ADS)

    Karnawati, D.; Wilopo, W.; Fathani, T. F.; Fukuoka, H.; Andayani, B.

    2012-12-01

    A Smart Grid is a cyber-based tool to facilitate a network of sensors for monitoring and communicating landslide hazard and providing early warning. The sensors are designed both as electronic sensors installed in the existing monitoring and early warning instruments and as human sensors, comprising selected committed people in the local community, such as local surveyors, local observers, members of the local task force for disaster risk reduction, and any person in the local community who has registered a commitment to send reports on the landslide symptoms observed in their living environment. The tool is designed to be capable of receiving up to thousands of reports at the same time through the electronic sensors, text messages (mobile phone), an on-line participatory web, and various social media such as Twitter and Facebook. The information recorded/reported by the sensors relates to the parameters of landslide symptoms, for example the progress of crack occurrence, ground subsidence, or ground deformation. Within 10 minutes, the tool can automatically elaborate and analyse the reported symptoms to predict the landslide hazard and risk levels. The predicted level of hazard/risk can be sent back to the network of electronic and human sensors as early warning information. The key parameters indicating the symptoms of landslide hazard were recorded/monitored by the electronic and human sensors. Those parameters were identified based on investigation of the geological and geotechnical conditions, supported by laboratory analysis. The cause and triggering mechanism of landslides in the study area was also analysed in order to define the critical condition for launching the early warning. However, not only the technical but also the social system was developed to raise community awareness and commitment to serve the mission as human sensors, which will

  4. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution-adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady-state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  5. Information theoretically secure, enhanced Johnson noise based key distribution over the smart grid with switched filters.

    PubMed

    Gonzalez, Elias; Kish, Laszlo B; Balog, Robert S; Enjeti, Prasad

    2013-01-01

    We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions.

  6. Information Theoretically Secure, Enhanced Johnson Noise Based Key Distribution over the Smart Grid with Switched Filters

    PubMed Central

    2013-01-01

    We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions. PMID:23936164

  7. Threshold-Based Random Charging Scheme for Decentralized PEV Charging Operation in a Smart Grid

    PubMed Central

    Kwon, Ojin; Kim, Pilkee; Yoon, Yong-Jin

    2016-01-01

    Smart grids have been introduced to replace conventional power distribution systems without real-time monitoring in order to accommodate the future market penetration of plug-in electric vehicles (PEVs). When a large number of PEVs require simultaneous battery charging, charging coordination techniques become one of the most critical factors in optimizing both PEV charging performance and the conventional distribution system. In this case, considerable computational complexity at a central controller and exchange of real-time information among PEVs may occur. To alleviate these problems, a novel threshold-based random charging (TBRC) operation for a decentralized charging system is proposed. Using PEV charging thresholds and random access rates, the PEVs themselves decide whether to participate in the charging requests. As PEVs with a high battery state do not transmit charging requests to the central controller, the complexity of the central controller decreases due to the reduction of charging requests. In addition, both the charging threshold and the random access rate are statistically calculated based on the average supply power of the PEV charging system and do not require a real-time update. By using the proposed TBRC with a tolerable PEV charging degradation, a 51% reduction of the PEV charging requests is achieved. PMID:28035963
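    The request-thinning idea behind TBRC can be simulated with a small sketch (the fleet, threshold, and access rate below are illustrative assumptions, not the paper's statistically derived values): PEVs above the charge threshold stay silent, and the rest request charging only with the given random access rate.

    ```python
    import random

    def tbrc_requests(battery_states, threshold, access_rate, rng):
        """PEVs at or above the charge threshold stay silent; the rest
        transmit a charging request with probability `access_rate`."""
        return [i for i, soc in enumerate(battery_states)
                if soc < threshold and rng.random() < access_rate]

    rng = random.Random(0)
    # 1000 PEVs with uniform state-of-charge in [0, 1] (illustrative only).
    fleet = [rng.random() for _ in range(1000)]
    baseline = len(fleet)  # naive scheme: every PEV sends a request
    requests = tbrc_requests(fleet, threshold=0.7, access_rate=0.8, rng=rng)
    reduction = 1 - len(requests) / baseline
    print(f"requests: {len(requests)} / {baseline} ({reduction:.0%} fewer)")
    ```

    With these assumed parameters roughly 0.7 × 0.8 ≈ 56% of PEVs request charging, so the central controller sees on the order of 44% fewer requests than in the naive scheme.
    
    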

  8. Threshold-Based Random Charging Scheme for Decentralized PEV Charging Operation in a Smart Grid.

    PubMed

    Kwon, Ojin; Kim, Pilkee; Yoon, Yong-Jin

    2016-12-26

    Smart grids have been introduced to replace conventional power distribution systems without real-time monitoring in order to accommodate the future market penetration of plug-in electric vehicles (PEVs). When a large number of PEVs require simultaneous battery charging, charging coordination techniques become one of the most critical factors in optimizing both PEV charging performance and the conventional distribution system. In this case, considerable computational complexity at a central controller and exchange of real-time information among PEVs may occur. To alleviate these problems, a novel threshold-based random charging (TBRC) operation for a decentralized charging system is proposed. Using PEV charging thresholds and random access rates, the PEVs themselves decide whether to participate in the charging requests. As PEVs with a high battery state do not transmit charging requests to the central controller, the complexity of the central controller decreases due to the reduction of charging requests. In addition, both the charging threshold and the random access rate are statistically calculated based on the average supply power of the PEV charging system and do not require a real-time update. By using the proposed TBRC with a tolerable PEV charging degradation, a 51% reduction of the PEV charging requests is achieved.

  9. Sound Source Localization for HRI Using FOC-Based Time Difference Feature and Spatial Grid Matching.

    PubMed

    Li, Xiaofei; Liu, Hong

    2013-08-01

    In human-robot interaction (HRI), speech sound source localization (SSL) is a convenient and efficient way to obtain the relative position between a speaker and a robot. However, implementing an SSL system based on the TDOA method encounters many problems, such as noise in real environments, the solution of nonlinear equations, and switching between far field and near field. In this paper, the fourth-order cumulant spectrum is derived, based on which a time delay estimation (TDE) algorithm is proposed that is applicable to speech signals and immune to spatially correlated Gaussian noise. Furthermore, the time difference feature of a sound source and its spatial distribution are analyzed, and a spatial grid matching (SGM) algorithm is proposed for the localization step, which effectively handles several problems that geometric positioning methods face. A valid-feature detection algorithm and a decision tree method are also suggested to improve localization performance and reduce computational complexity. Experiments are carried out in real environments on a mobile robot platform, in which thousands of sets of speech data with noise, collected by four microphones, are tested in 3D space. The effectiveness of our TDE method and SGM algorithm is verified.
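    The grid-matching step can be sketched as follows (a hypothetical minimal version, not the paper's SGM implementation; the microphone geometry and candidate grid are illustrative): precompute a time-difference feature for each candidate grid point, then pick the point whose feature best matches the observed delays, avoiding any nonlinear equation solving.

    ```python
    import math

    SPEED_OF_SOUND = 343.0  # m/s

    def tdoa_feature(src, mics):
        """Time differences of arrival relative to microphone 0."""
        d = [math.dist(src, m) for m in mics]
        return [(di - d[0]) / SPEED_OF_SOUND for di in d[1:]]

    def grid_match(observed, grid, mics):
        """Return the grid point whose precomputed TDOA feature is
        closest (least squares) to the observed time differences."""
        def err(pt):
            feat = tdoa_feature(pt, mics)
            return sum((f - o) ** 2 for f, o in zip(feat, observed))
        return min(grid, key=err)

    # Four microphones and a coarse candidate grid at z = 1 m (illustrative).
    mics = [(0, 0, 0), (0.2, 0, 0), (0, 0.2, 0), (0, 0, 0.2)]
    grid = [(x, y, 1.0) for x in range(-2, 3) for y in range(-2, 3)]
    true_src = (1.0, -1.0, 1.0)
    observed = tdoa_feature(true_src, mics)
    print(grid_match(observed, grid, mics))  # → (1, -1, 1.0)
    ```

    Because the search is a table lookup over precomputed features, the same machinery covers far-field and near-field sources without switching models.
    
    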

  10. High Energy IED measurements with MEMS-based Si grid technology inside a 300mm Si wafer

    NASA Astrophysics Data System (ADS)

    Funk, Merritt

    2012-10-01

    The measurement of ion energy at the wafer surface for commercial equipment and process development, without extensive modification of the reactor geometry, has been an industry challenge. High energy, wide frequency range, tolerance of process gases, freedom from contamination, and accurate ion energy measurements are the base requirements. In this work we report on the complete system developed to achieve these requirements. The system includes: a reusable silicon ion energy analyzer (IEA) wafer, signal feed-through, RF confinement, and high-voltage measurement and control. The IEA wafer design required careful understanding of the relationships between the plasma Debye length, the number of grids, intergrid charge exchange (spacing), capacitive coupling, materials, and dielectric flashover constraints. RF confinement with measurement transparency was addressed so as not to disturb the chamber plasma, wafer sheath, and DC self-bias, as well as to achieve spectral accuracy. The experimental results were collected using a commercial parallel-plate etcher powered by a dual frequency (VHF + LF). Modeling and simulations also confirmed the details captured in the IED.

  11. A Gender Based Study on Job Satisfaction among Higher Secondary School Heads in Khyber Pakhtunkhwa, (Pakistan)

    ERIC Educational Resources Information Center

    Mumtaz, Safina; Suleman, Qaiser; Ahmad, Zubair

    2016-01-01

    The purpose of the study was to analyze and compare the job satisfaction with twenty dimensions of male and female higher secondary school heads in Khyber Pakhtunkhwa. A total of 108 higher secondary school heads were selected from eleven districts as sample through multi-stage sampling technique in which 66 were male and 42 were female. The study…

  12. Beginning Teachers' Job Satisfaction: The Impact of School-Based Factors

    ERIC Educational Resources Information Center

    Lam, Bick-har; Yan, Hoi-fai

    2011-01-01

    Using a longitudinal design, the job satisfaction and career development of beginning teachers are explored in the present study. Beginning teachers were initially interviewed after graduation from the teacher training programme and then after gaining a two-year teaching experience. The results are presented in a fourfold typology in which the…

  13. The Impact of Diagnosis on Job Retention: A Danish Registry-Based Cohort Study.

    PubMed

    Espersen, Rasmus; Jensen, Vibeke; Berg Johansen, Martin; Fonager, Kirsten

    2015-01-01

    Background. In 1998, Denmark introduced the flex job scheme to ensure employment of people with a permanently reduced work capacity. This study investigated the association between selected diagnoses and the risk of disability pension among persons eligible for the scheme. Methods. Using the national DREAM database we identified all persons eligible for the flex job scheme from 2001 to 2008. This information was linked to the hospital discharge registry. Selected participants were followed for 5 years. Results. From the 72,629 persons identified, our study included 329 patients with rheumatoid arthritis, 10,120 patients with spine disorders, 2179 patients with ischemic heart disease, and 1765 patients with functional disorders. A reduced risk of disability pension was found in the group with rheumatoid arthritis (hazard ratio = 0.69 (0.53-0.90)) compared to the group with spine disorders. No differences were found when comparing ischemic heart disease and functional disorders. Employment during the first 3 months of the flex job scheme increased the degree of employment for all groups. Conclusion. Differences in the risk of disability pension were identified only in patients with rheumatoid arthritis. This study demonstrates the importance of obtaining employment immediately after allocation to the flex job scheme, regardless of diagnosis.

  14. MAGNETIC GRID

    DOEpatents

    Post, R.F.

    1960-08-01

    An electronic grid is designed employing magnetic forces for controlling the passage of charged particles. The grid is particularly applicable to use in gas-filled tubes such as ignitrons, thyratrons, etc., since the magnetic grid action is impartial to the polarity of the charged particles and, accordingly, the sheath effects encountered with electrostatic grids are not present. The grid comprises a conductor having sections spaced apart and extending in substantially opposite directions in the same plane, the ends of the conductor being adapted for connection to a current source.

  15. Automatic Integration Testbeds validation on Open Science Grid

    NASA Astrophysics Data System (ADS)

    Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.

    2011-12-01

    A recurring challenge in deploying high quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests. In particular those which resemble to every extent possible actual job workflows used by the experiments thus utilizing job scheduling at the compute element (CE), use of the worker node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports for performance and reliability.

  16. Heterojunction solar cells based on single-crystal silicon with an inkjet-printed contact grid

    NASA Astrophysics Data System (ADS)

    Abolmasov, S. N.; Abramov, A. S.; Ivanov, G. A.; Terukov, E. I.; Emtsev, K. V.; Nyapshaev, I. A.; Bazeley, A. A.; Gubin, S. P.; Kornilov, D. Yu.; Tkachev, S. V.; Kim, V. P.; Ryndin, D. A.; Levchenkova, V. I.

    2017-01-01

    Results on the creation of a current-collecting grid for heterojunction silicon solar cells by ink-jet printing are presented. Characteristics of the obtained solar cells are compared with those of the samples obtained using standard screen printing.

  17. Grid-based precision aim system and method for disrupting suspect objects

    DOEpatents

    Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.

    2014-06-10

    A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.

  18. Initial Study on the Predictability of Real Power on the Grid based on PMU Data

    SciTech Connect

    Ferryman, Thomas A.; Tuffner, Francis K.; Zhou, Ning; Lin, Guang

    2011-03-23

    Operations on the electric power grid provide highly reliable power to the end users. These operations involve hundreds of human operators and automated control schemes. However, the operations process can often take several minutes to complete. During these several minutes, the operations are often evaluated on a past state of the power system. Proper prediction methods could change this to make the operations evaluate the state of the power grid minutes in advance. Such information allows proactive, rather than reactive, actions on the power system and aids in improving the efficiency and reliability of the power grid as a whole. A successful demonstration of this prediction framework is necessary to evaluate the feasibility of utilizing such predicted states in grid operations.

  19. Comparisons of purely topological model, betweenness based model and direct current power flow model to analyze power grid vulnerability.

    PubMed

    Ouyang, Min

    2013-06-01

    This paper selects three frequently used power grid models, including a purely topological model (PTM), a betweenness based model (BBM), and a direct current power flow model (DCPFM), to describe three different dynamical processes on a power grid under both single and multiple component failures. Each of the dynamical processes is then characterized by both a topology-based and a flow-based vulnerability metric to compare the three models with each other from the vulnerability perspective. Taking as an example the IEEE 300 power grid with line capacity set proportional to a tolerance parameter tp, the results show a non-linear phenomenon: under single node failures, there exists a critical value of tp = 1.36, above which the three models all produce identical topology-based vulnerability results and more than 85% of nodes have identical flow-based vulnerability from any two models; under multiple node failures in which each node fails with an identical failure probability fp, there exists a critical fp = 0.56, above which the three models produce almost identical topology-based vulnerability results at any tp ≥ 1, but identical flow-based vulnerability results occur only at fp = . In addition, the topology-based vulnerability results can provide a good approximation for the flow-based vulnerability under large fp, and whether PTM or BBM better approximates the DCPFM for vulnerability analysis depends mainly on the value of fp. Similar results are also found for other failure types, other system operation parameters, and other power grids.
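    A minimal sketch of a topology-based vulnerability metric under random node failures (a generic proxy on a toy ring network, not the paper's PTM/BBM/DCPFM implementations or the IEEE 300 system): measure the fraction of nodes that remain inside the largest connected component after failures.

    ```python
    import random
    from collections import deque

    def largest_component_fraction(adj, failed):
        """Fraction of all nodes still inside the largest connected
        component once the nodes in `failed` are removed (BFS)."""
        alive = set(adj) - failed
        if not alive:
            return 0.0
        best, seen = 0, set()
        for start in alive:
            if start in seen:
                continue
            seen.add(start)
            size, queue = 0, deque([start])
            while queue:
                u = queue.popleft()
                size += 1
                for v in adj[u]:
                    if v in alive and v not in seen:
                        seen.add(v)
                        queue.append(v)
            best = max(best, size)
        return best / len(adj)

    # Toy ring grid of 20 buses; each node fails independently with prob. fp.
    n, fp = 20, 0.2
    adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    rng = random.Random(1)
    failed = {i for i in range(n) if rng.random() < fp}
    frac = largest_component_fraction(adj, failed)
    print(frac)
    ```

    A flow-based metric would instead redistribute power after each failure and count overloaded lines, which is why the two families of metrics can disagree below the critical fp.
    
    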

  20. GNARE: an environment for Grid-based high-throughput genome analysis.

    SciTech Connect

    Sulakhe, D.; Rodriguez, A.; D'Souza, M.; Wilde, M.; Nefedova, V.; Foster, I.; Maltsev, N.; Mathematics and Computer Science; Univ. of Chicago

    2005-01-01

    Recent progress in genomics and experimental biology has brought exponential growth of the biological information available for computational analysis in public genomics databases. However, applying the potentially enormous scientific value of this information to the understanding of biological systems requires computing and data storage technology of an unprecedented scale. The grid, with its aggregated and distributed computational and storage infrastructure, offers an ideal platform for high-throughput bioinformatics analysis. To leverage this we have developed the Genome Analysis Research Environment (GNARE) - a scalable computational system for the high-throughput analysis of genomes, which provides an integrated database and computational backend for data-driven bioinformatics applications. GNARE efficiently automates the major steps of genome analysis, including acquisition of data from multiple genomic databases; data analysis by a diverse set of bioinformatics tools; and storage of results and annotations. High-throughput computations in GNARE are performed using distributed heterogeneous grid computing resources such as Grid2003, TeraGrid, and the DOE Science Grid. Multi-step genome analysis workflows involving massive data processing, the use of application-specific tools and algorithms, and updating of an integrated database to provide interactive Web access to results are all expressed and controlled by a 'virtual data' model which transparently maps computational workflows to distributed grid resources. This paper describes how Grid technologies such as Globus, Condor, and the GriPhyN virtual data system were applied in the development of GNARE. It focuses on our approach to Grid resource allocation and to the use of GNARE as a computational framework for the development of bioinformatics applications.

  1. A Mobile Phone-Based Sensor Grid for Distributed Team Operations

    DTIC Science & Technology

    2010-09-01

    When the grid is breached by a human, animal or machine, the individual phones capture signals generated by the intruders' movements. These signals are ... microphone to capture sound in the area. ... A secondary objective is to determine if the Bluetooth networks are reliable enough to create an ad hoc network and transfer alerts to a human sentry

  2. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    The factors influencing power grid investment capacity are analyzed, and an investment capacity analysis model is built taking depreciation cost, sales price and sales quantity, net profit, financing, and GDP of the secondary industry as variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty of grid investment capacity is analyzed by Monte Carlo simulation.
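    The approach can be sketched in a few lines (the factor set, the fitted distributions, and the linear capacity relation below are hypothetical placeholders, not the paper's values): sample each influence factor from its fitted distribution, propagate the samples through the model, and summarize the resulting capacity distribution.

    ```python
    import random
    import statistics

    # Hypothetical fitted distributions (mean, std dev); in the study these
    # would come from historical data after the Kolmogorov-Smirnov tests.
    FACTORS = {
        "depreciation_cost": (120.0, 15.0),
        "net_profit": (300.0, 40.0),
        "financing": (500.0, 60.0),
    }

    def sample_capacity(rng):
        # Assumed illustrative relation:
        # capacity = net profit + financing - depreciation cost.
        d = rng.gauss(*FACTORS["depreciation_cost"])
        p = rng.gauss(*FACTORS["net_profit"])
        f = rng.gauss(*FACTORS["financing"])
        return p + f - d

    rng = random.Random(42)
    samples = sorted(sample_capacity(rng) for _ in range(10_000))
    mean = statistics.mean(samples)
    low, high = samples[250], samples[9750]   # empirical ~95% interval
    print(f"mean capacity: {mean:.1f}, 95% interval: [{low:.1f}, {high:.1f}]")
    ```

    The point of the Monte Carlo step is exactly this: instead of a single deterministic capacity figure, planners obtain a distribution and an uncertainty interval.
    
    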

  3. ReSS: Resource Selection Service for National and Campus Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Mhashilkar, Parag; Garzoglio, Gabriele; Levshina, Tanya; Timm, Steve

    2010-04-01

    The Open Science Grid (OSG) offers access to around one hundred compute elements (CEs) and storage elements (SEs) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor, as a brokering service, and the gLite CEMon, for gathering and publishing resource information in GLUE Schema format. ReSS is used in OSG by Virtual Organizations (VOs) such as the Dark Energy Survey (DES), DZero, and the Engagement VO. ReSS is also used as a Resource Selection Service for Campus Grids, such as FermiGrid. VOs use ReSS to automate resource selection in their workload management systems to run jobs over the grid. In the past year, the system has been enhanced to enable publication and selection of storage resources and of any special software or software libraries (such as MPI libraries) installed at computing resources. In this paper, we discuss the Resource Selection Service and its typical usage at the two scales of a National Cyber Infrastructure Grid, such as OSG, and of a Campus Grid, such as FermiGrid.

  4. ReSS: Resource Selection Service for National and Campus Grid Infrastructure

    SciTech Connect

    Mhashilkar, Parag; Garzoglio, Gabriele; Levshina, Tanya; Timm, Steve; /Fermilab

    2009-05-01

    The Open Science Grid (OSG) offers access to around one hundred compute elements (CEs) and storage elements (SEs) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor, as a brokering service, and the gLite CEMon, for gathering and publishing resource information in GLUE Schema format. ReSS is used in OSG by Virtual Organizations (VOs) such as the Dark Energy Survey (DES), DZero, and the Engagement VO. ReSS is also used as a Resource Selection Service for Campus Grids, such as FermiGrid. VOs use ReSS to automate resource selection in their workload management systems to run jobs over the grid. In the past year, the system has been enhanced to enable publication and selection of storage resources and of any special software or software libraries (such as MPI libraries) installed at computing resources. In this paper, we discuss the Resource Selection Service and its typical usage at the two scales of a National Cyber Infrastructure Grid, such as OSG, and of a Campus Grid, such as FermiGrid.

  5. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    SciTech Connect

    Sulakhe, D.; Rodriguez, A.; Wilde, M.; Foster, I.; Maltsev, N.; Univ. of Chicago

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated, scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the Genome Analysis Research Environment (GNARE); it focuses primarily on the architecture that makes GADU resource-independent and interoperable across heterogeneous Grid resources.

  6. Motivating medical information system performance by system quality, service quality, and job satisfaction for evidence-based practice

    PubMed Central

    2012-01-01

    Background No previous studies have addressed the integrated relationships among system quality, service quality, job satisfaction, and system performance; this study attempts to bridge such a gap with an evidence-based practice study. Methods The convenience sampling method was applied to the information system users of three hospitals in southern Taiwan. A total of 500 copies of questionnaires were distributed, and 283 returned copies were valid, suggesting a valid response rate of 56.6%. SPSS 17.0 and AMOS 17.0 (structural equation modeling) statistical software packages were used for data analysis and processing. Results The findings are as follows: System quality has a positive influence on service quality (γ11 = 0.55), job satisfaction (γ21 = 0.32), and system performance (γ31 = 0.47). Service quality (β31 = 0.38) and job satisfaction (β32 = 0.46) positively influence system performance. Conclusions It is thus recommended that the information offices of hospitals and developers take enhancement of service quality and user satisfaction into consideration, in addition to placing emphasis on system quality and information quality, when designing, developing, or purchasing an information system, in order to improve the benefits and achievements generated by hospital information systems. PMID:23171394

  7. Internet 2 Access Grid.

    ERIC Educational Resources Information Center

    Simco, Greg

    2002-01-01

    Discussion of the Internet 2 Initiative, which is based on collaboration among universities, businesses, and government, focuses on the Access Grid, a Computational Grid that includes interactive multimedia within high-speed networks to provide resources to enable remote collaboration among the research community. (Author/LRW)

  8. Geometric grid generation

    NASA Technical Reports Server (NTRS)

    Ives, David

    1995-01-01

    This paper presents a highly automated hexahedral grid generator based on extensive geometrical and solid modeling operations, developed in response to a vision of a designer-driven, one-day-turnaround CFD process, which implies a designer-driven, one-hour grid generation process.

  9. Validation of elastic registration algorithms based on adaptive irregular grids for medical applications

    NASA Astrophysics Data System (ADS)

    Franz, Astrid; Carlsen, Ingwer C.; Renisch, Steffen; Wischmann, Hans-Aloys

    2006-03-01

    Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region-of-interest of clinical relevance. Using artificially but realistically deformed images of the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region-of-interest makes it possible to restrict the analysis of the registration accuracy to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used to identify the best strategy for the initial placement of the control points.
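
    The whole-image quality measure described above is easy to state concretely. The sketch below averages the per-pixel Euclidean displacement error between two deformation fields, optionally within a region-of-interest mask; the array shapes and toy fields are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mean_displacement_error(ground_truth, recovered, roi_mask=None):
    """Mean Euclidean displacement error between two deformation fields.

    ground_truth, recovered: arrays of shape (H, W, 2) holding per-pixel
    displacement vectors. roi_mask: optional boolean (H, W) array that
    restricts the average to a clinically relevant region-of-interest.
    """
    error = np.linalg.norm(ground_truth - recovered, axis=-1)  # (H, W)
    if roi_mask is not None:
        error = error[roi_mask]
    return float(error.mean())

# Toy fields: the recovered field is off by (1, 0) at every pixel.
gt = np.zeros((4, 4, 2))
rec = gt + np.array([1.0, 0.0])
print(mean_displacement_error(gt, rec))  # 1.0
```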

  10. Optimal RTP Based Power Scheduling for Residential Load in Smart Grid

    NASA Astrophysics Data System (ADS)

    Joshi, Hemant I.; Pandya, Vivek J.

    2015-12-01

    To match supply and demand, shifting load from the peak period to the off-peak period is one of the effective solutions. Presently, flat-rate tariffs are used in most parts of the world. This type of tariff gives customers no incentive to use electrical energy during the off-peak period. If a real time pricing (RTP) tariff is used, consumers can be encouraged to use energy during the off-peak period. Due to advancements in information and communication technology, two-way communication is possible between consumers and the utility. To implement this technique in a smart grid, a home energy controller (HEC), smart meters, a home area network (HAN), and a communication link between consumers and the utility are required. The HEC interacts automatically by running an algorithm to find the optimal energy consumption schedule for each consumer. However, consumers are not all allowed to shift their load to the off-peak period simultaneously, to avoid a rebound peak condition. The peak to average ratio (PAR) is considered while carrying out the minimization problem, which is solved by the linear programming problem (LPP) method. The simulation results of this work show the effectiveness of the minimization method adopted. Hardware work is in progress, and the program based on the method described here will be applied to real problems.
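
    The scheduling step can be illustrated as a generic linear program. Everything below is hypothetical: the hourly RTP prices, the household's daily energy demand, and the PAR cap, with SciPy's linprog standing in for whichever LPP solver the authors used.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical RTP prices for 24 hours (cents/kWh): peak hours cost more.
prices = np.array([3] * 7 + [8] * 12 + [3] * 5, dtype=float)

total_energy = 24.0   # kWh the household must consume over the day
par_limit = 1.5       # cap on the peak-to-average ratio
avg_load = total_energy / 24

# Minimize sum(prices * x) subject to sum(x) = total_energy and a per-hour
# cap of par_limit * avg_load, which bounds the schedule's PAR.
res = linprog(
    c=prices,
    A_eq=np.ones((1, 24)), b_eq=[total_energy],
    bounds=[(0.0, par_limit * avg_load)] * 24,
)
schedule = res.x
print(round(schedule.max() / schedule.mean(), 2))  # PAR stays within the cap
```

    With these numbers the solver fills the twelve cheap hours up to the per-hour cap and pushes only the remaining 6 kWh into peak hours, so the resulting PAR never exceeds the 1.5 limit.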

  11. Performance evaluation of four grid-based dispersion models in complex terrain

    NASA Astrophysics Data System (ADS)

    Tesche, T. W.; Haney, J. L.; Morris, R. E.

    Four numerical grid-based dispersion models (Mathew/ADPIC, SMOG, Hybrid, and 2DFLOW) were adapted to the Geysers-Calistoga geothermal area in northern California. The models were operated using five intensive meteorological and tracer diffusion data sets collected during the 1981 ASCOT field experiment at the Geysers (three nocturnal drainage and two daytime valley stagnation episodes). The 2DFLOW and Hybrid Models were found to perform best for drainage and limited-mixing conditions, respectively. These two models were subsequently evaluated using data from five 1980 ASCOT drainage experiments. The Hybrid Model was also tested using data from nine limited-mixing and downwash tracer experiments performed at the Geysers prior to the ASCOT program. Overall, the 2DFLOW Model performed best for drainage flow conditions, whereas the Hybrid Model performed best for valley stagnation (limited-mixing) and moderate cross-ridge wind conditions. To aid new source review studies at the Geysers, a series of source-receptor transfer matrices were generated for several different meteorological regimes under a variety of emission scenarios using the Hybrid Model. These matrices supply ready estimates of cumulative hydrogen sulfide impacts from various geothermal sources in the region.

  12. Location-Aware Dynamic Session-Key Management for Grid-Based Wireless Sensor Networks

    PubMed Central

    Chen, Chin-Ling; Lin, I-Hsien

    2010-01-01

    Security is a critical issue for sensor networks used in hostile environments. When wireless sensor nodes in a wireless sensor network are distributed in an insecure hostile environment, the sensor nodes must be protected: a secret key must be used to protect the nodes transmitting messages. If the nodes are not protected and become compromised, many types of attacks against the network may result. Such is the case with existing schemes, which are vulnerable to attacks because they mostly provide a hop-by-hop paradigm that is insufficient to defend against known attacks. We propose a location-aware dynamic session-key management protocol for grid-based wireless sensor networks. The proposed protocol improves the security of the secret key. The proposed scheme also includes a key that is dynamically updated; this dynamic update lowers the probability of the key being guessed correctly, so currently known attacks can be defended against. By utilizing local information, the proposed scheme can also limit the flooding region in order to reduce the energy consumed in discovering routing paths. PMID:22163606
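
    The paper's protocol is not reproduced here, but the core idea of a dynamically updated session key can be sketched with a generic one-way hash chain: each update folds a fresh nonce into the current key, so a guessed or leaked key is only useful until the next update. The key material and nonces below are placeholders.

```python
import hashlib

def update_session_key(current_key: bytes, nonce: bytes) -> bytes:
    """Derive the next session key from the current key and a fresh nonce.

    The update is one-way (SHA-256), so even if one key leaks, earlier
    keys cannot be recovered, and frequent updates shrink the window an
    attacker has to guess any given key.
    """
    return hashlib.sha256(current_key + nonce).digest()

key = b"initial-secret-from-key-predistribution"
for round_nonce in (b"round-1", b"round-2", b"round-3"):
    key = update_session_key(key, round_nonce)
print(len(key))  # 32-byte SHA-256 digest
```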

  13. Grid-based Infrastructure and Distributed Data Mining for Virtual Observatories

    NASA Astrophysics Data System (ADS)

    Karimabadi, H.; Sipes, T.; Ferenci, S.; Fujimoto, R.; Olschanowsky, R.; Balac, N.; Roberts, A.

    2006-12-01

    Data access as well as analysis of geographically distributed data sets are challenges common to a wide variety of fields. To address this problem, we have been working on the development of two pieces of technology: grid-based software called IDDAT that supports processing and remote analysis of widely distributed data, and RemoteMiner, which is parallel, distributed data mining software. IDDAT and RemoteMiner work together seamlessly and provide the necessary backend functionality hidden from the user. The user accesses the system through a single web portal where data selection is performed and data mining functions are planned. The data mining functions are prepared for execution by IDDAT services. Preparation can include moving data to the processing location via services built over the Storage Resource Broker (SRB), preprocessing data, and allocating computation and storage resources. IDDAT services also initiate and monitor data mining functions and provide services to allow the results to be shared among other users. In this presentation, we illustrate a general user workflow and the provided functionalities. We also provide an overview of the technical issues and design features such as storage scheduling, efficient network traffic management, and resource selection.

  14. Modeling and assessment of civil aircraft evacuation based on finer-grid

    NASA Astrophysics Data System (ADS)

    Fang, Zhi-Ming; Lv, Wei; Jiang, Li-Xue; Xu, Qing-Feng; Song, Wei-Guo

    2016-04-01

    Using computer models is an effective way to study the civil aircraft emergency evacuation process. In this study, the evacuation of an Airbus A380 is simulated using a Finer-Grid Civil Aircraft Evacuation (FGCAE) model. The model accounts for the effect of the seat area and other factors on the escape process and for pedestrians' "hesitation" before leaving exits, and defines an optimized rule of exit choice. Simulations reproduce typical characteristics of aircraft evacuation, such as the movement synchronization between adjacent pedestrians and route choice, and indicate that evacuation efficiency is determined by pedestrians' "preference" and "hesitation". Based on the model, an assessment procedure for aircraft evacuation safety is presented. The assessment, and a comparison with an actual evacuation test, demonstrate that the available-exit setting of "one exit from each exit pair" used in practical demonstration tests is not the worst scenario. The worst case, in which all exits at one end of the cabin are unavailable, deserves more attention and could even be adopted in the certification test. The model and method presented in this study could be useful for assessing, validating, and improving the evacuation performance of aircraft.

  15. Mindfulness-Based Cognitive Therapy for Psychosis: Measuring Psychological Change Using Repertory Grids.

    PubMed

    Randal, Chloe; Bucci, Sandra; Morera, Tirma; Barrett, Moya; Pratt, Daniel

    2016-11-01

    There is an increasing, but still limited, number of studies investigating the benefits of mindfulness interventions for people experiencing psychosis. To our knowledge, changes following mindfulness for psychosis have not yet been explored from a personal construct perspective. This study had two main aims: (i) to explore changes in the way a person construes their self, others and their experience of psychosis following a Mindfulness-Based Cognitive Therapy (MBCT) group; and (ii) to replicate the findings of other studies exploring the feasibility and potential benefits of MBCT for psychosis. Sixteen participants, with experience of psychosis, completed an 8-week MBCT group. Participants completed pre-group and post-group assessments including a repertory grid, in addition to a range of outcome measures. There was some evidence of changes in construing following MBCT, with changes in the way participants viewed their ideal self and recovered self, and an indication of increased self-understanding. Improvements were found in participants' self-reported ability to act with awareness and in recovery. This study demonstrates the feasibility and potential benefits of MBCT groups for people experiencing psychosis. Furthermore, it provides some evidence of changes in construal following MBCT that warrant further exploration. Large-scale controlled trials of MBCT for psychosis are needed, as well as studies investigating the mechanisms of change. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Implementation of nonlinear registration of brain atlas based on piecewise grid system

    NASA Astrophysics Data System (ADS)

    Liu, Rong; Gu, Lixu; Xu, Jianrong

    2007-12-01

    In this paper, a multi-step registration method of brain atlas and clinical Magnetic Resonance Imaging (MRI) data based on Thin-Plate Splines (TPS) and a Piecewise Grid System (PGS) is presented. The method can help doctors determine the corresponding anatomical structures between the patient image and the brain atlas by piecewise nonlinear registration. Since doctors mostly pay attention to a particular Region of Interest (ROI), and a global nonlinear registration is quite time-consuming and thus not suitable for real-time clinical application, we propose a novel method that conducts linear registration in the global area before nonlinear registration is performed in the selected ROI. Homologous feature points are defined to calculate the transform matrix between the patient data and the brain atlas to derive the mapping function. Finally, we integrate the proposed approach into a neurosurgical planning and guidance system, which brings great efficiency to both neuro-anatomical education and the guidance of neurosurgical operations. The experimental results reveal that the proposed approach can maintain an average registration error of 0.25 mm in a near real-time manner.
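
    As an illustration of the TPS step (not the authors' implementation), the sketch below fits a thin-plate-spline mapping through a handful of invented feature-point pairs using SciPy's RBFInterpolator and evaluates it at a query point. For simplicity the toy landmark correspondence is a pure translation, which a TPS (with its affine term) reproduces exactly.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Toy 2-D homologous feature points: atlas landmarks and their positions
# in the patient image; a real atlas would use many more points in 3-D.
atlas_pts = np.array([[0., 0.], [0., 10.], [10., 0.], [10., 10.], [5., 5.]])
patient_pts = atlas_pts + np.array([1.0, -0.5])  # pure translation here

# Thin-plate-spline mapping from atlas space to patient space.
tps = RBFInterpolator(atlas_pts, patient_pts, kernel='thin_plate_spline')

query = np.array([[2.0, 3.0]])
print(tps(query))  # ~[[3.0, 2.5]]
```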

  17. Comparison of two expert-based assessments of diesel exhaust exposure in a case-control study: Programmable decision rules versus expert review of individual jobs

    PubMed Central

    Pronk, Anjoeka; Stewart, Patricia A.; Coble, Joseph B.; Katki, Hormuzd A.; Wheeler, David C.; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Johnson, Alison; Waddell, Richard; Verrill, Castine; Cherala, Sai; Silverman, Debra T.; Friesen, Melissa C.

    2012-01-01

    Objectives Professional judgment is necessary to assess occupational exposure in population-based case-control studies; however, the assessments lack transparency and are time-consuming to perform. To improve transparency and efficiency, we systematically applied decision rules to the questionnaire responses to assess diesel exhaust exposure in the New England Bladder Cancer Study, a population-based case-control study. Methods 2,631 participants reported 14,983 jobs; 2,749 jobs were administered questionnaires (‘modules’) with diesel-relevant questions. We applied decision rules to assign exposure metrics based solely on the occupational history responses (OH estimates) and based on the module responses (module estimates); we combined the separate OH and module estimates (OH/module estimates). Each job was also reviewed one at a time to assign exposure (one-by-one review estimates). We evaluated the agreement between the OH, OH/module, and one-by-one review estimates. Results The proportion of exposed jobs was 20–25% for all jobs, depending on approach, and 54–60% for jobs with diesel-relevant modules. The OH/module and one-by-one review had moderately high agreement for all jobs (κw=0.68–0.81) and for jobs with diesel-relevant modules (κw=0.62–0.78) for the probability, intensity, and frequency metrics. For exposed subjects, the Spearman correlation statistic was 0.72 between the cumulative OH/module and one-by-one review estimates. Conclusions The agreement seen here may represent an upper level of agreement because the algorithm and one-by-one review estimates were not fully independent. This study shows that applying decision-based rules can reproduce a one-by-one review, increase transparency and efficiency, and provide a mechanism to replicate exposure decisions in other studies. PMID:22843440
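
    The decision-rule idea can be sketched as a small function that maps questionnaire responses to exposure metrics. The job-title match and hours cut-offs below are invented for illustration; the study's actual rules were derived from its occupational-history and module questions.

```python
def assign_diesel_exposure(job: dict) -> dict:
    """Toy decision rules mapping questionnaire responses to exposure metrics.

    Hypothetical rules for illustration only: a diesel-heavy job title or
    many weekly hours near running engines yields a high assignment.
    """
    title = job.get("job_title", "").lower()
    hours = job.get("hours_near_diesel_engines_per_week", 0)

    if "truck driver" in title or hours >= 20:
        return {"probability": "high", "intensity": "high", "frequency": hours}
    if hours > 0:
        return {"probability": "medium", "intensity": "low", "frequency": hours}
    return {"probability": "none", "intensity": "none", "frequency": 0}

print(assign_diesel_exposure(
    {"job_title": "Truck driver", "hours_near_diesel_engines_per_week": 30}))
```

    Because the rules are explicit code rather than case-by-case judgment, the same assignments can be reproduced exactly in another study, which is the transparency benefit the abstract describes.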

  18. Visualization, analysis, and design of COMBO-FISH probes in the grid-based GLOBE 3D genome platform.

    PubMed

    Kepper, Nick; Schmitt, Eberhard; Lesnussa, Michael; Weiland, Yanina; Eussen, Hubert B; Grosveld, Frank G; Hausmann, Michael; Knoch, Tobias A

    2010-01-01

    The genome architecture in cell nuclei plays an important role in modern microscopy for the monitoring of medical diagnosis and therapy, since changes in the function and dynamics of genes are interlinked with changing geometrical parameters. The planning of corresponding diagnostic experiments and their imaging is a complex and often interactive IT-intensive challenge and thus makes high-performance grids a necessity. To detect genetic changes we recently developed a new form of fluorescence in situ hybridization (FISH) - COMBinatorial Oligonucleotide FISH (COMBO-FISH) - which labels small nucleotide sequences clustering at a desired genomic location. To achieve a unique hybridization spot, other side clusters have to be excluded. Therefore, we have designed an interactive pipeline using the grid-based GLOBE 3D Genome Viewer and Platform to design and display different labelling variants of candidate probe sets. Thus, we have created a grid-based virtual "paper" tool for easy interactive calculation, analysis, management, and representation for COMBO-FISH probe design, with many advantages: Since all the calculations and analysis run in a grid, one can instantly and with great visual ease locate duplications of gene subsequences to guide the elimination of side-clustering sequences during the probe design process, as well as get at least an impression of the 3D architectural embedding of the respective chromosome region, which is of major importance for estimating the hybridization probe dynamics. Moreover, several people at different locations can work on the same process as a team. Consequently, we present how a complex interactive process can profit from grid infrastructure technology using our unique GLOBE 3D Genome Platform gateway, moving towards truly interactive diagnosis planning and therapy monitoring.

  19. Task based exposure assessment in ergonomic epidemiology: a study of upper arm elevation in the jobs of machinists, car mechanics, and house painters

    PubMed Central

    Svendsen, S; Mathiassen, S; Bonde, J

    2005-01-01

    Aims: To explore the precision of task based estimates of upper arm elevation in three occupational groups, compared to direct measurements of job exposure. Methods: Male machinists (n = 26), car mechanics (n = 23), and house painters (n = 23) were studied. Whole day recordings of upper arm elevation were obtained for four consecutive working days, and associated task information was collected in diaries. For each individual, task based estimates of job exposure were calculated by weighting task exposures from a collective database by task proportions according to the diaries. These estimates were validated against directly measured job exposures using linear regression. The performance of the task based approach was expressed through the gain in precision of occupational group mean exposures that could be obtained by adding subjects with task based estimates to a group of subjects with measured job exposures in a "validation" design. Results: In all three occupations, tasks differed in mean exposure, and task proportions varied between individuals. Task based estimation proved inefficient, with squared correlation coefficients only occasionally exceeding 0.2 for the relation between task based and measured job exposures. Consequently, it was not possible to substantially improve the precision of an estimated group mean by including subjects whose job exposures were based on task information. Conclusions: Task based estimates of mechanical job exposure can be very imprecise, and only marginally better than estimates based on occupation. It is recommended that investigators in ergonomic epidemiology consider the prospects of task based exposure assessment carefully before placing resources at obtaining task information. Strategies disregarding tasks may be preferable in many cases. PMID:15613604
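
    The task-based estimation itself is a simple weighted average: task mean exposures from the collective database are weighted by each individual's diary-reported task proportions. The task names and exposure values below are invented for illustration.

```python
def task_based_job_exposure(task_exposures, task_proportions):
    """Weight task mean exposures (from a collective database) by the
    proportion of work time spent on each task (from diaries)."""
    assert abs(sum(task_proportions.values()) - 1.0) < 1e-9
    return sum(task_exposures[t] * p for t, p in task_proportions.items())

# Hypothetical numbers: % of time with the upper arm elevated per task.
exposures = {"spray_painting": 12.0, "ceiling_work": 25.0, "floor_work": 2.0}
diary = {"spray_painting": 0.5, "ceiling_work": 0.2, "floor_work": 0.3}
print(task_based_job_exposure(exposures, diary))  # 12*0.5 + 25*0.2 + 2*0.3 ≈ 11.6
```

    The study's finding is that estimates built this way correlated poorly with directly measured job exposures, so the simplicity of the computation should not be mistaken for precision.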

  20. Individual Skills Based Volunteerism and Life Satisfaction among Healthcare Volunteers in Malaysia: Role of Employer Encouragement, Self-Esteem and Job Performance, A Cross-Sectional Study

    PubMed Central

    Veerasamy, Chanthiran; Sambasivan, Murali; Kumar, Naresh

    2013-01-01

    The purpose of this paper is to analyze two important outcomes of individual skills-based volunteerism (ISB-V) among healthcare volunteers in Malaysia. The outcomes are: job performance and life satisfaction. This study has empirically tested the impact of individual dimensions of ISB-V along with their inter-relationships in explaining the life satisfaction and job performance. Besides, the effects of employer encouragement to the volunteers, demographic characteristics of volunteers, and self-esteem of volunteers on job performance and life satisfaction have been studied. The data were collected through a questionnaire distributed to 1000 volunteers of St. John Ambulance in Malaysia. Three hundred and sixty six volunteers responded by giving their feedback. The model was tested using Structural Equation Modeling (SEM). The main results of this study are: (1) Volunteer duration and nature of contact affects life satisfaction, (2) volunteer frequency has impact on volunteer duration, (3) self-esteem of volunteers has significant relationships with volunteer frequency, job performance and life satisfaction, (4) job performance of volunteers affect their life satisfaction and (5) current employment level has significant relationships with duration of volunteering, self esteem, employer encouragement and job performance of volunteers. The model in this study has been able to explain 39% of the variance in life satisfaction and 45% of the variance in job performance. The current study adds significantly to the body of knowledge on healthcare volunteerism. PMID:24194894

  1. Individual skills based volunteerism and life satisfaction among healthcare volunteers in Malaysia: role of employer encouragement, self-esteem and job performance, a cross-sectional study.

    PubMed

    Veerasamy, Chanthiran; Sambasivan, Murali; Kumar, Naresh

    2013-01-01

    The purpose of this paper is to analyze two important outcomes of individual skills-based volunteerism (ISB-V) among healthcare volunteers in Malaysia. The outcomes are: job performance and life satisfaction. This study has empirically tested the impact of individual dimensions of ISB-V along with their inter-relationships in explaining the life satisfaction and job performance. Besides, the effects of employer encouragement to the volunteers, demographic characteristics of volunteers, and self-esteem of volunteers on job performance and life satisfaction have been studied. The data were collected through a questionnaire distributed to 1000 volunteers of St. John Ambulance in Malaysia. Three hundred and sixty six volunteers responded by giving their feedback. The model was tested using Structural Equation Modeling (SEM). The main results of this study are: (1) Volunteer duration and nature of contact affects life satisfaction, (2) volunteer frequency has impact on volunteer duration, (3) self-esteem of volunteers has significant relationships with volunteer frequency, job performance and life satisfaction, (4) job performance of volunteers affect their life satisfaction and (5) current employment level has significant relationships with duration of volunteering, self esteem, employer encouragement and job performance of volunteers. The model in this study has been able to explain 39% of the variance in life satisfaction and 45% of the variance in job performance. The current study adds significantly to the body of knowledge on healthcare volunteerism.

  2. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow abstracts away low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of a workflow is important in order to support large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To examine the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.

  3. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in the industry and research community, it also attracts more attention at the customer level. The large number of users and high frequency of job requests in the consumer market make supporting such applications challenging. Clearly, the current Client/Server (C/S)-based architecture will become unfeasible for supporting large-scale Grid applications due to its poor scalability and poor fault-tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture realizing a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  4. Delay grid multiplexing: simple time-based multiplexing and readout method for silicon photomultipliers

    NASA Astrophysics Data System (ADS)

    Won, Jun Yeon; Ko, Guen Bae; Lee, Jae Sung

    2016-10-01

    In this paper, we propose a fully time-based multiplexing and readout method that uses the principle of the global positioning system. Time-based multiplexing allows simplifying the multiplexing circuits where the only innate traces that connect the signal pins of the silicon photomultiplier (SiPM) channels to the readout channels are used as the multiplexing circuit. Every SiPM channel is connected to the delay grid that consists of the traces on a printed circuit board, and the inherent transit times from each SiPM channel to the readout channels encode the position information uniquely. Thus, the position of each SiPM can be identified using the time difference of arrival (TDOA) measurements. The proposed multiplexing can also allow simplification of the readout circuit using the time-to-digital converter (TDC) implemented in a field-programmable gate array (FPGA), where the time-over-threshold (ToT) is used to extract the energy information after multiplexing. In order to verify the proposed multiplexing method, we built a positron emission tomography (PET) detector that consisted of an array of 4  ×  4 LGSO crystals, each with a dimension of 3  ×  3  ×  20 mm³, and one-to-one coupled SiPM channels. We first employed the waveform sampler as an initial study, and then replaced the waveform sampler with an FPGA-TDC to further simplify the readout circuits. The 16 crystals were clearly resolved using only the time information obtained from the four readout channels. The coincidence resolving times (CRTs) were 382 and 406 ps FWHM when using the waveform sampler and the FPGA-TDC, respectively. The proposed simple multiplexing and readout methods can be useful for time-of-flight (TOF) PET scanners.
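
    The position-decoding idea can be sketched as a lookup of expected time differences of arrival. The delay table below is randomly generated rather than taken from a real PCB layout, and only differences relative to one readout channel are compared, because the absolute event time is unknown.

```python
import numpy as np

# Hypothetical delay grid: transit time (ns) from each of 16 SiPM channels
# to 4 readout channels, made unique by the trace lengths on the board.
rng = np.random.default_rng(0)
delay_table = rng.uniform(0.0, 5.0, size=(16, 4))

def identify_channel(arrival_times):
    """Match measured arrival times to a SiPM channel via TDOA.

    The unknown absolute event time cancels when times are referenced to
    readout channel 0, so rows are compared by their TDOA signatures.
    """
    measured_tdoa = arrival_times - arrival_times[0]
    table_tdoa = delay_table - delay_table[:, :1]
    return int(np.argmin(np.abs(table_tdoa - measured_tdoa).sum(axis=1)))

event_time = 123.4  # ns; cancels out in the differences
observed = event_time + delay_table[7]
print(identify_channel(observed))  # 7
```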

  5. Delay grid multiplexing: simple time-based multiplexing and readout method for silicon photomultipliers.

    PubMed

    Won, Jun Yeon; Ko, Guen Bae; Lee, Jae Sung

    2016-10-07

    In this paper, we propose a fully time-based multiplexing and readout method that uses the principle of the global positioning system. Time-based multiplexing allows simplifying the multiplexing circuits where the only innate traces that connect the signal pins of the silicon photomultiplier (SiPM) channels to the readout channels are used as the multiplexing circuit. Every SiPM channel is connected to the delay grid that consists of the traces on a printed circuit board, and the inherent transit times from each SiPM channel to the readout channels encode the position information uniquely. Thus, the position of each SiPM can be identified using the time difference of arrival (TDOA) measurements. The proposed multiplexing can also allow simplification of the readout circuit using the time-to-digital converter (TDC) implemented in a field-programmable gate array (FPGA), where the time-over-threshold (ToT) is used to extract the energy information after multiplexing. In order to verify the proposed multiplexing method, we built a positron emission tomography (PET) detector that consisted of an array of 4  ×  4 LGSO crystals, each with a dimension of 3  ×  3  ×  20 mm³, and one-to-one coupled SiPM channels. We first employed the waveform sampler as an initial study, and then replaced the waveform sampler with an FPGA-TDC to further simplify the readout circuits. The 16 crystals were clearly resolved using only the time information obtained from the four readout channels. The coincidence resolving times (CRTs) were 382 and 406 ps FWHM when using the waveform sampler and the FPGA-TDC, respectively. The proposed simple multiplexing and readout methods can be useful for time-of-flight (TOF) PET scanners.

  6. Are gay men and lesbians discriminated against when applying for jobs? A four-city, Internet-based field experiment.

    PubMed

    Bailey, John; Wallace, Michael; Wright, Bradley

    2013-01-01

    An Internet-based field experiment was conducted to examine potential hiring discrimination based on sexual orientation; specifically, the "first contact" between job applicants and employers was examined. In response to Internet job postings on CareerBuilder.com®, more than 4,600 resumes were sent to employers in 4 U.S. cities: Philadelphia, Chicago, Dallas, and San Francisco. The resumes varied randomly with regard to gender, implied sexual orientation, and other characteristics. Two hypotheses were tested: first, that employers' response rates vary by the applicants' assumed sexuality; and second, that employers' response rates by sexuality vary by city. Effects of city were controlled for to hold constant any variation in labor market conditions across the 4 cities. Based on employer responses to the applications, it was concluded that there is no evidence that gay men or lesbians are discriminated against in their first encounter with employers, and no significant variation across cities in these encounters was found. Implications of these results for the literature on hiring discrimination based on sexual orientation, the strengths and limitations of the research, and the potential of the Internet-based field experiment design for future studies of discrimination are discussed.

  7. Adventures in Computational Grids

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Sometimes one supercomputer is not enough. Or your local supercomputers are busy, or not configured for your job. Or you don't have any supercomputers. You might be trying to simulate worldwide weather changes in real time, requiring more compute power than you could get from any one machine. Or you might be collecting microbiological samples on an island, and need to examine them with a special microscope located on the other side of the continent. These are the times when you need a computational grid.

  8. A grid computing-based approach for the acceleration of simulations in cardiology.

    PubMed

    Alonso, José M; Ferrero, José M; Hernández, Vicente; Moltó, Germán; Saiz, Javier; Trénor, Beatriz

    2008-03-01

    This paper combines high-performance computing and grid computing technologies to accelerate multiple executions of a biomedical application that simulates action potential propagation on cardiac tissues. First, a parallelization strategy was employed to accelerate the execution of simulations on a cluster of personal computers (PCs). Then, grid computing was employed to concurrently perform the multiple simulations that compose the cardiac case studies on the resources of a grid deployment, by means of a service-oriented approach. This way, biomedical experts are provided with a gateway to easily access a grid infrastructure for the execution of these research studies. Emphasis is placed on the methodology employed. In order to assess the benefits of the grid, a cardiac case study, which analyzes the effects of premature stimulation on reentry generation during myocardial ischemia, has been carried out. The collaborative usage of a distributed computing infrastructure has reduced the time required for the execution of cardiac case studies, which allows, for example, taking more accurate decisions when evaluating the effects of new antiarrhythmic drugs on the electrical activity of the heart.
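
    The pattern of farming out many independent simulations, which the grid deployment provides at scale, can be sketched with a local worker pool. The simulate stand-in and its 50 ms threshold are invented placeholders, not results or code from the study.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(stimulation_delay_ms):
    """Stand-in for one cardiac simulation run.

    A real run would integrate an action-potential model over the tissue;
    here an invented rule flags short premature-stimulation delays as
    reentry-inducing, purely to give the sweep something to return.
    """
    develops_reentry = stimulation_delay_ms < 50.0
    return (stimulation_delay_ms, develops_reentry)

# The case study's parameter sweep: each run is independent, so the runs
# map onto a pool of workers just as they map onto grid nodes.
delays = [30.0, 45.0, 60.0, 75.0]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, delays))
print(results)
```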

  9. Competency-based on-the-job training for aviation maintenance and inspection--a human factors approach.

    PubMed

    Walter, D

    2000-08-01

    More than 90% of the critical skills that an aviation maintenance technician uses are acquired through on-the-job training (OJT). Yet many aviation maintenance technicians rely on a 'degenerating buddy system', 'follow Joe around', or otherwise unstructured approach to OJT. Many aspects of the aviation maintenance environment point to the need for a structured OJT program, but perhaps the most significant is the practice of job bidding, which can create rapid turnover of technicians. The task analytic training system (TATS), a model for developing team-driven structured OJT, was developed by the author and first introduced in Boeing Commercial Airplane Group to provide competency-based OJT for aviation maintenance and inspection personnel. The goal of the model was not only to provide a comprehensive, highly structured training system that could be applied to any maintenance and inspection task, but also to improve team coordination, attitude and morale. The first goal was accomplished by following the system's eight-step process, the latter through incorporating human factors principles such as decision making, communication, team building and conflict resolution into the process itself. In general, the process helps to instill mutual respect and trust, enhance goal-directed behavior, strengthen technicians' self-esteem and responsiveness to new ideas and encourage technicians to make worthwhile contributions. The theoretical background of the model is addressed by illustrating how the proven training methodologies of job task analysis and job instruction training are blended with human factors principles, resulting in a unique team-driven approach to training. The paper discusses major elements of the model including needs identification, outlining targeted jobs, writing and verifying training procedures, an approval system, sequencing of training, certifying trainers, implementing, employing tracking mechanisms, evaluating, and establishing a maintenance/audit plan.

  10. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  11. A gridded hourly rainfall dataset for the UK applied to a national physically-based modelling system

    NASA Astrophysics Data System (ADS)

    Lewis, Elizabeth; Blenkinsop, Stephen; Quinn, Niall; Freer, Jim; Coxon, Gemma; Woods, Ross; Bates, Paul; Fowler, Hayley

    2016-04-01

    An hourly gridded rainfall product has great potential for use in many hydrological applications that require high temporal resolution meteorological data. One important example of this is flood risk management, with flooding in the UK highly dependent on sub-daily rainfall intensities amongst other factors. Knowledge of sub-daily rainfall intensities is therefore critical to designing hydraulic structures or flood defences to appropriate levels of service. Sub-daily rainfall rates are also essential inputs for flood forecasting, allowing for estimates of peak flows and stage for flood warning and response. In addition, an hourly gridded rainfall dataset has significant potential for practical applications such as better representation of extremes and pluvial flash flooding, validation of high resolution climate models and improving the representation of sub-daily rainfall in weather generators. A new 1 km gridded hourly rainfall dataset for the UK has been created by disaggregating the daily Gridded Estimates of Areal Rainfall (CEH-GEAR) dataset using comprehensively quality-controlled hourly rain gauge data from over 1300 observation stations across the country. Quality control measures include identification of frequent tips, daily accumulations and dry spells, comparison of daily totals against the CEH-GEAR daily dataset, and nearest neighbour checks. The quality control procedure was validated against historic extreme rainfall events and the UKCP09 5km daily rainfall dataset. General use of the dataset has been demonstrated by testing the sensitivity of a physically-based hydrological modelling system for Great Britain to the distribution and rates of rainfall and potential evapotranspiration. Of the sensitivity tests undertaken, the largest improvements in model performance were seen when an hourly gridded rainfall dataset was combined with potential evapotranspiration disaggregated to hourly intervals, with 61% of catchments showing an increase in NSE between
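    The core disaggregation step described above can be illustrated with a minimal sketch: a daily grid-cell total is split across 24 hours in proportion to the hourly fractions recorded at a nearby quality-controlled gauge. The function name and the uniform fallback for dry gauge days are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of daily-to-hourly rainfall disaggregation using a nearby
# gauge's hourly fractions. Names and the dry-day fallback are assumptions.

def disaggregate_daily(daily_total_mm, gauge_hourly_mm):
    """Split a daily grid-cell total across 24 hours in proportion to a
    neighbouring gauge's hourly record; fall back to a uniform split if
    the gauge recorded no rain that day."""
    assert len(gauge_hourly_mm) == 24
    gauge_total = sum(gauge_hourly_mm)
    if gauge_total == 0:
        return [daily_total_mm / 24.0] * 24
    return [daily_total_mm * h / gauge_total for h in gauge_hourly_mm]

# 12 mm daily total, gauge saw rain only in hours 6-8:
hourly = disaggregate_daily(12.0, [0] * 6 + [1, 3, 2] + [0] * 15)
```

Note that the hourly values always sum back to the daily total, which preserves consistency with the parent CEH-GEAR daily grid.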

  12. A measurement method for micro 3D shape based on grids-processing and stereovision technology

    NASA Astrophysics Data System (ADS)

    Li, Chuanwei; Liu, Zhanwei; Xie, Huimin

    2013-04-01

    An integrated measurement method for micro 3D surface shape, based on a combination of stereovision technology in a scanning electron microscope (SEM) and grids-processing methodology, is proposed. The principle of the proposed method is introduced in detail. By capturing two images of the tested specimen with grids on the surface at different tilt angles in an SEM, the 3D surface shape of the specimen can be obtained. Numerical simulation is applied to analyze the feasibility of the proposed method, and a validation experiment is performed. The surface shape of metal-wire/polymer-membrane structures under thermal deformation is reconstructed. By processing the surface grids of the specimen, the out-of-plane displacement field of the specimen surface is also obtained. Compared with the measurement results obtained by a 3D digital microscope, the experimental error of the proposed method is discussed.

  13. Global Renewable Energy-Based Electricity Generation and Smart Grid System for Energy Security

    PubMed Central

    Islam, M. A.; Hasanuzzaman, M.; Rahim, N. A.; Nahar, A.; Hosenuzzaman, M.

    2014-01-01

    Energy is an indispensable factor for the economic growth and development of a country. Energy consumption is rapidly increasing worldwide. To fulfill this energy demand, alternative energy sources and efficient utilization are being explored. Various sources of renewable energy and their efficient utilization are comprehensively reviewed and presented in this paper. Also presented is the trend in research and development for the technological advancement of energy utilization and of smart grid systems for future energy security. Results show that renewable energy resources are becoming more prevalent as more electricity generation becomes necessary and could provide half of total energy demand by 2050. To satisfy the future energy demand, the smart grid system can be used as an efficient system for energy security. The smart grid also delivers significant environmental benefits through conservation and the integration of renewable generation. PMID:25243201

  14. Global renewable energy-based electricity generation and smart grid system for energy security.

    PubMed

    Islam, M A; Hasanuzzaman, M; Rahim, N A; Nahar, A; Hosenuzzaman, M

    2014-01-01

    Energy is an indispensable factor for the economic growth and development of a country. Energy consumption is rapidly increasing worldwide. To fulfill this energy demand, alternative energy sources and efficient utilization are being explored. Various sources of renewable energy and their efficient utilization are comprehensively reviewed and presented in this paper. Also presented is the trend in research and development for the technological advancement of energy utilization and of smart grid systems for future energy security. Results show that renewable energy resources are becoming more prevalent as more electricity generation becomes necessary and could provide half of total energy demand by 2050. To satisfy the future energy demand, the smart grid system can be used as an efficient system for energy security. The smart grid also delivers significant environmental benefits through conservation and the integration of renewable generation.

  15. Job Task Analysis.

    ERIC Educational Resources Information Center

    Clemson Univ., SC.

    This publication consists of job task analyses for jobs in textile manufacturing. Information provided for each job in the greige and finishing plants includes job title, job purpose, and job duties with related educational objectives, curriculum, assessment, and outcome. These job titles are included: yarn manufacturing head overhauler, yarn…

  16. An expert-based job exposure matrix for large scale epidemiologic studies of primary hip and knee osteoarthritis: The Lower Body JEM

    PubMed Central

    2014-01-01

    Background When conducting large scale epidemiologic studies, it is a challenge to obtain quantitative exposure estimates, which do not rely on self-report where estimates may be influenced by symptoms and knowledge of disease status. In this study we developed a job exposure matrix (JEM) for use in population studies of the work-relatedness of hip and knee osteoarthritis. Methods Based on all 2227 occupational titles in the Danish version of the International Standard Classification of Occupations (D-ISCO 88), we constructed 121 job groups comprising occupational titles with expected homogeneous exposure patterns in addition to a minimally exposed job group, which was not included in the JEM. The job groups were allocated the mean value of five experts’ ratings of daily duration (hours/day) of standing/walking, kneeling/squatting, and whole-body vibration as well as total load lifted (kg/day), and frequency of lifting loads weighing ≥20 kg (times/day). Weighted kappa statistics were used to evaluate inter-rater agreement on rankings of the job groups for four of these exposures (whole-body vibration could not be evaluated due to few exposed job groups). Two external experts checked the face validity of the rankings of the mean values. Results A JEM was constructed and English ISCO codes were provided where possible. The experts’ ratings showed fair to moderate agreement with respect to rankings of the job groups (mean weighted kappa values between 0.36 and 0.49). The external experts agreed on 586 of the 605 rankings. Conclusion The Lower Body JEM based on experts’ ratings was established. Experts agreed on rankings of the job groups, and rankings based on mean values were in accordance with the opinion of external experts. PMID:24927760
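    The inter-rater agreement evaluation described above can be illustrated with a minimal linearly weighted kappa on synthetic ordinal ratings. The abstract does not state the weighting scheme used, so linear disagreement weights are an assumption here; ratings are category indices 0..k-1 and all data are made up.

```python
# Hedged sketch: linearly weighted kappa for two raters' ordinal ratings.
# Linear weights are an assumption; the data below are synthetic.

def weighted_kappa(r1, r2, k):
    """Weighted kappa = 1 - (weighted observed disagreement) /
    (weighted chance-expected disagreement)."""
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]          # observed proportion matrix
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    m1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]  # rater-1 marginals
    m2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater-2 marginals
    w = lambda i, j: abs(i - j) / (k - 1)        # linear disagreement weights
    num = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w(i, j) * m1[i] * m2[j] for i in range(k) for j in range(k))
    return 1.0 - num / den

kappa = weighted_kappa([0, 1, 2, 2, 1], [0, 1, 2, 1, 1], 3)
```

Perfect agreement yields 1.0 and chance-level agreement yields approximately 0, matching the "fair to moderate" band (0.36-0.49) reported above.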

  17. Comparing alternative computer-based methods for presenting job-task instructions. Interim report, May 1986-May 1987

    SciTech Connect

    Nugent, W.A.

    1988-02-01

    This study compared the effects of previous task training/experience and alternative methods for presenting procedural instructions on job-task performance. Six computer-based methods were examined by having oscilloscope operators perform four equipment-related tasks. The presentation methods included text-only, audio-only, text-audio, and text-graphics formats; operators also varied in prior training and experience in operating oscilloscopes. The most efficient and effective task performances were obtained through a combination of audio and graphic presentations, an effect which can be further enhanced by the addition of redundant textual instructions. The practical applications and theoretical implications of these findings are discussed.

  18. New Jobs, Old Occupational Stereotypes: Gender and Jobs in the New Economy

    ERIC Educational Resources Information Center

    Miller, Linda; Hayward, Rowena

    2006-01-01

    This paper reports data from a questionnaire-based UK study that examined occupational sex-role stereotypes, perceived occupational gender segregation, job knowledge and job preferences of male and female pupils aged 14-18 for 23 jobs. Data were collected from 508 pupils in total. Both boys and girls perceived the majority of the jobs as being…

  19. a Hadoop-Based Algorithm of Generating dem Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms so as to separate terrain points from other points, followed by a procedure that interpolates the selected points into DEM data. Because of the high point density, the whole procedure takes a long time and substantial computing resources, a problem that a number of studies have concentrated on. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is well suited to DEM generation algorithms and can improve their efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were used as the original data to generate a DEM with a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The two implementations were then compared in terms of efficiency, coding complexity, and performance-cost ratio. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid, and that the non-Hadoop implementation can achieve high performance when memory is large enough, whereas the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
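    The Map/Reduce pattern at the heart of the approach above can be sketched locally, without Hadoop: a mapper emits (grid-cell, elevation) pairs for each LiDAR point, and a reducer aggregates elevations per cell. The cell size and the plain averaging rule are illustrative assumptions; the paper's interpolation method is not reproduced here.

```python
# Hedged sketch of the Map/Reduce pattern for DEM gridding, simulated locally.
# Cell size and per-cell averaging are assumptions, not details from the paper.
from collections import defaultdict

CELL = 1.0  # assumed DEM grid spacing

def map_phase(points):
    """Emit (cell_key, elevation) pairs, as a Hadoop mapper would."""
    for x, y, z in points:
        yield (int(x // CELL), int(y // CELL)), z

def reduce_phase(pairs):
    """Group by cell key and average elevations, as a Hadoop reducer would."""
    cells = defaultdict(list)
    for key, z in pairs:
        cells[key].append(z)
    return {key: sum(zs) / len(zs) for key, zs in cells.items()}

points = [(0.2, 0.3, 10.0), (0.8, 0.9, 12.0), (1.5, 0.5, 20.0)]
dem = reduce_phase(map_phase(points))
```

Because each mapper output is keyed by cell, the per-cell work shards naturally across reducers, which is what lets the Hadoop version scale with point-set size.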

  20. Verification & Validation of High-Order Short-Characteristics-Based Deterministic Transport Methodology on Unstructured Grids

    SciTech Connect

    Azmy, Yousry; Wang, Yaqi

    2013-12-20

    The research team has developed a practical, high-order, discrete-ordinates, short characteristics neutron transport code for three-dimensional configurations represented on unstructured tetrahedral grids that can be used for realistic reactor physics applications at both the assembly and core levels. This project will perform a comprehensive verification and validation of this new computational tool against both a continuous-energy Monte Carlo simulation (e.g. MCNP) and experimentally measured data, an essential prerequisite for its deployment in reactor core modeling. Verification is divided into three phases. The team will first conduct spatial mesh and expansion order refinement studies to monitor convergence of the numerical solution to reference solutions. This is quantified by convergence rates that are based on integral error norms computed from the cell-by-cell difference between the code’s numerical solution and its reference counterpart. The latter is either analytic or very fine-mesh numerical solutions from independent computational tools. For the second phase, the team will create a suite of code-independent benchmark configurations to enable testing the theoretical order of accuracy of any particular discretization of the discrete ordinates approximation of the transport equation. For each tested case (i.e. mesh and spatial approximation order), researchers will execute the code and compare the resulting numerical solution to the exact solution on a per cell basis to determine the distribution of the numerical error. The final activity comprises a comparison to continuous-energy Monte Carlo solutions for zero-power critical configuration measurements at Idaho National Laboratory’s Advanced Test Reactor (ATR). Results of this comparison will allow the investigators to distinguish between modeling errors and the above-listed discretization errors introduced by the deterministic method, and to separate the sources of uncertainty.

  1. Grid-based performance evaluation of GCM-RCM combinations for rainfall reproduction

    NASA Astrophysics Data System (ADS)

    Danandeh Mehr, Ali; Kahya, Ercan

    2016-03-01

    Prior to hydrological assessment of climate change at catchment scale, an applied methodology is necessary to evaluate the performance of the climate models available for a given catchment. This study presents a grid-based performance evaluation approach as well as an intercomparison framework to evaluate the uncertainty of climate models for rainfall reproduction. For this purpose, we used outputs of two general circulation models (GCMs), namely ECHAM5 and CCSM3, downscaled by a regional climate model (RCM), namely RegCM3, over ten small to mid-size catchments in Rize Province, Turkey. To this end, five rainfall-related climatic statistics were computed from the outputs of the ECHAM5-RegCM3 and CCSM3-RegCM3 combinations and compared with those of observations in the province for the reference period 1961-1990. Performance of each combination is tested by means of scatter diagrams, bias, mean absolute bias, root mean squared error, and a model performance index (MPI). Our results indicated that ECHAM5-RegCM3 overestimates the total monthly rainfall observations whereas CCSM3-RegCM3 tends to underestimate. In terms of maximum monthly and annual maximum rainfall reproduction, ECHAM5-RegCM3 shows higher performance than CCSM3-RegCM3, particularly in the coastland areas. In contrast, CCSM3-RegCM3 outperforms ECHAM5-RegCM3 in reproducing the number of rainy days, especially in the inland areas. The results also revealed that if a GCM-RCM combination performs well for one portion (or statistic) of a catchment, it is not necessarily appropriate for the other portions (or statistics). Moreover, the MPI measure demonstrated the superiority of ECHAM5-RegCM3 over CCSM3-RegCM3, by up to 33%, for annual rainfall reproduction in Rize Province.
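    Three of the performance measures named above have standard definitions that can be shown directly; the paper's exact MPI formula is not given in this abstract and is therefore omitted. The monthly values below are synthetic.

```python
# Hedged sketch of standard grid-cell performance measures: bias, mean
# absolute bias (MAB), and root mean squared error (RMSE) between modelled
# and observed monthly rainfall. Data are synthetic.
import math

def bias(model, obs):
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def mean_absolute_bias(model, obs):
    return sum(abs(m - o) for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

model = [110.0, 95.0, 130.0]  # modelled monthly rainfall, mm
obs = [100.0, 100.0, 120.0]   # observed monthly rainfall, mm
```

A positive bias flags the systematic overestimation reported for ECHAM5-RegCM3, while MAB and RMSE capture magnitude of error regardless of sign.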

  2. A SUNTANS-based unstructured grid local exact particle tracking model

    NASA Astrophysics Data System (ADS)

    Liu, Guangliang; Chua, Vivien P.

    2016-07-01

    A parallel particle tracking model, which employs the local exact integration method to achieve high accuracy, has been developed and embedded in an unstructured-grid coastal ocean model, the Stanford Unstructured Nonhydrostatic Terrain-following Adaptive Navier-Stokes Simulator (SUNTANS). The particle tracking model is verified against traditional numerical integration methods, such as the fourth-order Runge-Kutta method, using several test cases. In two-dimensional linear steady rotating flow, the local exact particle tracking model tracks particles along the circular streamline accurately, while fourth-order Runge-Kutta methods produce trajectories that deviate from the streamlines. In periodically varying double-gyre flow, the trajectories produced by the local exact particle tracking model with a time step of 1.0 × 10⁻² s are similar to those obtained from the numerical integration methods with reduced time steps of 1.0 × 10⁻⁴ s. In three-dimensional steady Arnold-Beltrami-Childress (ABC) flow, the trajectories integrated with the local exact particle tracking model compare well with the approximated true path. The trajectories spiral upward and their projection on the x-y plane is a periodic ellipse. The trajectories derived with the fourth-order Runge-Kutta method deviate from the approximated true path, and their projections on the x-y plane are unclosed ellipses with growing long and short axes. The spatio-temporal resolution needs to be carefully chosen before particle tracking models are applied. Our results show that the developed local exact particle tracking model is accurate and suitable for marine Lagrangian (trajectory-based) research.
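    The rotating-flow benchmark above can be sketched with a plain fourth-order Runge-Kutta tracer in the solid-body rotation field u = (-y, x). This is the classical comparison method the abstract tests against, not the paper's local exact integrator; the velocity field, step size, and step count below are illustrative choices.

```python
# Hedged sketch: RK4 particle advection in 2-D solid-body rotation u = (-y, x).
# With a small step RK4 stays close to the circular streamline; the local
# exact method described in the paper would follow it exactly.

def velocity(p):
    x, y = p
    return (-y, x)

def rk4_step(p, dt):
    def add(a, b, s):
        return (a[0] + s * b[0], a[1] + s * b[1])
    k1 = velocity(p)
    k2 = velocity(add(p, k1, dt / 2))
    k3 = velocity(add(p, k2, dt / 2))
    k4 = velocity(add(p, k3, dt))
    return (p[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

p = (1.0, 0.0)                 # start on the unit circle
for _ in range(1000):          # integrate to t = 10
    p = rk4_step(p, 0.01)
radius = (p[0] ** 2 + p[1] ** 2) ** 0.5
```

The residual drift of `radius` from 1.0 is the streamline deviation the abstract attributes to truncation error; it grows as the time step is enlarged, which motivates the exact local integration alternative.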

  3. Overture: The grid classes

    SciTech Connect

    Brislawn, K.; Brown, D.; Chesshire, G.; Henshaw, W.

    1997-01-01

    Overture is a library containing classes for grids, overlapping grid generation and the discretization and solution of PDEs on overlapping grids. This document describes the Overture grid classes, including classes for single grids and classes for collections of grids.

  4. Fibonacci Grids

    NASA Technical Reports Server (NTRS)

    Swinbank, Richard; Purser, James

    2006-01-01

    Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, are what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
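    One common golden-angle construction of such a lattice on the unit sphere can be sketched directly; it realizes the equal-area, near-isotropic spacing described above, though the authors' exact formulation may differ in detail.

```python
# Hedged sketch: an N-point Fibonacci (golden-angle) lattice on the unit
# sphere. Each point takes an equal-area latitude band and a longitude
# advanced by the golden angle, giving near-uniform, isotropic coverage.
import math

def fibonacci_sphere(n):
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~137.5 degrees
    points = []
    for i in range(n):
        z = 1.0 - (2.0 * i + 1.0) / n                # equal-area spacing in z
        r = math.sqrt(max(0.0, 1.0 - z * z))         # radius of latitude circle
        theta = golden_angle * i
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

pts = fibonacci_sphere(500)
```

Every point carries the same surface area 4π/N, which is the "equal area for each grid point" property noted in the abstract; only the two polar caps need customized numerics.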

  5. Job Satisfaction of NAIA Head Coaches at Small Faith-Based Colleges: The Teacher-Coach Model

    ERIC Educational Resources Information Center

    Stiemsma, Craig L.

    2010-01-01

    The head coaches at smaller colleges usually have other job responsibilities that include teaching, along with the responsibilities of coaching, recruiting, scheduling, and other coaching-related jobs. There is often a dual role involved for these coaches who try to juggle two different jobs that sometimes require different skill sets and involve…

  6. Correspondence between Video CD-ROM and Community-Based Job Preferences for Individuals with Developmental Disabilities

    ERIC Educational Resources Information Center

    Ellerd, David A.; Morgan, Robert L.; Salzberg, Charles L.

    2006-01-01

    This study examined correspondence in selections of job preference across a video CD-ROM assessment program, community jobs observed during employment site visits, and photographs of employment sites. For 20 participants ages 18 - 22 with developmental disabilities, the video CD-ROM program was initially administered to identify preferred jobs,…

  7. Grid enabled Service Support Environment - SSE Grid

    NASA Astrophysics Data System (ADS)

    Goor, Erwin; Paepen, Martine

    2010-05-01

    The SSEGrid project is an ESA/ESRIN project which started in 2009 and is executed by two Belgian companies, Spacebel and VITO, and one Dutch company, Dutch Space. The main project objectives are the introduction of a Grid-based processing on demand infrastructure at the Image Processing Centre for earth observation products at VITO and the inclusion of Grid processing services in the Service Support Environment (SSE) at ESRIN. The Grid-based processing on demand infrastructure is meant to support a Grid processing on demand model for Principal Investigators (PI) and allow the design and execution of multi-sensor applications with geographically spread data while minimising the transfer of huge volumes of data. In the first scenario, 'support a Grid processing on demand model for Principal Investigators', we aim to provide processing power close to the EO data at the processing and archiving centres. We will allow a PI (a non-Grid expert user) to upload his own algorithm, as a process, and his own auxiliary data from the SSE Portal and use them in an earth observation workflow on the SSEGrid Infrastructure. The PI can design and submit workflows using his own processes, processes made available by VITO/ESRIN and possibly processes from other users that are available on the Grid. These activities must be user-friendly and must not require detailed knowledge of the underlying Grid middleware. In the second scenario we aim to design, implement and demonstrate a methodology to set up an earth observation processing facility, which uses large volumes of data from various geographically spread sensors. The aim is to provide solutions for problems that we face today, like wasting bandwidth by copying large volumes of data to one location. We will avoid this by processing the data where they are. The multi-mission Grid-based processing on demand infrastructure will allow developing and executing complex and massive multi-sensor data (re-)processing applications more

  8. Grid-Assembly: An oligonucleotide composition-based partitioning strategy to aid metagenomic sequence assembly.

    PubMed

    Ghosh, Tarini Shankar; Mehra, Varun; Mande, Sharmila S

    2015-06-01

    The metagenomics approach involves extraction, sequencing and characterization of the genomic content of an entire community of microbes present in a given environment. In contrast to genomic data, accurate assembly of metagenomic sequences is a challenging task. Given the huge volume and the diverse taxonomic origin of metagenomic sequences, direct application of single-genome assembly methods to metagenomes is likely not only to lead to an immense increase in computational infrastructure requirements, but also to result in the formation of chimeric contigs. A strategy to address the above challenge is to partition metagenomic sequence datasets into clusters and assemble the sequences in individual clusters separately using any single-genome assembly method. The current study presents such an approach that uses tetranucleotide usage patterns to first represent sequences as points in a three-dimensional (3D) space. The 3D space is subsequently partitioned into "grids". Sequences within overlapping grids are then progressively assembled using any available assembler. We demonstrate the applicability of the current Grid-Assembly method using various categories of assemblers as well as different simulated metagenomic datasets. Validation results indicate that the Grid-Assembly approach helps improve the overall quality of assembly, in terms of the purity and volume of the assembled contigs.
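    The grid-partitioning idea above can be sketched in miniature: each read becomes a point in a 3-D composition space, and points are binned into overlapping grid cells so each cell can be assembled independently. The three coordinates used here (GC content, AT skew, GC skew) are illustrative stand-ins; the paper derives its 3-D space from tetranucleotide usage patterns, and the cell size and overlap width below are also assumptions.

```python
# Hedged sketch of composition-based grid partitioning with overlapping cells.
# The 3-D coordinates, cell size, and overlap are illustrative assumptions.
from collections import defaultdict

def composition_point(seq):
    """Map a read to a 3-D composition point (GC content, AT skew, GC skew)."""
    a, c, g, t = (seq.count(b) for b in "ACGT")
    n = max(1, a + c + g + t)
    return ((g + c) / n,
            (a - t) / max(1, a + t),
            (g - c) / max(1, g + c))

def axis_cells(x, cell, overlap):
    """Cell indices along one axis; near a boundary, include the neighbour."""
    i = int(x // cell)
    ids = {i}
    if x - i * cell < overlap:
        ids.add(i - 1)
    if (i + 1) * cell - x < overlap:
        ids.add(i + 1)
    return ids

def grid_partition(seqs, cell=0.25, overlap=0.05):
    """Assign each sequence to every (possibly overlapping) 3-D grid cell."""
    cells = defaultdict(list)
    for s in seqs:
        p = composition_point(s)
        for ix in axis_cells(p[0], cell, overlap):
            for iy in axis_cells(p[1], cell, overlap):
                for iz in axis_cells(p[2], cell, overlap):
                    cells[(ix, iy, iz)].append(s)
    return cells

parts = grid_partition(["GGGG", "AAAA", "ACGT"])
```

The overlap duplicates boundary reads into adjacent cells, which is what lets contigs spanning a cell boundary still assemble; each cell's members can then be handed to any single-genome assembler.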

  9. Data distribution service-based interoperability framework for smart grid testbed infrastructure

    SciTech Connect

    Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.

    2016-03-02

    This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamic participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programing interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).

  10. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    NASA Technical Reports Server (NTRS)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination of necessary resources, facilities, and special personnel to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology is examined. This involves coordinating CFD activities among government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.

  11. Wiki-Based Rapid Prototyping for Teaching-Material Design in E-Learning Grids

    ERIC Educational Resources Information Center

    Shih, Wen-Chung; Tseng, Shian-Shyong; Yang, Chao-Tung

    2008-01-01

    Grid computing environments with abundant resources can support innovative e-Learning applications, and are promising platforms for e-Learning. To support individualized and adaptive learning, teachers are encouraged to develop various teaching materials according to different requirements. However, traditional methodologies for designing teaching…

  12. Data distribution service-based interoperability framework for smart grid testbed infrastructure

    DOE PAGES

    Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.

    2016-03-02

    This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamic participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programing interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).

  13. Grids = Structure.

    ERIC Educational Resources Information Center

    Barrington, Linda; Carter, Jacky

    2003-01-01

    Proposes that narrow columns provide a flexible system of organization for designers. Notes that grids serve the content on the pages, help to develop a layout that will clearly direct the reader to information; and prevent visual monotony. Concludes when grid layouts are used, school publications look as good as professional ones. (PM)

  14. Job submission and management through web services: the experience with the CREAM service

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Fina, S. D.; Ronco, S. D.; Dorigo, A.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Sgaravatto, M.; Verlato, M.; Zangrando, L.; Corvo, M.; Miccio, V.; Sciaba, A.; Cesini, D.; Dongiovanni, D.; Grandi, C.

    2008-07-01

    Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs to a Local Resource Management System. We developed a special component, called ICE (Interface to CREAM Environment) to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for integration of CREAM and ICE in gLite. We also discuss recent work towards enhancing CREAM with a BES and JSDL compliant interface.

  15. GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments

    NASA Astrophysics Data System (ADS)

    Chen, Zhanlong; Wu, Xin-cai; Wu, Liang

    2008-12-01

Computation Grids enable the coordinated sharing of large-scale distributed heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. The integration of multi-source, heterogeneous spatial information, the management of distributed spatial resources, and the sharing and cooperation of spatial data and Grid services are the key problems to resolve in the development of Grid GIS. The spatial index mechanism is a key technology of Grid GIS, and the performance of the spatial database affects the overall performance of a GIS in Grid environments. To improve the efficiency of parallel processing of massive spatial data in a distributed parallel grid computing environment, this paper presents GSHR-Tree, a new grid-slot hash parallel spatial index structure. Based on a hash table and dynamic spatial slots, it improves the structure of the classical parallel R-tree index and makes full use of the good qualities of the R-tree and hash data structures, yielding a parallel spatial index that meets the needs of parallel grid computing over massive spatial data in a distributed network. The algorithm partitions space into multiple slots and maps these slots to sites in the distributed parallel system. Each site builds the spatial objects in its slot into an R-tree. On the basis of this tree structure, the index data are distributed among multiple nodes in the grid network using the large-node R-tree method. Load imbalance during processing can be quickly corrected by a dynamic adjustment algorithm. This tree structure has considered the
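The slot-to-site mapping at the heart of the scheme can be sketched as follows: objects are binned into spatial slots (grid cells), and each slot is hashed to a site, which would then build an R-tree over its bucket. Cell size, coordinates, and site count below are illustrative, not from the paper:

```python
import math

def slot_id(x, y, cell=10.0):
    """Map a point to its spatial slot (grid cell) id. Cell size is illustrative."""
    return (int(math.floor(x / cell)), int(math.floor(y / cell)))

def site_for_slot(slot, n_sites):
    """Hash a slot id onto one of n_sites nodes of the distributed system."""
    return hash(slot) % n_sites

# Group objects by slot, then assign each slot to a site; in GSHR-Tree each
# site would build an R-tree over the objects in its slots.
objects = [(3.2, 7.9), (12.5, 1.1), (3.9, 8.4), (25.0, 30.0)]
buckets = {}
for x, y in objects:
    buckets.setdefault(slot_id(x, y), []).append((x, y))

placement = {slot: site_for_slot(slot, n_sites=4) for slot in buckets}
```

The hash step gives a cheap, stateless slot-to-site assignment; the dynamic adjustment algorithm mentioned in the abstract would then rebalance slots between sites when load becomes uneven.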

  16. Synchrophasor Sensing and Processing based Smart Grid Security Assessment for Renewable Energy Integration

    NASA Astrophysics Data System (ADS)

    Jiang, Huaiguang

With the evolution of energy and power systems, the emerging Smart Grid (SG) is mainly featured by distributed renewable energy generation, demand-response control and a huge amount of heterogeneous data sources. Widely distributed synchrophasor sensors, such as phasor measurement units (PMUs) and fault disturbance recorders (FDRs), can record multi-modal signals for power system situational awareness and renewable energy integration. An effective and economical approach is proposed for wide-area security assessment. This approach is based on wavelet analysis for detecting and locating the short-term and long-term faults in the SG, using voltage signals collected by distributed synchrophasor sensors. A data-driven approach for fault detection, identification and location is proposed and studied. This approach is based on matching pursuit decomposition (MPD) using a Gaussian atom dictionary, a hidden Markov model (HMM) of real-time frequency and voltage variation features, and fault contour maps generated by machine learning algorithms in SG systems. In addition, considering the economic issues, the placement optimization of distributed synchrophasor sensors is studied to reduce the number of sensors without affecting the accuracy and effectiveness of the proposed approach. Furthermore, because natural hazards are a critical issue for power system security, the approach is studied under different types of faults caused by natural hazards. A fast steady-state approach is proposed for voltage security of power systems with a wind power plant connected. The impedance matrix can be calculated from the voltage and current information collected by the PMUs. Based on the impedance matrix, the locations in the SG that cause the greatest impact on the voltage at the wind power plant's point of interconnection can be identified.
Furthermore, because this dynamic voltage security assessment method relies on time-domain simulations of faults at different locations, the proposed approach
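A minimal sketch of the wavelet-based disturbance detection described above, using one-level Haar detail coefficients on a voltage signal; the wavelet choice, threshold, and signal are illustrative, not the paper's:

```python
import math

def haar_details(signal):
    """One-level Haar wavelet detail coefficients of an even-length signal;
    large coefficients flag abrupt changes within a sample pair."""
    return [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2.0)
            for i in range(len(signal) // 2)]

def detect_fault(signal, threshold=0.5):
    """Return the sample index of the largest detail coefficient above the
    threshold (the location of an abrupt voltage change), or None."""
    d = haar_details(signal)
    i = max(range(len(d)), key=lambda k: abs(d[k]))
    return 2 * i if abs(d[i]) > threshold else None

# A steady 1.0 p.u. voltage with a sudden sag starting at sample 9
v = [1.0] * 9 + [0.2] * 11
```

A step that falls exactly on a pair boundary is invisible to a single pass; a second pass on the signal shifted by one sample covers that case. Real implementations use multi-level decompositions to separate the short-term and long-term faults the abstract distinguishes.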

  17. Grid-based molecular footprint comparison method for docking and de novo design: application to HIVgp41.

    PubMed

    Balius, Trent E; Allen, William J; Mukherjee, Sudipto; Rizzo, Robert C

    2013-05-30

Scoring functions are a critically important component of computer-aided screening methods for the identification of lead compounds during early stages of drug discovery. Here, we present a new multigrid implementation of the footprint similarity (FPS) scoring function that was recently developed in our laboratory which has proven useful for identification of compounds which bind to a protein on a per-residue basis in a way that resembles a known reference. The grid-based FPS method is much faster than its Cartesian-space counterpart, which makes it computationally tractable for on-the-fly docking, virtual screening, or de novo design. In this work, we establish that: (i) relatively few grids can be used to accurately approximate Cartesian space footprint similarity, (ii) the method yields improved success over the standard DOCK energy function for pose identification across a large test set of experimental co-crystal structures, for cross-docking, and for database enrichment, and (iii) grid-based FPS scoring can be used to tailor construction of new molecules to have specific properties, as demonstrated in a series of test cases targeting the viral protein HIVgp41. The method is available in the program DOCK6.
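The per-residue footprint comparison can be sketched as a distance between interaction-energy vectors. The residue labels and energies below are invented for illustration, and DOCK's actual FPS function may weight or normalize differently:

```python
import math

def footprint_distance(ref, cand):
    """Euclidean distance between two per-residue interaction-energy
    footprints (dicts residue -> energy, kcal/mol). Smaller means the
    candidate pose contacts the protein more like the reference does."""
    residues = set(ref) | set(cand)
    return math.sqrt(sum((ref.get(r, 0.0) - cand.get(r, 0.0)) ** 2
                         for r in residues))

# Hypothetical footprints: pose_a reproduces the reference contacts,
# pose_b binds through entirely different residues.
reference = {"TRP117": -4.1, "ILE120": -2.3, "ASP121": -1.8}
pose_a    = {"TRP117": -3.9, "ILE120": -2.5, "ASP121": -1.6}
pose_b    = {"TRP117": -0.2, "LYS150": -5.0}
```

A docking run would rank pose_a above pose_b here even if both had similar total energies, which is exactly the extra discrimination the footprint idea provides.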

  18. BatTri: A two-dimensional bathymetry-based unstructured triangular grid generator for finite element circulation modeling

    NASA Astrophysics Data System (ADS)

    Bilgili, Ata; Smith, Keston W.; Lynch, Daniel R.

    2006-06-01

    A brief summary of Delaunay unstructured triangular grid refinement algorithms, including the recent "off-centers" method, is provided and mesh generation requirements that are imperative to meet the criteria of the circulation modeling community are defined. A Matlab public-domain two-dimensional (2-D) mesh generation package (BatTri) based on these requirements is then presented and its efficiency shown through examples. BatTri consists of a graphical mesh editing interface and several bathymetry-based refinement algorithms, complemented by a set of diagnostic utilities to check and improve grid quality. The final output mesh node locations, node depths and element incidence list are obtained starting from only a basic set of bathymetric data. This simple but efficient setup allows fast interactive mesh customization and provides circulation modelers with problem-specific flexibility while satisfying the usual requirements on mesh size and element quality. A test of the "off-centers" method performed on 100 domains with randomly generated coastline and bathymetry shows an overall 25% reduction in the number of elements with only slight decrease in element quality. More importantly, this shows that BatTri is easily upgradeable to meet the future demands by the addition of new grid generation algorithms and Delaunay refinement schemes as they are made available.

  19. Equil: A Global Grid System

    NASA Astrophysics Data System (ADS)

    Hahn, Sebastian; Reimer, Christoph; Paulik, Christoph; Wagner, Wolfgang

    2016-08-01

    Geophysical parameters derived from space-borne Earth Observation Systems are either assigned to discrete points on a fixed Earth grid (e.g. a regular lon/lat grid) or located on orbital point nodes with a customized arrangement, often in line with the instrument's measurement geometry. The driving factors behind the choice and structure of a spatial reference system (i.e. the grid) are typically spatial resolution, instrument geometry, measurement technique, or application. In this study we propose a global grid system, the so-called Equil grid, and demonstrate its realization and structure. An exemplary Equil grid with a base sampling distance of 12.5 km is compared against two other grids commonly used in the domain of remote sensing of soil moisture. The simple nearly-equidistant grid design makes it interesting for a wide range of other geophysical parameters as well.
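One way to realize a nearly-equidistant global grid of the kind described is with latitude rings a fixed distance apart and a cos(lat)-scaled number of longitudes per ring. This sketch illustrates the design goal only and is not the actual Equil construction:

```python
import math

EARTH_R = 6371.0  # mean Earth radius, km

def ring_grid(spacing_km):
    """Nearly-equidistant global grid: latitude rings spacing_km apart,
    with the number of longitudes per ring scaled by cos(lat) so that
    neighbouring points are roughly spacing_km apart everywhere."""
    points = []
    n_rings = int(round(math.pi * EARTH_R / spacing_km))
    for i in range(n_rings):
        lat = -90.0 + (i + 0.5) * 180.0 / n_rings
        ring_len = 2.0 * math.pi * EARTH_R * math.cos(math.radians(lat))
        n_lon = max(1, int(round(ring_len / spacing_km)))
        for j in range(n_lon):
            points.append((lat, -180.0 + (j + 0.5) * 360.0 / n_lon))
    return points

# 100 km base sampling for the demo (coarser than the paper's 12.5 km)
grid = ring_grid(100.0)
```

Unlike a regular lon/lat grid, the point density here does not blow up toward the poles, which is what makes such designs attractive for global satellite products.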

  20. PATH: a work sampling-based approach to ergonomic job analysis for construction and other non-repetitive work.

    PubMed

    Buchholz, B; Paquet, V; Punnett, L; Lee, D; Moir, S

    1996-06-01

    A high prevalence and incidence of work-related musculoskeletal disorders have been reported in construction work. Unlike industrial production-line activity, construction work, as well as work in many other occupations (e.g. agriculture, mining), is non-repetitive in nature; job tasks are non-cyclic, or consist of long or irregular cycles. PATH (Posture, Activity, Tools and Handling), a work sampling-based approach, was developed to characterize the ergonomic hazards of construction and other non-repetitive work. The posture codes in the PATH method are based on the Ovako Work Posture Analysing System (OWAS), with other codes included for describing worker activity, tool use, loads handled and grasp type. For heavy highway construction, observations are stratified by construction stage and operation, using a taxonomy developed specifically for this purpose. Observers can code the physical characteristics of the job reliably after about 30 h of training. A pilot study of six construction laborers during four road construction operations suggests that laborers spend large proportions of time in non-neutral trunk postures and spend approximately 20% of their time performing manual material handling tasks. These results demonstrate how the PATH method can be used to identify specific construction operations and tasks that are ergonomically hazardous.
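The work-sampling estimate behind PATH can be sketched as a binomial proportion with a normal-approximation confidence interval; the snapshot codes below are invented OWAS-style labels, not the actual PATH taxonomy:

```python
import math

def sampled_proportion(observations, code):
    """Point estimate and 95% CI for the fraction of work time spent in a
    given posture/activity code, from instantaneous work-sampling snapshots."""
    n = len(observations)
    p = sum(1 for o in observations if o == code) / n
    half = 1.96 * math.sqrt(p * (1.0 - p) / n)   # normal approximation
    return p, (max(0.0, p - half), min(1.0, p + half))

# 200 hypothetical snapshots of one laborer during a paving operation
obs = ["trunk_neutral"] * 150 + ["trunk_flexed"] * 40 + ["mmh"] * 10
p, ci = sampled_proportion(obs, "trunk_flexed")
```

The CI width shows why work sampling suits non-repetitive jobs: precision depends only on the number of snapshots, not on any cycle structure, so rare hazardous postures simply require more observations.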

  1. XML-based data model and architecture for a knowledge-based grid-enabled problem-solving environment for high-throughput biological imaging.

    PubMed

    Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif

    2008-03-01

    High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets and the second level provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an extensible markup language-based language for modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
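A hypothetical CIML-style fragment for one spatiotemporal event, built with ElementTree; the tag and attribute names are invented for illustration, since the abstract does not give the actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical CIML-style description of a spatiotemporal event: a cell
# observed translocating between two frames. Tag/attribute names are
# illustrative only, not CIML's actual schema.
event = ET.Element("event", {"type": "translocation"})
obj = ET.SubElement(event, "object", {"id": "cell-42", "class": "lymphocyte"})
ET.SubElement(obj, "position", {"frame": "1", "x": "118", "y": "240"})
ET.SubElement(obj, "position", {"frame": "2", "x": "131", "y": "236"})

xml_text = ET.tostring(event, encoding="unicode")
```

Representing events this way is what enables the composition and matching the abstract describes: a higher-level "chemotaxis" event, say, could be defined as a pattern over several such translocation elements.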

  2. Simulation of Shallow Water Jets with a Unified Element-based Continuous/Discontinuous Galerkin Model with Grid Flexibility on the Sphere

    DTIC Science & Technology

    2013-01-01

    Gordon and Hall (1973); Eriksson (1984)). The way this is done will be described shortly. The main Lat-Lon region is composed of four faces obtained from...also referred to as Hex in the Figures and Tables throughout the paper) has only 6 faces and they are all equal. An example is shown in Fig. 4. The...sufficiently high. We will discuss this issue shortly. Figure 4. Conforming Hex grid. 2.3. Quad-based icosahedral grid The quad-based icosahedral grid

  3. A Java commodity grid kit.

    SciTech Connect

    von Laszewski, G.; Foster, I.; Gawor, J.; Lane, P.; Mathematics and Computer Science

    2001-07-01

    In this paper we report on the features of the Java Commodity Grid Kit. The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus protocols, allowing the Java CoG Kit to communicate also with the C Globus reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise, and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus software. In this paper we also report on the efforts to develop server side Java CoG Kit components. As part of this research we have implemented a prototype pure Java resource management system that enables one to run Globus jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.

  4. Expanding access to off-grid rural electrification in Africa: An analysis of community-based micro-grids in Kenya

    NASA Astrophysics Data System (ADS)

    Kirubi, Charles Gathu

    Community micro-grids have played a central role in increasing access to off-grid rural electrification (RE) in many regions of the developing world, notably South Asia. However, the promise of community micro-grids in sub-Sahara Africa remains largely unexplored. My study explores the potential and limits of community micro-grids as options for increasing access to off-grid RE in sub-Sahara Africa. Contextualized in five community micro-grids in rural Kenya, my study is framed through theories of collective action and combines qualitative and quantitative methods, including household surveys, electronic data logging and regression analysis. The main contribution of my research is demonstrating the circumstances under which community micro-grids can contribute to rural development and the conditions under which individuals are likely to initiate and participate in such projects collectively. With regard to rural development, I demonstrate that access to electricity enables the use of electric equipment and tools by small and micro-enterprises, resulting in significant improvement in productivity per worker (100--200% depending on the task at hand) and a corresponding growth in income levels in the order of 20--70%, depending on the product made. Access to electricity simultaneously enables and improves delivery of social and business services from a wide range of village-level infrastructure (e.g. schools, markets, water pumps) while improving the productivity of agricultural activities. Moreover, when local electricity users have an ability to charge and enforce cost-reflective tariffs and electricity consumption is closely linked to productive uses that generate incomes, cost recovery is feasible. By their nature---a new technology delivering highly valued services by the elites and other members, limited local experience and expertise, high capital costs---community micro-grids are good candidates for elite-domination. 
Even so, elite control does not necessarily

  5. Performance of algebraic multi-grid solvers based on unsmoothed and smoothed aggregation schemes

    NASA Astrophysics Data System (ADS)

    Webster, R.

    2001-08-01

    A comparison is made of the performance of two algebraic multi-grid (AMG0 and AMG1) solvers for the solution of discrete, coupled, elliptic field problems. In AMG0, the basis functions for each coarse grid/level approximation (CGA) are obtained directly by unsmoothed aggregation, an appropriate scaling being applied to each CGA to improve consistency. In AMG1 they are assembled using a smoothed aggregation with a constrained energy optimization method providing the smoothing. Although more costly, smoothed basis functions provide a better (more consistent) CGA. Thus, AMG1 might be viewed as a benchmark for the assessment of the simpler AMG0. Selected test problems for D'Arcy flow in pipe networks, Fick diffusion, plane strain elasticity and Navier-Stokes flow (in a Stokes approximation) are used in making the comparison. They are discretized on the basis of both structured and unstructured finite element meshes. The range of discrete equation sets covers both symmetric positive definite systems and systems that may be non-symmetric and/or indefinite. Both global and local mesh refinements to at least one order of resolving power are examined. Some of these include anisotropic refinements involving elements of large aspect ratio; in some hydrodynamics cases, the anisotropy is extreme, with aspect ratios exceeding two orders. As expected, AMG1 delivers typical multi-grid convergence rates, which for all practical purposes are independent of mesh bandwidth. AMG0 rates are slower. They may also be more discernibly mesh-dependent. However, for the range of mesh bandwidths examined, the overall cost effectiveness of the two solvers is remarkably similar when a full convergence to machine accuracy is demanded. Thus, the shorter solution times for AMG1 do not necessarily compensate for the extra time required for its costly grid generation. This depends on the severity of the problem and the demanded level of convergence. 
For problems requiring few iterations, where grid

  6. Your Job.

    ERIC Educational Resources Information Center

    Torre, Liz; And Others

    Information and accompanying exercises are provided in this learning module to reinforce basic reading, writing, and math skills and, at the same time, introduce personal assessment and job-seeking techniques. The module's first section provides suggestions for assessing personal interests and identifying the assets one has to offer an employer.…

  7. Job Ready.

    ERIC Educational Resources Information Center

    Easter Seal Society for Crippled Children and Adults of Washington, Seattle.

    Intended for use by employers for assessing how "job-ready" their particular business environment may be, the booklet provides information illustrating what physical changes could be made to allow persons with mobility limitations to enter and conduct business independently in a particular building. Illustrations along with brief explanations are…

  8. Job Olympics.

    ERIC Educational Resources Information Center

    Gerweck, Debra R.; Chauza, Phyllis J.

    This document consists of materials on Hiawatha (Kansas) High School's 1993 Job Olympics, a competition for high school students with disabilities. The materials are those included in a packet for student participants. A cover/information sheet details eligibility, entry deadline, date and place of competition, opening ceremonies, events, and a…

  9. A grid spacing control technique for algebraic grid generation methods

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Kudlinski, R. A.; Everton, E. L.

    1982-01-01

    A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
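The mapping idea can be sketched with a simple monotone control function; a tanh stretch stands in here for the paper's smoothed cubic splines, and all parameters are illustrative:

```python
import math

def control_function(xi, beta=3.0):
    """Monotone map [0,1] -> [0,1] that clusters points near both ends;
    a tanh stand-in for the smoothed-cubic-spline control functions."""
    return 0.5 * (1.0 + math.tanh(beta * (2.0 * xi - 1.0)) / math.tanh(beta))

def graded_grid(n, a=0.0, b=1.0, beta=3.0):
    """Map a uniform computational grid of n points onto the physical
    interval [a, b] through the control function."""
    return [a + (b - a) * control_function(i / (n - 1), beta) for i in range(n)]

x = graded_grid(11)   # points crowd toward x=0 and x=1, sparse in the middle
```

Replacing the tanh with a spline fitted through user-supplied control points gives the interactive behaviour described: dragging a control point reshapes the spacing function, and the grid redraws immediately.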

  10. Autonomous, Decentralized Grid Architecture: Prosumer-Based Distributed Autonomous Cyber-Physical Architecture for Ultra-Reliable Green Electricity Networks

    SciTech Connect

    2012-01-11

    GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.

  11. Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers

    DTIC Science & Technology

    2005-01-14

    [Garbled OCR excerpt. Recoverable content: keywords are Reinforcement learning, Job Scheduling, Computational Grid; the utilities have good signal-to-noise ratios and are "factored", i.e. aligned; scheduling decisions are made based on the agents' probability vectors, which in turn are set using reinforcement learning algorithms; reported values include TG 0.6376 (7.48%) and DU 0.6911 (41.98%).]

  12. Fuzzy logic, PSO based fuzzy logic algorithm and current controls comparative for grid-connected hybrid system

    NASA Astrophysics Data System (ADS)

    Borni, A.; Abdelkrim, T.; Zaghba, L.; Bouchakour, A.; Lakhdari, A.; Zarour, L.

    2017-02-01

    In this paper the model of a grid-connected hybrid system is presented. The hybrid system includes a variable-speed wind turbine controlled by a fuzzy MPPT control, and a photovoltaic generator controlled by a PSO-fuzzy MPPT control to compensate the power fluctuations caused by the wind in the short and long term; the inverter currents injected into the grid are controlled by a decoupled PI current control. In the first phase, we start by modeling the conversion system components: the wind system consists of a turbine coupled to a gearless permanent magnet generator (PMG), and the AC/DC and DC/DC (boost) converters feed the electric energy produced by the PMG to the DC link. The solar system consists of a photovoltaic generator (GPV) connected to a DC/DC boost converter controlled by a PSO-fuzzy MPPT control to extract at any moment the maximum power available at the GPV terminals; the system makes maximum use of both sources because of their complementarity. In the end, the active power reaching the DC link is injected into the grid through a DC/AC inverter; this is achieved by controlling the DC bus voltage to keep it constant and close to its reference value. The simulation studies have been performed using Matlab/Simulink. It can be concluded that good control system performance can be achieved.
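The decoupled PI current control mentioned above can be sketched for a single (decoupled) current axis; the plant parameters, gains, and time step below are invented for illustration, not the paper's values:

```python
def pi_controller(kp, ki, dt):
    """Discrete PI controller; returns a stateful step function."""
    state = {"integral": 0.0}
    def step(error):
        state["integral"] += error * dt
        return kp * error + ki * state["integral"]
    return step

# Toy first-order plant for one decoupled current axis: L di/dt = v - R*i
R, L, dt = 0.5, 0.01, 1e-4
ctrl = pi_controller(kp=2.0, ki=200.0, dt=dt)
i, i_ref = 0.0, 10.0
for _ in range(2000):              # 0.2 s of simulation
    v = ctrl(i_ref - i)            # PI sets the inverter voltage command
    i += dt * (v - R * i) / L      # explicit Euler step of the current
```

In the real dq-frame scheme, cross-coupling terms (omega*L*i) are fed forward so each axis reduces to a first-order loop like this one, which is what "decoupled" refers to.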

  13. Wave height possibility distribution characteristics of significant wave height in China Sea based on multi-satellite grid data

    NASA Astrophysics Data System (ADS)

    Han, W.; Yang, J.

    2016-11-01

    This paper discusses the probability distribution characteristics of significant wave height (SWH) in the China Sea based on multi-satellite grid data. The grid SWH data merge corrected altimeter data from six satellites (TOPEX/Poseidon, Jason-1/2, ENVISAT, Cryosat-2, HY-2A) into a global SWH grid for 2000-2015 using the Inverse Distance Weighting method. We compare the wave height probability distributions of two schemes, where scheme two includes all six satellites and scheme one includes the other five satellites without HY-2A, over two wave height intervals, [0, 25) m and [4, 25) m. The two schemes have closely matching probability distributions and trends, differing only in the interval [0.4, 1.8) m, which accounts for over 70% of the probability. Focusing on scheme two, we find that the interval of greatest probability is [0.6, 3) m, and the probability that the SWH exceeds 4 m is less than 0.18%.
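The Inverse Distance Weighting merge used to build the grid can be sketched as follows; the coordinates and SWH values are invented:

```python
import math

def idw(sample_points, grid_point, power=2.0):
    """Inverse Distance Weighting: estimate SWH at a grid node from nearby
    altimeter samples given as (lon, lat, swh) tuples."""
    gx, gy = grid_point
    num = den = 0.0
    for x, y, swh in sample_points:
        d = math.hypot(x - gx, y - gy)
        if d < 1e-9:               # a sample coincides with the node
            return swh
        w = 1.0 / d ** power       # closer samples weigh more
        num += w * swh
        den += w
    return num / den

samples = [(120.0, 20.0, 2.1), (121.0, 20.0, 2.5), (120.5, 21.0, 1.8)]
h = idw(samples, (120.5, 20.2))
```

IDW always returns a value inside the range of its inputs, so merging an extra satellite (HY-2A here) can only reshape the distribution where its samples actually fall, consistent with the small differences reported above.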

  14. Grid Inertial Response-Based Probabilistic Determination of Energy Storage System Capacity Under High Solar Penetration

    DOE PAGES

    Yue, Meng; Wang, Xiaoyu

    2015-07-01

    It is well-known that responsive battery energy storage systems (BESSs) are an effective means to improve the grid inertial response to various disturbances including the variability of the renewable generation. One of the major issues associated with its implementation is the difficulty in determining the required BESS capacity mainly due to the large amount of inherent uncertainties that cannot be accounted for deterministically. In this study, a probabilistic approach is proposed to properly size the BESS from the perspective of the system inertial response, as an application of probabilistic risk assessment (PRA). The proposed approach enables a risk-informed decision-making process regarding (1) the acceptable level of solar penetration in a given system and (2) the desired BESS capacity (and minimum cost) to achieve an acceptable grid inertial response with a certain confidence level.
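The probabilistic sizing idea can be sketched as a Monte Carlo percentile calculation; the toy frequency-response model and every number below are invented, standing in for the paper's actual PRA machinery:

```python
import random

def required_bess_power(solar_frac, disturbance_mw):
    """Toy model: power the BESS must inject so the frequency nadir stays
    acceptable; rotational-inertia headroom shrinks as inverter-based
    solar displaces synchronous generation. All constants illustrative."""
    inertia_headroom = 50.0 * (1.0 - solar_frac)   # MW
    return max(0.0, disturbance_mw - inertia_headroom)

def size_bess(solar_frac, confidence=0.95, trials=20000, seed=1):
    """Pick the BESS capacity covering `confidence` of sampled disturbances."""
    rng = random.Random(seed)
    needs = sorted(required_bess_power(solar_frac,
                                       rng.gauss(40.0, 15.0))  # MW, sampled
                   for _ in range(trials))
    return needs[int(confidence * trials)]

cap_low  = size_bess(solar_frac=0.2)
cap_high = size_bess(solar_frac=0.6)
```

Raising the confidence level or the solar fraction pushes the percentile (and hence the recommended capacity) upward, which is the risk-informed trade-off the abstract describes.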

  15. Grid Inertial Response-Based Probabilistic Determination of Energy Storage System Capacity Under High Solar Penetration

    SciTech Connect

    Yue, Meng; Wang, Xiaoyu

    2015-07-01

    It is well-known that responsive battery energy storage systems (BESSs) are an effective means to improve the grid inertial response to various disturbances including the variability of the renewable generation. One of the major issues associated with its implementation is the difficulty in determining the required BESS capacity mainly due to the large amount of inherent uncertainties that cannot be accounted for deterministically. In this study, a probabilistic approach is proposed to properly size the BESS from the perspective of the system inertial response, as an application of probabilistic risk assessment (PRA). The proposed approach enables a risk-informed decision-making process regarding (1) the acceptable level of solar penetration in a given system and (2) the desired BESS capacity (and minimum cost) to achieve an acceptable grid inertial response with a certain confidence level.

  16. Three-Dimensional Optimal Shape Design in Heat Transfer Based on Body-fitted Grid Generation

    NASA Astrophysics Data System (ADS)

    Mohebbi, Farzad; Sellier, Mathieu

    2013-10-01

    This paper is concerned with an optimal shape design (shape optimization) problem in heat transfer. As an inverse steady-state heat transfer problem, given a body locally heated by a specified heat flux and exposed to convective heat transfer on parts of its boundary, the aim is to find the optimal shape of this body such that the temperature is constant on a desired subset of its boundary. The numerical method to achieve this aim consists of a three-dimensional elliptic grid generation technique to generate a mesh over the body and solve for a heat conduction equation. This paper describes a novel sensitivity analysis scheme to compute the sensitivity of the temperatures to variation of grid node positions and the conjugate gradient method (CGM) is used as an optimization algorithm to minimize the difference between the computed temperature on the boundary and desired temperature. The elliptic grid generation technique allows us to map the physical domain (body) onto a fixed computational domain and to discretize the heat conduction equation using the finite difference method (FDM).

  17. Fabrication of a flexible Ag-grid transparent electrode using ac based electrohydrodynamic Jet printing

    NASA Astrophysics Data System (ADS)

    Park, Jaehong; Hwang, Jungho

    2014-10-01

    In the dc voltage-applied electrohydrodynamic (EHD) jet printing of metal nanoparticles, the residual charge of droplets deposited on a substrate changes the electrostatic field distribution and interrupts the subsequent printing behaviour, especially for insulating substrates that have slow charge decay rates. In this paper, a sinusoidal ac voltage was used in the EHD jet printing process to switch the charge polarity of droplets containing Ag nanoparticles, thereby neutralizing the charge on a polyethylene terephthalate (PET) substrate. Printed Ag lines with a width of 10 µm were invisible to the naked eye. After sintering lines with 500 µm of line pitch at 180 °C, a grid-type transparent electrode (TE) with a sheet resistance of ˜7 Ω sq-1 and a dc to optical conductivity ratio of ˜300 at ˜84.2% optical transmittance was obtained, values that were superior to previously reported results. In order to evaluate the durability of the TE under bending stresses, the sheet resistance was measured as the number of bending cycles was increased. The sheet resistance of the Ag grid electrode increased only slightly, by less than 20% from its original value, even after 500 cycles. To the best of our knowledge, this is the first time that Ag (invisible) grid TEs have been fabricated on PET substrates by ac voltage applied EHD jet printing.

  18. Impacts of Inverter-Based Advanced Grid Support Functions on Islanding Detection

    SciTech Connect

    Nelson, Austin; Hoke, Anderson; Miller, Brian; Chakraborty, Sudipta; Bell, Frances; McCarty, Michael

    2016-12-12

    A long-standing requirement for inverters paired with distributed energy resources is that they disconnect from the electrical power system (EPS) when an electrical island is formed. In recent years, advanced grid support controls have been developed for inverters to provide voltage and frequency support by integrating functions such as voltage and frequency ride-through, volt-VAr control, and frequency-Watt control. With these new capabilities integrated into the inverter, additional examination is needed to determine how voltage and frequency support will impact pre-existing inverter functions such as island detection. This paper examines how advanced grid support functions affect an inverter's ability to detect the formation of an electrical island. Results are presented for unintentional islanding laboratory tests of three common residential-scale photovoltaic inverters performing various combinations of grid support functions. For the inverters tested, grid support functions prolonged island disconnection times slightly; however, in all scenarios the inverters disconnected well within two seconds, the limit imposed by IEEE Std 1547-2003.

  19. A GPU-based incompressible Navier-Stokes solver on moving overset grids

    NASA Astrophysics Data System (ADS)

    Chandar, Dominic D. J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.

    2013-07-01

    In pursuit of obtaining high fidelity solutions to the fluid flow equations in a short span of time, graphics processing units (GPUs), which were originally intended for gaming applications, are currently being used to accelerate computational fluid dynamics (CFD) codes. With a high peak throughput of about 1 TFLOPS on a PC, GPUs seem favourable for many high-resolution computations. One such computation that involves a lot of number crunching is computing time-accurate flow solutions past moving bodies. The aim of the present paper is thus to discuss the development of a flow solver on unstructured and overset grids and its implementation on GPUs. In its present form, the flow solver solves the incompressible fluid flow equations on unstructured/hybrid/overset grids using a fully implicit projection method. The resulting discretised equations are solved using a matrix-free Krylov solver with several GPU kernels such as gradient, Laplacian and reduction. Some of the simple arithmetic vector calculations are implemented using the CU++ approach (An Object Oriented Framework for Computational Fluid Dynamics Applications using Graphics Processing Units, Journal of Supercomputing, 2013, doi:10.1007/s11227-013-0985-9), in which GPU kernels are automatically generated at compile time. Results are presented for two- and three-dimensional computations on static and moving grids.
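"Matrix-free" in the Krylov solve above means the operator is only ever applied, never stored as a matrix. The sketch below runs conjugate gradient against a 1-D Laplacian stencil expressed as a function, standing in for the solver's gradient/Laplacian kernels; it is a plain-Python illustration of the idea, not the paper's GPU code.

```python
def laplacian_1d(u):
    """Apply the 1-D Laplacian stencil (Dirichlet ends) without
    ever forming the tridiagonal matrix."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        out[i] = 2.0 * u[i] - left - right
    return out

def cg_matrix_free(apply_A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient where A is available only as a function."""
    x = [0.0] * len(b)
    r = list(b)            # residual b - A x, with x = 0 initially
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

b = [1.0] * 32
x = cg_matrix_free(laplacian_1d, b)   # solves A x = b without storing A
```

The same structure carries over to GPUs: `apply_A` becomes a kernel launch, and the dot products become reduction kernels.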

  20. Job Readiness Training Curriculum.

    ERIC Educational Resources Information Center

    Tesolowski, Dennis G.

    Designed for professionals in rehabilitation settings, this curriculum guide presents fifteen lessons that focus on preparing to seek a job, job seeking, and job maintenance. Among the lesson titles included in the guide are (1) How to Find the Right Job and Categories of Jobs, (2) Self-Expressed Interests and Attitudes for Specific Jobs, (3)…

  1. Job-Preference and Job-Matching Assessment Results and Their Association with Job Performance and Satisfaction among Young Adults with Developmental Disabilities

    ERIC Educational Resources Information Center

    Hall, Julie; Morgan, Robert L.; Salzberg, Charles L.

    2014-01-01

    We investigated the effects of preference and degree of match on job performance of four 19 to 20-year-old young adults with developmental disabilities placed in community-based job conditions. We identified high-preference, high-matched and low-preference, low-matched job tasks using a video web-based assessment program. The job matching…

  2. Wireless Communications in Smart Grid

    NASA Astrophysics Data System (ADS)

    Bojkovic, Zoran; Bakmaz, Bojan

    Communication networks play a crucial role in smart grid, as the intelligence of this complex system is built based on information exchange across the power grid. Wireless communications and networking are among the most economical ways to build the essential part of the scalable communication infrastructure for smart grid. In particular, wireless networks will be deployed widely in the smart grid for automatic meter reading, remote system and customer site monitoring, as well as equipment fault diagnosing. With an increasing interest from both the academic and industrial communities, this chapter systematically investigates recent advances in wireless communication technology for the smart grid.

  3. Grid oscillators

    NASA Technical Reports Server (NTRS)

    Popovic, Zorana B.; Kim, Moonil; Rutledge, David B.

    1988-01-01

    Loading a two-dimensional grid with active devices offers a means of combining the power of solid-state oscillators in the microwave and millimeter-wave range. The grid structure allows a large number of negative resistance devices to be combined. This approach is attractive because the active devices do not require an external locking signal, and the combining is done in free space. In addition, the loaded grid is a planar structure amenable to monolithic integration. Measurements on a 25-MESFET grid at 9.7 GHz show power-combining and frequency-locking without an external locking signal, with an ERP of 37 W. Experimental far-field patterns agree with theoretical results obtained using reciprocity.

  4. Design and optimization of smart grid system based on renewable energy in Nyamuk Island, Karimunjawa district, Central Java

    NASA Astrophysics Data System (ADS)

    Novitasari, D.; Indartono, Y. S.; Rachmidha, T. D.; Reksowardojo, I. K.; Irsyad, M.

    2017-03-01

    Nyamuk Island in Karimunjawa District is one of the regions in Java that has no access to the electricity grid. The electricity in Nyamuk Island relies on a diesel engine which is managed by the local government and only operated for 6 hours per day, a consequence of high fuel cost. A study on a smart micro grid system based on renewable energy was conducted in the Combustion Engine and Propulsion System Laboratory of Institut Teknologi Bandung using 1 kWp solar panels and a 3 kW bio-based diesel engine. The fuels used to run the bio-based diesel engine were diesel, virgin coconut oil and pure palm oil. The results show that the smart grid system ran well at varying load and with different fuels. Based on the experiments, average inverter efficiency was about 87%. These experiments proved that the use of biofuels had no effect on the overall system performance. Based on the results of the prototype experiments, this paper focuses on the design and optimization of a smart micro grid system using HOMER software for Nyamuk Island. The design consists of (1) a diesel engine existing in Nyamuk Island fuelled by diesel, (2) a lister engine fuelled by vegetable oil from Calophyllum inophyllum, (3) solar panels, (4) batteries and (5) a converter. In this simulation, the existing diesel engine was set to operate 2 hours per day, while the operating time of the lister engine was varied over several scenarios. In scenario I, the lister engine was operated 5 hours per day; in scenario II, 24 hours per day; and in scenario III, 8 hours per week at the weekend. In addition, a design using a modified diesel engine was considered as well, with the assumption that the modification cost was about 10% of the cost of a new diesel engine. By modifying the diesel engine, the system will not need a lister engine. Assessments were carried out to evaluate the designs, and the results show that the optimal value is obtained by the lister engine

  5. A grid-based distributed flood forecasting model for use with weather radar data: Part 1. Formulation

    NASA Astrophysics Data System (ADS)

    Bell, V. A.; Moore, R. J.

    A practical methodology for distributed rainfall-runoff modelling using grid square weather radar data is developed for use in real-time flood forecasting. The model, called the Grid Model, is configured so as to share the same grid as used by the weather radar, thereby exploiting the distributed rainfall estimates to the full. Each grid square in the catchment is conceptualised as a storage which receives water as precipitation and generates water by overflow and drainage. This water is routed across the catchment using isochrone pathways. These are derived from a digital terrain model assuming two fixed velocities of travel for land and river pathways which are regarded as model parameters to be optimised. Translation of water between isochrones is achieved using a discrete kinematic routing procedure, parameterised through a single dimensionless wave speed parameter, which advects the water and incorporates diffusion effects through the discrete space-time formulation. The basic model routes overflow and drainage separately through a parallel system of kinematic routing reaches, characterised by different wave speeds but using the same isochrone-based space discretisation; these represent fast and slow pathways to the basin outlet, respectively. A variant allows the slow pathway to have separate isochrones calculated using Darcy velocities controlled by the hydraulic gradient as estimated by the local gradient of the terrain. Runoff production within a grid square is controlled by its absorption capacity which is parameterised through a simple linkage function to the mean gradient in the square, as calculated from digital terrain data. This allows absorption capacity to be specified differently for every grid square in the catchment through the use of only two regional parameters and a DTM measurement of mean gradient for each square. An extension of this basic idea to consider the distribution of gradient within the square leads analytically to a Pareto
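The discrete kinematic routing described above can be illustrated with a first-order upwind update: a single dimensionless wave-speed (Courant-like) parameter advects water downstream between isochrone stores, and the scheme's truncation error supplies the diffusion the abstract mentions. The store layout, parameter value and inflow pulse below are illustrative, not the Grid Model's calibration.

```python
def route_step(storage, inflow, theta):
    """One routing step over a chain of isochrone stores.
    theta in (0, 1] is the dimensionless wave-speed parameter; the
    first-order upwind form advects water downstream while its
    truncation error acts as numerical diffusion."""
    n = len(storage)
    new = [0.0] * n
    for i in range(n):
        upstream = inflow if i == 0 else theta * storage[i - 1]
        new[i] = (1.0 - theta) * storage[i] + upstream
    outflow = theta * storage[-1]   # water advected past the basin outlet
    return new, outflow

# A single pulse of runoff entering the most remote isochrone band,
# then routed to the outlet with no further input.
stores = [0.0] * 5
total_out = 0.0
stores, q = route_step(stores, inflow=10.0, theta=0.5)
total_out += q
for _ in range(200):
    stores, q = route_step(stores, inflow=0.0, theta=0.5)
    total_out += q
```

The update conserves mass exactly: at every step the water held in the stores plus the cumulative outflow equals the water that entered.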

  6. Clinical Decision Support Systems (CDSS) in GRID Environments.

    PubMed

    Blanquer, Ignacio; Hernández, Vicente; Segrelles, Damià; Robles, Montserrat; García, Juan Miguel; Robledo, Javier Vicente

    2005-01-01

    This paper presents an architecture defined for searching and executing Clinical Decision Support Systems (CDSS) in a LCG2/GT2 Grid environment, using web-based protocols. A CDSS is a system that provides a classification of the patient illness according to the knowledge extracted from clinical practice, using the patient's information in a structured format. The CDSS classification engines can be installed at any site and used by different medical users from a Virtual Organization (VO). All users in a VO can consult and execute the different classification engines installed in the Grid, independently of the platform, architecture or site where the engines are installed or the users are located. The present paper presents a solution to requirements such as short-job execution, reducing the response delay on LCG2 environments and providing grid-enabled authenticated access through web portals. Resource discovery and job submission are performed through web services, which are also described in the article.

  7. Grid infrastructure to support science portals for large scale instruments.

    SciTech Connect

    von Laszewski, G.; Foster, I.

    1999-09-29

    Soon, a new generation of scientific workbenches will be developed as a collaborative effort among various research institutions in the US. These scientific workbenches will be accessed on the Web via portals. Reusable components are needed to build such portals for different scientific disciplines, allowing uniform desktop access to remote resources. Such components will include tools and services enabling easy collaboration, job submission, job monitoring, component discovery, and persistent object storage. Based on experience gained from Grand Challenge applications for large-scale instruments, we demonstrate how Grid infrastructure components can be used to support the implementation of science portals. The availability of these components will simplify the prototype implementation of a common portal architecture.

  8. Residential Customer Enrollment in Time-based Rate and Enabling Technology Programs: Smart Grid Investment Grant Consumer Behavior Study Analysis

    SciTech Connect

    Todd, Annika; Cappers, Peter; Goldman, Charles

    2013-05-01

    The U.S. Department of Energy’s (DOE’s) Smart Grid Investment Grant (SGIG) program is working with a subset of the 99 SGIG projects undertaking Consumer Behavior Studies (CBS), which examine the response of mass market consumers (i.e., residential and small commercial customers) to time-varying electricity prices (referred to herein as time-based rate programs) in conjunction with the deployment of advanced metering infrastructure (AMI) and associated technologies. The effort presents an opportunity to advance the electric industry’s understanding of consumer behavior.

  9. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.

  10. Beyond grid security

    NASA Astrophysics Data System (ADS)

    Hoeft, B.; Epting, U.; Koenig, T.

    2008-07-01

    While many fields relevant to Grid security are already covered by existing working groups, their remit rarely goes beyond the scope of the Grid infrastructure itself. However, security issues pertaining to the internal set-up of compute centres have at least as much impact on Grid security. Thus, this talk will present briefly the EU ISSeG project (Integrated Site Security for Grids). In contrast to groups such as OSCT (Operational Security Coordination Team) and JSPG (Joint Security Policy Group), the purpose of ISSeG is to provide a holistic approach to security for Grid computer centres, from strategic considerations to an implementation plan and its deployment. The generalised methodology of Integrated Site Security (ISS) is based on the knowledge gained during its implementation at several sites as well as through security audits, and this will be briefly discussed. Several examples of ISS implementation tasks at the Forschungszentrum Karlsruhe will be presented, including segregation of the network for administration and maintenance and the implementation of Application Gateways. Furthermore, the web-based ISSeG training material will be introduced. This aims to offer ISS implementation guidance to other Grid installations in order to help avoid common pitfalls.

  11. A novel multi-model neuro-fuzzy-based MPPT for three-phase grid-connected photovoltaic system

    SciTech Connect

    Chaouachi, Aymen; Kamel, Rashad M.; Nagasaka, Ken

    2010-12-15

    This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three multi-layered feed-forward Artificial Neural Networks (ANN). Inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate ANN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural network-based approach, is its distinct generalization ability with respect to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural network-based multi-model machine learning scheme that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations proved that the proposed MPPT method achieved the highest efficiency compared to a conventional single neural network and the Perturb and Observe (P&O) algorithm. (author)
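The multi-model idea, a classifier that routes each (irradiance, temperature) input to one of several locally trained estimators, can be sketched without any ML library. The crisp regime thresholds and the linear local models below are invented for illustration; in the paper the classifier is fuzzy and the local models are trained ANNs.

```python
def classify_regime(irradiance, temperature):
    """Crisp stand-in for the fuzzy rule-based classifier: pick an
    operating regime from irradiance (W/m^2). Thresholds are illustrative."""
    if irradiance < 300.0:
        return "low"
    if irradiance < 700.0:
        return "mid"
    return "high"

# One local model per regime, standing in for the three trained ANNs.
# Coefficients (volts per unit input) are made up for this sketch.
LOCAL_MODELS = {
    "low":  lambda g, t: 26.0 + 0.004 * g - 0.08 * t,
    "mid":  lambda g, t: 27.5 + 0.002 * g - 0.09 * t,
    "high": lambda g, t: 29.0 + 0.001 * g - 0.10 * t,
}

def reference_voltage(irradiance, temperature):
    """Route the input to its regime's local model and return the
    estimated MPPT reference voltage."""
    model = LOCAL_MODELS[classify_regime(irradiance, temperature)]
    return model(irradiance, temperature)

v = reference_voltage(850.0, 25.0)   # dispatched to the "high" model
```

The point of the multi-model split is that each local model only has to fit its own regime, rather than the PV generator's full nonlinear operating range.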

  12. A scalable architecture for online anomaly detection of WLCG batch jobs

    NASA Astrophysics Data System (ADS)

    Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.

    2016-10-01

    For data centres it is increasingly important to monitor the network usage, and learn from network usage patterns. In particular, configuration issues or misbehaving batch jobs preventing a smooth operation need to be detected as early as possible. At the GridKa data and computing centre we therefore operate BPNetMon, a tool for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information alone is not sufficient to detect anomalies, for several reasons: the underlying job distribution on a single worker node might change, or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale, in terms of either network communication or computational cost. We therefore propose a scalable architecture based on concepts of a super-peer network.

  13. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
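The adjoint-weighted residual correction at the heart of this procedure can be seen on a tiny linear model problem: for a functional J(u) = gᵀu with primal system Au = b, the corrected estimate J(ũ) − ψᵀ(Aũ − b), where the adjoint ψ solves Aᵀψ = g, recovers the exact functional even from an unconverged ũ. The 2×2 system below is a toy analogue, not the three-dimensional Euler problem of the paper.

```python
# Adjoint-based functional correction on a toy linear problem.
# Primal:  A u = b,  functional J(u) = g . u
# Adjoint: A^T psi = g
# Corrected estimate: J(u_approx) - psi . (A u_approx - b)

A = [[4.0, 1.0], [1.0, 3.0]]   # illustrative 2x2 system
b = [1.0, 2.0]
g = [1.0, 1.0]

def solve2(M, rhs):
    """Direct 2x2 solve via Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * rhs[0] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

u_exact = solve2(A, b)
J_exact = g[0] * u_exact[0] + g[1] * u_exact[1]

# A deliberately unconverged primal solution (think: a few solver sweeps)
u_approx = [0.2, 0.5]
J_plain = g[0] * u_approx[0] + g[1] * u_approx[1]

# Adjoint solve; A is symmetric here, so A^T = A
psi = Au = None
psi = solve2(A, g)
Au = matvec(A, u_approx)
residual = [Au[0] - b[0], Au[1] - b[1]]
J_corrected = J_plain - (psi[0] * residual[0] + psi[1] * residual[1])
```

For a linear problem and linear functional the correction is exact (ψᵀAũ = gᵀũ cancels the error term); for nonlinear flow equations it is only first-order accurate, which is why the paper adapts the mesh to reduce the remaining uncertainty.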

  14. A wide field-of-view microscope based on holographic focus grid

    NASA Astrophysics Data System (ADS)

    Wu, Jigang; Cui, Xiquan; Zheng, Guoan; Lee, Lap Man; Yang, Changhuei

    2010-02-01

    We have developed a novel microscope technique that achieves wide field-of-view (FOV) imaging yet possesses resolution comparable to a conventional microscope. The principle of the wide-FOV microscope system breaks the link between resolution and FOV magnitude found in traditional microscopes. Furthermore, by eliminating bulky optical elements from its design and utilizing holographic optical elements, the wide-FOV microscope system is more cost-effective. In our system, a hologram is made to focus an incoming collimated beam into a focus grid. The sample is placed in the focal plane and the transmissions of the foci are detected by an imaging sensor. By scanning the incident angle of the incoming beam, the focus grid scans across the sample and the time-varying transmission can be detected. We can then reconstruct the transmission image of the sample. The resolution of the microscopic image is limited by the size of the focus formed by the hologram. The scanning area of each focus spot is determined by the separation of the focus spots and can be made small for fast imaging speed. We have fabricated a prototype system with a 2.4-mm FOV and 1-μm resolution, and used it to image onion skin cells as a demonstration. These preliminary experiments prove the feasibility of the wide-FOV microscope technique, and the possibility of a wider-FOV system with better resolution.

  15. Simulation of plasma based semiconductor processing using block structured locally refined grids

    SciTech Connect

    Wake, D.D.

    1998-01-01

    We have described a new numerical method for plasma simulation. Calculations have been presented which show that the method is accurate and suggest the regimes in which the method provides savings in CPU time and memory requirements. A steady state simulation of a four centimeter domain was modeled with sheath scale (150 microns) resolution using only 40 grid points. Simulations of semiconductor processing equipment have been performed which imply the usefulness of the method for engineering applications. It is the author's opinion that these accomplishments represent a significant contribution to plasma simulation and the efficient numerical solution of certain systems of non-linear partial differential equations. More work needs to be done, however, for the algorithm to be of practical use in an engineering environment. Despite our success at avoiding the dielectric relaxation timestep restrictions the algorithm is still conditionally stable and requires timesteps which are relatively small. This represents a prohibitive runtime for steady state solutions on high resolution grids. Current research suggests that these limitations may be overcome and the use of much larger timesteps will be possible.

  16. Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology

    NASA Astrophysics Data System (ADS)

    Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senju, Tomonobu

    2013-08-01

    From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are getting attention in distribution systems. Additionally, all-electric apartment houses and residences, such as the DC smart house, have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, supply-demand balancing fluctuations of the power system become problematic. Therefore, the "smart grid" has become very popular worldwide. This article presents a methodology for optimal operation of a smart grid to minimize the interconnection point power flow fluctuations. To achieve the proposed optimal operation, we use distributed controllable loads such as batteries and heat pumps. By minimizing the interconnection point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of a photovoltaic generator, heat pump, battery, solar collector, and load. In order to verify the effectiveness of the proposed system, MATLAB is used in simulations.
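Flattening the interconnection-point power flow with a controllable battery can be sketched as a greedy dispatch: at each step the battery charges or discharges to hold the net exchange at a target, subject to power and energy limits. The load profile, target and battery limits below are invented for illustration and are far simpler than the paper's MATLAB optimisation.

```python
def flatten_exchange(net_load, target, batt_power_max, capacity, soc0):
    """Dispatch a battery so the grid exchange tracks `target` (kW).
    Positive battery power = discharge; hourly steps, so kW == kWh.
    Returns the (exchange, state-of-charge) series."""
    soc = soc0
    exchange, socs = [], []
    for p in net_load:
        want = p - target                       # discharge needed to hit target
        batt = max(-batt_power_max, min(batt_power_max, want))
        batt = min(batt, soc)                   # cannot discharge below empty
        batt = max(batt, soc - capacity)        # cannot charge above full
        soc -= batt
        exchange.append(p - batt)
        socs.append(soc)
    return exchange, socs

# Toy hourly net load (load minus PV), kW
net_load = [2.0, 1.5, 1.0, 3.5, 5.0, 4.0, 2.5, 3.0]
exchange, socs = flatten_exchange(net_load, target=3.0,
                                  batt_power_max=2.0, capacity=6.0, soc0=3.0)
```

The dispatched exchange has a much smaller variance than the raw net load, which is exactly the fluctuation-minimisation objective described above.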

  17. GridTool: A surface modeling and grid generation tool

    NASA Technical Reports Server (NTRS)

    Samareh-Abolhassani, Jamshid

    1995-01-01

    GridTool is designed around the concept that surface grids are generated on a set of bi-linear patches. This type of grid generation is quite easy to implement, and it avoids the problems associated with complex CAD surface representations and associated surface parameterizations. However, the resulting surface grids are close to but not on the original CAD surfaces. This problem can be alleviated by projecting the resulting surface grids onto the original CAD surfaces. GridTool is designed primarily for unstructured grid generation systems. Currently, GridTool supports the VGRID and FELISA systems, and it can be easily extended to support other unstructured grid generation systems. The data in GridTool are stored parametrically so that once the problem is set up, one can modify the surfaces and the entire set of points, curves and patches will be updated automatically. This is very useful in a multidisciplinary design and optimization process. GridTool is written entirely in ANSI 'C', the interface is based on the FORMS library, and the graphics are based on the GL library. The code has been tested successfully on IRIS workstations running IRIX 4.0 and above. Memory is allocated dynamically; therefore, memory size depends on the complexity of the geometry/grid. The GridTool data structure is based on a linked list, which allows the required memory to expand and contract dynamically according to the user's data size and actions. The data structure contains several types of objects such as points, curves, patches, sources and surfaces. At any given time there is always an active object, which is drawn in magenta or in its highlighted color as defined by the resource file discussed later.

  18. Job Clusters as Perceived by High School Students.

    ERIC Educational Resources Information Center

    Vivekananthan, Pathe S.; Weber, Larry J.

    Career awareness is described as the manner by which students cluster jobs. The clustering of jobs was based on the students' perceptions of similarities among job titles. Interest inventories were used as the basis to select 36 job titles. Seventy-eight high school students sorted the stimuli into several categories. The multidimensional scaling…

  19. Using Adventure-Based Cooperation Training To Develop Job Related Social Skills for Adolescents with Severe Behavioral and Emotional Problems.

    ERIC Educational Resources Information Center

    Reganick, Karol

    This practicum addressed the attitudes and behaviors of 10 adolescents with severe behavioral and emotional problems participating in a cooperative job training program. The intervention used an adventure approach to help the students replace aggression and misconduct with job-related social skills. A needs assessment was conducted to identify…

  20. Configuration interaction singles based on the real-space numerical grid method: Kohn-Sham versus Hartree-Fock orbitals.

    PubMed

    Kim, Jaewook; Hong, Kwangwoo; Choi, Sunghwan; Hwang, Sang-Yeon; Kim, Woo Youn

    2015-12-21

    We developed a program code of configuration interaction singles (CIS) based on a numerical grid method. We used Kohn-Sham (KS) as well as Hartree-Fock (HF) orbitals as a reference configuration and Lagrange-sinc functions as a basis set. Our calculations show that KS-CIS is more cost-effective and more accurate than HF-CIS. The former is due to the fact that the non-local HF exchange potential greatly reduces the sparsity of the Hamiltonian matrix in grid-based methods. The latter is because the energy gaps between KS occupied and virtual orbitals are already closer to vertical excitation energies and thus KS-CIS needs only small corrections, whereas HF results in much larger energy gaps and more diffuse virtual orbitals. KS-CIS using the Lagrange-sinc basis set also shows better or similar accuracy with a smaller orbital space compared to the standard HF-CIS using Gaussian basis sets. In particular, KS orbitals from an exact exchange potential by the Krieger-Li-Iafrate approximation lead to more accurate excitation energies than those from conventional (semi-)local exchange-correlation potentials.

  1. Spatiotemporal analysis of urban growth in three African capital cities: A grid-cell-based analysis using remote sensing data

    NASA Astrophysics Data System (ADS)

    Hou, Hao; Estoque, Ronald C.; Murayama, Yuji

    2016-11-01

    Spatiotemporal analysis of urban growth patterns and dynamics is important not only in urban geography but also in landscape and urban planning and sustainability studies. Based on remote sensing-derived land-cover maps and LandScan population data of two time points (ca. 2000 and 2014), this study examines the spatiotemporal patterns and dynamics of the urban growth of three rapidly urbanizing African capital cities, namely, Bamako (Mali), Cairo (Egypt) and Nairobi (Kenya). A grid-cell-based analysis technique was employed to integrate the LandScan population and land-cover data, creating grid maps of population density and the density of each land-cover category. The results revealed that Bamako's urban (built-up) area has been expanding at a rate of 5.37% per year. Nairobi had a lower annual expansion rate (4.99%), but had a higher rate compared to Cairo (2.79%). Bamako's urban expansion was at the expense of its bareland and green spaces (i.e., cropland, grassland and forest), whereas the urban expansions of Cairo and Nairobi were at the cost of their bareland. In all three cities, there was a weak, but significant positive relationship between urban expansion (change in built-up density) and population growth (change in population density). Overall, this study provides an overview of the spatial patterns and dynamics of urban growth in these three African capitals, which might be useful in the context of urban studies and landscape and urban planning.
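The annual expansion rates quoted above (e.g. 5.37% per year for Bamako) follow from the compound growth formula applied between the two time points. The snippet below computes such a rate from a hypothetical pair of built-up areas over a 14-year interval; the areas themselves are illustrative, not the study's figures.

```python
def annual_expansion_rate(area_start, area_end, years):
    """Compound annual growth rate: r = (A_end / A_start)^(1/years) - 1."""
    return (area_end / area_start) ** (1.0 / years) - 1.0

# Illustrative: a built-up area growing from 100 km^2 to 208 km^2
# over the ~14 years between the two observation dates
r = annual_expansion_rate(100.0, 208.0, 14)
print(f"{100 * r:.2f}% per year")  # prints "5.37% per year"
```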

  2. Full-time and full-coverage global observation system for ecological monitoring based on MEO satellite grid constellation

    NASA Astrophysics Data System (ADS)

    You, Rui; Liu, Shuhao

    Human life more and more rely on earth environment and atmosphere, environmental information required by space based monitor is a crucial importance, although GEO and polar weather satellite in orbit by several countries, but it can’t monitor all zone of earth with real time. This paper present a conception proposal which can realize stable, continue and real time observation for any zone(include arctic and ant-arctic zone) of earth and its atmosphere, it base on walker constellation in 20000Km high medium orbit with 24 satellites, payloads configuration with infrared spectrometer, visible camera, ultraviolet ray camera, millimeter wave radiometer, leaser radar, spatial resolution are 1km@ infrared,0.5km@ visible optical. This satellite of grid constellation can monitor any zone of global with 1-3hours retrial observation cycles. Air pollution, ozone of atmosphere, earth surface pollution, desert storm, water pollution, vegetation change, natural disasters, man-made emergency situations, agriculture and climate change can monitor by this MEO satellite grid constellation. This system is a international space infrastructure, use of mature technologies and products, can build by co-operation with multi countries.

  3. High-, Middle-, and Low-Wage Job Preparatory Programs--The Creation and Use of Policy Tool Based on UI Wages Data. Technical Report.

    ERIC Educational Resources Information Center

    Whittaker, Doug

    This is a report on the 2001 after-college earnings of students from Washington State's community and technical colleges. The state board created a wage-based category system for all 500 vocational/job-preparatory programs offered by the 34 state two-year colleges. The programs were divided into high- ($12 or more per hour), middle- ($10.50-$12…

  4. Outcomes of Functional Assessment-Based Interventions for Students with and at Risk for Emotional and Behavioral Disorders in a Job-Share Setting

    ERIC Educational Resources Information Center

    Lane, Kathleen Lynne; Eisner, Shanna L.; Kretzer, James; Bruhn, Allison L.; Crnobori, Mary; Funke, Laura; Lerner, Tara; Casey, Amy

    2009-01-01

    In this article, we describe a systematic approach to designing, implementing, and evaluating functional assessment-based interventions developed by Umbreit, Ferro, Liaupsin, and Lane (2007), implemented in a job-share classroom with two first-grade students. One student was at risk for emotional and behavioral disorders (EBD) according to…

  5. A Study of the Impact of a School-Based, Job-Embedded Professional Development Program on Elementary and Middle School Teacher Efficacy for Technology Integration

    ERIC Educational Resources Information Center

    Skoretz, Yvonne M.

    2011-01-01

    The purpose of this study was to determine the impact of a school-based, job-embedded professional development program on elementary and middle school teacher efficacy for technology integration. Teacher efficacy has been identified as a strong predictor of whether the content of professional development will transfer to classroom practice…

  6. The Relationship of Locus of Control, Stress Related to Performance-Based Accreditation, and Job Stress to Burnout in Public School Teachers and Principals.

    ERIC Educational Resources Information Center

    Hipps, Elizabeth Smith; Malpin, Glennelle

    Results of a study to determine the amount of burnout experienced by Alabama public school teachers and principals that could be accounted for by stress related to the Alabama Performance-Based Accreditation Standards, job stress, locus of control, age, and gender are reported in this paper. Objectives of the study were to develop a measure of…

  7. How Female Professionals Successfully Process and Negotiate Involuntary Job Loss at Faith-Based Colleges and Universities: A Grounded Theory Study

    ERIC Educational Resources Information Center

    Cunningham, Debra Jayne

    2013-01-01

    Using a constructivist grounded theory approach (Charmaz, 2006), this qualitative study examined how 8 female senior-level professionals employed at faith-based colleges and universities processed and navigated the experience of involuntary job loss and successfully transitioned to another position. The purpose of this research was to contribute…

  8. How Female Professionals Successfully Process and Negotiate Involuntary Job Loss at Faith-Based Colleges and Universities: A Grounded Theory Study

    ERIC Educational Resources Information Center

    Cunningham, Debra Jayne

    2015-01-01

    Using a constructivist grounded theory approach (Charmaz, 2006), this qualitative study examined how eight female senior-level professionals employed at faith-based colleges and universities processed and navigated the experience of involuntary job loss and successfully transitioned to another position. The theoretical framework of psychological…

  9. Personal vulnerability and work-home interaction: the effect of job performance-based self-esteem on work/home conflict and facilitation.

    PubMed

    Innstrand, Siw Tone; Langballe, Ellen Melbye; Espnes, Geir Arild; Aasland, Olaf Gjerløw; Falkum, Erik

    2010-12-01

    The aim of the present study was to examine the longitudinal relationship between job performance-based self-esteem (JPB-SE) and work-home interaction (WHI) in terms of the direction of the interaction (work-to-home vs. home-to-work) and the effect (conflict vs. facilitation). A sample of 3,475 respondents from eight different occupational groups (lawyers, physicians, nurses, teachers, church ministers, bus drivers, and people working in advertising and information technology) supplied data at two points in time, with a two-year interval. The two-wave, cross-lagged structural equation modeling (SEM) analysis demonstrated reciprocal relationships between these variables, i.e., job performance-based self-esteem may act as a precursor as well as an outcome of work-home interaction. The strongest association was between job performance-based self-esteem and work-to-home conflict. Previous research on work-home interaction has mainly focused on situational factors. This longitudinal study expands the work-home literature by demonstrating how individual vulnerability (job performance-based self-esteem) contributes to the explanation of work-home interactions.

  10. Smart Grid Maturity Model Webinar: Defining the Pathway to the California Smart Grid of 2020, for Publicly Owned Utilities

    DTIC Science & Technology

    2012-03-21

    …system cost; support clean energy job creation and will be accomplished in a financially responsible manner at a pace and scope of deployment that… Enhance service offerings… Improve grid efficiency and reliability… Reflect local financial, environmental and social priorities… Support clean energy job creation…

  11. Towards risk-based management of critical infrastructures : enabling insights and analysis methodologies from a focused study of the bulk power grid.

    SciTech Connect

    Richardson, Bryan T.; LaViolette, Randall A.; Cook, Benjamin Koger

    2008-02-01

    This report summarizes research on a holistic analysis framework to assess and manage risks in complex infrastructures, with a specific focus on the bulk electric power grid (grid). A comprehensive model of the grid is described that can approximate the coupled dynamics of its physical, control, and market components. New realism is achieved in a power simulator extended to include relevant control features such as relays. The simulator was applied to understand failure mechanisms in the grid. Results suggest that the implementation of simple controls might significantly alter the distribution of cascade failures in power systems. The absence of cascade failures in our results raises questions about the underlying failure mechanisms responsible for widespread outages, and specifically whether these outages are due to a system effect or large-scale component degradation. Finally, a new agent-based market model for bilateral trades in the short-term bulk power market is presented and compared against industry observations.

  12. An efficient overset grid technique for computational fluid dynamics based on method coupling and feature tracking

    NASA Astrophysics Data System (ADS)

    Snyder, Richard Dean

    A new overset grid method that permits different fluid models to be coupled in a single simulation is presented. High fidelity methods applied in regions of complex fluid flow can be coupled with simpler methods to save computer simulation time without sacrificing accuracy. A mechanism for automatically moving grid zones to track unsteady flow features complements the method. The coupling method is quite general and will support a variety of governing equations and discretization methods. Furthermore, there are no restrictions on the geometrical layout of the coupling. Four sets of governing equations have been implemented to date: the Navier-Stokes, full Euler, Cartesian Euler, and linearized Euler equations. In all cases, the MacCormack explicit predictor-corrector scheme was used to discretize the equations. The overset coupling technique was applied to a variety of configurations in one, two, and three dimensions. Steady configurations include the flow over a bump, a NACA0012 airfoil, and an F-5 wing. Unsteady configurations include two aeroacoustic benchmark problems and a NACA64A006 airfoil with an oscillating simple flap. Solutions obtained with the overset coupling method are compared with other numerical results and, when available, with experimental data. Results from the NACA0012 airfoil and F-5 wing show a 30% reduction in simulation time without a loss of accuracy when the linearized Euler equations were coupled with the full Euler equations. A 25% reduction was recorded for the NACA0012 airfoil when the Euler equations were solved together with the Navier-Stokes equations. Feature tracking was used in the aeroacoustic benchmark and NACA64A006 problems and was found to be very effective in minimizing the dispersion error in the vicinity of shocks. The computer program developed to implement the overset grid method coupling technique was written entirely in C++, an object-oriented programming language. 
The principles of object-oriented programming were…

  13. Comparison of a grid-based filter to a Kalman filter for the state estimation of a maneuvering target

    NASA Astrophysics Data System (ADS)

    Silbert, Mark; Mazzuchi, Thomas; Sarkani, Shahram

    2011-09-01

    Providing accurate state estimates of a maneuvering target is an important problem. This problem occurs when tracking maneuvering boats or even people wandering around. In our earlier paper, a specialized grid-based filter (GBF) was introduced as an effective method to produce accurate state estimates of a target moving in two dimensions, while requiring only a two-dimensional grid. That paper showed that the GBF produces accurate state estimates because the filter can capture the kinematic constraints of the target directly, and thus account for them in the estimation process. In this paper, the relative performance of a GBF and a Kalman filter is investigated. The state estimates (position and velocity) from a GBF are compared to those from a Kalman filter against a maneuvering target. This study employs the comparison paradigm presented by Kirubarajan and Bar-Shalom, which incrementally increases the maneuverability of a target to determine how the two track filters compare as the target becomes more maneuverable. The intent of this study is to determine how maneuverable the target must be to gain a benefit from a GBF over a Kalman filter. The paper discusses the target motion model, the GBF implementation, and the Kalman filter used for the study. Our results show that the GBF outperforms a Kalman filter, especially as the target becomes more maneuverable. A disadvantage of the GBF is that it is more computationally demanding than a Kalman filter. The paper discusses the grid and sample sizing needed to obtain quality estimates from a GBF; the required sizes are much smaller than might be expected, and estimate quality is stable over a large range of sizes. Furthermore, this GBF can exploit parallelization of the computations, making the processing time significantly shorter.
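    For reference, the Kalman filter side of such a comparison is standard textbook material. The sketch below is a generic 1D constant-velocity Kalman filter with position-only measurements, not the filter configuration (or the GBF) used in the paper; the process-noise intensity `q`, measurement variance `r`, and the noiseless measurement sequence are arbitrary illustrative choices:

```python
# Minimal 1D constant-velocity Kalman filter with scalar position measurements.
# State x = [position, velocity]; F = [[1, dt], [0, 1]], H = [1, 0].

def kf_step(x, P, z, dt, q, r):
    """One predict/update cycle; P is a 2x2 covariance as nested lists."""
    # Predict: x = F x, P = F P F^T + Q (discrete white-noise acceleration Q)
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt**4 / 4,
           P[0][1] + dt * P[1][1] + q * dt**3 / 2],
          [P[1][0] + dt * P[1][1] + q * dt**3 / 2,
           P[1][1] + q * dt * dt]]
    # Update with measurement z
    s = Pp[0][0] + r                    # innovation covariance
    k = [Pp[0][0] / s, Pp[1][0] / s]    # Kalman gain
    y = z - xp[0]                       # innovation
    xn = [xp[0] + k[0] * y, xp[1] + k[1] * y]
    Pn = [[(1 - k[0]) * Pp[0][0], (1 - k[0]) * Pp[0][1]],
          [Pp[1][0] - k[1] * Pp[0][0], Pp[1][1] - k[1] * Pp[0][1]]]
    return xn, Pn

# Track a target moving at 1 m/s, observed once per second without noise
x, P = [0.0, 0.0], [[10.0, 0.0], [0.0, 10.0]]
for t in range(1, 21):
    x, P = kf_step(x, P, z=float(t), dt=1.0, q=0.01, r=1.0)
```

    A GBF replaces the Gaussian posterior here with a discretized density over a grid, which is what lets it honor hard kinematic constraints.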

  14. A bioinformatics knowledge discovery in text application for grid computing

    PubMed Central

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-01-01

    Background A fundamental activity in biomedical research is Knowledge Discovery, the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible means of tackling the intensive use of information and communication resources in the life sciences. The goal of this work was to develop a software middleware solution that exploits scalable and distributed computing systems for knowledge discovery applications, achieving intensive use of ICT resources. Methods The development of a grid application for Knowledge Discovery in Text using a middleware-based methodology is presented. The system must be able to model the user application and split the work into many parallel jobs to distribute across the computational nodes. Finally, the system must be aware of the available computational resources and their status, and must be able to monitor the execution of the parallel jobs. These operative requirements led to the design of a middleware that is specialized using user application modules. It includes a graphical user interface giving access to a node search system, a load-balancing system and a transfer optimizer that reduces communication costs. Results A prototype of the middleware solution and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure, based on GNU/Linux grid nodes. A test was carried out and the results are shown for a named entity recognition search of symptoms and pathologies, applied to a collection of 5,000 scientific documents taken from PubMed. Conclusion In this paper we discuss the development of a grid application based on a middleware solution.
It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and…
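    The job-splitting step such a middleware performs, dividing a document collection into parallel jobs for the grid nodes, can be illustrated with a minimal sketch. The function below is a hypothetical stand-in, not code from the described system:

```python
def make_jobs(doc_ids, n_workers):
    """Split a document collection into near-equal chunks, one per parallel job."""
    k, rem = divmod(len(doc_ids), n_workers)
    jobs, start = [], 0
    for i in range(n_workers):
        size = k + (1 if i < rem else 0)  # first `rem` jobs take one extra document
        jobs.append(doc_ids[start:start + size])
        start += size
    return jobs

# e.g., 5,000 documents (as in the paper's test collection) over 16 grid nodes
jobs = make_jobs(list(range(5000)), 16)
```

    Each chunk would then be submitted as an independent named-entity-recognition job, with the load balancer deciding which node runs which chunk.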

  15. Evaluating the Information Power Grid using the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    Van der Wijngaart, Rob F.; Frumkin, Michael A.

    2004-01-01

    The NAS Grid Benchmarks (NGB) are a collection of synthetic distributed applications designed to rate the performance and functionality of computational grids. We compare several implementations of the NGB to determine the programmability and efficiency of NASA's Information Power Grid (IPG), whose services are mostly based on the Globus Toolkit. We report on the overheads involved in porting existing NGB reference implementations to the IPG; no changes were made to the component tasks of the NGB, whose performance on the IPG can still be improved.

  16. Optimal Shape Design in Heat Transfer Based on Body-Fitted Grid Generation

    NASA Astrophysics Data System (ADS)

    Mohebbi, Farzad; Sellier, Mathieu

    2013-04-01

    This paper deals with an inverse steady-state heat transfer problem. We develop in this work a new numerical methodology to infer the shape a heated body should have for the temperature distribution on part of its boundary to match a prescribed one. This new numerical methodology solves this shape optimization problem using body-fitted grid generation to map the unknown optimal shape onto a fixed computational domain. This mapping enables a simple discretization of the Heat Equation using finite differences and allows us to remesh the physical domain, which varies at each optimization iteration. A novel aspect of this work is the sensitivity analysis, which is expressed explicitly in the fixed computational domain. This allows a very efficient evaluation of the sensitivities. The Conjugate Gradient method is used to minimize the objective function and this work proposes an efficient redistribution method to maintain the quality of the mesh throughout the optimization procedure.
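    The Conjugate Gradient method mentioned above has a compact core. The sketch below is the standard linear CG iteration applied to a small symmetric positive-definite system; the matrix and right-hand side are arbitrary textbook values, and the paper's actual objective (a boundary-temperature mismatch over a body-fitted grid) is not reproduced here:

```python
def cg_solve(A, b, x0, iters=None, tol=1e-10):
    """Linear conjugate gradient for a symmetric positive-definite A (nested lists)."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = x0[:]
    r = [bi - ai for bi, ai in zip(b, mv(A, x))]  # initial residual b - A x
    p = r[:]
    rs = dot(r, r)
    for _ in range(iters or n):  # exact convergence in at most n steps
        Ap = mv(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Classic 2x2 example: solve A x = b for A = [[4, 1], [1, 3]], b = [1, 2]
x = cg_solve([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

    In the paper's setting the "matrix-vector product" is implicit: each CG step evaluates sensitivities of the objective on the fixed computational domain rather than multiplying by an explicit matrix.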

  17. Regional study on investment for transmission infrastructure in China based on the State Grid data

    NASA Astrophysics Data System (ADS)

    Wei, Wendong; Wu, Xudong; Wu, Xiaofang; Xi, Qiangmin; Ji, Xi; Li, Guoping

    2016-06-01

    Transmission infrastructure is an integral component of safeguarding the stability of electricity delivery. However, existing studies of transmission infrastructure mostly rely on a simple review of the network, while the analysis of investments remains rudimentary. This study conducted the first regionally focused analysis of investments in transmission infrastructure in China to help optimize its structure and reduce investment costs. Using State Grid data, the investment costs, under various voltages, for transmission lines and transformer substations are calculated. By analyzing the regional profile of cumulative investment in transmission infrastructure, we assess correlations between investment, population, and economic development across the regions. The recent development of ultra-high-voltage transmission networks will provide policy-makers new options for policy development.

  18. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid

    PubMed Central

    Byambasuren, Bat-erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-01-01

    Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results. PMID:26907274

  19. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid.

    PubMed

    Byambasuren, Bat-Erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-02-19

    Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results.
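    The parabolic fitting the authors use to model the swing path of a power line is ordinary least squares. The sketch below fits y = a x^2 + b x + c by solving the 3x3 normal equations; the sample points are invented (taken exactly from a known parabola), not measurements from the inspection robot:

```python
def fit_parabola(pts):
    """Least-squares fit of y = a x^2 + b x + c; returns [a, b, c]."""
    # Power sums for the normal equations
    S = [sum(x ** k for x, _ in pts) for k in range(5)]      # sum of x^0 .. x^4
    T = [sum(y * x ** k for x, y in pts) for k in range(3)]  # sum of y*x^0 .. y*x^2
    M = [[S[4], S[3], S[2], T[2]],
         [S[3], S[2], S[1], T[1]],
         [S[2], S[1], S[0], T[0]]]
    # Gaussian elimination with partial pivoting on the augmented 3x4 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda row: abs(M[row][col]))
        M[col], M[piv] = M[piv], M[col]
        for row in range(col + 1, 3):
            f = M[row][col] / M[col][col]
            M[row] = [mr - f * mc for mr, mc in zip(M[row], M[col])]
    coeffs = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):  # back substitution
        coeffs[row] = (M[row][3] - sum(M[row][c] * coeffs[c]
                                       for c in range(row + 1, 3))) / M[row][row]
    return coeffs

# Invented samples lying exactly on y = 0.5 x^2 - x + 2
pts = [(x, 0.5 * x * x - x + 2) for x in (-2, -1, 0, 1, 2, 3)]
a, b, c = fit_parabola(pts)
```

    With noisy robot measurements the same normal-equations fit yields the best-fit swing curve, which the tracker can then extrapolate.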

  20. Magnetic field extraction of trap-based electron beams using a high-permeability grid

    SciTech Connect

    Hurst, N. C.; Danielson, J. R.; Surko, C. M.

    2015-07-15

    A method to form high quality electrostatically guided lepton beams is explored. Test electron beams are extracted from tailored plasmas confined in a Penning-Malmberg trap. The particles are then extracted from the confining axial magnetic field by passing them through a high magnetic permeability grid with radial tines (a so-called “magnetic spider”). An Einzel lens is used to focus and analyze the beam properties. Numerical simulations are used to model non-adiabatic effects due to the spider, and the predictions are compared with the experimental results. Improvements in beam quality are discussed relative to the use of a hole in a high permeability shield (i.e., in lieu of the spider), and areas for further improvement are described.

  1. Regional study on investment for transmission infrastructure in China based on the State Grid data

    NASA Astrophysics Data System (ADS)

    Wei, Wendong; Wu, Xudong; Wu, Xiaofang; Xi, Qiangmin; Ji, Xi; Li, Guoping

    2017-03-01

    Transmission infrastructure is an integral component of safeguarding the stability of electricity delivery. However, existing studies of transmission infrastructure mostly rely on a simple review of the network, while the analysis of investments remains rudimentary. This study conducted the first regionally focused analysis of investments in transmission infrastructure in China to help optimize its structure and reduce investment costs. Using State Grid data, the investment costs, under various voltages, for transmission lines and transformer substations are calculated. By analyzing the regional profile of cumulative investment in transmission infrastructure, we assess correlations between investment, population, and economic development across the regions. The recent development of ultra-high-voltage transmission networks will provide policy-makers new options for policy development.

  2. Grid-based Parallel Data Streaming Implemented for the Gyrokinetic Toroidal Code

    SciTech Connect

    S. Klasky; S. Ethier; Z. Lin; K. Martins; D. McCune; R. Samtaney

    2003-09-15

    We have developed a threaded parallel data streaming approach using Globus to transfer multi-terabyte simulation data from a remote supercomputer to the scientist's home analysis/visualization cluster, as the simulation executes, with negligible overhead. Data transfer experiments show that this concurrent data transfer approach is more favorable compared with writing to local disk and then transferring this data to be post-processed. The present approach is conducive to using the grid to pipeline the simulation with post-processing and visualization. We have applied this method to the Gyrokinetic Toroidal Code (GTC), a 3-dimensional particle-in-cell code used to study microturbulence in magnetic confinement fusion from first principles plasma theory.
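    The overlap of simulation output with network transfer that makes this approach pay off can be sketched with a thread and a bounded queue. This is a generic producer-consumer illustration, not the authors' Globus-based implementation; the Globus transfer call is replaced by an in-memory stub:

```python
import queue
import threading

def simulate(out_q, n_steps):
    """Producer: each 'timestep' emits a chunk of simulation data."""
    for step in range(n_steps):
        out_q.put(("chunk-%d" % step).encode())
    out_q.put(None)  # sentinel: simulation finished

def stream(out_q, sink):
    """Consumer thread: transfers chunks as they appear, concurrently with the producer."""
    while True:
        chunk = out_q.get()
        if chunk is None:
            break
        sink.append(chunk)  # stand-in for a remote (e.g., GridFTP-style) transfer

received = []
q = queue.Queue(maxsize=8)  # bounded buffer throttles the producer if transfer lags
t = threading.Thread(target=stream, args=(q, received))
t.start()
simulate(q, 100)  # simulation and transfer proceed concurrently
t.join()
```

    The bounded queue is the key design point: the simulation never blocks on the wide-area network except when the buffer is full, which is what keeps the streaming overhead negligible.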

  3. Comparisons of Ship-based Observations of Air-Sea Energy Budgets with Gridded Flux Products

    NASA Astrophysics Data System (ADS)

    Fairall, C. W.; Blomquist, B.

    2015-12-01

    Air-surface interactions are characterized directly by the fluxes of momentum, heat, moisture, trace gases, and particles near the interface. In the last 20 years advances in observation technologies have greatly expanded the database of high-quality direct (covariance) turbulent flux and irradiance observations from research vessels. In this paper, we will summarize observations from the NOAA sea-going flux system from participation in various field programs executed since 1999 and discuss comparisons with several gridded flux products. We will focus on comparisons of turbulent heat fluxes and solar and IR radiative fluxes. The comparisons are done for observing programs in the equatorial Pacific and Indian Oceans and SE subtropical Pacific.

  4. A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations

    NASA Astrophysics Data System (ADS)

    Motl, Patrick M.; Frank, Juhan; Staff, Jan; Clayton, Geoffrey C.; Fryer, Christopher L.; Even, Wesley; Diehl, Steven; Tohline, Joel E.

    2017-04-01

    There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume “grid” code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to match as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. We also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.

  5. Impact of Spatial Scale on Calibration and Model Output for a Grid-based SWAT Model

    NASA Astrophysics Data System (ADS)

    Pignotti, G.; Vema, V. K.; Rathjens, H.; Raj, C.; Her, Y.; Chaubey, I.; Crawford, M. M.

    2014-12-01

    The traditional implementation of the Soil and Water Assessment Tool (SWAT) model utilizes common landscape characteristics known as hydrologic response units (HRUs). Discretization into HRUs provides a simple, computationally efficient framework for simulation, but also represents a significant limitation of the model, as spatial connectivity between HRUs is ignored. SWATgrid, a newly developed, distributed version of SWAT, provides modified landscape routing via a grid, overcoming these limitations. However, the current implementation of SWATgrid has significant computational overhead, which effectively precludes traditional calibration and limits the total number of grid cells in a given modeling scenario. Moreover, as SWATgrid is a relatively new modeling approach, it remains largely untested, with little understanding of the impact of spatial resolution on model output. The objective of this study was to determine the effects of user-defined input resolution on SWATgrid predictions in the Upper Cedar Creek Watershed (near Auburn, IN, USA). Original input data, nominally at 30 m resolution, were rescaled for a range of resolutions between 30 and 4,000 m. A 30 m traditional SWAT model was developed as the baseline for model comparison. Monthly calibration was performed, and the calibrated parameter set was then transferred to all other SWAT and SWATgrid models to isolate the effects of resolution on prediction uncertainty relative to the baseline. Model output was evaluated with respect to stream flow at the outlet and water quality parameters. Additionally, the output of SWATgrid models was compared to that of traditional SWAT models at each resolution, utilizing the same scaled input data. A secondary objective considered the effect of scale on calibrated parameter values, where each standard SWAT model was calibrated independently and parameters were transferred to SWATgrid models at equivalent scales. For each model, computational requirements were evaluated.
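    The rescaling step behind such a resolution study can be sketched as block averaging of a fine raster onto a coarser grid. The factor and data below are illustrative only; real rasters also need nodata handling and categorical (majority-vote) rules for land-cover layers:

```python
def coarsen(grid, factor):
    """Average non-overlapping factor x factor blocks of a 2D list."""
    rows, cols = len(grid), len(grid[0])
    assert rows % factor == 0 and cols % factor == 0
    out = []
    for i in range(0, rows, factor):
        row = []
        for j in range(0, cols, factor):
            block = [grid[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / (factor * factor))
        out.append(row)
    return out

# A 4x4 "30 m" raster averaged onto a 2x2 "60 m" grid
fine = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]
coarse = coarsen(fine, 2)
```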

  6. It's My Job: Job Descriptions for Over 30 Camp Jobs.

    ERIC Educational Resources Information Center

    Klein, Edie

    This book was created to assist youth-camp directors in defining their camp jobs so as to improve employee performance assessment, training, and hiring. The book, aimed at clarifying issues in the fair-hiring practices required by the 1990 Americans with Disabilities Act (ADA), includes descriptions of 31 jobs. Each description includes the job's minimum…

  7. The Open Science Grid

    SciTech Connect

    Pordes, Ruth; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Wurthwein, Frank; Gardner, Rob; Wilde, Mike; /Chicago U. /Indiana U.

    2007-06-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  8. Faces of the Recovery Act: The Impact of Smart Grid

    ScienceCinema

    President Obama

    2016-07-12

    On October 27th, Baltimore Gas & Electric was selected to receive $200 million for Smart Grid innovation projects under the Recovery Act. Watch as members of their team, along with President Obama, explain how building a smarter grid will help consumers cut their utility bills, battle climate change and create jobs.

  9. Faces of the Recovery Act: The Impact of Smart Grid

    SciTech Connect

    President Obama

    2009-11-24

    On October 27th, Baltimore Gas & Electric was selected to receive $200 million for Smart Grid innovation projects under the Recovery Act. Watch as members of their team, along with President Obama, explain how building a smarter grid will help consumers cut their utility bills, battle climate change and create jobs.

  10. 3D laser inspection of fuel assembly grid spacers for nuclear reactors based on diffractive optical elements

    NASA Astrophysics Data System (ADS)

    Finogenov, L. V.; Lemeshko, Yu A.; Zav'yalov, P. S.; Chugui, Yu V.

    2007-06-01

    Ensuring the safety and high operational reliability of nuclear reactors requires 100% inspection of the geometrical parameters of fuel assemblies, which include grid spacers formed as a cellular structure holding the fuel elements. The required grid spacer geometry in the transverse and longitudinal cross sections is extremely important for maintaining the necessary heat regime. A universal method for 3D grid spacer inspection using a diffractive optical element (DOE), which generates a multiple-ring pattern of structured illumination on the inner surface of a grid spacer cell, is investigated. With a small set of DOEs, the whole nomenclature of produced grids can be inspected. A special objective has been developed for imaging the inner cell surface. The problems of diffractive element synthesis, projection optics calculation and adjustment methods, as well as calibration of the experimental measuring system, are considered. The image processing algorithms for the different structural elements of the grids (cell, channel hole, outer grid spacer rim) and the experimental results are presented.

  11. Job satisfaction, job stress and psychosomatic health problems in software professionals in India.

    PubMed

    Madhura, Sahukar; Subramanya, Pailoor; Balaram, Pradhan

    2014-01-01

    This questionnaire-based study investigates the correlation between job satisfaction, job stress and psychosomatic health in Indian software professionals, and examines how yoga-practicing Indian software professionals cope with stress and psychosomatic health problems. The sample consisted of yoga-practicing and non-yoga-practicing Indian software professionals working in India. The findings show a significant correlation among job satisfaction, job stress and health. In yoga practitioners, job satisfaction is not significantly related to psychosomatic health, whereas in the non-yoga group psychosomatic health symptoms showed a significant relationship with job satisfaction.

  12. Job satisfaction, job stress and psychosomatic health problems in software professionals in India

    PubMed Central

    Madhura, Sahukar; Subramanya, Pailoor; Balaram, Pradhan

    2014-01-01

    This questionnaire-based study investigates the correlation between job satisfaction, job stress and psychosomatic health in Indian software professionals, and examines how yoga-practicing Indian software professionals cope with stress and psychosomatic health problems. The sample consisted of yoga-practicing and non-yoga-practicing Indian software professionals working in India. The findings show a significant correlation among job satisfaction, job stress and health. In yoga practitioners, job satisfaction is not significantly related to psychosomatic health, whereas in the non-yoga group psychosomatic health symptoms showed a significant relationship with job satisfaction. PMID:25598623

  13. Impact of Heterogeneity-Based Dose Calculation Using a Deterministic Grid-Based Boltzmann Equation Solver for Intracavitary Brachytherapy

    SciTech Connect

    Mikell, Justin K.; Klopp, Ann H.; Gonzalez, Graciela M.N.; Kisling, Kelly D.; Price, Michael J.; Berner, Paula A.; Eifel, Patricia J.; Mourtada, Firas

    2012-07-01

    Purpose: To investigate the dosimetric impact of the heterogeneity dose calculation Acuros (Transpire Inc., Gig Harbor, WA), a grid-based Boltzmann equation solver (GBBS), for brachytherapy in a cohort of cervical cancer patients. Methods and Materials: The impact of heterogeneities was retrospectively assessed in treatment plans for 26 patients who had previously received ¹⁹²Ir intracavitary brachytherapy for cervical cancer with computed tomography (CT)/magnetic resonance-compatible tandems and unshielded colpostats. The GBBS models sources, patient boundaries, applicators, and tissue heterogeneities. Multiple GBBS calculations were performed with and without the solid model applicator, with and without overriding the patient contour to 1 g/cm³ muscle, and with and without overriding contrast materials to muscle or 2.25 g/cm³ bone. The impact of source and boundary modeling, applicator, tissue heterogeneities, and the sensitivity of CT-to-material mapping of contrast were derived from the multiple calculations. American Association of Physicists in Medicine Task Group 43 (TG-43) guidelines and the GBBS were compared for the following clinical dosimetric parameters: Manchester points A and B, International Commission on Radiation Units and Measurements (ICRU) report 38 rectal and bladder points, three and nine o'clock, and D2cm³ to the bladder, rectum, and sigmoid. Results: Points A and B, D2cm³ bladder, ICRU bladder, and three and nine o'clock were within 5% of TG-43 for all GBBS calculations. The source and boundary modeling and the applicator account for most of the differences between the GBBS and the TG-43 guidelines. The D2cm³ rectum (n = 3), D2cm³ sigmoid (n = 1), and ICRU rectum (n = 6) had differences of >5% from TG-43 for the worst-case incorrect mapping of contrast to bone. Clinical dosimetric parameters were within 5% of TG-43 when rectal and balloon contrast were mapped to bone and radiopaque packing was not overridden.
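The >5%-from-TG-43 criterion used in the comparison above can be illustrated with a small helper that flags dose metrics exceeding the threshold; the parameter names and dose values below are hypothetical placeholders, not data from the study.

```python
def percent_diff(gbbs, tg43):
    """Signed percent difference of a GBBS dose metric relative to TG-43."""
    return 100.0 * (gbbs - tg43) / tg43

def flag_large_differences(gbbs_doses, tg43_doses, threshold=5.0):
    """Return the parameters whose GBBS result differs from TG-43 by more
    than `threshold` percent (the criterion used in the abstract)."""
    return {name: percent_diff(gbbs_doses[name], tg43_doses[name])
            for name in tg43_doses
            if abs(percent_diff(gbbs_doses[name], tg43_doses[name])) > threshold}

# Hypothetical doses in Gy, for illustration only
tg43 = {"point_A": 6.00, "ICRU_rectum": 4.00, "D2cc_bladder": 5.50}
gbbs = {"point_A": 5.85, "ICRU_rectum": 3.70, "D2cc_bladder": 5.40}
flags = flag_large_differences(gbbs, tg43)
```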

  14. Algebraic grid generation for complex geometries

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.

    1991-01-01

    An efficient computer program called GRID2D/3D has been developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2D and 3D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation. The distribution of grid points within the spatial domain is controlled by stretching functions and grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For 2D spatial domains the boundary curves are constructed by using either cubic or tension spline interpolation. For 3D spatial domains the boundary surfaces are constructed by using a new technique, developed in this study, referred to as 3D bidirectional Hermite interpolation.
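Transfinite interpolation of the kind GRID2D/3D uses for single grid systems can be sketched in 2D as a Coons patch: interior points blend the four boundary curves linearly and subtract the doubly counted bilinear corner term. This is the generic textbook form, not GRID2D/3D's actual code, and the curved top boundary below is a made-up example.

```python
def transfinite_interpolation(bottom, top, left, right):
    """Build a 2D Coons-patch mapping P(u, v) from four boundary curves.

    Each curve maps [0, 1] -> (x, y); corners must agree, e.g.
    bottom(0) == left(0). Boundary curves are reproduced exactly, so grid
    lines conform to the domain boundary.
    """
    def P(u, v):
        b, t = bottom(u), top(u)
        l, r = left(v), right(v)
        c00, c10 = bottom(0.0), bottom(1.0)
        c01, c11 = top(0.0), top(1.0)
        return tuple(
            (1 - v) * b[i] + v * t[i] + (1 - u) * l[i] + u * r[i]
            - ((1 - u) * (1 - v) * c00[i] + u * (1 - v) * c10[i]
               + (1 - u) * v * c01[i] + u * v * c11[i])
            for i in range(2)
        )
    return P

# Hypothetical domain: unit square with a bulged upper edge
bottom = lambda u: (u, 0.0)
top    = lambda u: (u, 1.0 + 0.1 * u * (1 - u))
left   = lambda v: (0.0, v)
right  = lambda v: (1.0, v)
P = transfinite_interpolation(bottom, top, left, right)
```

Evaluating P on a uniform (u, v) lattice yields a structured grid fitted to the curved boundary; stretching functions would simply remap u and v before evaluation.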

  15. CRT--Cascade Routing Tool to define and visualize flow paths for grid-based watershed models

    USGS Publications Warehouse

    Henson, Wesley R.; Medina, Rose L.; Mayers, C. Justin; Niswonger, Richard G.; Regan, R.S.

    2013-01-01

    The U.S. Geological Survey Cascade Routing Tool (CRT) is a computer application for watershed models that include the coupled Groundwater and Surface-water FLOW model, GSFLOW, and the Precipitation-Runoff Modeling System (PRMS). CRT generates output to define cascading surface and shallow subsurface flow paths for grid-based model domains. CRT requires a land-surface elevation for each hydrologic response unit (HRU) of the model grid; these elevations can be derived from a Digital Elevation Model raster data set of the area containing the model domain. Additionally, a list is required of the HRUs containing streams, swales, lakes, and other cascade termination features along with indices that uniquely define these features. Cascade flow paths are determined from the altitudes of each HRU. Cascade paths can cross any of the four faces of an HRU to a stream or to a lake within or adjacent to an HRU. Cascades can terminate at a stream, lake, or HRU that has been designated as a watershed outflow location.
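The cascade-path idea can be sketched as a steepest-descent step across the four faces of each HRU, terminating at stream cells or at local sinks. This is a simplified illustration of the routing concept, not CRT's actual algorithm, and the small elevation grid below is hypothetical.

```python
def build_cascades(elev, streams):
    """Route each HRU to its lowest of the four face neighbours, stopping
    at cells flagged as streams (or at local sinks, akin to swales).

    elev: 2D list of land-surface altitudes, one per HRU.
    streams: set of (row, col) cells where cascades terminate.
    Returns {cell: next_cell_or_None} giving one cascade step per HRU.
    """
    rows, cols = len(elev), len(elev[0])
    next_hop = {}
    for r in range(rows):
        for c in range(cols):
            if (r, c) in streams:
                next_hop[(r, c)] = None          # cascade terminates here
                continue
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            low = min(nbrs, key=lambda rc: elev[rc[0]][rc[1]])
            # downhill only; an HRU lower than all neighbours acts as a sink
            next_hop[(r, c)] = low if elev[low[0]][low[1]] < elev[r][c] else None
    return next_hop

# Hypothetical 3x3 grid of HRU altitudes with a stream along the middle row
elev = [[9.0, 8.0, 9.0],
        [5.0, 4.0, 5.0],
        [7.0, 6.0, 7.0]]
hops = build_cascades(elev, streams={(1, 0), (1, 1), (1, 2)})
```

Following `next_hop` repeatedly from any HRU traces its full cascade flow path to a termination feature.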

  16. Grid-based methods for diatomic quantum scattering problems: a finite-element, discrete variable representation in prolate spheroidal coordinates

    SciTech Connect

    Tao, Liang; McCurdy, C.W.; Rescigno, T.N.

    2008-11-25

    We show how to combine finite elements and the discrete variable representation in prolate spheroidal coordinates to develop a grid-based approach for quantum mechanical studies involving diatomic molecular targets. Prolate spheroidal coordinates are a natural choice for diatomic systems and have been used previously in a variety of bound-state applications. The use of exterior complex scaling in the present implementation allows for a transparently simple way of enforcing Coulomb boundary conditions and therefore straightforward application to electronic continuum problems. Illustrative examples involving the bound and continuum states of H2+, as well as the calculation of photoionization cross sections, show that the speed and accuracy of the present approach offer distinct advantages over methods based on single-center expansions.

  17. Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Matyska, Ludek; Ruda, Miroslav; Toth, Simon

    For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, an implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in stability and resilience to intermittent very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed, which relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies such as central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one of several clusters and is able to directly control and submit jobs to them even if the connection to other scheduling peers is lost. In parallel with the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support of the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented as well.
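The semi-independent peer-scheduler concept can be sketched as local-first placement with forwarding to cooperating peers. This is a toy illustration of the architecture's core idea, not MetaCentrum's Torque-based implementation; site names, slot counts, and the absence of fair-share are all simplifications.

```python
class SiteScheduler:
    """Minimal sketch of a per-site scheduler: it accepts any job, runs it
    locally while free slots remain, and otherwise forwards it to a
    cooperating peer. Real policies (fair-share, accounting) are omitted."""

    def __init__(self, name, slots):
        self.name = name
        self.slots = slots
        self.running = []
        self.peers = []

    def submit(self, job):
        if len(self.running) < self.slots:
            self.running.append(job)
            return self.name                     # ran locally
        for peer in self.peers:                  # cooperate with peers
            if len(peer.running) < peer.slots:
                peer.running.append(job)
                return peer.name
        return None                              # whole grid saturated

site_a = SiteScheduler("brno", slots=1)
site_b = SiteScheduler("prague", slots=2)
site_a.peers = [site_b]
placements = [site_a.submit(f"job{i}") for i in range(4)]
```

Note that `site_a` keeps accepting and placing jobs even though it only controls its own cluster directly, which is the property that removes the single point of failure.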

  18. Volcano deformation analysis based an on-demand DInSAR-GRID system: the SBAS-GPOD solution

    NASA Astrophysics Data System (ADS)

    Manunta, M.; Casu, F.; Cossu, R.; Fusco, L.; Guarino, S.; Lanari, R.; Mazzarella, G.; Sansosti, E.

    2009-04-01

    Differential SAR Interferometry (DInSAR) has already been demonstrated to be an effective technique to detect and monitor ground displacements with centimeter accuracy. Moreover, the recent development of advanced DInSAR techniques, aimed at the generation of deformation time series, has led to the exploitation of the large archive of SAR data acquired all over the world, during the last 16 years, by the ERS, ENVISAT and RADARSAT satellites. Among these advanced approaches, we focus on the Small BAseline Subset (SBAS) algorithm, which relies on the combination of DInSAR data pairs characterized by a small separation between the acquisition orbits (baseline) in order to produce mean deformation velocity maps and the corresponding time series, maximizing the coherent pixel density of the investigated area. One of the main capabilities of the SBAS approach is the possibility to work at two spatial resolution scales, thus allowing us to investigate deformation phenomena affecting both extended areas (with a resolution of about 100 by 100 m) and selected zones, in the latter case highlighting localized displacements that may affect single structures or buildings (at the full instrument resolution). Similarly to other advanced DInSAR techniques, the SBAS approach requires extended data storage and processing capabilities due to the large amount of data exploited for the generation of the final products. Accordingly, we present in this work the results of the first experiment to "plug" the robustness of the SBAS algorithm into the high computing capability provided by a GRID-based system. In particular, we have exploited the low-resolution SBAS algorithm [1] and the ESA Grid Processing-on-Demand (G-POD) system. This environment is one of the results achieved by the ESA Science and Application Department of the Earth Observation Programmes Directorate at ESRIN following its participation in the DATAGRID project (the first large European Commission funded Grid project

  19. Spatial distribution of polychlorinated naphthalenes in the atmosphere across North China based on gridded field observations.

    PubMed

    Lin, Yan; Zhao, Yifan; Qiu, Xinghua; Ma, Jin; Yang, Qiaoyun; Shao, Min; Zhu, Tong

    2013-09-01

    Polychlorinated naphthalenes (PCNs) belong to a group of dioxin-like pollutants; however, little information is available on PCNs in North China. In this study, gridded field observations by passive air sampling at 90 sites were undertaken to determine the levels, spatial distributions, and sources of PCNs in the atmosphere of North China. A median concentration of 48 pg m⁻³ (range: 10-2460 pg m⁻³) for ∑29PCNs indicated heavy PCN pollution. The compositional profile indicated that nearly 90% of the PCNs observed were from thermal processes rather than from commercial mixtures. Regarding source type, a quantitative apportionment suggested that local non-point emissions contributed two-thirds of the total PCNs observed in the study, whereas a point source, an electronic-waste recycling site, contributed a quarter of the total. The estimated toxic equivalent quantity for dioxin-like PCNs ranged from 0.97 to 687 fg TEQ m⁻³, with the highest risk found at the electronic-waste recycling site.
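Toxic equivalents of the kind quoted above are computed as a potency-weighted sum over congeners, TEQ = Σᵢ Cᵢ × TEFᵢ. A minimal sketch follows; the congener names, TEF values, and concentrations are placeholders for illustration, not the relative-potency factors or data actually used in the study.

```python
def toxic_equivalent(concentrations, tef):
    """TEQ = sum over congeners of concentration_i * TEF_i.

    Concentrations in pg m^-3, TEFs dimensionless; the returned TEQ is in
    pg TEQ m^-3 (multiply by 1000 for fg TEQ m^-3 as quoted above).
    Congeners without an assigned TEF contribute nothing.
    """
    return sum(c * tef.get(name, 0.0) for name, c in concentrations.items())

# Hypothetical congener data; these TEFs are placeholders, not the study's.
tef = {"PCN-66": 0.002, "PCN-73": 0.003}
conc = {"PCN-66": 0.10, "PCN-73": 0.05, "PCN-42": 2.0}  # PCN-42: no TEF
teq_pg = toxic_equivalent(conc, tef)
teq_fg = teq_pg * 1000.0
```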

  20. Low energy-consumption plasma electrolytic oxidation based on grid cathode.

    PubMed

    Zhang, X M; Tian, X B; Yang, S Q; Gong, C Z; Fu, R K Y; Chu, P K

    2010-10-01

    Plasma electrolytic oxidation (PEO) has attracted widespread attention owing to its simplicity of operation and the excellent properties of the formed coating. However, wider application of PEO has been limited by its high power consumption. This work describes the design and performance of a novel technique named shorter-distance PEO (SD-PEO), which is intended to lower the energy consumption. The key feature of the method is the use of a grid cathode to eliminate the gaseous-envelope effect and to block the exchange of charge carriers during the SD-PEO process. Compared to PEO carried out at a normal electrode distance, e.g., 50 mm, both the voltage drop and the Joule heat consumed in the electrolyte at a shorter distance, e.g., 5 mm (SD-PEO), are relatively small. Consequently, the energy consumption of the novel SD-PEO method may decrease by more than 25%. Our results reveal that SD-PEO is a low energy-consumption microarc oxidation technique with strong potential for industrial applications.

  1. A Seamless Grid-Based Interface for Mean-Field QM/MM Coupled with Efficient Solvation Free Energy Calculations.

    PubMed

    Lim, Hyung-Kyu; Lee, Hankyul; Kim, Hyungjun

    2016-10-11

    Among various models that incorporate solvation effects into first-principles-based electronic structure theory such as density functional theory (DFT), the average solvent electrostatic potential/molecular dynamics (ASEP/MD) method is particularly advantageous. This method explicitly includes the nature of complicated solvent structures that is absent in implicit solvation methods. Because the ASEP/MD method treats only solvent molecule dynamics, it requires less computational cost than the conventional quantum mechanics/molecular mechanics (QM/MM) approaches. Herein, we present a real-space rectangular grid-based method to implement the mean-field QM/MM idea of ASEP/MD to plane-wave DFT, which is termed "DFT in classical explicit solvents", or DFT-CES. By employing a three-dimensional real-space grid as a communication medium, we can treat the electrostatic interactions between the DFT solute and the ASEP sampled from MD simulations in a seamless and straightforward manner. Moreover, we couple a fast and efficient free energy calculation method based on the two-phase thermodynamic (2PT) model with our DFT-CES method, which enables direct and simultaneous computation of the solvation free energies as well as the geometric and electronic responses of a solute of interest under the solvation effect. With the aid of DFT-CES/2PT, we investigate the solvation free energies and detailed solvation thermodynamics for 17 types of organic molecules, which show good agreement with the experimental data. We further compare our simulation results with previous theoretical models and assumptions made for the development of implicit solvation models. We anticipate that our proposed method, DFT-CES/2PT, will enable vast utilization of the ASEP/MD method for investigating solvation properties of materials by using periodic DFT calculations in the future.
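The core of the grid-mediated mean-field coupling can be sketched as averaging the classical Coulomb potential of MD solvent point charges onto real-space grid nodes, which is the ASEP-like quantity the DFT solute would then feel. This is a toy illustration in reduced units (unit charges, Gaussian-style 1/r), not the DFT-CES implementation.

```python
import math

def average_potential_on_grid(snapshots, grid_points):
    """Time-averaged classical electrostatic potential on a real-space grid.

    snapshots: list of MD frames, each a list of (charge, (x, y, z)) solvent
    point charges; grid_points: list of (x, y, z) nodes. Returns the average
    Coulomb potential at each node over all frames (reduced units).
    """
    def phi(frame, p):
        return sum(q / math.dist(p, pos) for q, pos in frame)
    return [sum(phi(f, p) for f in snapshots) / len(snapshots)
            for p in grid_points]

# Hypothetical two-frame "trajectory": one unit charge oscillating along x,
# probed at a single grid node at the origin
frames = [[(1.0, (2.0, 0.0, 0.0))],
          [(1.0, (4.0, 0.0, 0.0))]]
pot = average_potential_on_grid(frames, [(0.0, 0.0, 0.0)])
```

In the real scheme the grid serves both directions of the coupling: the averaged solvent potential enters the DFT Hamiltonian, and the solute charge density on the same grid acts back on the MD solvent.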

  2. Adult Competency Education Kit. Basic Skills in Speaking, Math, and Reading for Employment. Part H. ACE Competency Based Job Descriptions: #25--Household Appliance Mechanic; #26--Lineworker; #27--Painter Helper, Spray; #28--Painter, Brush; #29--Carpenter Apprentice.

    ERIC Educational Resources Information Center

    San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.

    This fifth of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Household Appliance Mechanic; Lineworker; Painter Helper, Spray; Painter, Brush; and Carpenter Apprentice. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE…

  3. A New Grid based Ionosphere Algorithm for GAGAN using Data Fusion Technique (ISRO GIVE Model-Multi Layer Data Fusion)

    NASA Astrophysics Data System (ADS)

    Srinivasan, Nirmala; Ganeshan, A. S.; Mishra, Saumyaketu

    2012-07-01

    A New Grid based Ionosphere Algorithm for GAGAN using Data Fusion Technique (ISRO GIVE Model-Multi Layer Data Fusion). Saumyaketu Mishra, Nirmala S, and A. S. Ganeshan (ISRO Satellite Centre, Bangalore), with Timothy Schempp, Gregory Um, and Hans Habereder (Raytheon Company). Development of a region-specific ionosphere model is the key element in providing precision approach services for civil aviation with GAGAN (GPS Aided GEO Augmented Navigation). GAGAN is an Indian SBAS (Space Based Augmentation System) comprising three segments: a space segment (GEO and GPS), a ground segment (15 Indian reference stations (INRES), 2 master control centers, and 3 ground uplink stations), and a user segment. The GAGAN system is intended to provide air navigation services for APV 1/1.5 precision approach over the Indian land mass and RNP 0.1 navigation service over the Indian Flight Information Region (FIR), conforming to the standards of GNSS ICAO-SARPS. The ionosphere, being the largest source of error, is of prime concern for an SBAS. India is a low-latitude country, which poses challenges for grid-based ionosphere algorithm development: large spatial and temporal gradients, the equatorial anomaly, depletions (bubbles), scintillations, etc. To meet the required GAGAN performance, it is necessary to develop and implement the best-suited ionosphere model for the Indian region, as thin-shell models such as the planar model do not meet the requirement. ISRO GIVE Model - Multi Layer Data Fusion (IGM-MLDF) employs an innovative approach for computing the ionosphere corrections and confidences at pre-defined grid points at a 350 km shell height. Ionosphere variations over geomagnetic equatorial regions show peak electron density shell heights varying from 200 km to 500 km, so a single thin-shell assumption at 350 km is not valid over the Indian region. Hence IGM-MLDF employs an innovative scheme of modeling at two shell heights; through empirical analysis, shell heights of 250 km and 450 km were chosen.
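The thin-shell geometry underlying such grid models can be illustrated with the standard obliquity (slant-to-vertical mapping) factor evaluated at the two shell heights. This is a generic SBAS-style formula, not the IGM-MLDF algorithm itself; how a slant observation is split between the two shells is a modeling choice not shown here.

```python
import math

RE_KM = 6371.0  # mean Earth radius

def obliquity_factor(elevation_deg, shell_height_km):
    """Thin-shell mapping function relating slant to vertical ionospheric
    delay at a given shell height:
        M = 1 / sqrt(1 - (Re * cos(el) / (Re + h))^2)
    M = 1 at zenith and grows as elevation decreases."""
    el = math.radians(elevation_deg)
    s = RE_KM * math.cos(el) / (RE_KM + shell_height_km)
    return 1.0 / math.sqrt(1.0 - s * s)

# A two-shell scheme evaluates the mapping at both assumed shell heights
el = 30.0
m_low = obliquity_factor(el, 250.0)   # lower shell: larger obliquity
m_high = obliquity_factor(el, 450.0)  # upper shell: smaller obliquity
```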

  4. The GEWEX LandFlux project: Evaluation of model evaporation using tower-based and globally gridded forcing data

    SciTech Connect

    McCabe, M. F.; Ershadi, A.; Jimenez, C.; Miralles, D. G.; Michel, D.; Wood, E. F.

    2016-01-26

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley–Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman–Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance.

    Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m–2; 0.65), followed
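The tower-based statistical evaluation reported above (correlation; RMSE in W m⁻²) can be reproduced in miniature with the two standard metrics; the flux values below are invented for illustration and are not LandFlux or tower data.

```python
import math

def pearson_r(obs, sim):
    """Pearson correlation between observed and simulated series."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = math.sqrt(sum((o - mo) ** 2 for o in obs))
    vs = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (vo * vs)

def rmse(obs, sim):
    """Root-mean-square error in the units of the inputs."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

# Hypothetical daily latent heat flux (W m^-2): tower obs vs model output
obs = [110.0, 95.0, 130.0, 80.0, 150.0]
sim = [100.0, 105.0, 120.0, 90.0, 140.0]
r, err = pearson_r(obs, sim), rmse(obs, sim)
```

Running the same model twice, once forced with tower data and once with the gridded product, and comparing these metrics is exactly the parallel-simulation design the abstract describes.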

  8. GridPP - Preparing for LHC Run 2 and the Wider Context

    NASA Astrophysics Data System (ADS)

    Coles, Jeremy

    2015-12-01

    This paper elaborates upon the operational status and directions of the UK Computing for Particle Physics (GridPP) project as it approaches LHC Run 2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites - from the increasing adoption of larger multicore nodes to the move towards alternative batch systems and cloud platforms - as well as changes being driven by funding considerations. The paper highlights work being done with non-LHC communities and describes some of the early outcomes of adopting a generic DIRAC-based job submission and management framework. The paper presents results from an analysis of how GridPP effort is distributed across various deployment and operations tasks and how this may be used to target further improvements in efficiency.

  9. The Costs of Today's Jobs: Job Characteristics and Organizational Supports as Antecedents of Negative Spillover

    ERIC Educational Resources Information Center

    Grotto, Angela R.; Lyness, Karen S.

    2010-01-01

    This study examined job characteristics and organizational supports as antecedents of negative work-to-nonwork spillover for 1178 U.S. employees. Based on hierarchical regression analyses of 2002 National Study of the Changing Workforce data and O*NET data, job demands (requirements to work at home beyond scheduled hours, job complexity, time and…

  10. The pilot way to Grid resources using glideinWMS

    SciTech Connect

    Sfiligoi, Igor; Bradley, Daniel C.; Holzman, Burt; Mhashilkar, Parag; Padhi, Sanjay; Wurthwein, Frank; /UC, San Diego

    2010-09-01

    Grid computing has become very popular in big and widespread scientific communities with high computing demands, like high energy physics. Computing resources are being distributed over many independent sites with only a thin layer of Grid middleware shared between them. This deployment model has proven to be very convenient for computing resource providers, but has introduced several problems for the users of the system, the three major ones being the complexity of job scheduling, the non-uniformity of computing resources, and the lack of good job monitoring. Pilot jobs address all of the above problems by creating a virtual private computing pool on top of Grid resources. This paper presents both the general pilot concept and a concrete implementation, called glideinWMS, deployed in the Open Science Grid.
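The pilot concept described above can be sketched as follows: pilots land on Grid worker nodes and pull real user jobs from a central queue, so the user sees one uniform virtual pool. This is a toy illustration of the idea, not glideinWMS itself; the class and method names are invented.

```python
from collections import deque

class PilotPool:
    """Sketch of a pilot-based virtual pool: user jobs go into one central
    queue, and each pilot, once running at a site, drains jobs from it,
    giving uniform scheduling and pool-level monitoring."""

    def __init__(self):
        self.user_jobs = deque()
        self.log = []          # (site, job): uniform monitoring record

    def enqueue(self, job):
        self.user_jobs.append(job)

    def run_pilot(self, site):
        """A pilot pulls jobs until the queue is empty (a real pilot would
        also validate its node and respect wall-time limits)."""
        while self.user_jobs:
            job = self.user_jobs.popleft()
            self.log.append((site, job))

pool = PilotPool()
for j in ("analysis_1", "analysis_2", "analysis_3"):
    pool.enqueue(j)
pool.run_pilot("site_A")
```

Because jobs bind to pilots only at the last moment, the user never deals with the non-uniformity of the underlying sites directly.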

  11. RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.

    PubMed

    Varghese, Blesson; Patel, Ishan; Barker, Adam

    2015-01-01

    Large-scale ad hoc analytics of genomic data is popular using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs have been benefitting from on-demand computing and storage, their scalability, and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimal effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R scripts using Bioconductor packages, referred to as 'RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data, and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.

  12. Polarization-extinction-based detection of DNA hybridization in situ using a nanoparticle wire-grid polarizer.

    PubMed

    Yu, Hojeong; Oh, Youngjin; Kim, Soowon; Song, Seok Ho; Kim, Donghyun

    2012-09-15

    Metallic wires can discriminate light polarization due to strong absorption of electric fields oscillating in parallel to wires. Here, we explore polarization-based biosensing of DNA hybridization in situ by employing metal target-conjugated nanoparticles to form a wire-grid polarizer (WGP) as complementary DNA strands hybridize. Experimental results using gold nanoparticles of 15 nm diameter to form a WGP of 400 nm period suggest that polarization extinction can detect DNA hybridization with a limit of detection in the range of 1 nM concentration. The sensitivity may be improved by more than an order of magnitude if larger nanoparticles are employed to define WGPs at a period between 400 and 500 nm.

  13. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
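The decomposition idea can be sketched on a toy coverage-style GBLP: partition the grid into blocks, solve each block independently with a greedy heuristic, and concatenate the results. This is an illustrative stand-in for the paper's fixed-cost ILP formulation; the grid size, radius, and block size below are made up.

```python
def greedy_cover(cells, radius):
    """Greedy facility placement on a set of grid cells: repeatedly open a
    facility at the cell covering the most uncovered cells within Chebyshev
    distance `radius`. A heuristic stand-in for the exact ILP."""
    uncovered, facilities = set(cells), []
    while uncovered:
        best = max(cells, key=lambda f: sum(
            max(abs(f[0] - c[0]), abs(f[1] - c[1])) <= radius
            for c in uncovered))
        facilities.append(best)
        uncovered -= {c for c in uncovered
                      if max(abs(best[0] - c[0]), abs(best[1] - c[1])) <= radius}
    return facilities

def solve_by_decomposition(rows, cols, radius, block):
    """Decomposition heuristic: split the grid into block x block subgrids,
    solve each independently, and combine the facility lists."""
    facilities = []
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            cells = [(r, c) for r in range(r0, min(r0 + block, rows))
                     for c in range(c0, min(c0 + block, cols))]
            facilities += greedy_cover(cells, radius)
    return facilities

sites = solve_by_decomposition(rows=6, cols=6, radius=1, block=3)
```

Each subproblem is exponentially cheaper than the full instance, which is the source of the runtime reduction; the cost is a possible loss of optimality at block boundaries.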

  14. Running CMS remote analysis builder jobs on advanced resource connector middleware

    NASA Astrophysics Data System (ADS)

    Edelmann, E.; Happonen, K.; Koivumäki, J.; Lindén, T.; Välimaa, J.

    2011-12-01

    CMS user analysis jobs are distributed over the grid with the CMS Remote Analysis Builder application (CRAB). According to the CMS computing model the applications should run transparently on the different grid flavours in use. In CRAB this is handled with different plugins that are able to submit to different grids. Recently a CRAB plugin for submitting to the Advanced Resource Connector (ARC) middleware has been developed. The CRAB ARC plugin enables simple and fast job submission with full job status information available. CRAB can be used with a server which manages and monitors the grid jobs on behalf of the user. In the presentation we will report on the CRAB ARC plugin and on the status of integrating it with the CRAB server and compare this with using the gLite ARC interoperability method for job submission.
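The plugin mechanism described above, one submitter per grid flavour behind a common interface, can be sketched with a small class registry. The class names and commands here are illustrative, not CRAB's actual interfaces:

```python
class SchedulerPlugin:
    """Toy plugin registry: subclasses register themselves under a
    grid-flavour key and expose a common submit() interface."""
    registry = {}

    def __init_subclass__(cls, flavour, **kwargs):
        super().__init_subclass__(**kwargs)
        SchedulerPlugin.registry[flavour] = cls

    def submit(self, job):
        raise NotImplementedError

class ArcSubmitter(SchedulerPlugin, flavour="arc"):
    def submit(self, job):
        return f"arcsub {job}"          # hypothetical command line

class GliteSubmitter(SchedulerPlugin, flavour="glite"):
    def submit(self, job):
        return f"glite-wms-job-submit {job}"

def submit(flavour, job):
    """Dispatch a job to whichever grid flavour the site uses."""
    return SchedulerPlugin.registry[flavour]().submit(job)
```

With this shape, supporting a new middleware means adding one subclass, without touching the dispatch code, which mirrors how the ARC plugin slots into CRAB alongside the existing ones.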

  15. Using CREAM and CEMonitor for job submission and management in the gLite middleware

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Dalla Fina, S.; Dorigo, A.; Frizziero, E.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Mendez Lorenzo, P.; Miccio, V.; Sgaravatto, M.; Traldi, S.; Zangrando, L.

    2010-04-01

In this paper we describe the use of the CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, namely job submission and management. Specifically, CREAM is a job management service used for submitting, managing and monitoring computational jobs. CEMonitor is an event notification framework, which can be coupled with CREAM to provide users with asynchronous job status change notifications. Both components have been integrated in the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, in the case of the CEMonitor service, also in the OSG Grid. In this paper we report the current status of these services, the results achieved, and the issues that still have to be addressed.
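The asynchronous status-change notifications that CEMonitor adds on top of CREAM follow the classic publish/subscribe pattern: clients register a callback instead of polling the job state. A toy sketch of the pattern (the names are invented, not the CEMonitor API):

```python
class JobStatusMonitor:
    """Minimal publish/subscribe sketch: subscribers are called back
    on every job status change instead of polling for it."""

    def __init__(self):
        self.subscribers = []
        self.status = {}

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, job_id, status):
        """Record a status change and notify every subscriber."""
        self.status[job_id] = status
        for callback in self.subscribers:
            callback(job_id, status)

events = []
monitor = JobStatusMonitor()
monitor.subscribe(lambda job, status: events.append((job, status)))
monitor.update("job-42", "RUNNING")
monitor.update("job-42", "DONE")
```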

  16. Post-Earthquake People Loss Evaluation Based on Seismic Multi-Level Hybrid Grid: A Case Study on Yushu Ms 7.1 Earthquake in China

    NASA Astrophysics Data System (ADS)

    Yang, Xiaohong; Xie, Zhong; Ling, Feng; Luo, Xiangang; Zhong, Ming

    2016-01-01

People loss is among the most important pieces of information for the government after an earthquake, because it determines the appropriate level of rescue response. However, existing evaluation methods often treat the entire stricken region as a single assessment area and disregard the spatial disparity of the influencing factors, so the results are inaccurate. To address this problem, this paper proposes a post-earthquake evaluation approach for people loss based on the seismic multi-level hybrid grid (SMHG). In SMHG, the whole area is divided into grids of various sizes at different levels, which improves the efficiency of data management. With SMHG, disaster statistics can easily be counted both per administrative unit and per unit area. The proposed approach was then applied to the Yushu Ms 7.1 earthquake in China. Results revealed that the estimated number of deaths varied with the exposure grid used. Among all the grids tested, the 50×50 exposure grid yielded the most satisfactory results: the estimated number of deaths was 2,203, an 18.3% deviation from the actual loss. People loss results obtained through the proposed approach were more accurate than those obtained through traditional GIS-based methods.
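The per-cell aggregation underlying such exposure grids can be sketched as follows. The point records and cell sizes are hypothetical, and the real method layers several grid levels of different sizes rather than the two shown here:

```python
from collections import Counter

def grid_counts(points, cell_size):
    """Aggregate point records (x, y) into square grid cells of the
    given size; returns a Counter keyed by (col, row) cell index."""
    counts = Counter()
    for x, y in points:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

# hypothetical casualty record locations in some projected coordinates
pts = [(12, 7), (13, 8), (48, 51), (52, 49), (99, 99)]
coarse = grid_counts(pts, 50)   # one level of a hybrid grid
fine = grid_counts(pts, 25)     # a finer level of the same data
```

Re-counting the same records at several cell sizes, as above, is what lets the study compare how the estimate changes with the exposure grid.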

  17. Scheduling job shop - A case study

    NASA Astrophysics Data System (ADS)

    Abas, M.; Abbas, A.; Khan, W. A.

    2016-08-01

Scheduling in a job shop is important for the efficient utilization of machines in the manufacturing industry. A number of algorithms are available for scheduling jobs, depending on the machine tools, indirect consumables and the jobs to be processed. In this paper a case study is presented for the scheduling of jobs when parts are processed on the available machines. Through time and motion study, setup time and operation time are measured as the total processing time for a variety of products having different manufacturing processes. Based on due dates, different levels of priority are assigned to the jobs, and the jobs are scheduled on the basis of priority. In view of the measured processing times, the processing times of some new jobs are estimated, and an algorithm for the efficient utilization of the available machines is proposed and validated.
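Due-date-based priority dispatching of the kind described above can be sketched with the earliest-due-date (EDD) rule on a single machine; the jobs, processing times and due dates below are invented for illustration:

```python
def schedule_edd(jobs):
    """Sequence jobs by earliest due date (EDD) and compute each job's
    completion time and lateness. jobs: list of (name, proc_time, due)."""
    order = sorted(jobs, key=lambda job: job[2])  # highest priority = earliest due
    clock, schedule = 0, []
    for name, proc_time, due in order:
        clock += proc_time
        schedule.append((name, clock, clock - due))  # negative lateness = early
    return schedule

jobs = [("A", 4, 10), ("B", 2, 5), ("C", 3, 6)]
result = schedule_edd(jobs)
# B finishes at 2, C at 5, A at 9 -- all before their due dates
```

EDD minimizes the maximum lateness on a single machine; a real job shop adds per-machine routings on top of this priority ordering.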

  18. An Evaluation of Recently Developed RANS-Based Turbulence Models for Flow Over a Two-Dimensional Block Subjected to Different Mesh Structures and Grid Resolutions

    NASA Astrophysics Data System (ADS)

    Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando

    2016-04-01

Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ɛ, SST BSL k-ω and SST γ-Reθ, has been evaluated for the flow past a single block (which represents the idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind-tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence model in order to assess grid independence. Three grid resolutions (coarse, medium and fine) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain, and nx × nz = 26 × 32, 13 × 16 and 6 × 8 grid points on the block edges, were chosen and tested. It can be concluded that among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependence at the fine and medium grid resolutions with both regular and irregular mesh structures. On the other hand, despite the very good performance of the RNG k-ɛ model at the fine resolution and on regular structured grids, its disappointing performance at the coarse and medium grid resolutions indicates that the RNG k-ɛ model is highly dependent on grid structure and grid resolution. These quantitative validations are essential
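A standard way to quantify the grid dependence examined above is to estimate the observed order of accuracy from solutions on three systematically refined grids (Richardson extrapolation). The solution values below are synthetic, not taken from the study:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Estimate the observed order of accuracy p from a scalar solution
    quantity on three grids with constant refinement ratio r:
    p = log(|f_coarse - f_medium| / |f_medium - f_fine|) / log(r)."""
    return (math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine))
            / math.log(r))

# synthetic values converging at second order toward 1.0
p = observed_order(1.16, 1.04, 1.01)
```

If p comes out near the scheme's formal order, the medium and fine solutions are in the asymptotic range; a p that swings wildly, as with the RNG k-ɛ results on coarser grids, is itself a symptom of grid dependence.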

19. Application of Geographical Information System Arc/Info Grid-Based Surface Hydrologic Modeling to the Eastern Hellas Region, Mars

    NASA Astrophysics Data System (ADS)

    Mest, S. C.; Harbert, W.; Crown, D. A.

    2001-05-01

Geographical Information System GRID-based raster modeling of surface water runoff in the eastern Hellas region of Mars has been completed. We utilized the 0.0625 by 0.0625 degree topographic map of Mars collected by the Mars Global Surveyor Mars Orbiter Laser Altimeter (MOLA) instrument to model watershed and surface runoff drainage systems. Scientific interpretation of these models with respect to ongoing geological mapping is presented in Mest et al. (2001). After importing a region of approximately 77,000,000 square kilometers into Arc/Info 8.0.2, we reprojected this digital elevation model (DEM) from a Mars sphere onto a Mars ellipsoid. Using a simple cylindrical geographic projection with horizontal spatial units of decimal degrees, and then an Albers projection with horizontal spatial units of meters, we completed basic hydrological modeling. Analysis of the raw DEM to determine slope, aspect, flow direction, watershed and flow accumulation grids demonstrated the need to correct single-pixel sink anomalies. After analysis of the zonal elevation statistics associated with single-pixel sinks, which identified 0.8 percent of the DEM points as having undefined surface water flow directions, we filled single-pixel sink values of 89 meters or less. This correction is comparable with terrestrial DEMs, which contain 0.9 percent to 4.7 percent of cells that are sinks (Tarboton et al., 1991). The fill-corrected DEM was then used to determine slope, aspect, surface water flow direction and surface water flow accumulation. Within the region of interest 8,776 watersheds were identified. Using Arc/Info GRID flow direction and flow accumulation tools, regions of potential surface water flow accumulation were identified. These networks were then converted to a Strahler-ordered stream network. Surface modeling produced Strahler orders one through six. As presented in Mest et al. (2001), comparisons of mapped features may prove compatible with drainage networks and
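The single-pixel-sink detection step above can be sketched on a toy DEM: a sink is an interior cell lower than all eight of its neighbours, i.e. a cell with no defined downslope flow direction (the real workflow, in Arc/Info GRID, then fills such cells before routing flow). The elevations below are invented:

```python
def find_sinks(dem):
    """Flag interior cells lower than all eight neighbours -- the
    single-pixel sinks that leave D8 flow direction undefined."""
    rows, cols = len(dem), len(dem[0])
    sinks = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neighbours = [dem[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            if dem[i][j] < min(neighbours):
                sinks.append((i, j))
    return sinks

# toy 4x4 elevation grid with one single-pixel sink at (1, 1)
dem = [[5, 5, 5, 5],
       [5, 1, 4, 5],
       [5, 4, 4, 5],
       [5, 5, 5, 5]]
```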

  20. 3D Discontinuous Galerkin elastic seismic wave modeling based upon a grid injection method

    NASA Astrophysics Data System (ADS)

    Monteiller, V.

    2015-12-01

Full waveform inversion (FWI) is a seismic imaging method that estimates the sub-surface physical properties with a spatial resolution of the order of the wavelength. FWI is generally recast as the iterative optimization of an objective function that measures the distance between modeled and recorded data. In the framework of local descent methods, FWI requires performing at least two seismic modelings per source and per FWI iteration. Due to the resulting computational burden, applications of elastic FWI have usually been restricted to 2D geometries. Despite the continuous growth of high-performance computing facilities, application of 3D elastic FWI to real-scale problems remains computationally too expensive. To perform elastic seismic modeling in a reasonable amount of time, we consider a reduced computational domain embedded in a larger background model in which the seismic sources are located. Our aim is to compute repeatedly the full wavefield in the targeted domain after model alteration, once the incident wavefield has been computed once and for all in the background model. To achieve this goal, we use a grid injection method referred to as the Total-Field/Scattered-Field (TF/SF) technique in the electromagnetic community. We implemented the Total-Field/Scattered-Field approach in the Discontinuous Galerkin Finite Element Method (DG-FEM) that is used to perform modeling in the local domain. We show how to interface the DG-FEM with any modeling engine (analytical solution, finite-difference or finite-element methods) suitable for the background simulation. One advantage of the Total-Field/Scattered-Field approach is that the scattered wavefield, rather than the full wavefield, enters the PMLs, making the absorption of the outgoing waves at the outer edges of the computational domain more efficient. The domain reduction in which the DG-FEM is applied allows us to use modest computational resources, opening the way for high-resolution imaging by full waveform inversion.
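A minimal 1D analogue of the TF/SF grid injection can be sketched with a leapfrog scalar-wave scheme: the analytically known incident pulse enters the grid only through stencil corrections at the interface, so cells left of the interface carry the scattered field (here essentially zero, since there is no scatterer) and cells right of it carry the total field. The grid size, pulse shape and interface location are assumptions for illustration; the paper's 3D elastic DG-FEM implementation is of course far more involved:

```python
import math

def tfsf_wave_1d(n=200, steps=120, courant=1.0, boundary=50):
    """1D scalar-wave leapfrog with a Total-Field/Scattered-Field
    interface at index `boundary`: arrays hold the scattered field
    to its left and the total field from it rightwards. The incident
    right-travelling Gaussian pulse is known analytically and is
    injected purely via corrections to the two interface stencils."""
    def u_inc(i, t):
        # incident pulse, exact discrete solution when courant == 1
        return math.exp(-((i - t + 30.0) / 8.0) ** 2)

    c2 = courant ** 2
    u_prev = [0.0] * n
    u_curr = [0.0] * n
    for t in range(steps):
        u_next = [0.0] * n
        for i in range(1, n - 1):
            left, right = u_curr[i - 1], u_curr[i + 1]
            if i == boundary:          # left neighbour stores scattered only
                left += u_inc(i - 1, t)     # add incident to get total
            elif i == boundary - 1:    # right neighbour stores total
                right -= u_inc(i + 1, t)    # subtract incident to get scattered
            u_next[i] = (2 * u_curr[i] - u_prev[i]
                         + c2 * (right - 2 * u_curr[i] + left))
        u_prev, u_curr = u_curr, u_next
    return u_curr

u = tfsf_wave_1d()
```

After 120 steps the pulse sits well inside the total-field region while the scattered-field region stays near zero, which is the same property that lets the scattered wavefield alone enter the PMLs in the 3D scheme.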