Science.gov

Sample records for based grid job

  1. A Grid job monitoring system

    NASA Astrophysics Data System (ADS)

    Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir

    2010-04-01

    This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow in detail their jobs in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax powered web interface having a look-and-feel and control similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509 authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VO) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.

  2. A grid job monitoring system

    SciTech Connect

    Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir; /INFN, Pisa /Pisa, Scuola Normale Superiore

    2010-01-01

    This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow in detail their jobs in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax powered web interface having a look-and-feel and control similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509 authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VO) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.

  3. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
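    The migration policies described above weigh machine availability, CPU performance, network bandwidth, and the job's data volume. A minimal sketch of such a decision rule follows; the site figures and the simplified cost model are invented for illustration and are not the paper's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    queue_wait_s: float    # estimated wait in the site's batch queue
    perf_factor: float     # relative CPU performance (1.0 = reference machine)
    bandwidth_mbps: float  # achievable bandwidth from the submitting site

def estimated_turnaround(site: Site, cpu_work_s: float, data_mb: float,
                         home: bool) -> float:
    """Turnaround = queue wait + scaled run time + data-transfer time.
    Input/output data only cross the network when the job migrates
    away from its home site."""
    transfer_s = 0.0 if home else (data_mb * 8.0) / site.bandwidth_mbps
    return site.queue_wait_s + cpu_work_s / site.perf_factor + transfer_s

def pick_site(sites, cpu_work_s, data_mb, home_name):
    """Migrate the job to whichever site minimizes estimated turnaround."""
    return min(sites, key=lambda s: estimated_turnaround(
        s, cpu_work_s, data_mb, home=(s.name == home_name)))

# Invented figures: a loaded home site vs. two remote candidates.
sites = [Site("home", 7200.0, 1.0, 0.0),
         Site("remote_a", 600.0, 1.5, 100.0),
         Site("remote_b", 60.0, 0.8, 10.0)]
best = pick_site(sites, cpu_work_s=36000.0, data_mb=8000.0, home_name="home")
# -> remote_a: its shorter queue and faster CPUs outweigh the transfer cost
```

    The same trade-off drives the paper's comparison: a policy ignoring bandwidth would pick the fastest site even when transferring the data costs more than the saved run time.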

  4. Job scheduling in a heterogeneous grid environment

    SciTech Connect

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Smith, Warren

    2004-02-11

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.

  5. Mediated definite delegation - Certified Grid jobs in ALICE and beyond

    NASA Astrophysics Data System (ADS)

    Schreiner, Steffen; Grigoras, Costin; Litmaath, Maarten; Betev, Latchezar; Buchmann, Johannes

    2012-12-01

    Grid computing infrastructures need to provide traceability and accounting of their users’ activity and protection against misuse and privilege escalation, where the delegation of privileges in the course of a job submission is a key concern. This work describes an improved handling of Multi-user Grid Jobs in the ALICE Grid Services. A security analysis of the ALICE Grid job model is presented with derived security objectives, followed by a discussion of existing approaches of unrestricted delegation based on X.509 proxy certificates and the Grid middleware gLExec. Unrestricted delegation has severe security consequences and limitations, most importantly allowing for identity theft and forgery of jobs and data. These limitations are discussed and formulated, both in general and with respect to an adoption in line with Multi-user Grid Jobs. A new general model of mediated definite delegation is developed, allowing a broker to dynamically process and assign Grid jobs to agents while providing strong accountability and long-term traceability. A prototype implementation allowing for fully certified Grid jobs is presented as well as a potential interaction with gLExec. The achieved improvements regarding system security, malicious job exploitation, identity protection, and accountability are emphasized, including a discussion of non-repudiation in the face of malicious Grid jobs.

  6. Pilot job accounting and auditing in Open Science Grid

    SciTech Connect

    Sfiligoi, Igor; Green, Chris; Quinn, Greg; Thain, Greg; /Wisconsin U., Madison

    2008-06-01

    The Grid accounting and auditing mechanisms were designed under the assumption that users would submit their jobs directly to the Grid gatekeepers. However, many groups are starting to use pilot-based systems, where users submit jobs to a centralized queue, from which they are subsequently transferred to the Grid resources by the pilot infrastructure. While this approach greatly improves the user experience, it does disrupt the established accounting and auditing procedures. Open Science Grid deploys gLExec on the worker nodes to keep the pilot-related accounting and auditing information and centralizes the accounting collection with GRATIA.

  7. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. In this paper we use a PSO approach for grid job scheduling. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed novel approach is more efficient than the PSO approach reported in the literature.
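    A PSO scheduler of the kind described can be sketched by letting each particle dimension encode, as a continuous value, the machine assigned to one job, with the fitness combining makespan and mean flowtime. This is a generic textbook PSO, not the authors' algorithm, and all parameter values are illustrative:

```python
import random

def evaluate(assignment, job_len, machine_speed):
    """Makespan and (approximate) flowtime of mapping job i to machine
    assignment[i]; flowtime sums completion times in submission order."""
    finish = [0.0] * len(machine_speed)
    flowtime = 0.0
    for job, m in enumerate(assignment):
        finish[m] += job_len[job] / machine_speed[m]
        flowtime += finish[m]
    return max(finish), flowtime

def pso_schedule(job_len, machine_speed, particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lam=0.5):
    """Discrete scheduling via continuous PSO: dimension j of a particle,
    truncated to an integer, is the machine assigned to job j."""
    n, m = len(job_len), len(machine_speed)

    def fitness(pos):
        assign = [min(m - 1, max(0, int(x))) for x in pos]
        makespan, flowtime = evaluate(assign, job_len, machine_speed)
        return lam * makespan + (1 - lam) * flowtime / n

    swarm = [[random.uniform(0, m) for _ in range(n)] for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    pbest_f = [fitness(p) for p in swarm]
    g = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] = min(m - 1e-9, max(0.0, p[d] + vel[i][d]))
            f = fitness(p)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = p[:], f
                if f < gbest_f:
                    gbest, gbest_f = p[:], f
    return [min(m - 1, int(x)) for x in gbest]

random.seed(7)
schedule = pso_schedule([4, 8, 2, 6, 3], [1.0, 2.0], particles=10, iters=50)
# schedule[j] is the machine index chosen for job j
```

    The weight lam trades makespan against flowtime, mirroring the paper's simultaneous-minimization objective.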

  8. Grid Service for User-Centric Job

    SciTech Connect

    Lauret, Jerome

    2009-07-31

    The User Centric Monitoring (UCM) project was aimed at developing a toolkit that provides the Virtual Organization (VO) with tools to build systems that serve a rich set of intuitive job and application monitoring information to the VO’s scientists so that they can be more productive. The tools help collect and serve the status and error information through a Web interface. The proposed UCM toolkit is composed of a set of library functions, a database schema, and a Web portal that will collect and filter available job monitoring information from various resources and present it to users in a user-centric view rather than an administrative-centric point of view. The goal is to create a set of tools that can be used to augment grid job scheduling systems, meta-schedulers, applications, and script sets in order to provide the UCM information. The system provides various levels of an application programming interface that is useful throughout the Grid environment and at the application level for logging messages, which are combined with the other user-centric monitoring information in an abstracted “data store”. A planned monitoring portal will also dynamically present the information to users in their web browser in a secure manner, which is also easily integrated into any JSR-compliant portal deployment that a VO might employ. The UCM is meant to be flexible and modular in the ways that it can be adapted, giving the VO many choices to build a solution that works for them, with special attention to the smaller VOs that do not have the resources to implement home-grown solutions.

  9. Jobs masonry in LHCb with elastic Grid Jobs

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, Ph

    2015-12-01

    In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit implemented by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the jobs’ execution. In many modern interwares, the jobs are actually executed by pilot jobs, which can use the whole available time to run multiple consecutive jobs. If at some point the time remaining in a pilot is too short for the execution of any job, the pilot has to be released, even though that time could have been used by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit the computing capabilities available to a pilot, even on resources with limited time capabilities, by adding elasticity to production Monte Carlo (MC) simulation jobs. With our approach, independently of the time available, LHCbDIRAC will always have the possibility to execute an MC job, whose length will be adapted to the available amount of time: therefore the same job, running on different computing resources with different time limits, will produce different numbers of events. The decision on the number of events to be produced is made just in time at the start of the job, when the capabilities of the resource are known. In order to know how many events an MC job will be instructed to produce, LHCbDIRAC simply requires three values: the CPU work per event for that type of job, the power of the machine it is running on, and the time left for the job before being killed. Knowing these values, we can estimate the number of events the job will be able to simulate with the available CPU time. This paper will demonstrate that, using this simple but effective solution, LHCb manages to make a more efficient use of the available computing resources.
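    The event-count decision described here reduces to a single estimate from the three quoted values. A sketch, where the safety margin and the unit names are assumptions rather than LHCbDIRAC's actual parameters:

```python
import math

def events_to_produce(cpu_work_per_event: float, cpu_power: float,
                      seconds_left: float, margin: float = 0.8) -> int:
    """Number of MC events that fit in the remaining slot time.

    cpu_work_per_event -- normalized CPU work per event (e.g. HS06.s)
    cpu_power          -- power of the worker node in the same unit (HS06)
    seconds_left       -- wall-clock seconds before the job would be killed
    margin             -- assumed safety factor for setup/upload overheads
    """
    usable_work = seconds_left * cpu_power * margin
    return max(0, math.floor(usable_work / cpu_work_per_event))

# The same job type yields different event counts on different resources:
n_long = events_to_produce(250.0, 10.0, 36000.0)  # roomy batch slot -> 1152
n_short = events_to_produce(250.0, 10.0, 1800.0)  # leftover pilot time -> 57
```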

  10. Job execution in virtualized runtime environments in grid

    NASA Astrophysics Data System (ADS)

    Shamardin, Lev; Demichev, Andrey; Gorbunov, Ilya; Ilyin, Slava; Kryukov, Alexander

    2010-04-01

    Grid systems are used for calculations and data processing in various applied areas such as biomedicine, nanotechnology and materials science, cosmophysics and high energy physics, as well as in a number of industrial and commercial areas. The traditional method of executing jobs in a grid is to run them directly on the cluster nodes. This restricts the choice of operational environment to the operating system of the node, and does not allow resource-sharing policies or job isolation to be enforced, nor does it guarantee a minimal level of available system resources. We propose a new approach to running jobs on the cluster nodes in which each grid job runs in its own virtual environment. This makes it possible to use different operating systems for different jobs on the same cluster nodes, provides better isolation between running jobs and allows resource-sharing policies to be enforced. The implementation of the proposed approach was made in the framework of the gLite middleware of the EGEE/WLCG project and was successfully tested at SINP MSU. The implementation is transparent to the grid user and allows binaries compiled for various operating systems to be submitted through exactly the same gLite interface. Virtual machine images with the standard gLite worker node software and a sample MS Windows execution environment were created.

  11. Smart Grid Cybersecurity: Job Performance Model Report

    SciTech Connect

    O'Neil, Lori Ross; Assante, Michael; Tobey, David

    2012-08-01

    This is the project report to DOE OE-30 for the completion of Phase 1 of a 3-phase project. This report outlines the work done to develop a smart grid cybersecurity certification. This work is being done with the subcontractor NBISE.

  12. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.

  13. Development of Interface to Grid for Job Submission for D0 Experiment

    NASA Astrophysics Data System (ADS)

    Reggio, James; Gopalratnam, Karthik; Balasubramanian, Anand; Bhamidipati, Prashant; Sosebee, Mark; de, Kaushik; Levine, David; Yu, Jaehoon; Meyer, Drew

    2002-10-01

    We present a web-based user interface for job submission to computational grids in high energy physics. The backend of this interface creates a file describing the job, using a job definition language specific to the D0 experiment. The D0 experiment is a high energy physics experiment at the Fermi National Accelerator Laboratory in Batavia, Illinois. The amount of data from the experiment is expected to exceed multiple petabytes. This immense amount of data poses issues for effectively sharing data within the collaboration. The interface covered in this talk is expected to provide easy access to the computational grid for researchers submitting D0-specific computing applications.

  14. Grid-based Visualization Framework

    NASA Astrophysics Data System (ADS)

    Thiebaux, M.; Tangmunarunkit, H.; Kesselman, C.

    2003-12-01

    Advances in science and engineering have put high demands on tools for high-performance large-scale visual data exploration and analysis. For example, earthquake scientists can now study earthquake phenomena from first principle physics-based simulations. These simulations can generate large amounts of data, possibly with high spatial resolution and long time series. Single-system visualization software running on commodity machines cannot scale up to the large amounts of data generated by these simulations. To address this problem, we propose a flexible and extensible Grid-based visualization framework for time-critical, interactively controlled visual browsing of spatially and temporally large datasets in a Grid environment. Our framework leverages Grid resources for scalable computation and data storage to maintain performance and interactivity with large visualization jobs. Our framework utilizes Globus Toolkit 2.4 components for security (i.e., GSI), resource allocation and management (i.e., DUROC, GRAM) and communication (i.e., Globus-IO) to couple commodity desktops with remote, scalable storage and computational resources in a Grid for interactive data exploration. There are two major components in this framework---Grid Data Transport (GDT) and the Grid Visualization Utility (GVU). GDT provides libraries for performing parallel data filtering and parallel data exchange among Grid resources. GDT allows arbitrary data filtering to be integrated into the system. It also facilitates multi-tiered pipeline topology construction of compute resources and displays. In addition to scientific visualization applications, GDT can be used to support other applications that require parallel processing and parallel transfer of partially ordered independent files, such as file-set transfer. On top of GDT, we have developed the Grid Visualization Utility (GVU), which is designed to assist visualization dataset management, including file formatting, data transport and automatic

  15. Multicore job scheduling in the Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.

    2015-12-01

    After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originating from the scheduling mechanisms, and without imposing unnecessary complexities in the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments and the knowledge gained from scale tests of the different proposed job submission strategies.

  16. Grid-based HPC astrophysical applications at INAF Catania.

    NASA Astrophysics Data System (ADS)

    Costa, A.; Calanducci, A.; Becciani, U.; Capuzzo Dolcetta, R.

    The research activity in the grid area at INAF Catania has been devoted to two main goals: the integration of a multiprocessor supercomputer (IBM SP4) within the INFN-GRID middleware and the development of a web portal, Astrocomp-G, for the submission of astrophysical jobs to the grid infrastructure. Most of the actual grid implementation infrastructure is based on common hardware, i.e. i386-architecture machines (Intel Celeron, Pentium III, IV, AMD Duron, Athlon) running Linux RedHat OS. We were the first institute to integrate a totally different machine, an IBM SP with RISC architecture and AIX OS, as a powerful Worker Node inside a grid infrastructure. We identified and ported to AIX OS the grid components dealing with job monitoring and execution and properly tuned the Computing Element to deliver jobs to this special Worker Node. For testing purposes we used MARA, an astrophysical application for the analysis of light curve sequences. Astrocomp-G is a user-friendly front end to our grid site. Users who want to submit the astrophysical applications already available in the portal need to own a valid personal X509 certificate in addition to a username and password released by the grid portal web master. The personal X509 certificate is a prerequisite for the creation of a short- or long-term proxy certificate that allows the grid infrastructure services to identify clearly whether the owner of the job has the permissions to use resources and data. X509 and proxy certificates are part of GSI (Grid Security Infrastructure), a standard security tool adopted by all major grid sites around the world.

  17. Remote Job Testing for the Neutron Science TeraGrid Gateway

    SciTech Connect

    Lynch, Vickie E; Cobb, John W; Miller, Stephen D; Reuter, Michael A; Smith, Bradford C

    2009-01-01

    Remote job execution gives neutron science facilities access to high performance computing such as the TeraGrid. A scientific community can use community software with a community certificate and account through a common interface of a portal. Results show this approach is successful, but with more testing and problem solving, we expect remote job executions to become more reliable.

  18. The Grid[Way] Job Template Manager, a tool for parameter sweeping

    NASA Astrophysics Data System (ADS)

    Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.

    2011-04-01

    Parameter sweeping is a widely used algorithmic technique in computational science. It is especially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports useful features such as a multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and automatic indexation of job templates. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and their respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.

    Program summary
    Program title: Grid[Way] Job Template Manager (version 1.0)
    Catalogue identifier: AEIE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Apache license 2.0
    No. of lines in distributed program, including test data, etc.: 3545
    No. of bytes in distributed program, including test data, etc.: 126 879
    Distribution format: tar.gz
    Programming language: Perl 5.8.5 and above
    Computer: Any (tested on PC x86 and x86_64)
    Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
    RAM: 10 MB
    Classification: 6.5
    External routines: The GridWay Metascheduler [1]
    Nature of problem: To parameterize and manage an application running on a grid or cluster.
    Solution method: Generation of job templates as a cross product of
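    The core idea, generating one job template per point of the cross product of the parameter ranges, can be sketched as follows. The template field names are illustrative, not GridWay's actual template format (and the tool itself is written in Perl):

```python
import itertools

def job_templates(executable, param_space):
    """Yield one job-template dict per point of the cross product of the
    parameter ranges. Field names are illustrative stand-ins."""
    names = sorted(param_space)
    combos = itertools.product(*(param_space[n] for n in names))
    for idx, values in enumerate(combos):
        params = dict(zip(names, values))
        yield {
            "index": idx,
            "executable": executable,
            "arguments": " ".join(f"--{k}={v}" for k, v in params.items()),
            "stdout": f"out.{idx}",
            "stderr": f"err.{idx}",
        }

templates = list(job_templates("./simulate",
                               {"temp": [300, 350], "seed": range(3)}))
# 2 temperatures x 3 seeds -> 6 job templates, e.g. "--seed=0 --temp=300"
```

    Systematic indexation of the generated templates is what makes the bookkeeping of job statuses reliable across a large sweep.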

  19. A modified ant colony optimization for the grid job scheduling problem with QoS requirements

    NASA Astrophysics Data System (ADS)

    Pu, Xun; Lu, XianLiang

    2011-10-01

    Job scheduling that meets customers' quality of service (QoS) requirements is challenging in a grid environment. In this paper, we present a modified ant colony optimization (MACO) algorithm for the job scheduling problem in grids. Instead of using the conventional construction approach to build feasible schedules, the proposed algorithm employs a decomposition method to satisfy the customer's deadline and cost requirements. In addition, a new mechanism for updating the states of service instances is embedded to improve the convergence of MACO. Experiments demonstrate the effectiveness of the proposed algorithm.

  20. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
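    The link between wavelet coefficients and grid refinement can be illustrated with the simplest wavelet, the Haar wavelet: large detail coefficients mark locations of rapid variation, which is where the grid needs to be finer. This is a toy sketch, not the paper's actual method:

```python
def haar_details(samples):
    """One level of the Haar transform: pairwise averages (coarse part)
    and pairwise differences (detail coefficients)."""
    coarse = [(a + b) / 2 for a, b in zip(samples[::2], samples[1::2])]
    detail = [(a - b) / 2 for a, b in zip(samples[::2], samples[1::2])]
    return coarse, detail

def refine_mask(samples, threshold):
    """Flag the cells whose detail coefficient is large: that is where
    the signal varies quickly and the grid should be made finer."""
    _, detail = haar_details(samples)
    return [abs(d) > threshold for d in detail]

# A jump in the sampled field produces one large detail coefficient,
# so only the cell containing the jump is flagged for refinement:
mask = refine_mask([0, 0, 0, 9, 9, 9, 9, 9], threshold=1.0)
# -> [False, True, False, False]
```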

  1. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, such as the expected runtime, is defined beforehand by the Production Manager in the best case, and falls back to fixed, arbitrary values by default. LHCb's Workload Management System provides no mechanism to automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. This presents a major problem particularly in the context of multicore jobs, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for moving to multicore jobs is the reduction of the overall memory footprint; therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm built on history-based prediction is developed. The aim is to learn over time how jobs’ runtime and memory evolve under changes in experiment conditions and software versions. It will be shown that the estimation can be notably improved if experiment conditions are taken into account.
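    A minimal form of such history-based prediction is a running average over past jobs that share the same features. The feature names below are illustrative, and the paper's supervised-learning algorithm is considerably more elaborate than a per-key mean:

```python
from collections import defaultdict

class HistoryEstimator:
    """Running mean of past runtimes, keyed by job features (e.g. job
    type, conditions, application version)."""

    def __init__(self):
        self._sum = defaultdict(float)
        self._count = defaultdict(int)

    def record(self, features: dict, runtime: float) -> None:
        key = tuple(sorted(features.items()))
        self._sum[key] += runtime
        self._count[key] += 1

    def predict(self, features: dict, default=None):
        key = tuple(sorted(features.items()))
        if self._count[key] == 0:
            return default  # no history yet: caller falls back to a default
        return self._sum[key] / self._count[key]

est = HistoryEstimator()
for runtime in (7.0, 9.0, 8.0):
    est.record({"type": "MCSimulation", "version": "v42"}, runtime)
pred = est.predict({"type": "MCSimulation", "version": "v42"})  # -> 8.0
```

    Requesting the predicted runtime instead of a fixed arbitrary value is what lets sites size multicore job slots without wasting CPU time.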

  2. Grid-based Meteorological and Crisis Applications

    NASA Astrophysics Data System (ADS)

    Hluchy, Ladislav; Bartok, Juraj; Tran, Viet; Lucny, Andrej; Gazak, Martin

    2010-05-01

    forecast model is a subject of parameterization and parameter optimization before its real deployment. The parameter optimization requires tens of evaluations of the parameterized model's accuracy, and each evaluation of the model parameters requires re-running the hundreds of meteorological situations collected over the years and comparing the model output with the observed data. The architecture and inherent heterogeneity of both examples, their computational complexity and their interfaces to other systems and services make them well suited for decomposition into a set of web and grid services. Such decomposition has been performed within several projects in which we have participated or participate, in cooperation with the academic sphere, namely int.eu.grid (a dispersion model deployed as a pilot application on an interactive grid), SEMCO-WS (semantic composition of web and grid services), DMM (development of a prediction system for significant meteorological phenomena based on data mining), VEGA 2009-2011 and EGEE III. We present useful and practical applications of high-performance computing technologies. The use of grid technology provides access to much higher computational power, not only for modeling and simulation but also for model parameterization and validation. This results in optimized model parameters and more accurate simulation outputs. Given that the simulations are used for aviation, road traffic and crisis management, even a small improvement in the accuracy of predictions may result in a significant improvement in safety as well as in cost reduction. We found grid computing useful for our applications. We are satisfied with this technology, and our experience encourages us to extend its use. Within an ongoing project (DMM) we plan to include the processing of satellite images, which extends our computational requirements very rapidly. We believe that thanks to grid computing we are able to handle the job almost in real time.

  3. On the Optimization of GLite-Based Job Submission

    NASA Astrophysics Data System (ADS)

    Misurelli, Giuseppe; Palmieri, Francesco; Pardi, Silvio; Veronesi, Paolo

    2011-12-01

    A Grid is a very dynamic, complex and heterogeneous system, whose reliability can be adversely affected by several different factors, such as communication and hardware faults, middleware bugs or wrong configurations due to human errors. As the infrastructure scales, spanning a large number of sites, each hosting hundreds or thousands of hosts/resources, the occurrence of runtime faults following job submission becomes a very frequent phenomenon. Fault avoidance therefore becomes a fundamental aim in modern Grids, since the dependability of individual resources, spread across widely distributed computing infrastructures and often used outside of their native organizational boundaries, cannot be guaranteed in any systematic way. Accordingly, we propose a simple job optimization solution based on a user-driven fault-avoidance strategy. This strategy starts from the introduction, within the grid information system, of several on-line service-monitoring metrics that can be used as specific hints to the workload management system for driving resource discovery operations according to a fault-free resource-scheduling plan. This solution, whose main goal is to minimize execution time by avoiding execution failures, has proved very effective in increasing both the user-perceivable quality and the overall grid performance.
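    In essence, the fault-avoidance strategy filters candidate resources by monitored health metrics before matchmaking. A minimal sketch, with invented metric names and thresholds standing in for the service-monitoring values published in the information system:

```python
def schedulable_resources(resources, max_failure_rate=0.1, max_load=0.9):
    """Keep only the resources whose monitored metrics look healthy,
    so the workload management system never matches a job to a
    resource that is likely to fail it."""
    return [r for r in resources
            if r["recent_failure_rate"] <= max_failure_rate
            and r["load"] <= max_load]

# Hypothetical computing elements with their published metrics:
ces = [{"id": "ce1.example.org", "recent_failure_rate": 0.02, "load": 0.50},
       {"id": "ce2.example.org", "recent_failure_rate": 0.30, "load": 0.40},
       {"id": "ce3.example.org", "recent_failure_rate": 0.01, "load": 0.95}]
healthy = [r["id"] for r in schedulable_resources(ces)]
# -> ['ce1.example.org']: ce2 fails too often, ce3 is overloaded
```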

  4. Ganga: User-friendly Grid job submission and management tool for LHC and beyond

    NASA Astrophysics Data System (ADS)

    Vanderster, D. C.; Brochu, F.; Cowan, G.; Egede, U.; Elmsheuser, J.; Gaidoz, B.; Harrison, K.; Lee, H. C.; Liko, D.; Maier, A.; Mościcki, J. T.; Muraru, A.; Pajchel, K.; Reece, W.; Samset, B.; Slater, M.; Soroko, A.; Tan, C. L.; Williams, M.

    2010-04-01

    Ganga has been widely used for several years in ATLAS, LHCb and a handful of other communities. Ganga provides a simple yet powerful interface for submitting and managing jobs on a variety of computing backends. The tool helps users configure applications and keep track of their work. With the major release of version 5 in summer 2008, Ganga's main user-friendly features have been strengthened. Examples include a new configuration interface, enhanced support for job collections, bulk operations and easier access to subjobs. In addition to the traditional batch and Grid backends such as Condor, LSF, PBS and gLite/EDG, point-to-point job execution via ssh on remote machines is now supported. Ganga is used as an interactive job submission interface for end-users, and also as a job submission component for higher-level tools. For example, GangaRobot is used to perform automated, end-to-end testing of distributed data analysis. Ganga comes with an extensive test suite covering more than 350 test cases. The development model involves all active developers in the release management shifts, which is an important and novel approach for distributed software collaborations. Ganga 5 is a mature, stable and widely-used tool with long-term support from the HEP community.
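
    The job-collection idea — one logical job split into subjobs whose statuses can be inspected and acted on in bulk — can be illustrated independently of Ganga itself (the classes below are hypothetical stand-ins, not Ganga's API):

```python
class SubJob:
    def __init__(self, args):
        self.args, self.status = args, "new"

class Job:
    # Hypothetical stand-in for a job with a splitter: the input file
    # list is chunked into subjobs that are managed as one collection.
    def __init__(self, inputs, chunk=2):
        self.subjobs = [SubJob(inputs[i:i + chunk])
                        for i in range(0, len(inputs), chunk)]

    def submit(self):
        for sj in self.subjobs:        # bulk operation over all subjobs
            sj.status = "submitted"

    def status(self):
        return [sj.status for sj in self.subjobs]

j = Job(["f1", "f2", "f3", "f4", "f5"])
j.submit()
print(len(j.subjobs), j.status())  # 3 ['submitted', 'submitted', 'submitted']
```

    Ganga's real interface adds the backend abstraction on top of this pattern, so the same collection can be sent to LSF, Condor or gLite unchanged.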

  5. Output-based Job Descriptions: Beyond Skills and Competencies.

    ERIC Educational Resources Information Center

    Thomas, Mary Norris

    2000-01-01

    Explains output-based job descriptions, which describe the work rather than the worker. Topics include identifying job outputs; job analyses; identifying skills and competencies as support elements; and benefits over traditional job descriptions, including help in achieving business goals, use in strategic planning, clarifying role relationships,…

  6. A grid service-based tool for hyperspectral imaging analysis

    NASA Astrophysics Data System (ADS)

    Carvajal, Carmen L.; Lugo, Wilfredo; Rivera, Wilson; Sanabria, John

    2005-06-01

    This paper outlines the design and implementation of Grid-HSI, a Service Oriented Architecture-based Grid application that enables hyperspectral imaging analysis. Grid-HSI provides users with a transparent interface to access computational resources and remotely perform hyperspectral imaging analysis through a set of Grid services. Grid-HSI is composed of a Portal Grid Interface, a Data Broker and a set of specialized Grid services. Grid-based applications, contrary to other client-server approaches, provide the capabilities of persistence and potentially transient processes on the web. Our experimental results on Grid-HSI show the suitability of the prototype system to perform hyperspectral imaging analysis efficiently.

  7. Final Report for 'An Abstract Job Handling Grid Service for Dataset Analysis'

    SciTech Connect

    David A Alexander

    2005-07-11

    For Phase I of the Job Handling project, Tech-X has built a Grid service for processing analysis requests, as well as a Graphical User Interface (GUI) client that uses the service. The service is designed to generically support High-Energy Physics (HEP) experimental analysis tasks. It has an extensible, flexible, open architecture and language. The service uses the Solenoidal Tracker At RHIC (STAR) experiment as a working example. STAR is an experiment at the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory (BNL). STAR and other experiments at BNL generate multiple Petabytes of HEP data. The raw data is captured as millions of input files stored in a distributed data catalog. Potentially using thousands of files as input, analysis requests are submitted to a processing environment containing thousands of nodes. The Grid service provides a standard interface to the processing farm. It enables researchers to run large-scale, massively parallel analysis tasks, regardless of the computational resources available in their location.

  8. Space-based Science Operations Grid Prototype

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Welch, Clara L.; Redman, Sandra

    2004-01-01

    Grid technology is an up-and-coming technology that enables widely disparate services to be offered to users economically, in an easy-to-use form, on a scale not previously available. Under the Grid concept, disparate organizations, generally defined as "virtual organizations", can share services, i.e. discipline-specific computer applications, required to accomplish their specific scientific and engineering goals and objectives. Grid technology has been enabled by the evolution of increasingly high-speed networking; without that evolution, Grid technology would not have emerged. NASA/Marshall Space Flight Center's (MSFC) Flight Projects Directorate, Ground Systems Department is developing a Space-based Science Operations Grid prototype to provide scientists and engineers the tools necessary to operate space-based science payloads/experiments and to conduct public and educational outreach. In addition, Grid technology can provide new services not currently available to users. These services include mission voice and video, application sharing, telemetry management and display, payload and experiment commanding, data mining, high-order data processing, discipline-specific application sharing and data storage, all from a single grid portal. The prototype will provide most of these services in a first-step demonstration of integrated Grid and space-based science operations technologies. It will initially be based on the International Space Station science operational services located at the Payload Operations Integration Center at MSFC, but can be applied to many NASA projects, including free-flying satellites and future projects.
The Prototype will use the Internet2 Abilene Research and Education Network that is currently a 10 Gb backbone network to reach the University of Alabama at Huntsville and several other, as yet unidentified, Space Station based

  9. Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering

    SciTech Connect

    Kim, Dong Sik; Lee, Sanggyun

    2013-06-15

    Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted especially for the direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to filters, which are used to suppress the grid artifacts, rotated grids with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for the grid artifact reduction based on the band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested for digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.
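
    The multiplicative model and the homomorphic step can be demonstrated in one dimension: taking the logarithm turns the exposure-times-grid product into a sum, so a filter tuned to the grid frequency removes the periodic term without disturbing the underlying exposure. A stdlib-only toy (the signal, grid period and moving-average "filter" are illustrative, not the paper's band-stop design):

```python
import math

# 1-D toy of the multiplicative grid model: a smooth exposure profile
# modulated by a periodic antiscatter-grid pattern of period 4 samples.
n = 64
exposure = [100.0 + 0.5 * i for i in range(n)]
grid = [1.0 + 0.2 * math.cos(2 * math.pi * i / 4) for i in range(n)]
image = [e * g for e, g in zip(exposure, grid)]

# Homomorphic step: in the log domain the product becomes a sum, so a
# period-4 moving average (a crude stand-in for a band-stop filter)
# cancels the periodic grid term while barely touching the exposure.
logimg = [math.log(v) for v in image]
restored = []
for i in range(n):
    window = logimg[max(0, i - 1):i + 3]   # 4 consecutive samples
    restored.append(math.exp(sum(window) / len(window)))

# Interior samples now follow the exposure profile with the +/-20%
# grid ripple removed (up to a small constant offset from the filter).
```

    Because the window spans exactly one grid period, every interior sample sees the same average of the periodic term, which is why the ripple cancels; the paper's rotated-grid and min-max frequency search generalize this to 2-D sampling.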

  10. Developing Grid based infrastructure for climate modeling

    SciTech Connect

    Taylor, J.; Dvorak, M.; Mickelson, S.

    2002-08-15

    In this paper we discuss the development of a high performance climate modeling system as an example of the application of Grid based technology to climate modeling. The climate simulation system at Argonne currently includes a scientific modeling interface (Espresso) written in Java which incorporates Globus middleware to facilitate climate simulations on the Grid. The climate modeling system also includes a high performance version of MM5v3.4 modified for long climate simulations on our 512 processor Linux cluster (Chiba City), an interactive web based tool to facilitate analysis and collaboration via the web, and an enhanced version of the Cave5D software capable of visualizing large climate data sets. We plan to incorporate other climate modeling systems such as the Fast Ocean Atmosphere Model (FOAM) and the National Center for Atmospheric Research's (NCAR) Community Climate Systems Model (CCSM) within Espresso to facilitate their application on computational grids.

  11. Knowledge Grid Based Knowledge Supply Model

    NASA Astrophysics Data System (ADS)

    Zhen, Lu; Jiang, Zuhua

    This paper is mainly concerned with a knowledge supply model in a knowledge grid environment, aimed at realizing global knowledge sharing. By integrating members, roles, and tasks in a workflow, three sorts of knowledge demands are obtained. Based on this knowledge demand information, a knowledge supply model is proposed for the purpose of delivering the right knowledge to the right persons. The knowledge grid, acting as a platform for implementing the knowledge supply, is also discussed, mainly from the view of knowledge space. A prototype system of knowledge supply has been implemented and applied in product development.

  12. KARDIONET: telecardiology based on GRID technology.

    PubMed

    Sierdzinski, Janusz; Bala, Piotr; Rudowski, Robert; Grabowski, Marcin; Karpinski, Grzegorz; Kaczynski, Bartosz

    2009-01-01

    The telecardiological system Kardionet is being developed to support interventional cardiology. The main aim of the system is to collect specific and systemized patient data from distant medical centers and to organize it in the best possible way for rapid diagnosis and choice of medical treatment. It is a distributed GRID-type system operating in the shortest achievable time. Computational GRID solutions, together with a distributed GRID data archive, support the creation, implementation and operation of software requiring considerable computational power. The Kardionet system, devoted to cardiology purposes, includes specially developed databases for multimodal data and metadata, including information on a patient and his/her medical examination results. As Kardionet uses modern technology and methods, we expect it to have a considerable impact on telemedicine development in Poland. The presented telecardiological system can provide a number of important gains for the national health care system if implemented nationwide. PMID:19745355

  13. Space-based Operations Grid Prototype

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Welch, Clara L.

    2003-01-01

    The Space-based Operations Grid is intended to integrate the "high end" network services and compute resources that a remote payload investigator needs. This includes integrating and enhancing existing services such as access to telemetry, payload commanding, payload planning and internet voice distribution, as well as adding services such as video conferencing, collaborative design, modeling or visualization, text messaging, application sharing, and access to existing compute or data grids. Grid technology addresses some of the greatest challenges and opportunities presented by current trends in technology, i.e. how to take advantage of ever-increasing bandwidth, how to manage virtual organizations and how to deal with the increasing threats to information technology security. We will discuss the pros and cons of using grid technology in space-based operations and share current plans for the prototype. It is hoped that the prototype can incorporate, early on, many of the existing as well as future services discussed above for cooperating International Space Station Principal Investigators, both nationally and internationally.

  14. The Construction of Job Families Based on Company Specific PAQ Job Dimensions.

    ERIC Educational Resources Information Center

    Taylor, L. R.; Colbert, G. A.

    1978-01-01

    Research is presented on the construction of job families based on Position Analysis Questionnaire data. The data were subjected to a component analysis. Results were interpreted as sufficiently encouraging to proceed with analyses of validity generalization within the job families. (Editor/RK)

  15. Technology for a NASA Space-Based Science Operations Grid

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.

    2003-01-01

    This viewgraph representation presents an overview of a proposal to develop a space-based operations grid in support of space-based science experiments. The development of such a grid would provide a dynamic, secure and scalable architecture based on standards and next-generation reusable software and would enable greater science collaboration and productivity through the use of shared resources and distributed computing. The authors propose developing this concept for use on payload experiments carried aboard the International Space Station. Topics covered include: grid definitions, portals, grid development and coordination, grid technology and potential uses of such a grid.

  16. Intelligent geospatial data retrieval based on the geospatial grid portal

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Yue, Peng; Gong, Jianya

    2008-12-01

    The Open Geospatial Consortium (OGC) standard-compliant services define a set of standard interfaces for geospatial Web services to achieve interoperability in an open distributed computing environment. Grid technology is a distributed computing infrastructure that allows distributed resource sharing and coordinated problem solving. Based on the OGC standards for geospatial services and grid technology, we propose the geospatial grid portal to integrate and interoperate grid-enabled geospatial services. The implementation of the geospatial grid portal is based on a three-tier architecture consisting of a grid-enabled geospatial services tier, a grid service portal tier and an application tier. The OGC standard-compliant services are deployed in a grid environment, becoming so-called grid-enabled geospatial services. Grid service portals for each type of geospatial service, including WFS, WMS, WCS and CSW, provide a single point of Web entry to discover and access different types of geospatial information. A resource optimization mechanism is incorporated into these service portals to optimize the selection of grid nodes. At the top tier, i.e. the application tier, the client interacts with a semantic middleware for the grid CSW portal, thus allowing semantics-enabled search. The proposed approach can not only optimize grid resource selection among multiple grid nodes, but also incorporate the power of Semantic Web technology into the geospatial grid portal to allow precise discovery of geospatial data.

  17. Grid-based platform for training in Earth Observation

    NASA Astrophysics Data System (ADS)

    Petcu, Dana; Zaharie, Daniela; Panica, Silviu; Frincu, Marc; Neagul, Marian; Gorgan, Dorian; Stefanut, Teodor

    2010-05-01

    found in [4]. The Workload Management System (WMS) provides two types of resource managers. The first one will be based on Condor HTC and use Condor as a job manager for task dispatching and working nodes (for development purposes) while the second one will use GT4 GRAM (for production purposes). The WMS main component, the Grid Task Dispatcher (GTD), is responsible for the interaction with other internal services as the composition engine in order to facilitate access to the processing platform. Its main responsibilities are to receive tasks from the workflow engine or directly from user interface, to use a task description language (the ClassAd meta language in case of Condor HTC) for job units, to submit and check the status of jobs inside the workload management system and to retrieve job logs for debugging purposes. More details can be found in [4]. A particular component of the platform is eGLE, the eLearning environment. It provides the functionalities necessary to create the visual appearance of the lessons through the usage of visual containers like tools, patterns and templates. The teacher uses the platform for testing the already created lessons, as well as for developing new lesson resources, such as new images and workflows describing graph-based processing. The students execute the lessons or describe and experiment with new workflows or different data. The eGLE database includes several workflow-based lesson descriptions, teaching materials and lesson resources, selected satellite and spatial data. More details can be found in [5]. A first training event of using the platform was organized in September 2009 during 11th SYNASC symposium (links to the demos, testing interface, and exercises are available on project site [1]). The eGLE component was presented at 4th GPC conference in May 2009. Moreover, the functionality of the platform will be presented as demo in April 2010 at 5th EGEE User forum. 
References: [1] GiSHEO consortium, Project site, http

  18. Design of a grid service-based platform for in silico protein-ligand screenings.

    PubMed

    Levesque, Marshall J; Ichikawa, Kohei; Date, Susumu; Haga, Jason H

    2009-01-01

    Grid computing offers the powerful alternative of sharing resources on a worldwide scale, across different institutions to run computationally intensive, scientific applications without the need for a centralized supercomputer. Much effort has been put into development of software that deploys legacy applications on a grid-based infrastructure and efficiently uses available resources. One field that can benefit greatly from the use of grid resources is that of drug discovery since molecular docking simulations are an integral part of the discovery process. In this paper, we present a scalable, reusable platform to choreograph large virtual screening experiments over a computational grid using the molecular docking simulation software DOCK. Software components are applied on multiple levels to create automated workflows consisting of input data delivery, job scheduling, status query, and collection of output to be displayed in a manageable fashion for further analysis. This was achieved using Opal OP to wrap the DOCK application as a grid service and PERL for data manipulation purposes, alleviating the requirement for extensive knowledge of grid infrastructure. With the platform in place, a screening of the ZINC 2,066,906 compound "drug-like" subset database against an enzyme's catalytic site was successfully performed using the MPI version of DOCK 5.4 on the PRAGMA grid testbed. The screening required 11.56 days laboratory time and utilized 200 processors over 7 clusters. PMID:18771812

  19. Design of a Grid Service-based Platform for In Silico Protein-Ligand Screenings

    PubMed Central

    Levesque, Marshall J.; Ichikawa, Kohei; Date, Susumu; Haga, Jason H.

    2009-01-01

    Grid computing offers the powerful alternative of sharing resources on a worldwide scale, across different institutions to run computationally intensive, scientific applications without the need for a centralized supercomputer. Much effort has been put into development of software that deploys legacy applications on a grid-based infrastructure and efficiently uses available resources. One field that can benefit greatly from the use of grid resources is that of drug discovery since molecular docking simulations are an integral part of the discovery process. In this paper, we present a scalable, reusable platform to choreograph large virtual screening experiments over a computational grid using the molecular docking simulation software DOCK. Software components are applied on multiple levels to create automated workflows consisting of input data delivery, job scheduling, status query, and collection of output to be displayed in a manageable fashion for further analysis. This was achieved using Opal OP to wrap the DOCK application as a grid service and PERL for data manipulation purposes, alleviating the requirement for extensive knowledge of grid infrastructure. With the platform in place, a screening of the ZINC 2,066,906 compound “druglike” subset database against an enzyme's catalytic site was successfully performed using the MPI version of DOCK 5.4 on the PRAGMA grid testbed. The screening required 11.56 days laboratory time and utilized 200 processors over 7 clusters. PMID:18771812

  20. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    SciTech Connect

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-06-23

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  1. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    DOE PAGES Beta

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-01-01

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  2. A Grid-based solution for management and analysis of microarrays in distributed experiments

    PubMed Central

    Porro, Ivan; Torterolo, Livia; Corradi, Luca; Fato, Marco; Papadimitropoulos, Adam; Scaglione, Silvia; Schenone, Andrea; Viti, Federica

    2007-01-01

    Several systems have been presented in the last years in order to manage the complexity of large microarray experiments. Although good results have been achieved, most systems tend to lack in one or more fields. A Grid based approach may provide a shared, standardized and reliable solution for storage and analysis of biological data, in order to maximize the results of experimental efforts. A Grid framework has therefore been adopted due to the necessity of remotely accessing large amounts of distributed data as well as to scale computational performances for terabyte datasets. Two different biological studies have been planned in order to highlight the benefits that can emerge from our Grid based platform. The described environment relies on storage services and computational services provided by the gLite Grid middleware. The Grid environment is also able to exploit the added value of metadata in order to let users better classify and search experiments. A state-of-the-art Grid portal has been implemented in order to hide the complexity of the framework from end users and to make them able to easily access available services and data. The functional architecture of the portal is described. As a first test of the system performances, a gene expression analysis has been performed on a dataset of Affymetrix GeneChip® Rat Expression Array RAE230A, from the ArrayExpress database. The sequence of analysis includes three steps: (i) group opening and image set uploading, (ii) normalization, and (iii) model based gene expression (based on PM/MM difference model). Two different Linux versions (sequential and parallel) of the dChip software have been developed to implement the analysis and have been tested on a cluster. From the results, it emerges that the parallelization of the analysis process and the execution of parallel jobs on distributed computational resources actually improve the performances. Moreover, the Grid environment has been tested both against the possibility of

  3. Feature combination analysis in smart grid based using SOM for Sudan national grid

    NASA Astrophysics Data System (ADS)

    Bohari, Z. H.; Yusof, M. A. M.; Jali, M. H.; Sulaima, M. F.; Nasir, M. N. M.

    2015-12-01

    In the investigation of power grid security, cascading failure in multi-contingency situations has been a challenge because of its topological complexity and computational cost. Both system analyses and load ranking methods have their limits. In this project, based on clustering with Self Organizing Maps (SOM), an integrated methodology combining spatial feature (distance)-based grouping with electrical attributes (load) is used to evaluate the vulnerability and cascading impact of various component sets in the power grid. Using the clustering result from SOM, sets of heavily loaded initial victims are selected to perform attack schemes and assess the subsequent cascading effect of their failures, and this SOM-based approach effectively identifies more vulnerable sets of substations than the conventional load ranking and other clustering methods. The robustness of power grids is a central topic in the design of the so-called "smart grid". In this paper, we analyze measures of importance of the nodes in a power grid under cascading failure. With these efforts, we can distinguish the most vulnerable nodes and protect them, improving the safety of the power grid, and we can also assess whether a structure is suitable for power grids.
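
    The SOM step — grouping substations by combined spatial and electrical features so that heavily loaded, closely located sets can be picked out as vulnerable candidates — reduces to a short competitive-learning loop. A minimal 1-D SOM sketch (the feature values are made-up; real inputs would be distances and loads taken from the grid model):

```python
def train_som(samples, epochs=100, lr=0.5):
    # Minimal self-organizing map with two units: for each sample the
    # best-matching unit (BMU) is pulled toward it with a decaying rate.
    weights = [list(samples[0]), list(samples[-1])]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)
        for s in samples:
            bmu = min(range(len(weights)),
                      key=lambda u: sum((w - x) ** 2
                                        for w, x in zip(weights[u], s)))
            weights[bmu] = [w + rate * (x - w)
                            for w, x in zip(weights[bmu], s)]
    return weights

def bmu_of(weights, s):
    return min(range(len(weights)),
               key=lambda u: sum((w - x) ** 2 for w, x in zip(weights[u], s)))

# Substations as (distance-to-center, load) feature pairs: two clear groups.
subs = [(0.1, 0.9), (0.2, 1.0), (0.15, 0.95),
        (0.9, 0.1), (1.0, 0.2), (0.95, 0.15)]
som = train_som(subs)
labels = [bmu_of(som, s) for s in subs]
print(labels)  # [0, 0, 0, 1, 1, 1]
```

    A full SOM would use a 2-D lattice of units with a neighborhood function; two units suffice here to show how the clustering separates heavily loaded central substations from lightly loaded remote ones.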

  4. Multi-core job submission and grid resource scheduling for ATLAS AthenaMP

    NASA Astrophysics Data System (ADS)

    Crooks, D.; Calafiura, P.; Harrington, R.; Jha, M.; Maeno, T.; Purdie, S.; Severini, H.; Skipsey, S.; Tsulaia, V.; Walker, R.; Washbrook, A.

    2012-12-01

    AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple threads of execution. This has now been validated for production and delivers a significant reduction in the overall application memory footprint with negligible CPU overhead. Before AthenaMP can be routinely run on the LHC Computing Grid, it must be determined how the computing resources available to ATLAS can best exploit the notable improvements delivered by switching to this multi-process model. A study into the effectiveness and scalability of AthenaMP in a production environment will be presented. Best practices for configuring the main LRMS implementations currently used by grid sites will be identified in the context of multi-core scheduling optimisation.

  5. Pilot factory - a Condor-based system for scalable Pilot Job generation in the Panda WMS framework

    NASA Astrophysics Data System (ADS)

    Chiu, Po-Hsiang; Potekhin, Maxim

    2010-04-01

    The Panda Workload Management System is designed around the concept of the Pilot Job - a "smart wrapper" for the payload executable that can probe the environment on the remote worker node before pulling down the payload from the server and executing it. Such a design allows for improved logging and monitoring capabilities as well as flexibility in Workload Management. In the Grid environment (such as the Open Science Grid), Panda Pilot Jobs are submitted to remote sites via mechanisms that ultimately rely on Condor-G. As our experience has shown, in cases where a large number of Panda jobs are simultaneously routed to a particular remote site, the increased load on the head node of the cluster caused by the Pilot Job submission may lead to an overall lack of scalability. We have developed a Condor-inspired solution to this problem, which uses a schedd-based glidein whose mission is to redirect pilots to the native batch system. Once a glidein schedd is installed and running, it can be utilized exactly the same way as local schedds and therefore, from the user's perspective, Pilots thus submitted are quite similar to jobs submitted to the local Condor pool.
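
    The pilot pattern itself — validate the worker node first, pull the payload only if the probe passes, otherwise report back cleanly — can be sketched in a few lines (the probe criteria, node description and payload are hypothetical):

```python
def probe_environment(node):
    # Check the worker node before committing to a payload
    # (hypothetical criteria: enough scratch disk, required software).
    return node["free_disk_gb"] >= 10 and "python3" in node["software"]

def pilot(node, pull_payload):
    # The pilot is a "smart wrapper": probe first, then fetch and run
    # the real payload; a failed probe is reported instead of failing
    # mid-job after the payload has already been downloaded.
    if not probe_environment(node):
        return "no-payload"
    payload = pull_payload()          # stands in for the server download
    return payload()

good = {"free_disk_gb": 50, "software": ["python3", "gcc"]}
bad = {"free_disk_gb": 1, "software": ["python3"]}
fetch = lambda: (lambda: "payload-ok")
print(pilot(good, fetch), pilot(bad, fetch))  # payload-ok no-payload
```

    The late binding is the point: the server decides which payload the pilot receives only after the pilot has proven the node is usable.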

  6. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
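
    The trade-off the abstract weighs can be quantified with back-of-the-envelope numbers: a fixed grid must carry its finest resolution everywhere, while a TIN-like mesh spends elements only where they are needed. A toy comparison (the watershed area, resolutions and 5% fine fraction are made-up illustrative values):

```python
# Hypothetical 10,000 km^2 watershed where only 5% of the area needs
# 10 m resolution and the rest is adequately represented at 100 m.
area_km2, fine_frac = 10_000, 0.05
cells_per_km2 = lambda res_m: (1000 / res_m) ** 2

# Fixed grid: the finest resolution is applied everywhere.
grid_cells = area_km2 * cells_per_km2(10)

# TIN-like variable resolution: fine elements only where needed.
tin_cells = (area_km2 * fine_frac * cells_per_km2(10)
             + area_km2 * (1 - fine_frac) * cells_per_km2(100))

print(int(grid_cells), int(tin_cells))  # 100000000 5950000
```

    A factor of ~17 fewer elements for the same effective accuracy is why TINs pay off at large watershed scale, despite the extra setup and forcing-data interpolation cost the abstract notes.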

  7. ISS Space-Based Science Operations Grid for the Ground Systems Architecture Workshop (GSAW)

    NASA Technical Reports Server (NTRS)

    Welch, Clara; Bradford, Bob

    2003-01-01

    Contents include the following: What is a grid? Benefits of a grid to space-based science operations. Our approach. Scope of prototype grid. The security question. Short term objectives. Long term objectives. Space-based services required for operations. The prototype. Scope of prototype grid. Prototype service layout. Space-based science grid service components.

  8. Team Primacy Concept (TPC) Based Employee Evaluation and Job Performance

    ERIC Educational Resources Information Center

    Muniute, Eivina I.; Alfred, Mary V.

    2007-01-01

    This qualitative study explored how employees learn from Team Primacy Concept (TPC) based employee evaluation and how they use the feedback in performing their jobs. TPC based evaluation is a form of multirater evaluation, during which the employee's performance is discussed by one's peers in a face-to-face team setting. The study used Kolb's…

  9. Grid-based electronic structure calculations: The tensor decomposition approach

    NASA Astrophysics Data System (ADS)

    Rakhuba, M. V.; Oseledets, I. V.

    2016-05-01

    We present a fully grid-based approach for solving Hartree-Fock and all-electron Kohn-Sham equations based on low-rank approximation of three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm depends linearly on the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.
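
    The linear-in-n claim above can be made concrete with a toy sketch (a hypothetical rank-1 example, not the paper's actual tensor decomposition): a separable orbital-like function needs only three 1D arrays instead of one 3D array.

```python
import math

# Hypothetical rank-1 illustration (not the paper's actual decomposition):
# a separable function f(x, y, z) = u(x) * v(y) * w(z) is stored as three
# 1D arrays of length n instead of one 3D array with n**3 entries, so both
# storage and point evaluation scale linearly with the 1D grid size n.
n = 64
xs = [i / (n - 1) for i in range(n)]

u = [math.exp(-x) for x in xs]       # factor along x
v = [math.cos(x) for x in xs]        # factor along y
w = [1.0 / (1.0 + x) for x in xs]    # factor along z

def f_lowrank(i, j, k):
    """Evaluate the rank-1 tensor at grid indices (i, j, k)."""
    return u[i] * v[j] * w[k]

full_storage = n ** 3     # entries an explicit 3D grid would need
lowrank_storage = 3 * n   # entries the factored form needs
print(full_storage, lowrank_storage)  # 262144 vs 192
```

    Real orbitals are not exactly separable, but sums of a few such terms (a low-rank format) preserve the same linear scaling, which is what makes an 8192^3 effective grid affordable.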

  10. Optimizing Resource Utilization in Grid Batch Systems

    NASA Astrophysics Data System (ADS)

    Gellrich, Andreas

    2012-12-01

    On Grid sites, the requirements of computing tasks (jobs) for computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of compute node resources, jobs must be distributed intelligently over the nodes. Although job resource requirements cannot be deduced directly, jobs are mapped to POSIX UIDs/GIDs according to the VO, VOMS group, and role information contained in the VOMS proxy. The UID/GID then makes it possible to distinguish jobs, provided users employ VOMS proxies as intended by the VO management, e.g. 'role=production' for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the MAUI scheduler. In tests these limitations could be overcome with a home-made scheduler.
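
    The VOMS-to-batch mapping described above can be sketched as follows (a minimal illustration; the account names, queue names, and table entries are invented, not a real site configuration):

```python
# Hypothetical sketch of the mapping described above: VOMS proxy attributes
# (VO, group, role) determine a local pool account, which the batch system
# can then use to steer CPU-bound and I/O-bound jobs to suitable queues.
# All names below are illustrative, not a real site configuration.
MAPPING = {
    ("cms", "/cms", "production"):     ("cmsprd", "cpu_bound"),  # MC production
    ("cms", "/cms", None):             ("cmsusr", "io_bound"),   # analysis jobs
    ("atlas", "/atlas", "production"): ("atlprd", "cpu_bound"),
}

def map_job(vo, group, role):
    """Return (pool account, target queue) for the given VOMS attributes."""
    key = (vo, group, role)
    if key in MAPPING:
        return MAPPING[key]
    # fall back to the role-less entry for the VO/group, then to a default
    return MAPPING.get((vo, group, None), ("nobody", "default"))

print(map_job("cms", "/cms", "production"))  # ('cmsprd', 'cpu_bound')
print(map_job("cms", "/cms", None))          # ('cmsusr', 'io_bound')
```

    The point of the indirection is that the scheduler never sees the job's actual resource profile, only the identity class the VOMS attributes imply.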

  11. Supersampling method for efficient grid-based electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-01

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression, allowing for large grid spacing, is thus crucial for accurate and efficient computations. We report here that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the sinc filtering function performs best because, as an ideal low-pass filter, it cleanly cuts out the high-frequency region beyond that allowed by a given grid spacing.
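
    A minimal sketch of the low-pass idea (illustrative only, not the paper's implementation): filtering a finely sampled signal with a truncated, normalized sinc kernel suppresses components above the cutoff pi/h set by a target grid spacing h.

```python
import math

# Illustrative sketch (not the paper's implementation): a truncated sinc
# kernel acts as a near-ideal low-pass filter with cutoff pi/h, which is the
# role the sinc filtering function plays in the supersampling method above.
def sinc_kernel(h, dx, half_width):
    """Sinc low-pass kernel with cutoff pi/h, sampled at fine spacing dx."""
    ks = []
    for m in range(-half_width, half_width + 1):
        t = m * dx
        ks.append(1.0 / h if t == 0 else math.sin(math.pi * t / h) / (math.pi * t))
    s = sum(ks)
    return [k / s for k in ks]  # normalize so constants pass unchanged

dx, h = 0.05, 0.5                # fine sampling step vs target grid spacing
n, hw = 400, 60
kern = sinc_kernel(h, dx, hw)

slow = [math.sin(0.5 * i * dx) for i in range(n)]         # below cutoff
fast = [0.5 * math.sin(40.0 * i * dx) for i in range(n)]  # above cutoff
signal = [a + b for a, b in zip(slow, fast)]

def filt(sig, i):
    """Convolve the signal with the kernel at sample i."""
    return sum(kern[hw + m] * sig[i + m] for m in range(-hw, hw + 1))

i = n // 2
residual = abs(filt(signal, i) - filt(slow, i))  # leftover fast component
print(residual)
```

    Here the fast component at 40 rad per unit, far above the cutoff of roughly 6.3 rad per unit, survives filtering only at a small residual amplitude, while the slow component is passed essentially unchanged.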

  12. Protecting the Smart Grid: A Risk Based Approach

    SciTech Connect

    Clements, Samuel L.; Kirkham, Harold; Elizondo, Marcelo A.; Lu, Shuai

    2011-10-10

    This paper describes a risk-based approach to security that has been used for years in protecting physical assets, and shows how it could be modified to help secure the digital aspects of the smart grid and control systems in general. One way the smart grid is said to be vulnerable is that mass load fluctuations could be created by quickly turning large quantities of smart meters off and on. We investigate the plausibility of this attack.

  13. Deploying web-based visual exploration tools on the grid

    SciTech Connect

    Jankun-Kelly, T.J.; Kreylos, Oliver; Shalf, John; Ma, Kwan-Liu; Hamann, Bernd; Joy, Kenneth; Bethel, E. Wes

    2002-02-01

    We discuss a web-based portal for the exploration, encapsulation, and dissemination of visualization results over the Grid. This portal integrates three components: an interface client for structured visualization exploration, a visualization web application to manage the generation and capture of the visualization results, and a centralized portal application server to access and manage grid resources. We demonstrate the usefulness of the developed system using an example for Adaptive Mesh Refinement (AMR) data visualization.

  14. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    SciTech Connect

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K.

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
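
    The cell-by-cell mapping idea can be sketched in a few lines (a hypothetical illustration, not the Snake code itself): expand grid cells in order of decreasing likelihood with a priority queue, and stop expanding once cells fall below a threshold, so negligible regions are never evaluated.

```python
import heapq

# Toy sketch of the idea described above (not the Snake implementation):
# starting from the likelihood peak, repeatedly expand the most likely
# unexplored neighbouring cell, and stop expanding below a log-likelihood
# threshold. Cells with negligible likelihood are never evaluated, which is
# what defeats the brute-force curse of dimensionality.
def explore(loglike, start, threshold):
    """Map out grid cells in decreasing likelihood order down to `threshold`."""
    visited = {start: loglike(start)}
    frontier = [(-visited[start], start)]
    while frontier:
        neg, cell = heapq.heappop(frontier)
        if -neg < threshold:
            continue  # below threshold: do not expand from this cell
        for dim in range(len(cell)):
            for step in (-1, 1):
                nb = list(cell)
                nb[dim] += step
                nb = tuple(nb)
                if nb not in visited:
                    visited[nb] = loglike(nb)
                    heapq.heappush(frontier, (-visited[nb], nb))
    return visited

# 2D Gaussian log-likelihood on a grid of unit cells, peaked at the origin.
ll = lambda c: -0.5 * (c[0] ** 2 + c[1] ** 2)
cells = explore(ll, (0, 0), threshold=-8.0)
print(len(cells))  # far fewer evaluations than a full 41 x 41 grid
```

    In higher dimensions the savings grow dramatically, since the high-likelihood region occupies an ever-smaller fraction of the full hypercube.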

  15. Design of Grid Portal System Based on RIA

    NASA Astrophysics Data System (ADS)

    Cao, Caifeng; Luo, Jianguo; Qiu, Zhixin

    Grid portals are an important branch of grid research. To overcome the weak expressiveness, poor interactivity, low operating efficiency, and other shortcomings of the first and second generations of grid portal systems, RIA technology was introduced. A new portal architecture was designed based on RIA and Web services. A concrete realization scheme for the portal system, using Adobe Flex/Flash technology, is presented, forming a new design pattern. In system architecture, the design pattern combines the advantages of B/S and C/S, balances the server and its client side, optimizes system performance, and achieves platform independence. In system function, the design pattern realizes grid service calls, provides a client interface with a rich user experience, and integrates local resources by using FABridge, LCDS, Flash Player, and other components.

  16. Advances in Distance-Based Hole Cuts on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Pandya, Shishir A.

    2015-01-01

    An automatic and efficient method to determine appropriate hole cuts based on distances to the wall and donor stencil maps for overset grids is presented. A new robust procedure is developed to create a closed surface triangulation representation of each geometric component for accurate determination of the minimum hole. Hole boundaries are then displaced away from the tight grid-spacing regions near solid walls to allow grid overlap to occur away from the walls where cell sizes from neighboring grids are more comparable. The placement of hole boundaries is efficiently determined using a mid-distance rule and Cartesian maps of potential valid donor stencils with minimal user input. Application of this procedure typically results in a spatially-variable offset of the hole boundaries from the minimum hole with only a small number of orphan points remaining. Test cases on complex configurations are presented to demonstrate the new scheme.

  17. Market-Based Indian Grid Integration Study Options: Preprint

    SciTech Connect

    Stoltenberg, B.; Clark, K.; Negi, S. K.

    2012-03-01

    The Indian state of Gujarat is forecasting solar and wind generation expansion from 16% to 32% of installed generation capacity by 2015. Some states in India are already experiencing heavy wind power curtailment. Understanding how to integrate variable generation (VG) into the grid is of great interest to local transmission companies and India's Ministry of New and Renewable Energy. This paper describes the nature of a market-based integration study and how this approach, while new to Indian grid operation and planning, is necessary to understand how to operate and expand the grid to best accommodate the expansion of VG. Second, it discusses options in defining a study's scope, such as data granularity, generation modeling, and geographic scope. The paper also explores how Gujarat's method of grid operation and current system reliability will affect how an integration study can be performed.

  18. Antibody-based affinity cryo-EM grid.

    PubMed

    Yu, Guimei; Li, Kunpeng; Jiang, Wen

    2016-05-01

    The Affinity Grid technique combines sample purification and cryo-Electron Microscopy (cryo-EM) grid preparation into a single step. Several types of affinity surfaces, including functionalized lipid monolayers, streptavidin 2D crystals, and covalently functionalized carbon surfaces, have been reported. More recently, we presented a new affinity cryo-EM approach, cryo-SPIEM, which applies the traditional Solid Phase Immune Electron Microscopy (SPIEM) technique to cryo-EM. This approach significantly simplifies the preparation of affinity grids and works directly with native macromolecular complexes without the need for target modification. With the wide availability of high-affinity and high-specificity antibodies, the antibody-based affinity grid would enable cryo-EM studies of native samples directly from cell cultures, targets of low abundance, and unstable or short-lived intermediate states.

  19. Grist : grid-based data mining for astronomy

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden; Nichol, Robert

    2004-01-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

  20. Jobs for JOBS: Toward a Work-Based Welfare System. Occasional Paper 1993-1.

    ERIC Educational Resources Information Center

    Levitan, Sar A.; Gallo, Frank

    The Job Opportunities and Basic Skills (JOBS) program, a component of the 1988 Family Support Act, emphasizes education and occupational training for welfare recipients, but it has not provided sufficient corrective measures to promote work among recipients of Aid to Families with Dependent Children (AFDC). The most serious deficiency of JOBS is…

  1. Software-Based Challenges of Developing the Future Distribution Grid

    SciTech Connect

    Stewart, Emma; Kiliccote, Sila; McParland, Charles

    2014-06-01

    Distribution grid modeling and measured data sources are a key missing element. Modeling tools need to be calibrated against measured grid data to validate their output under varied conditions such as high renewables penetration and rapidly changing topology. In addition, establishing a standardized data modeling format would enable users to transfer data among tools to take advantage of different analysis features.

  2. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing at less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids give researchers, scientists, and engineers their first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  3. Grid Based Nonlinear Filtering Revisited: Recursive Estimation & Asymptotic Optimality

    NASA Astrophysics Data System (ADS)

    Kalogerias, Dionysios S.; Petropulu, Athina P.

    2016-08-01

    We revisit the development of grid based recursive approximate filtering of general Markov processes in discrete time, partially observed in conditionally Gaussian noise. The grid based filters considered rely on two types of state quantization: the Markovian type and the marginal type. We propose a set of novel, relaxed sufficient conditions ensuring strong and fully characterized pathwise convergence of these filters to the respective MMSE state estimator. In particular, for marginal state quantizations, we introduce the notion of conditional regularity of stochastic kernels, which, to the best of our knowledge, constitutes the most relaxed condition proposed under which asymptotic optimality of the respective grid based filters is guaranteed. Further, we extend our convergence results to include filtering of bounded and continuous functionals of the state, as well as recursive approximate state prediction. For both Markovian and marginal quantizations, the whole development of the respective grid based filters relies more on linear-algebraic techniques and less on measure-theoretic arguments, making the presentation considerably shorter and technically simpler.
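
    As a concrete toy instance of a quantized-state (grid based) filter of the kind discussed above (the model and numbers are illustrative, not from the paper): a three-cell state grid with a Markov transition matrix and Gaussian observation noise, filtered by an exact predict/update recursion.

```python
import math

# Toy quantized-state filter (illustrative model, not the paper's setup):
# the state lives on a finite grid, and the filter alternates a predict step
# (apply the Markov transition matrix) with an update step (multiply by the
# Gaussian observation likelihood and renormalize).
grid = [-1.0, 0.0, 1.0]          # quantized state values
P = [[0.8, 0.2, 0.0],            # Markov transition matrix (rows sum to 1)
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]]
sigma = 0.5                      # observation noise standard deviation

def step(prior, y):
    """One predict + update recursion of the grid filter."""
    pred = [sum(prior[i] * P[i][j] for i in range(3)) for j in range(3)]
    lik = [math.exp(-0.5 * ((y - x) / sigma) ** 2) for x in grid]
    post = [p * l for p, l in zip(pred, lik)]
    z = sum(post)
    return [p / z for p in post]

belief = [1 / 3, 1 / 3, 1 / 3]   # uniform initial belief
for y in (0.9, 1.1, 1.0):        # observations clustered near state +1
    belief = step(belief, y)
mmse = sum(b * x for b, x in zip(belief, grid))  # grid-approximate MMSE estimate
print(belief, mmse)
```

    As the abstract notes, the whole recursion is linear algebra (a matrix-vector product followed by an elementwise rescale), which is what makes convergence analysis of such filters tractable.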

  4. Constructing the ASCI computational grid

    SciTech Connect

    BEIRIGER,JUDY I.; BIVENS,HUGH P.; HUMPHREYS,STEVEN L.; JOHNSON,WILBUR R.; RHEA,RONALD E.

    2000-06-01

    The Accelerated Strategic Computing Initiative (ASCI) computational grid is being constructed to interconnect the high performance computing resources of the nuclear weapons complex. The grid will simplify access to the diverse computing, storage, network, and visualization resources, and will enable the coordinated use of shared resources regardless of location. To match existing hardware platforms, required security services, and current simulation practices, the Globus MetaComputing Toolkit was selected to provide core grid services. The ASCI grid extends Globus functionality by operating as an independent grid, incorporating Kerberos-based security, interfacing to Sandia's Cplant(TM), and extending job monitoring services. To fully meet ASCI's needs, the architecture layers distributed work management and criteria-driven resource selection services on top of Globus. These services simplify the grid interface by allowing users to simply request "run code X anywhere". This paper describes the initial design and prototype of the ASCI grid.

  5. Correspondence between Video-Based Preference Assessment and Subsequent Community Job Performance

    ERIC Educational Resources Information Center

    Morgan, Robert L.; Horrocks, Erin L.

    2011-01-01

    Researchers identified high and low preference jobs using a video web-based assessment program with three young adults ages 18 to 19 with intellectual disabilities. Individual participants were then taught to perform high and low preference jobs in community locations. The order of 25-min high and low preference job sessions was randomized. A…

  6. A geometry-based adaptive unstructured grid generation algorithm for complex geological media

    NASA Astrophysics Data System (ADS)

    Bahrainian, Seyed Saied; Dezfuli, Alireza Daneh

    2014-07-01

    In this paper a novel unstructured grid generation algorithm is presented that considers the effect of geological features and well locations on grid resolution. The proposed algorithm provides a strategy for the definition and construction of an initial grid based on the geological model, geometry adaptation of geological features, and grid resolution control. The algorithm is applied to the seismotectonic map of the Masjed-i-Soleiman reservoir. Comparison of the grid results with those of the "Triangle" program shows a more suitable permeability contrast. Immiscible two-phase flow solutions are presented for a fractured porous media test case using different grid resolutions. The grid adapted to the fracture geometry gave results identical to those of a fine grid while requiring 88.2% less CPU time.

  7. The biometric-based module of smart grid system

    NASA Astrophysics Data System (ADS)

    Engel, E.; Kovalev, I. V.; Ermoshkina, A.

    2015-10-01

    Within the Smart Grid concept, a flexible biometric-based module built on Principal Component Analysis (PCA) and a selective Neural Network is developed. To form the selective Neural Network, the biometric-based module uses a method comprising three main stages: preliminary processing of the image, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective Neural Network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance compared to some existing subspace-based methods.

  8. Grid-Based Fourier Transform Phase Contrast Imaging

    NASA Astrophysics Data System (ADS)

    Tahir, Sajjad

    Low contrast in x-ray attenuation imaging between different materials of low electron density is a limitation of traditional x-ray radiography. Phase contrast imaging offers the potential to improve the contrast between such materials, but due to the requirements on the spatial coherence of the x-ray beam, practical implementation of such systems with tabletop (i.e. non-synchrotron) sources has been limited. One recently developed phase imaging technique employs multiple fine-pitched gratings. However, the strict manufacturing tolerances and precise alignment requirements have limited the widespread adoption of grating-based techniques. In this work, we have investigated a technique recently demonstrated by Bennett et al. that utilizes a single grid of much coarser pitch. Our system consisted of a low power 100 µm spot Mo source, a CCD with 22 µm pixel pitch, and either a focused mammography linear grid or a stainless steel woven mesh. Phase is extracted from a single image by windowing and comparing data localized about harmonics of the grid in the Fourier domain. A Matlab code was written to perform the image processing. For the first time, the effects on the diffraction phase contrast and scattering amplitude images of varying the grid type and period, the window function type used to separate the harmonics, and the window widths were investigated. Using the wire mesh, derivatives of the phase along two orthogonal directions were obtained, and new methods were investigated to form improved phase contrast images.

  9. An APEL Tool Based CPU Usage Accounting Infrastructure for Large Scale Computing Grids

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Novales, Cristina Del Cano; Mathieu, Gilles; Casson, John; Rogers, William; Gordon, John

    The APEL (Accounting Processor for Event Logs) is the fundamental tool for the CPU usage accounting infrastructure deployed within the WLCG and EGEE Grids. In these Grids, jobs are submitted by users to computing resources via a Grid Resource Broker (e.g. the gLite Workload Management System). As a log processing tool, APEL interprets Grid gatekeeper logs (e.g. Globus) and batch system logs (e.g. PBS, LSF, SGE and Condor) to produce CPU job accounting records identified with Grid identities. These records provide a complete description of the usage of computing resources by users' jobs. APEL publishes accounting records into an accounting record repository at a Grid Operations Centre (GOC) for access from a GUI web tool. The functions of log file parsing, record generation, and publication are implemented by the APEL Parser, APEL Core, and APEL Publisher components respectively. Within the distributed accounting infrastructure, accounting records are transported from APEL Publishers at Grid sites to either a regionalised accounting system or the central one, by choice, via a common ActiveMQ message broker network. This provides an open transport layer for other accounting systems to publish relevant accounting data to a central accounting repository via a unified interface provided by an APEL Publisher, and also gives regional/National Grid Initiative (NGI) Grids flexibility in their choice of accounting system. The robust and secure delivery of accounting record messages at the NGI level, and between NGI accounting instances and the central one, is achieved by using configurable APEL Publishers and an ActiveMQ message broker network.
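
    The parse-and-join step APEL performs can be sketched as follows (the log line format below is invented for illustration, not the real PBS or gatekeeper formats; the record field names merely follow common usage-record conventions):

```python
# Hedged sketch of the parse-and-join step described above. The log line
# format and the UID-to-DN mapping here are invented for illustration; real
# batch and gatekeeper logs have their own formats. The idea is the same:
# a local usage line plus an identity mapping yields one accounting record.
batch_log = "job=1234 user=cmsprd cputime=3600 walltime=4000"
grid_map = {"cmsprd": "/DC=org/DC=example/CN=Jane Analyst"}  # local UID -> grid DN

def parse_record(line, id_map):
    """Build a CPU accounting record from one batch log line."""
    fields = dict(kv.split("=", 1) for kv in line.split())
    return {
        "LocalJobId": fields["job"],
        "GlobalUserName": id_map.get(fields["user"], "unknown"),
        "CpuDuration": int(fields["cputime"]),
        "WallDuration": int(fields["walltime"]),
    }

record = parse_record(batch_log, grid_map)
print(record["GlobalUserName"], record["CpuDuration"])
```

    Records of this shape are then what a publisher component would serialize and send over the message broker network to the accounting repository.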

  10. Grid-based representation and dynamic visualization of ionospheric tomography

    NASA Astrophysics Data System (ADS)

    He, L. M.; Yang, Y.; Su, C.; Yu, J. Q.; Yang, F.; Wu, L. X.

    2013-10-01

    The ionosphere is a dynamic system with complex structures. With the development of abundant global navigation satellite systems, the ionospheric electron density at different altitudes and its time variations can be obtained by ionospheric tomography techniques using GNSS observations collected by continuously operating GNSS tracking stations distributed over the globe. However, it is difficult to represent and analyze global and local ionospheric electron density variations in three-dimensional (3D) space due to their complex structures. In this paper, we introduce a grid-based system to overcome this constraint. First, we give the principles, algorithms, and procedures of the GNSS-based ionospheric tomography technique. Then, the earth system spatial grid (ESSG), based on the spheroid degenerated octree grid (SDOG), is introduced in detail. Finally, more than 400 continuously operating GNSS receivers from the International GNSS Service are utilized to realize global ionospheric tomography, and the ESSG is used to organize and express the tomography results in 4D, comprising 3 spatial dimensions and time.

  11. Design and Implementation of Real-Time Off-Grid Detection Tool Based on FNET/GridEye

    SciTech Connect

    Guo, Jiahui; Zhang, Ye; Liu, Yilu; Young II, Marcus Aaron; Irminger, Philip; Dimitrovski, Aleksandar D; Willging, Patrick

    2014-01-01

    Real-time situational awareness tools are of critical importance to power system operators, especially during emergencies. The availability of electric power has become a linchpin of most post-disaster response efforts, as it is the primary dependency for public and private sector services, as well as individuals. Knowledge of the scope and extent of facilities impacted, as well as the duration of their dependence on backup power, enables emergency response officials to plan for contingencies and provide better overall response. Based on real-time data acquired by Frequency Disturbance Recorders (FDRs) deployed in the North American power grid, a real-time detection method is proposed. This method monitors critical electrical loads and detects the transition of these loads from an on-grid state, where the loads are fed by the power grid, to an off-grid state, where the loads are fed by an Uninterruptible Power Supply (UPS) or a backup generation system. The details of the proposed detection algorithm are presented, and case studies and off-grid detection scenarios are provided to verify its effectiveness and robustness. The algorithm has already been implemented based on the Grid Solutions Framework (GSF) and has effectively detected several off-grid situations.

  12. Invulnerability of power grids based on maximum flow theory

    NASA Astrophysics Data System (ADS)

    Fan, Wenli; Huang, Shaowei; Mei, Shengwei

    2016-11-01

    The invulnerability analysis against cascades is of great significance in evaluating the reliability of power systems. In this paper, we propose a novel cascading failure model based on the maximum flow theory to analyze the invulnerability of power grids. In the model, node initial loads are built on the feasible flows of nodes, with a tunable parameter γ used to control the initial node load distribution. The simulation results show that both the invulnerability against cascades and the tolerance parameter threshold αT are greatly affected by the node load distribution. As γ grows, the invulnerability exhibits distinct patterns of change under different attack strategies and different tolerance parameters α. These results are useful in power grid planning and cascading failure prevention.
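
    The load-initialization idea can be illustrated with a small sketch (an assumption-laden toy, not the authors' model): take a node's feasible flow to be the maximum flow it can receive in a capacitated test network, computed here with a stdlib-only Edmonds-Karp, and set its initial load to (feasible flow)^γ.

```python
from collections import deque

# Illustrative sketch of the load-initialization idea above (not the
# authors' model): a node's initial load is (feasible flow)**gamma, where
# the feasible flow is computed here as the max flow the node can receive
# from a source in a small capacitated network (Edmonds-Karp, stdlib only).
def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts capacity graph."""
    flow = 0
    residual = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():              # ensure reverse edges exist
        for v in vs:
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:       # BFS for an augmenting path
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                    # walk back to find the bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)
        for u, v in path:                  # push the bottleneck flow
            residual[u][v] -= push
            residual[v][u] += push
        flow += push

# Toy network: generator "g" feeds node "n" through two lines "a" and "b".
cap = {"g": {"a": 3, "b": 2}, "a": {"n": 2}, "b": {"n": 2}, "n": {}}
gamma = 0.8
feasible = max_flow(cap, "g", "n")
initial_load = feasible ** gamma
print(feasible)  # 4
```

    Varying γ then reshapes the whole load distribution without changing the underlying flows, which is what lets the paper study cascades as a function of load heterogeneity.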

  13. An agent-based multilayer architecture for bioinformatics grids.

    PubMed

    Bartocci, Ezio; Cacciagrano, Diletta; Cannata, Nicola; Corradini, Flavio; Merelli, Emanuela; Milanesi, Luciano; Romano, Paolo

    2007-06-01

    Due to the huge volume and complexity of biological data available today, a fundamental component of biomedical research is now in silico analysis. This includes modelling and simulation of biological systems and processes, as well as automated bioinformatics analysis of high-throughput data. The quest for bioinformatics resources (including databases, tools, and knowledge) becomes therefore of extreme importance. Bioinformatics itself is in rapid evolution and dedicated Grid cyberinfrastructures already offer easier access and sharing of resources. Furthermore, the concept of the Grid is progressively interleaving with those of Web Services, semantics, and software agents. Agent-based systems can play a key role in learning, planning, interaction, and coordination. Agents constitute also a natural paradigm to engineer simulations of complex systems like the molecular ones. We present here an agent-based, multilayer architecture for bioinformatics Grids. It is intended to support both the execution of complex in silico experiments and the simulation of biological systems. In the architecture a pivotal role is assigned to an "alive" semantic index of resources, which is also expected to facilitate users' awareness of the bioinformatics domain.

  14. Grid Task Execution

    NASA Technical Reports Server (NTRS)

    Hu, Chaumin

    2007-01-01

    IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
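
    The starting-condition mechanism described above can be sketched as follows (the API is hypothetical, invented for illustration, not the IPG Execution Service interface): each task declares a predicate over the states of the other tasks, and the executor repeatedly runs any pending task whose condition holds.

```python
# Hypothetical sketch of the starting-condition mechanism (invented API,
# not the IPG Execution Service interface): each task declares a predicate
# over the states of other tasks, and the executor runs any pending task
# whose condition currently holds, which yields success paths, failure
# handlers, and natural parallelism.
def run(tasks):
    """tasks: name -> (condition(states) -> bool, action() -> 'ok'|'failed')."""
    states = {name: "pending" for name in tasks}
    progress = True
    while progress:                       # loop until no condition fires
        progress = False
        for name, (cond, action) in tasks.items():
            if states[name] == "pending" and cond(states):
                states[name] = action()
                progress = True
    return states

tasks = {
    "stage_in":  (lambda s: True, lambda: "ok"),
    "compute":   (lambda s: s["stage_in"] == "ok", lambda: "failed"),
    "stage_out": (lambda s: s["compute"] == "ok", lambda: "ok"),
    "cleanup":   (lambda s: s["compute"] == "failed", lambda: "ok"),
}
final = run(tasks)
print(final)  # compute fails, so cleanup runs and stage_out never starts
```

    Because the conditions are expressions over other tasks' states rather than a fixed ordering, tasks whose conditions hold simultaneously could equally be dispatched in parallel.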

  15. On NUFFT-based gridding for non-Cartesian MRI.

    PubMed

    Fessler, Jeffrey A

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI. PMID:17689121

  16. On NUFFT-based gridding for non-Cartesian MRI

    NASA Astrophysics Data System (ADS)

    Fessler, Jeffrey A.

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.

  17. Jobs, Jobs, Jobs!

    ERIC Educational Resources Information Center

    Jacobson, Linda

    2011-01-01

    Teaching is not the safe career bet that it once was. The thinking used to be: New students will always be entering the public schools, and older teachers will always be retiring, so new teachers will always be needed. But teaching jobs aren't secure enough to stand up to the "Great Recession," as this drawn-out downturn has been called. Across…

  18. The Knowledge Base Interface for Parametric Grid Information

    SciTech Connect

    Hipp, James R.; Simons, Randall W.; Young, Chris J.

    1999-08-03

    The parametric grid capability of the Knowledge Base (KBase) provides an efficient, robust way to store and access interpolatable information that is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use an approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation. The method involves three basic steps: data preparation, data storage, and data access. In past presentations we have discussed the first step in detail. In this paper we focus on the latter two, describing in detail the type of information which must be stored and the interface used to retrieve parametric grid data from the Knowledge Base. Once data have been properly prepared, the information (tessellation and associated value surfaces) needed to support the interface functionality can be entered into the KBase. The primary types of parametric grid data that must be stored include (1) generic header information; (2) base model, station, and phase names and associated IDs used to construct surface identifiers; (3) surface accounting information; (4) tessellation accounting information; (5) mesh data for each tessellation; (6) correction data defined for each surface at each node of the surface's owning tessellation; (7) mesh refinement calculation set-up and flag information; and (8) kriging calculation set-up and flag information. The eight data components not only represent the results of the data preparation process but also include all required input information for several population tools that would enable the complete regeneration of the data results if that should be necessary.

  19. Classroom-Based Interventions and Teachers' Perceived Job Stressors and Confidence: Evidence from a Randomized Trial in Head Start Settings

    ERIC Educational Resources Information Center

    Zhai, Fuhua; Raver, C. Cybele; Li-Grining, Christine

    2011-01-01

    Preschool teachers' job stressors have received increasing attention but have been understudied in the literature. We investigated the impacts of a classroom-based intervention, the Chicago School Readiness Project (CSRP), on teachers' perceived job stressors and confidence, as indexed by their perceptions of job control, job resources, job…

  20. Organizational and Environmental Predictors of Job Satisfaction in Community-based HIV/AIDS Service Organizations.

    ERIC Educational Resources Information Center

    Gimbel, Ronald W.; Lehrman, Sue; Strosberg, Martin A.; Ziac, Veronica; Freedman, Jay; Savicki, Karen; Tackley, Lisa

    2002-01-01

    Using variables measuring organizational characteristics and environmental influences, this study analyzed job satisfaction in community-based HIV/AIDS organizations. Organizational characteristics were found to predict job satisfaction among employees with varying intensity based on position within the organization. Environmental influences had…

  1. Improving merge methods for grid-based digital elevation models

    NASA Astrophysics Data System (ADS)

    Leitão, J. P.; Prodanović, D.; Maksimović, Č.

    2016-03-01

    Digital Elevation Models (DEMs) are used to represent the terrain in applications such as, for example, overland flow modelling or viewshed analysis. DEMs generated from digitising contour lines or obtained by LiDAR or satellite data are now widely available. However, in some cases, the area of study is covered by more than one of the available elevation data sets. In these cases the relevant DEMs may need to be merged. The merged DEM must retain the most accurate elevation information available while generating consistent slopes and aspects. In this paper we present a thorough analysis of three conventional grid-based DEM merging methods that are available in commercial GIS software. These methods are evaluated for their applicability in merging DEMs and, based on the evaluation results, a method for improving the merging of grid-based DEMs is proposed. DEMs generated by the proposed method, called MBlend, showed significant improvements when compared to DEMs produced by the three conventional methods in terms of elevation, slope and aspect accuracy, while also ensuring smooth elevation transitions between the original DEMs. The results produced by the improved method are highly relevant to different applications in terrain analysis, e.g., visibility analysis or spotting irregularities in landforms, and to modelling terrain phenomena such as overland flow.
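The abstract does not specify how MBlend works, so as a generic illustration of the "smooth elevation transitions" requirement, here is a simple linear feathering blend across the overlap of two DEM strips. The `feather_merge` helper, its left/right orientation, and the linear weighting are assumptions for illustration, not the paper's method.

```python
import numpy as np

def feather_merge(dem_a, dem_b, overlap):
    """Merge two 2-D DEM strips that share `overlap` columns, blending
    linearly across the overlap so elevations transition smoothly.
    dem_a supplies the left columns, dem_b the right columns."""
    h, wa = dem_a.shape
    _, wb = dem_b.shape
    out = np.empty((h, wa + wb - overlap))
    out[:, :wa - overlap] = dem_a[:, :wa - overlap]
    out[:, wa:] = dem_b[:, overlap:]
    # linear weights 1 -> 0 for A and 0 -> 1 for B across the overlap columns
    w = np.linspace(1.0, 0.0, overlap)
    out[:, wa - overlap:wa] = w * dem_a[:, wa - overlap:] + (1 - w) * dem_b[:, :overlap]
    return out
```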

  2. Priority-Based Job Scheduling in Distributed Systems

    NASA Astrophysics Data System (ADS)

    Bansal, Sunita; Hota, Chittaranjan

    Global computing systems like SETI@home tie together the unused CPU cycles, buffer space and secondary storage resources over the Internet for solving large-scale computing problems like weather forecasting and image processing that require a high volume of computing power. In this paper we address issues that are critical to distributed scheduling environments, such as job priorities, length of jobs, and resource heterogeneity. Researchers have traditionally used metrics like resource availability at the new location and the response time of jobs in deciding upon a job transfer. Our load sharing algorithms use a dynamic sender-initiated approach to transfer a job. We implemented distributed algorithms using a centralized approach that improves the average response time of jobs while considering their priorities. The job arrival process and the CPU service times are modeled using an M/M/1 queuing model. We compared the performance of our algorithms with similar algorithms in the literature. We evaluated our algorithms using simulation and present results that show the effectiveness of our approach.
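A minimal sketch of the queuing model described above: Poisson arrivals and exponential service (M/M/1), with the server always choosing the highest-priority waiting job. The two priority levels, the rates, and the non-preemptive discipline are illustrative assumptions, not the authors' exact algorithm.

```python
import heapq
import random

def simulate_queue(n_jobs=2000, lam=0.8, mu=1.0, seed=1):
    """Single-server queue: Poisson arrivals (rate lam), exponential service
    (rate mu); each job carries a priority (0 = high) and the server picks
    the highest-priority waiting job, non-preemptively.
    Returns the average response time (completion - arrival)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)
        arrivals.append((t, rng.randrange(2)))   # (arrival time, priority)
    ready = []       # heap of (priority, arrival time)
    clock, i, total_resp, done = 0.0, 0, 0.0, 0
    while done < n_jobs:
        # admit every job that has arrived by `clock`
        while i < n_jobs and arrivals[i][0] <= clock:
            heapq.heappush(ready, (arrivals[i][1], arrivals[i][0]))
            i += 1
        if not ready:
            clock = arrivals[i][0]   # server idle: jump to next arrival
            continue
        prio, arr = heapq.heappop(ready)
        clock += rng.expovariate(mu)             # service completes
        total_resp += clock - arr
        done += 1
    return total_resp / n_jobs
```

For rho = lam/mu = 0.8 the FCFS M/M/1 mean response time is 1/(mu - lam) = 5; the priority discipline redistributes waiting between classes but, being work-conserving, leaves the overall mean in the same range.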

  3. Environmental applications based on GIS and GRID technologies

    NASA Astrophysics Data System (ADS)

    Demontis, R.; Lorrai, E.; Marrone, V. A.; Muscas, L.; Spanu, V.; Vacca, A.; Valera, P.

    2009-04-01

    In recent decades, the collection and use of environmental data have enormously increased in a wide range of applications. Simultaneously, the explosive development of information technology and its ever wider data accessibility have made it possible to store and manipulate huge quantities of data. In this context, the GRID approach is emerging worldwide as a tool that allows provisioning a computational task with administratively distant resources. The aim of this paper is to present three environmental applications (Land Suitability, Desertification Risk Assessment, Georesources and Environmental Geochemistry) foreseen within the AGISGRID (Access and query of a distributed GIS/Database within the GRID infrastructure, http://grida3.crs4.it/enginframe/agisgrid/index.xml) activities of the GRIDA3 (Administrator of sharing resources for data analysis and environmental applications, http://grida3.crs4.it) project. This project, co-funded by the Italian Ministry of research, is based on the use of shared environmental data through GRID technologies, accessible via a Web interface, and is aimed at public and private users in the field of environmental management and land use planning. The technologies used for AGISGRID include: - the client-server-middleware iRODS™ (Integrated Rule-Oriented Data System) (https://irods.org); - the EnginFrame system (http://www.nice-italy.com/main/index.php?id=32), the grid portal that supplies a frame to make available, via Intranet/Internet, the developed GRID applications; - the software GIS GRASS (Geographic Resources Analysis Support System) (http://grass.itc.it); - the relational database PostgreSQL (http://www.posgresql.org) and the spatial database extension PostGis; - the open source multiplatform Mapserver (http://mapserver.gis.umn.edu), used to represent the geospatial data through typical WEB GIS functionalities. Three GRID nodes are directly involved in the applications: the application workflow is implemented at the CRS4 (Pula

  4. An enhanced grid-based Bayesian array for target tracking

    NASA Astrophysics Data System (ADS)

    Sang, Qian; Lin, Zongli; Acton, Scott T.

    2013-02-01

    A grid-based Bayesian array (GBA) for robust visual tracking has recently been developed, which proposes a novel method of deterministic sample generation and sample weighting for position estimation. In particular, a target motion model is constructed, predicting the target position in the next frame based on estimations in previous frames. Samples are generated by gridding within an ellipsoid centered at the prediction. For localization, radial edge detection is applied for each sample to determine if it is inside the target boundary. Sample weights are then assigned according to the number of edge points detected around the sample and its distance from the predicted position. The position estimate is computed as the weighted sum of the sample set. In this paper, we enhance the capacity of the GBA tracker to accommodate targets with erratic motion in video, by introducing adaptation in the motion model and iterative position estimation. The improved tracking performance over the original GBA tracker is demonstrated by tracking a single leukocyte in vivo and a ground vehicle target observed from UAV videos, both undergoing abrupt changes in motion. The experimental results show that the enhanced GBA tracker tracks more than 10% more of the total number of frames than the original, and increases the number of video sequences with all frames tracked by greater than 20%.
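The deterministic sample generation and weighted-sum estimation steps can be illustrated as follows. The paper's radial edge detection is abstracted into a caller-supplied `weight_fn`, and the 2-D ellipse gridding with a fixed step is a hypothetical simplification of the ellipsoid gridding described above.

```python
import numpy as np

def gba_estimate(pred, axes, weight_fn, step=1.0):
    """Deterministic sample generation for a grid-based Bayesian array:
    grid points inside an ellipse centred on the predicted position `pred`
    (semi-axes `axes`) are weighted by weight_fn and averaged."""
    ax, ay = axes
    xs = np.arange(pred[0] - ax, pred[0] + ax + step / 2, step)
    ys = np.arange(pred[1] - ay, pred[1] + ay + step / 2, step)
    pts, wts = [], []
    for x in xs:
        for y in ys:
            if ((x - pred[0]) / ax) ** 2 + ((y - pred[1]) / ay) ** 2 <= 1.0:
                pts.append((x, y))
                wts.append(weight_fn(x, y))
    pts = np.array(pts)
    wts = np.array(wts, dtype=float)
    # position estimate = weighted sum of the sample set
    return (wts[:, None] * pts).sum(axis=0) / wts.sum()
```

With uniform weights the estimate coincides with the prediction; weights skewed to one side pull the estimate toward that side, which is the mechanism the tracker uses to follow edges.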

  5. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on a ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors that are assembled under the bottom of the robot in an equilateral triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation from inertia positioning. The proposed method is analyzed theoretically in terms of its error bound and has also been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
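The fusion step above reduces to snapping one coordinate of the inertial pose to the nearest grid line whenever a sensor reports a crossing, cancelling the accumulated drift along that axis. A toy sketch; the 30 cm tile pitch and the (x, y) pose representation are assumptions for illustration.

```python
def fuse_crossing(pose, axis, pitch=30.0):
    """On a grid-crossing event along `axis` (0 = x, 1 = y), replace that
    coordinate of the inertial pose estimate with the nearest grid-line
    coordinate (grid lines at integer multiples of `pitch`)."""
    pose = list(pose)
    pose[axis] = round(pose[axis] / pitch) * pitch
    return tuple(pose)
```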

  6. Towards observation-based gridded runoff estimates for Europe

    NASA Astrophysics Data System (ADS)

    Gudmundsson, L.; Seneviratne, S. I.

    2015-06-01

    Terrestrial water variables are the key to understanding ecosystem processes, feed back on weather and climate, and are a prerequisite for human activities. To provide context for local investigations and to better understand phenomena that only emerge at large spatial scales, reliable information on continental-scale freshwater dynamics is necessary. To date streamflow is among the best-observed variables of terrestrial water systems. However, observation networks have a limited station density and often incomplete temporal coverage, limiting investigations to locations and times with observations. This paper presents a methodology to estimate continental-scale runoff on a 0.5° spatial grid with monthly resolution. The methodology is based on statistical upscaling of observed streamflow from small catchments in Europe and exploits readily available gridded atmospheric forcing data combined with the capability of machine learning techniques. The resulting runoff estimates are validated against (1) runoff from small catchments that were not used for model training, (2) river discharge from nine continental-scale river basins and (3) independent estimates of long-term mean evapotranspiration at the pan-European scale. In addition it is shown that the produced gridded runoff compares on average better to observations than a multi-model ensemble of comprehensive land surface models (LSMs), making it an ideal candidate for model evaluation and model development. In particular, the presented machine learning approach may help determining which factors are most relevant for an efficient modelling of runoff at regional scales. Finally, the resulting data product is used to derive a comprehensive runoff climatology for Europe and its potential for drought monitoring is illustrated.

  7. GSIMF: a web service based software and database management system for the generation grids.

    SciTech Connect

    Wang, N.; Ananthan, B.; Gieraltowski, G.; May, E.; Vaniachine, A.; Tech-X Corp.

    2008-01-01

    To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. This set of Grid services provides a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids.

  8. Modeling earthquake activity using a memristor-based cellular grid

    NASA Astrophysics Data System (ADS)

    Vourkas, Ioannis; Sirakoulis, Georgios Ch.

    2013-04-01

    Earthquakes are among the most devastating natural phenomena because of their immediate and long-term severe consequences. Earthquake activity modeling, especially in areas known to experience frequent large earthquakes, could lead to improvements in infrastructure development that will prevent possible loss of lives and property damage. An earthquake process is inherently a nonlinear complex system, and lately scientists have become interested in finding possible analogues of earthquake dynamics. The majority of the models developed so far were based on a mass-spring model of either one or two dimensions. An early approach towards the reordering and improvement of existing models, presenting the capacitor-inductor (LC) analogue, where the LC circuit resembles a mass-spring system and simulates earthquake activity, was also published recently. Electromagnetic oscillation occurs when energy is transferred between the capacitor and the inductor. This energy transformation is similar to the mechanical oscillation that takes place in the mass-spring system. A few years ago memristor-based oscillators were used as learning circuits exposed to a train of voltage pulses that mimic environment changes. The mathematical foundation of the memristor (memory resistor), as the fourth fundamental passive element, has been expounded by Leon Chua and later extended to a broader class of memristors, known as memristive devices and systems. This class of two-terminal passive circuit elements with memory performs both information processing and storing of computational data on the same physical platform. Importantly, the states of these devices adjust to input signals and provide analog capabilities unavailable in standard circuit elements, resulting in adaptive circuitry and providing analog parallel computation. In this work, a memristor-based cellular grid is used to model earthquake activity. An LC circuit along with a memristor is used to model seismic activity.
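As a toy numerical analogue of one cell of such a grid, the following forward-Euler integration couples an LC oscillator with a charge-dependent memristance. All parameters and the linear state-drift model are illustrative assumptions, not the paper's cellular model; the point is only that energy sloshes between the capacitor and inductor while the memristive resistance dissipates it.

```python
def simulate_lc_cell(steps=20000, dt=1e-4, L=1.0, C=1.0, r_on=0.1, r_off=2.0):
    """Series L-C-memristor loop integrated with forward Euler:
    C v' = i,  L i' = -v - R(x) i, where the memristance R interpolates
    between r_off and r_on as the internal state x drifts with charge flow.
    Returns the final capacitor voltage and loop current."""
    v, i, x = 1.0, 0.0, 0.0   # initial energy stored in the capacitor
    for _ in range(steps):
        r = r_on * x + r_off * (1.0 - x)          # state-dependent memristance
        dv = (i / C) * dt
        di = ((-v - r * i) / L) * dt
        x = min(1.0, max(0.0, x + 0.5 * i * dt))  # toy charge-driven state drift
        v += dv
        i += di
    return v, i
```

Because R(x) > 0 the total stored energy 0.5*C*v^2 + 0.5*L*i^2 decays from its initial value of 0.5, mimicking the dissipative relaxation after an "event".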

  9. Transaction-Based Controls for Building-Grid Integration: VOLTTRON™

    SciTech Connect

    Akyol, Bora A.; Haack, Jereme N.; Hernandez, George; Katipamula, Srinivas; Widergren, Steven E.

    2015-07-01

    The U.S. Department of Energy’s (DOE’s) Building Technologies Office (BTO) is supporting the development of a “transactional network” concept that supports energy, operational, and financial transactions between building systems (e.g., rooftop units -- RTUs) and the electric power grid, using applications, or 'agents', that reside either on the equipment, on local building controllers, or in the Cloud. The transactional network vision is delivered using a real-time, scalable reference platform called VOLTTRON that supports the needs of the changing energy system. VOLTTRON is an agent execution platform and an innovative distributed control and sensing software platform that supports modern control strategies, including agent-based and transaction-based controls. It enables mobile and stationary software agents to perform information gathering, processing, and control actions.

  10. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  11. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    SciTech Connect

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.; Pebay, Philippe Pierre; Gentile, Ann C.; Thompson, David C.; Roe, Diana C.; De Sapio, Vincent; Brandt, James M.

    2010-08-01

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
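The core graph-synthesis idea, turning shared-resource relationships between jobs into weighted edges, can be sketched as follows. The "shared compute nodes" criterion is one of the relationship types mentioned above; the job-record layout and helper name are hypothetical.

```python
from itertools import combinations

def job_graph(jobs):
    """Synthesize a weighted 'shared compute node' graph from job records.
    `jobs` maps a job id to the set of node names the job ran on;
    the edge weight between two jobs is the number of nodes they shared."""
    edges = {}
    for a, b in combinations(sorted(jobs), 2):
        shared = jobs[a] & jobs[b]
        if shared:
            edges[(a, b)] = len(shared)
    return edges
```

In the full framework additional edge types (e.g. temporal proximity or same user) would be synthesized the same way and combined into one semantic graph for clustering and failure analysis.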

  12. Grid regulation services for energy storage devices based on grid frequency

    DOEpatents

    Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K

    2014-04-15

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).
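The charge/discharge logic described above can be sketched as a proportional frequency-droop response around the average grid frequency. The deadband, the droop slope, and the 6.6 kW charger rating are illustrative assumptions, not values from the patent.

```python
def regulation_power(freq_hz, f_avg=60.0, deadband=0.01, p_max=6.6):
    """Map measured grid frequency to a charge (+) / discharge (-) command
    in kW. Frequency above average signals surplus power, so the controller
    charges faster; frequency below average signals a shortage, so the
    energy storage device feeds power back to the grid."""
    dev = freq_hz - f_avg
    if abs(dev) <= deadband:
        return 0.0                     # small deviations: hold current setpoint
    p = (dev / 0.1) * p_max            # full rating at +/-0.1 Hz deviation (assumed droop)
    return max(-p_max, min(p_max, p))  # saturate at the charger rating
```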

  13. Grid regulation services for energy storage devices based on grid frequency

    DOEpatents

    Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K

    2013-07-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).

  14. Integrating grid-based and topological maps for mobile robot navigation

    SciTech Connect

    Thrun, S.; Buecken, A.

    1996-12-31

    Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are considerably harder to learn in large-scale environments. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and Bayesian integration. Topological maps are generated on top of the grid-based maps by partitioning the latter into coherent regions. By combining both paradigms (grid-based and topological), the approach presented here gains the best of both worlds: accuracy/consistency and efficiency. The paper gives results for autonomously operating a mobile robot equipped with sonar sensors in populated multi-room environments.
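A drastically simplified sketch of generating a topological abstraction on top of a grid map: free cells are partitioned into 4-connected regions by flood fill. The paper's partitioning into coherent regions (via critical lines) is more sophisticated; this only illustrates the grid-to-regions direction.

```python
def topological_regions(grid):
    """Partition the free cells (0) of an occupancy grid into 4-connected
    regions, a coarse topological abstraction of the metric grid map.
    Returns (labels, n_regions); occupied cells (1) keep label -1."""
    h, w = len(grid), len(grid[0])
    labels = [[-1] * w for _ in range(h)]
    region = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] == 0 and labels[sy][sx] == -1:
                stack = [(sy, sx)]          # iterative flood fill
                labels[sy][sx] = region
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] == 0 and labels[ny][nx] == -1):
                            labels[ny][nx] = region
                            stack.append((ny, nx))
                region += 1
    return labels, region
```

Each region becomes a node of the topological map; planning then runs on the small region graph instead of the full metric grid.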

  15. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    Analyzing the characteristics of multi-agent systems and geographic ontology, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and the architecture of the ASISG is presented. ASISG is composed of multi-agents and geographic ontology. The multi-agent systems comprise User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, a Task Execution Agent and a Monitor Agent. The architecture of ASISG has three layers: the fabric layer, the grid management layer and the application layer. The fabric layer, which is composed of the Data Access Agent, Resource Agent and Geo-Agent, encapsulates the data of spatial information systems so that it exhibits a conceptual interface for the grid management layer. The grid management layer, which is composed of the General Ontology Agent, Task Execution Agent, Monitor Agent and Data Analysis Agent, uses a hybrid method to manage all resources that are registered in a General Ontology Agent described by a General Ontology System. The hybrid method combines resource dissemination and resource discovery: dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, and discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a special domain and describes the semantic information of a local GIS. The Local Ontology Agents can be filtered to construct a virtual organization that provides a global scheme. The virtual organization lightens the burden on users because they need not search information site by site manually. The application layer, which is composed of the User Agent, Geo-Agent and Task Execution Agent, can supply a corresponding interface to a domain user. The functions that ASISG should provide are: 1) it integrates different spatial information systems on the semantic Grid

  16. 75 FR 24990 - Proposed Information Collection for the Evaluation of the Community-Based Job Training Grants...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-06

    ...-Based Job Training Grants; Comment Request AGENCY: Employment and Training Administration. ACTION... comments on a new data collection for the Evaluation of the Community- Based Job Training Grants. A copy of...@DOL.gov . SUPPLEMENTARY INFORMATION: I. Background The Community-Based Job Training Grants...

  17. Grid-based Methods in Relativistic Hydrodynamics and Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Martí, José María; Müller, Ewald

    2015-12-01

    An overview of grid-based numerical methods used in relativistic hydrodynamics (RHD) and magnetohydrodynamics (RMHD) is presented. Special emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods. Results of a set of demanding test bench simulations obtained with different numerical methods are compared in an attempt to assess the present capabilities and limits of the various numerical strategies. Applications to three astrophysical phenomena are briefly discussed to motivate the need for and to demonstrate the success of RHD and RMHD simulations in their understanding. The review further provides FORTRAN programs to compute the exact solution of the Riemann problem in RMHD, and to simulate 1D RMHD flows in Cartesian coordinates.

  18. Performance of standard fluoroscopy antiscatter grids in flat-detector-based cone-beam CT

    NASA Astrophysics Data System (ADS)

    Wiegert, Jens; Bertram, Matthias; Schaefer, Dirk; Conrads, Norbert; Timmer, Jan; Aach, Til; Rose, Georg

    2004-05-01

    In this paper, the performance of focused lamellar anti-scatter grids, which are currently used in fluoroscopy, is studied in order to determine guidelines for grid usage in flat-detector-based cone-beam CT. The investigation aims at obtaining the signal-to-noise ratio improvement factor achieved by the use of anti-scatter grids. First, the results of detailed Monte Carlo simulations as well as measurements are presented. From these, the general characteristics of the impinging field of scattered and primary photons are derived. Phantoms modeling the head, thorax and pelvis regions have been studied for various imaging geometries with varying phantom size, cone and fan angles, and patient-detector distances. Second, simulation results are shown for ideally focused and vacuum-spaced grids as a best-case approach as well as for grids with realistic spacing materials. The grid performance is evaluated by means of the primary and scatter transmission and the signal-to-noise ratio improvement factor as a function of imaging geometry and grid parameters. For a typical flat-detector cone-beam CT setup, the grid selectivity, and thus the performance of anti-scatter grids, is much lower compared to setups where the grid is located directly behind the irradiated object. While for small object-to-grid distances a standard grid improves the SNR, for geometries as used in flat-detector-based cone-beam CT the SNR is degraded by the use of an anti-scatter grid in many application scenarios. This holds even for the pelvic region. Standard fluoroscopy anti-scatter grids were found to decrease the SNR in many application scenarios of cone-beam CT due to the large patient-detector distance and therefore have only a limited benefit in flat-detector-based cone-beam CT.
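The SNR improvement factor used to evaluate grids follows from a quantum-noise-limited model: with a grid the primary signal is attenuated by the primary transmission Tp while the noise comes from Tp plus the transmitted scatter; without a grid both primary and scatter reach the detector in full. This is the standard textbook formulation, assumed here rather than taken from the paper.

```python
from math import sqrt

def snr_improvement(tp, ts, spr):
    """SNR improvement factor of an anti-scatter grid.
    tp: primary transmission, ts: scatter transmission,
    spr: scatter-to-primary ratio at the detector without a grid.
    With grid:  signal ~ tp,  noise ~ sqrt(tp + ts * spr)
    Without:    signal ~ 1,   noise ~ sqrt(1 + spr)."""
    return tp * sqrt(1.0 + spr) / sqrt(tp + ts * spr)
```

The formula reproduces the paper's qualitative finding: for high scatter-to-primary ratios the grid helps (factor above 1), but when the air gap has already reduced the scatter reaching the detector, the same grid can push the factor below 1.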

  19. A peer-to-peer resource scheduling approach for photonic grid network based on OBGP

    NASA Astrophysics Data System (ADS)

    Wu, Runze; Ji, Yuefeng

    2005-11-01

    In this paper we present a resource scheduling mechanism that provides dynamic lightpaths to a photonic grid network, and we point out that grids enabled by optical networks have a huge potential effect in driving the next generation of optical network applications. Furthermore, we investigate a photonic grid architecture, and a control plane based on peer-to-peer principles is provided to control optical network communication resources dynamically. We also substantiate the idea of extending BGP towards optical networks, called the Optical Border Gateway Protocol (OBGP), which is used to provide IP-based protocols to control optical networks, and we give a dynamic lightpath scheduling approach over a multi-wavelength optical network as a new grid service based on OBGP.

  20. The Construction of an Ontology-Based Ubiquitous Learning Grid

    ERIC Educational Resources Information Center

    Liao, Ching-Jung; Chou, Chien-Chih; Yang, Jin-Tan David

    2009-01-01

    The purpose of this study is to incorporate adaptive ontology into ubiquitous learning grid to achieve seamless learning environment. Ubiquitous learning grid uses ubiquitous computing environment to infer and determine the most adaptive learning contents and procedures in anytime, any place and with any device. To achieve the goal, an…

  1. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
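One of the grid measures mentioned above is edge length evaluated in the prescribed metric: metric-based adaptation drives every edge toward unit length under the metric tensor, which encodes the desired anisotropic spacing. A minimal sketch for a constant metric (varying metric fields would require integrating along the edge, which is omitted here):

```python
import numpy as np

def metric_edge_length(p0, p1, m):
    """Length of the edge p0 -> p1 measured in a constant symmetric
    positive-definite metric tensor m: sqrt(e^T M e). Adaptation
    mechanics aim for unit edge length in this measure."""
    e = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
    return float(np.sqrt(e @ m @ e))
```

With the identity metric this reduces to Euclidean length; an anisotropic metric such as diag(4, 1) doubles the measured length of x-aligned edges, so the adapted grid halves its spacing in that direction.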

  2. Interviewing for the Principal's Job: A Behavior-Based Approach

    ERIC Educational Resources Information Center

    Clement, Mary C.

    2009-01-01

    The stakes are high when one decides to leave a tenured teaching position or an assistant principalship to interview for a principal's position. However, the stakes are high for the future employer as well. The school district needs to know that the applicant is ready for a job that is very complex. As a new principal, the applicant will be…

  3. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    ERIC Educational Resources Information Center

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  4. Environmental applications based on GIS and GRID technologies

    NASA Astrophysics Data System (ADS)

    Demontis, R.; Lorrai, E.; Marrone, V. A.; Muscas, L.; Spanu, V.; Vacca, A.; Valera, P.

    2009-04-01

    In the last decades, the collection and use of environmental data have increased enormously across a wide range of applications. Simultaneously, the explosive development of information technology and its ever wider data accessibility have made it possible to store and manipulate huge quantities of data. In this context, the GRID approach is emerging worldwide as a tool that allows a computational task to be provisioned with administratively distant resources. The aim of this paper is to present three environmental applications (Land Suitability, Desertification Risk Assessment, Georesources and Environmental Geochemistry) foreseen within the AGISGRID (Access and query of a distributed GIS/Database within the GRID infrastructure, http://grida3.crs4.it/enginframe/agisgrid/index.xml) activities of the GRIDA3 (Administrator of sharing resources for data analysis and environmental applications, http://grida3.crs4.it) project. This project, co-funded by the Italian Ministry of Research, is based on the use of shared environmental data through GRID technologies, accessible via a Web interface, and is aimed at public and private users in the field of environmental management and land use planning. The technologies used for AGISGRID include: - the client-server middleware iRODS™ (Integrated Rule-Oriented Data System) (https://irods.org); - the EnginFrame system (http://www.nice-italy.com/main/index.php?id=32), the grid portal that supplies a framework to make the developed GRID applications available via Intranet/Internet; - the GIS software GRASS (Geographic Resources Analysis Support System) (http://grass.itc.it); - the relational database PostgreSQL (http://www.posgresql.org) and the spatial database extension PostGIS; - the open source multiplatform MapServer (http://mapserver.gis.umn.edu), used to represent geospatial data through typical Web GIS functionalities. Three GRID nodes are directly involved in the applications: the application workflow is implemented at the CRS4 (Pula

  5. Study on the grid-based distributed virtual geo-environment (DVGE-G)

    NASA Astrophysics Data System (ADS)

    Tang, Lu-liang; Li, Qing-quan

    2005-10-01

    Grid computing is widely considered to be the next-generation Internet technology: it supports the sharing and coordinated use of diverse resources in dynamic virtual organizations built from geographically and organizationally distributed components, and it is characterized by strong computing power and high-bandwidth information exchange. After analyzing the characteristics of grid computing, this paper discusses the current status of applying grid computing with middleware technology to DVGE-G and the problems it faces. The Globus Toolkit, developed in cooperation with IBM, Microsoft, and HP, is widely used as a standard for developing grid applications and runs on Unix and Windows operating systems. On the basis of "the five-tier sandglass structure" and web services technology, Globus proposed the Open Grid Services Architecture (OGSA), which is centered on grid services. According to the characteristics of DVGE-G and current developments in grid computing, this paper puts forward a Grid-Oriented Distributed Network Model (GDNM) for DVGE-G. A virtual group corresponds to a Virtual Organization in OGSA, which makes it easier and more direct for the dynamic virtual groups in GDNM to utilize grid resources and communicate with each other. The GDNM is not only more advantageous for consistency management of distributed databases, but also more convenient for virtual group users acquiring DVGE-G data. The architecture of DVGE-G designed in this paper is based on OGSA and web services and adheres to "the five-tier sandglass structure" of OGSA. This architecture makes it more convenient to use grid services and reduces conflicts with the grid environment. Finally, this paper presents the implementation of DVGE-G and the interfaces of its Grid Services.

  6. Analyzing data flows of WLCG jobs at batch job level

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-05-01

    With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses, firewall configurations, and the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at the batch job level, a new tool has been developed and put into operation at the GridKa Tier 1 center for monitoring continuous data streams and characteristics of WLCG jobs and pilots. Long-term measurements and data collection are in progress. These measurements have already proven useful for analyzing misbehavior and various issues, so we aim for an automated, real-time approach to anomaly detection. As a prerequisite, prototypes for standard workflows have to be examined. Based on several months of measurements, different features of HEP jobs are evaluated for their effectiveness in data mining approaches to identify these common workflows. The paper introduces the measurement approach and statistics, as well as the general concept and first results in classifying different HEP job workflows derived from the measurements at GridKa.
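    Classifying job workflows from such measurements starts by reducing each job's per-interval traffic samples to a small feature vector. A minimal sketch (feature names are illustrative, not the paper's actual feature set):

```python
import statistics

def job_features(samples):
    """Reduce a per-job time series of transferred bytes per interval to a
    small feature vector usable for clustering or classification."""
    mean = statistics.mean(samples)
    peak = max(samples)
    return {
        "total_bytes": sum(samples),
        "mean_rate": mean,
        "peak_rate": peak,
        "burstiness": peak / mean,  # bursty staging vs. steady streaming
    }

trace = [10, 0, 0, 90, 0]  # bytes read in consecutive monitoring intervals
f = job_features(trace)
```

A bursty profile (high `burstiness`) suggests stage-in/stage-out workflows, while a flat one suggests continuous remote reads; a clustering step would then group jobs by such vectors.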

  7. Burnout, psychological morbidity, job satisfaction, and stress: a survey of Canadian hospital based child protection professionals

    PubMed Central

    Bennett, S; Plint, A; Clifford, T

    2005-01-01

    Aims: (1) To measure the prevalence of burnout, psychological morbidity, job satisfaction, job stress, and consideration of alternate work among multidisciplinary hospital based child and youth protection (CYP) professionals; (2) to understand the relations between these variables; and (3) to understand the reasons for leaving among former programme members. Methods: Mailed survey of current and former members of all Canadian academic hospital based CYP programmes. Surveys for current members contained validated measures of burnout, psychological morbidity, job satisfaction/stress, and questions about consideration of alternate work. Surveys for former members examined motivation(s) for leaving. Results: One hundred and twenty six of 165 current members (76.4%) and 13/14 (92.9%) former members responded. Over one third (34.1%) of respondents exhibited burnout while psychological morbidity was present in 13.5%. Job satisfaction was high, with 68.8% finding their job "extremely" or "quite" satisfying, whereas 26.2% found their job "extremely" or "quite" stressful. Psychological morbidity, job satisfaction, and job stress were not associated with any of the demographic variables measured, but burnout was most prevalent among non-physician programme members. Almost two thirds of current members indicated that they had seriously considered a change in work situation. Former members indicated that burnout and high levels of job stress were most responsible for their decision to leave and that increasing the number of programme staff and, consequently, reducing the number of hours worked would have influenced their decision to stay. Conclusions: Current levels of burnout and the large proportion of individuals who have contemplated leaving the service suggest a potential crisis in Canadian hospital based CYP services. PMID:16243862

  8. Risk Aware Overbooking for Commercial Grids

    NASA Astrophysics Data System (ADS)

    Birkenheuer, Georg; Brinkmann, André; Karl, Holger

    The commercial exploitation of the emerging Grid and Cloud markets needs SLAs to sell computing run times. Job traces show that users have a limited ability to estimate the resource needs of their applications. This offers the possibility to apply overbooking to negotiation, but overbooking increases the risk of SLA violations. This work presents an overbooking approach with an integrated risk assessment model. Simulations for this model, which are based on real-world job traces, show that overbooking offers significant opportunities for Grid and Cloud providers.
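    The risk-aware decision described here can be sketched as a Monte Carlo estimate: accept an overbooked set of requests only if the probability that the jobs' actual (typically lower) usage exceeds capacity stays below an agreed threshold. A minimal sketch under assumed usage distributions (not the paper's actual model):

```python
import random

def violation_probability(requests, capacity, usage_factor,
                          trials=10000, seed=1):
    """Estimate SLA-violation risk when accepting `requests` (resource
    amounts) against `capacity`, assuming each job actually consumes a
    random fraction of its request drawn by `usage_factor(rng)`."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(trials):
        actual = sum(r * usage_factor(rng) for r in requests)
        if actual > capacity:
            violations += 1
    return violations / trials

# Users overestimate: assume actual usage is 40-100% of the request.
# Requests total 120 against capacity 100 -- overbooked by 20%.
risk = violation_probability(
    [30, 30, 30, 30], capacity=100,
    usage_factor=lambda rng: rng.uniform(0.4, 1.0))
accept = risk < 0.10  # provider's risk threshold (illustrative)
```

The provider tunes the threshold so the expected penalty from rare SLA violations stays below the revenue gained from the extra accepted jobs.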

  9. Faster GPU-based convolutional gridding via thread coarsening

    NASA Astrophysics Data System (ADS)

    Merry, B.

    2016-07-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.
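    Thread coarsening means each GPU thread produces several outputs instead of one, amortizing per-visibility loads across the coarsened work. The mapping can be sketched in plain Python by showing that a coarsened loop covers exactly the same kernel taps as the naive one (toy scalar gridder, not the paper's GPU kernel):

```python
import numpy as np

def grid_naive(vis, uv, kernel, n):
    """One unit of work per (visibility, kernel tap) -- the naive mapping."""
    g = np.zeros((n, n), dtype=complex)
    half = kernel.shape[0] // 2
    for v, (u0, v0) in zip(vis, uv):
        for du in range(kernel.shape[0]):
            for dv in range(kernel.shape[1]):
                g[u0 + du - half, v0 + dv - half] += v * kernel[du, dv]
    return g

def grid_coarsened(vis, uv, kernel, n, factor=2):
    """Each unit of work handles `factor` kernel rows per visibility,
    amortizing the per-visibility load -- the essence of thread coarsening."""
    g = np.zeros((n, n), dtype=complex)
    half = kernel.shape[0] // 2
    for v, (u0, v0) in zip(vis, uv):
        for du in range(0, kernel.shape[0], factor):
            for k in range(factor):  # coarsened inner work
                if du + k >= kernel.shape[0]:
                    break
                for dv in range(kernel.shape[1]):
                    g[u0 + du + k - half, v0 + dv - half] += v * kernel[du + k, dv]
    return g

kernel = np.ones((3, 3)) / 9.0
same = np.allclose(grid_naive([1 + 2j], [(5, 5)], kernel, 12),
                   grid_coarsened([1 + 2j], [(5, 5)], kernel, 12, factor=2))
```

On a GPU the payoff comes from fewer redundant global-memory loads and less scheduling overhead per output, at the cost of higher register pressure per thread.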

  10. Skill-based job descriptions for sterile processing technicians--a total quality approach.

    PubMed

    Doyle, F F; Marriott, M A

    1994-05-01

    Rochester General Hospital in Rochester, NY, included as part of its total quality management effort the task of revising job descriptions for its sterile processing technicians as a way to decrease turnover and increase job satisfaction, teamwork and quality output. The department's quality team developed "skill banding," a tool that combines skill-based pay with large salary ranges that span job classifications normally covered by several separate salary ranges. They defined the necessary competencies needed to move through five skill bands and worked with the rest of the department to fine-tune the details. The process has only recently been implemented, but department employees are enthusiastic about it.

  11. Grid-based platform for training in Earth Observation

    NASA Astrophysics Data System (ADS)

    Petcu, Dana; Zaharie, Daniela; Panica, Silviu; Frincu, Marc; Neagul, Marian; Gorgan, Dorian; Stefanut, Teodor

    2010-05-01

    The GiSHEO platform [1], which provides on-demand services for training and higher education in Earth Observation, has been developed within an ESA-funded project under its PECS programme, in response to the need for powerful educational resources in the remote sensing field. It is intended to be a Grid-based platform whose potential for experimentation and extensibility are the key benefits compared with a desktop software solution. Near-real-time applications requiring simultaneous multiple short-response-time data-intensive tasks, as in the case of a short training event, have proved ideal for this platform. The platform is based on Globus Toolkit 4 facilities for security and process management, and on the clusters of the four academic institutions involved in the project. Authorization uses a VOMS service. The main public services are the following: the EO processing services (represented through special WSRF-type services); the workflow service exposing a particular workflow engine; the data indexing and discovery service for accessing the data management mechanisms; and the processing services, a collection allowing easy access to the processing platform. The WSRF-type services for basic satellite image processing reuse free image processing tools, OpenCV and GDAL. New algorithms and workflows were developed to tackle challenging problems such as detecting the underground remains of old fortifications, walls, or houses. More details can be found in [2]. Composed services can be specified through workflows and are easy to deploy. The workflow engine, OSyRIS (Orchestration System using a Rule based Inference Solution), is based on DROOLS, and a new rule-based workflow language, SILK (SImple Language for worKflow), has been built. Workflow creation in SILK can be done with or without a visual design tool. The basics of SILK are tasks and the relations (rules) between them. It is similar to the SCUFL language, but not relying on XML in
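    The task/relation idea behind a rule-based workflow engine like OSyRIS can be illustrated with a tiny forward-chaining sketch (task names hypothetical; the real SILK syntax is not reproduced here):

```python
def run_workflow(rules, done=()):
    """Tiny forward-chaining engine: a task fires once all of its
    declared input tasks are done, mirroring rule-based orchestration."""
    done = set(done)
    fired = []
    progress = True
    while progress:
        progress = False
        for task, inputs in rules.items():
            if task not in done and set(inputs) <= done:
                done.add(task)
                fired.append(task)
                progress = True
    return fired

# Hypothetical EO processing chain: each task lists its prerequisites.
rules = {"ingest": [], "georef": ["ingest"],
         "classify": ["georef"], "report": ["classify", "georef"]}
order = run_workflow(rules)  # ['ingest', 'georef', 'classify', 'report']
```

In a real engine the "fire" step would dispatch the task to a Grid processing service instead of just recording it.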

  12. Suit alleges Chicago schools denied job based on HIV.

    PubMed

    1997-04-18

    In 1996, the Lambda Legal Defense and Education Fund persuaded the Chicago Board of Education to revoke a policy that demands applicants to disclose their HIV status. The Board promised to revise the policy, but on March 27, 1997 Lambda filed suit in U.S. District Court against the school board on behalf of an applicant who says he continues to be denied a teaching job because of his positive HIV status. The lawsuit claims that the board of education's requirement for any job applicant to provide a complete medical history and to submit to a medical examination is tantamount to requiring HIV status disclosure. The lawsuit states that the board is violating the Americans with Disabilities Act (ADA), the Rehabilitation Act, and Federal and State constitutional guarantees to privacy and equal protection under the law. The suit also says the board lacks procedural safeguards to ensure confidentiality of applicants' medical information. PMID:11364234

  13. The Prediction of Job Ability Requirements Using Attribute Data Based Upon the Position Analysis Questionnaire (PAQ). Technical Report No. 1.

    ERIC Educational Resources Information Center

    Shaw, James B.; McCormick, Ernest J.

    The study was directed towards the further exploration of the use of attribute ratings as the basis for establishing the job component validity of tests, in particular by using different methods of combining "attribute-based" data with "job analysis" data to form estimates of the aptitude requirements of jobs. The primary focus of the study…

  14. An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement.

    SciTech Connect

    Edgel, Jared; Benzley, Steven E.; Owen, Steven James

    2010-08-01

    Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, yields a mesh that is often too coarse in some areas and over-refined in others; because it generates 27 new hexes in place of a single hex, it offers little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that employs a 2-refinement method, which divides the hex to be refined into eight new hexes and thus allows greater control over mesh density. The adaptive all-hexahedral meshing algorithm provides a mesh that is efficient for analysis, with high element density in specific locations and reduced mesh density elsewhere. In addition, this tool can be used effectively for inside-out hexahedral grid-based schemes, using Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. The algorithm uses a two-layer transition zone to increase element quality and to keep transitions from lower to higher mesh densities smooth, and templates were introduced to allow both convex and concave refinement.
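    The density trade-off is easy to make concrete: uniform 3-refinement multiplies the hex count by 27 per level (3x3x3 children), while 2-refinement multiplies it by 8 (2x2x2), so element counts grow far more gradually. A trivial sketch:

```python
def refined_hex_count(levels, scheme):
    """Hexes produced by uniformly refining one hex `levels` times.
    3-refinement splits a hex into 27 children; 2-refinement into 8."""
    children = {"3-refinement": 27, "2-refinement": 8}[scheme]
    return children ** levels

two_levels_3ref = refined_hex_count(2, "3-refinement")  # 27^2 = 729 hexes
two_levels_2ref = refined_hex_count(2, "2-refinement")  # 8^2  = 64 hexes
```

Two levels of 3-refinement already commit to 729 elements where 2-refinement needs only 64, which is why the finer-grained scheme gives better control over where density is spent.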

  15. Integrating reconfigurable hardware-based grid for high performance computing.

    PubMed

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, there are still some difficulties to be addressed when using reconfigurable platforms as accelerators: the need for an in-depth application study to identify potential acceleration, the lack of tools for deploying computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  16. Optimal grid-based methods for thin film micromagnetics simulations

    NASA Astrophysics Data System (ADS)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin-film micromagnetic materials are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to inhomogeneities in the magnetization, which presents the chief bottleneck for simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps and has the advantage of allowing non-uniform discretization in the film plane, as well as an efficient way of dealing with the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance against conventional FFT-based methods for studying domain wall structures.
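    For reference, the LLG dynamics mentioned above take the standard Gilbert form, with \(\mathbf{m}\) the unit magnetization, \(\gamma\) the gyromagnetic ratio, and \(\alpha\) the damping constant; the stray-field contribution to the effective field is the term whose evaluation the proposed optimal-grid method accelerates (the paper presumably works with an equivalent reduced form for ultra-thin films):

```latex
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha\, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t},
\qquad
\mathbf{H}_{\mathrm{eff}}
  = \mathbf{H}_{\mathrm{ext}} + \mathbf{H}_{\mathrm{exch}}
  + \mathbf{H}_{\mathrm{anis}} + \mathbf{H}_{\mathrm{stray}}
```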

  19. OPNET/Simulink Based Testbed for Disturbance Detection in the Smart Grid

    SciTech Connect

    Sadi, Mohammad A. H.; Dasgupta, Dipankar; Ali, Mohammad Hassan; Abercrombie, Robert K

    2015-01-01

    The cyber/information infrastructure is an important backbone of the smart grid, primarily used to communicate with different grid components. A smart grid is a complex cyber-physical system containing numerous and varied sources, devices, controllers, and loads, and it is therefore vulnerable to grid-related disturbances. For such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink- and OPNET-based co-simulation platform for carrying out cyber intrusions into the cyber network of modern power systems and the smart grid. The IEEE 30-bus power system model is used to demonstrate the effectiveness of the simulated testbed. The experiments were performed by disturbing circuit breaker reclosing times through a cyber attack. Different disturbance situations in the considered test system are examined, and the results indicate the effectiveness of the proposed co-simulation scheme.

  20. A Grid storage accounting system based on DGAS and HLRmon

    NASA Astrophysics Data System (ADS)

    Cristofori, A.; Fattibene, E.; Gaido, L.; Guarise, A.; Veronesi, P.

    2012-12-01

    Accounting in a production-level Grid infrastructure is of paramount importance in order to measure the utilization of the available resources. While several CPU accounting systems are deployed within the European Grid Infrastructure (EGI), storage accounting systems stable enough to be adopted in a production environment are not yet available. As a consequence, there is growing interest in storage accounting, and work on it is being carried out in the Open Grid Forum (OGF), where a Usage Record (UR) definition suitable for storage resources has been proposed for standardization. In this paper we present a storage accounting system which is composed of three parts: a sensor layer, a data repository with a transport layer (Distributed Grid Accounting System - DGAS), and a web portal providing graphical and tabular reports (HLRmon). The sensor layer is responsible for the creation of URs according to the schema (described in this paper) that is currently being discussed within OGF. DGAS is one of the CPU accounting systems used within EGI, in particular by the Italian Grid Infrastructure (IGI) and some other National Grid Initiatives (NGIs) and projects. The DGAS architecture is evolving in order to collect Usage Records for different types of resources. This improvement allows DGAS to be used as a ‘general’ data repository and transport layer. HLRmon is the web portal acting as an interface to DGAS. It has been improved to retrieve storage accounting data from the DGAS repository and create reports easily. This is very useful not only for Grid users and administrators but also for stakeholders.
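    A sensor in such a system essentially emits one usage record per measurement interval. A minimal sketch of building such a record (field names are illustrative, loosely inspired by the OGF storage-UR discussion; the authoritative schema is the one standardized by OGF, not this sketch):

```python
import datetime
import json

def make_storage_usage_record(site, vo, user, bytes_used, start, end):
    """Build a minimal storage usage record for one user/VO at one site
    over one measurement interval (illustrative field names)."""
    return {
        "RecordIdentity": f"{site}/{user}/{start.isoformat()}",
        "StorageSystem": site,
        "SubjectIdentity": {"User": user, "Group": vo},
        "ResourceCapacityUsed": bytes_used,  # bytes occupied on the SE
        "StartTime": start.isoformat(),
        "EndTime": end.isoformat(),
    }

ur = make_storage_usage_record(
    "example-se", "atlas", "jdoe", 5 * 2**40,  # 5 TiB
    datetime.datetime(2012, 1, 1), datetime.datetime(2012, 1, 2))
serialized = json.dumps(ur)  # what a sensor would ship to the repository
```

The repository/transport layer (DGAS in the paper) then aggregates such records per VO and site, and the portal (HLRmon) renders them as reports.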

  1. The Anatomy of a Grid portal

    NASA Astrophysics Data System (ADS)

    Licari, Daniele; Calzolari, Federico

    2011-12-01

    In this paper we introduce a new approach to Grid portals, with reference to our implementation. L-GRID is a light portal for accessing the EGEE/EGI Grid infrastructure via the Web, allowing users to submit their jobs from a common Web browser in a few minutes, without any knowledge of the Grid infrastructure. It provides control over the complete lifecycle of a Grid job, from submission and status monitoring to output retrieval. The system, implemented as a client-server architecture, is based on the Globus Grid middleware. The client-side application is based on a Java applet; the server relies on a Globus User Interface. There is no need for user registration on the server side, and the user needs only his or her own X.509 personal certificate. The system is user-friendly, secure (it uses the SSL protocol and mechanisms for dynamic delegation and identity creation in public key infrastructures), highly customizable, open source, and easy to install. The X.509 personal certificate never leaves the local machine. The portal reduces the time spent on job submission while granting higher efficiency and a better security level in proxy delegation and management.
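    The job lifecycle a portal like this tracks can be modeled as a small state machine. A sketch with illustrative state names (they resemble, but are not claimed to be, the exact states of any particular middleware):

```python
# Allowed transitions in a simple Grid-job lifecycle: submission,
# scheduling, execution, then output retrieval or abort.
TRANSITIONS = {
    "SUBMITTED": {"WAITING"},
    "WAITING": {"SCHEDULED", "ABORTED"},
    "SCHEDULED": {"RUNNING", "ABORTED"},
    "RUNNING": {"DONE", "ABORTED"},
    "DONE": {"CLEARED"},  # CLEARED = output retrieved by the user
}

def advance(state, new_state):
    """Move a job to `new_state`, rejecting illegal lifecycle jumps."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

A portal's status-monitoring loop polls the middleware and applies such transitions, so the user interface can never display an impossible lifecycle jump.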

  2. Development, use, and availability of a job exposure matrix based on national occupational hazard survey data.

    PubMed

    Sieber, W K; Sundin, D S; Frazier, T M; Robinson, C F

    1991-01-01

    A job exposure matrix has been developed based on potential exposure data collected during the 1972-1974 National Occupational Hazard Survey (NOHS). The survey sample was representative of all U.S. non-agricultural businesses covered under the Occupational Safety and Health Act of 1970 and employing eight or more employees. Potential worker exposure to all chemical, physical, or biological agents was recorded during the field survey if certain minimum guidelines for exposure were met. The job exposure matrix (JEM) itself is a computerized database that assists the user in determining potential chemical or physical exposures in occupational settings. We describe the structure and possible uses of the job exposure matrix. In one example, potential occupational exposures to elemental lead were grouped by industry and occupation. In a second example, the matrix was used to determine exposure classifications in a hypothetical case-control study. Present availability as well as future enhancements of the job exposure matrix are described.
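    Conceptually, a job exposure matrix is a lookup from an (industry, occupation) cell to the agents workers in that cell may be exposed to, queryable in either direction. A toy sketch (the cells and agents below are illustrative, not NOHS data):

```python
# Toy job exposure matrix: (industry, occupation) -> potential agents.
JEM = {
    ("battery manufacturing", "assembler"): {"lead", "sulfuric acid"},
    ("printing", "press operator"): {"lead", "solvents"},
    ("offices", "clerk"): set(),
}

def agents_for(industry, occupation):
    """Potential exposures recorded for one industry/occupation cell."""
    return JEM.get((industry, occupation), set())

def exposed_groups(agent):
    """All industry/occupation cells with potential exposure to `agent` --
    e.g. to classify subjects in a case-control study."""
    return sorted(cell for cell, agents in JEM.items() if agent in agents)

lead_cells = exposed_groups("lead")
```

The second query direction is the one used in the paper's examples: grouping potential lead exposures by industry and occupation, or assigning exposure classifications to study subjects from their job histories.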

  3. A Grid-Based Image Archival and Analysis System

    PubMed Central

    Hastings, Shannon; Oster, Scott; Langella, Stephen; Kurc, Tahsin M.; Pan, Tony; Catalyurek, Umit V.; Saltz, Joel H.

    2005-01-01

    Here the authors present a Grid-aware middleware system, called GridPACS, that enables management and analysis of images at massive scale, leveraging distributed software components coupled with interconnected computation and storage platforms. The need for this infrastructure is driven by the increasing biomedical role played by complex datasets obtained through a variety of imaging modalities. The GridPACS architecture is designed to support a wide range of biomedical applications encountered in basic and clinical research, which make use of large collections of images. Imaging data yield a wealth of metabolic and anatomic information from macroscopic (e.g., radiology) to microscopic (e.g., digitized slides) scale. Whereas this information can significantly improve understanding of disease pathophysiology as well as the noninvasive diagnosis of disease in patients, the need to process, analyze, and store large amounts of image data presents a great challenge. PMID:15684129

  4. CMS Configuration Editor: GUI based application for user analysis job

    NASA Astrophysics Data System (ADS)

    de Cosa, A.

    2011-12-01

    We present the user interface and software architecture of the Configuration Editor for the CMS experiment. The analysis workflow is organized in a modular way within the CMS framework, which organizes user analysis code flexibly. The Python scripting language is adopted to define the job configuration that drives the analysis workflow. Developing analysis jobs while managing the configuration of the many required modules can be a challenging task for users, especially newcomers, so a graphical tool has been conceived for editing and inspecting configuration files. A set of common analysis tools defined in the CMS Physics Analysis Toolkit (PAT) can be steered and configured using the Config Editor. A user-defined analysis workflow can be produced starting from a standard configuration file, applying and configuring PAT tools according to the specific user requirements. CMS users can adopt the Config Editor to create their analyses while visualizing the effects of their actions in real time: they can view the structure of their configuration, the modules included in the workflow, the dependencies among the modules, and the data flow, and they can see the values to which parameters are set and change them as required by their analysis task. Integrating common tools into the GUI required adopting an object-oriented structure in the Python definition of the PAT tools and defining a layer of abstraction from which all PAT tools inherit.

  5. Grid-based medical image workflow and archiving for research and enterprise PACS applications

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Dixit, Manasee; Chen, Vincent; Chervenak, Ann; Nelson, Marvin D.; Kesselmann, Carl

    2006-03-01

    PACS provides a consistent model for communicating and storing images, with recent additions for fault tolerance and disaster reliability. However, PACS still lacks fine-grained user-based authentication and authorization, flexible data distribution, and semantic associations between images and their embedded information. These are critical components for future enterprise operations in dynamic medical research and health care environments. Here we introduce a flexible Grid-based model of a PACS that adds these capabilities, and we describe its implementation in the Children's Oncology Group (COG) Grid. The combination of the existing standard for medical images, DICOM, with the abstraction to files and meta-catalog information in the Grid domain provides new flexibility beyond traditional PACS design. We conclude that Grid technology provides a reliable and efficient distributed informatics infrastructure that is well suited to medical informatics as described in this work. Grid technology will provide new opportunities for PACS deployment and, subsequently, new medical image applications.

  6. SARS Grid--an AG-based disease management and collaborative platform.

    PubMed

    Hung, Shu-Hui; Hung, Tsung-Chieh; Juang, Jer-Nan

    2006-01-01

    This paper describes the development of the NCHC's Severe Acute Respiratory Syndrome (SARS) Grid project, an Access Grid (AG)-based disease management and collaborative platform that allowed SARS patients' medical data to be dynamically shared and discussed between hospitals and doctors using AG's video teleconferencing (VTC) capabilities. During the height of the SARS epidemic in Asia, SARS Grid and the SARShope website significantly curbed the spread of SARS by helping doctors manage the in-hospital and in-home care of quarantined SARS patients through medical data exchange and the monitoring of patients' symptoms. Now that the SARS epidemic has ended, the primary function of the SARS Grid project is as a web-based informatics tool to increase public awareness of SARS and other epidemic diseases. Additionally, the SARS Grid project can be viewed and further studied as an outstanding model of epidemic disease prevention and containment.

  7. Resistor array infrared nonuniformity correction based on sparse grid

    NASA Astrophysics Data System (ADS)

    He, Xudong; Qiu, Jiang; Zhang, Qiao; Du, Huijie; Zhao, Hongming

    2013-10-01

    The resistor array plays a vital role in the emulation of IR control and guidance systems. However, its serious nonuniformity limits its range of application, so nonuniformity correction (NUC) is necessary to obtain a usable IR image. The traditional methods, sparse grid and flood, take only the array's nonuniformity into account. In this paper we present an improved sparse-grid method that considers the whole system affecting the array's nonuniformity, by dividing the NUC process into gray levels. Within each gray level, two or more points are taken to calculate the nonuniformity of each block into which the array is divided before correction. This yields a set of characteristic curves to which curve fitting is applied, and the fitted curves are used to correct the nonuniformity. Finally, experiments on a number of images show that the residual nonuniformity associated with random noise is about 0.2% after correction.
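
The blockwise two-point correction at a single gray level can be sketched as follows. The linear gain/offset model and the simulated element responses are assumptions for illustration; the paper's method additionally fits characteristic curves across gray levels.

```python
# Illustrative two-point nonuniformity correction at one gray level, in the
# spirit of the blockwise sparse-grid method above. The linear response model
# and the simulated per-element errors are invented for this sketch.

def two_point_nuc(raw_low, raw_high, target_low, target_high):
    """Per-element gain/offset mapping measured low/high responses onto
    the desired uniform output levels."""
    gain = (target_high - target_low) / (raw_high - raw_low)
    offset = target_low - gain * raw_low
    return gain, offset

# Simulated responses of three resistor elements with different errors.
elements = [(1.10, -3.0), (0.95, 4.0), (1.02, -1.0)]   # (true gain, true offset)
drive_low, drive_high, drive_mid = 50.0, 200.0, 125.0

corrected = []
for g_true, o_true in elements:
    raw_low = g_true * drive_low + o_true
    raw_high = g_true * drive_high + o_true
    g, o = two_point_nuc(raw_low, raw_high, drive_low, drive_high)
    corrected.append(g * (g_true * drive_mid + o_true) + o)

print(corrected)   # all ~125.0: the elements respond uniformly after correction
```

Because the model here is exactly linear, an intermediate drive level comes out uniform; real arrays are nonlinear, which is why the paper fits curves per gray level instead.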

  8. Direct care workers' perceptions of job satisfaction following implementation of work-based learning.

    PubMed

    Lopez, Cynthia; White, Diana L; Carder, Paula C

    2014-02-01

    The purpose of this study was to understand the impact of a work-based learning program on the work lives of Direct Care Workers (DCWs) at assisted living (AL) residences. The research questions were addressed using focus group data collected as part of a larger evaluation of a work-based learning (WBL) program called Jobs to Careers. The theoretical perspective of symbolic interactionism was used to frame the qualitative data analysis. Results indicated that the WBL program impacted DCWs' job satisfaction through the program curriculum and design and through three primary categories: relational aspects of work, worker identity, and finding time. This article presents a conceptual model for understanding how these categories are interrelated and the implications for WBL programs. Job satisfaction is an important topic that has been linked to quality of care and reduced turnover in long-term care settings. PMID:24652945

  9. Micro-grid platform based on NODE.JS architecture, implemented in electrical network instrumentation

    NASA Astrophysics Data System (ADS)

    Duque, M.; Cando, E.; Aguinaga, A.; Llulluna, F.; Jara, N.; Moreno, T.

    2016-05-01

    In this paper we propose a theory about the impact of microgrid-based systems in non-industrialized countries, aimed at improving energy exploitation through alternative methods of clean, renewable energy generation, and we describe an application built on the NodeJS, Django and IOJS technologies to manage the behavior of the micro-grids. Micro-grids allow energy flow to be managed optimally by injecting electricity directly into the small urban cells of the electrical network, at low cost and with high availability. Unlike conventional systems, micro-grids can communicate with each other to carry energy to the places with higher demand at the right moments. The system does not require energy storage, so costs are lower than for conventional systems such as fuel cells or solar panels; and although micro-grids are independent systems, they are not isolated. The impact of this analysis is an improvement of the electrical network without requiring more control than an intelligent network (smart grid) provides; this leads to roughly a 20% increase in energy use in a given network, which suggests other sources of energy generation are available. For today's needs, methods must be standardized and ready to support future technologies, and the best options are smart grids and micro-grids.
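
The inter-cell energy exchange described above can be illustrated with a toy matching procedure: cells report a surplus or deficit, and surplus cells transfer to deficit cells without central storage. This is plain Python rather than the NodeJS stack the platform uses; the cell names and the greedy matching rule are invented.

```python
# Toy sketch of inter-cell balancing in a micro-grid: surplus cells feed
# deficit cells directly, with no storage. Purely illustrative.

def balance(cells):
    """cells: dict name -> net power (kW); + = surplus, - = deficit.
    Returns a list of (from, to, amount) transfers."""
    surplus = sorted((p, n) for n, p in cells.items() if p > 0)
    deficit = sorted((-p, n) for n, p in cells.items() if p < 0)
    transfers = []
    while surplus and deficit:
        s_amt, s_name = surplus.pop()
        d_amt, d_name = deficit.pop()
        moved = min(s_amt, d_amt)
        transfers.append((s_name, d_name, moved))
        if s_amt > moved:                      # leftover surplus goes back
            surplus.append((s_amt - moved, s_name))
        if d_amt > moved:                      # remaining deficit goes back
            deficit.append((d_amt - moved, d_name))
    return transfers

print(balance({"cellA": 3.0, "cellB": -2.0, "cellC": -1.0}))
# [('cellA', 'cellB', 2.0), ('cellA', 'cellC', 1.0)]
```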

  10. Smart Energy Management and Control for Fuel Cell Based Micro-Grid Connected Neighborhoods

    SciTech Connect

    Dr. Mohammad S. Alam

    2006-03-15

    Fuel cell power generation promises to be an efficient, pollution-free, reliable power source in both large scale and small scale, remote applications. DOE formed the Solid State Energy Conversion Alliance with the intention of breaking one of the last barriers remaining for cost effective fuel cell power generation. The Alliance’s goal is to produce a core solid-state fuel cell module at a cost of no more than $400 per kilowatt and ready for commercial application by 2010. With their inherently high, 60-70% conversion efficiencies, significantly reduced carbon dioxide emissions, and negligible emissions of other pollutants, fuel cells will be the obvious choice for a broad variety of commercial and residential applications when their cost effectiveness is improved. In a research program funded by the Department of Energy, the research team has been investigating smart fuel cell-operated residential micro-grid communities. This research has focused on using smart control systems in conjunction with fuel cell power plants, with the goal to reduce energy consumption, reduce demand peaks and still meet the energy requirements of any household in a micro-grid community environment. In Phases I and II, a SEMaC was developed and extended to a micro-grid community. In addition, an optimal configuration was determined for a single fuel cell power plant supplying power to a ten-home micro-grid community. In Phase III, the plan is to expand this work to fuel cell based micro-grid connected neighborhoods (mini-grid). The economic implications of hydrogen cogeneration will be investigated. These efforts are consistent with DOE’s mission to decentralize domestic electric power generation and to accelerate the onset of the hydrogen economy. A major challenge facing the routine implementation and use of a fuel cell based mini-grid is the varying electrical demand of the individual micro-grids, and, therefore, analyzing these issues is vital. Efforts are needed to determine

  11. Gratia: New Challenges in Grid Accounting

    NASA Astrophysics Data System (ADS)

    Canal, Philippe

    2011-12-01

    Gratia originated as an accounting system for batch systems and Linux process accounting. In production since 2006 at FNAL, it was adopted by the Open Science Grid as a distributed, grid-wide accounting system in 2007. Since adoption Gratia's next challenge has been to adapt to an explosive increase in data volume and to handle several new categories of accounting data. Gratia now accounts for regular grid jobs, file transfers, glide-in jobs, and the state of grid services. We show that Gratia gives access to a thorough picture of the OSG and discuss the complexity caused by newer grid techniques such as pilot jobs, job forwarding, and backfill.

  12. A grid-based infrastructure for ecological forecasting of rice land Anopheles arabiensis aquatic larval habitats

    PubMed Central

    Jacob, Benjamin G; Muturi, Ephantus J; Funes, Jose E; Shililu, Josephat I; Githure, John I; Kakoma, Ibulaimu I; Novak, Robert J

    2006-01-01

    Background: For remote identification of mosquito habitats, the first step is often to construct a discrete tessellation of the region. In applications where complex geometries do not need to be represented, such as urban habitats, regular orthogonal grids are constructed in GIS and overlaid on satellite images. However, rice-land vector mosquito aquatic habitats are rarely uniform in space or character. An orthogonal grid overlaid on satellite data of rice-land areas may fail to capture the physical or man-made structures (i.e., paddies, canals, and berms) at these habitats. Unlike an orthogonal grid, digitizing each habitat converts a polygon into a grid cell, which may conform to rice-land habitat boundaries. This research illustrates the application of a random sampling methodology comparing an orthogonal and a digitized grid for assessment of rice-land habitats. Methods: A land cover map was generated in Erdas Imagine V8.7® using QuickBird data acquired July 2005 for three villages within the Mwea Rice Scheme, Kenya. An orthogonal grid was overlaid on the images. In the digitized dataset, each habitat was traced in Arc Info 9.1®. All habitats in each study site were stratified based on levels of rice stage. Results: The orthogonal grid did not identify any habitat, while the digitized grid identified every habitat by strata and study site. An analysis of variance test indicated that the relative abundance of An. arabiensis at the three study sites was significantly higher during the post-transplanting stage of the rice cycle. Conclusion: Regions of higher Anopheles abundance based on digitized grid-cell information probably reflect underlying differences in the abundance of mosquito habitats in a rice-land environment, which is where limited control resources could be concentrated to reduce vector abundance. PMID:17062142

  13. Grid Based Techniques for Visualization in the Geosciences

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Sowell, B.; Lu, Z.; Erlebacher, G.; Yuen, D. A.

    2005-12-01

    As experiments and simulations in the geosciences grow larger and more complex, it has become increasingly important to develop methods of processing and sharing data in a distributed computing environment. In recent years, the scientific community has shown growing interest in exploiting the powerful assets of Grid computing to this end, but the complexity of the Grid has prevented many scientists from converting their applications and embracing this possibility. We are investigating methods for the development and deployment of data extraction and visualization services across the NaradaBrokering [1] Grid infrastructure. With the help of gSOAP [2], we have developed a series of C/C++ services for wavelet transforms, earthquake clustering, and basic 3D visualization. We will demonstrate the deployment and collaboration of these services across a network of NaradaBrokering nodes, concentrating on the challenges faced in inter-service communication, service/client division, and particularly web service visualization. Renderings in a distributed environment can be handled in three ways: 1) the data extraction service computes and renders everything locally and sends results to the client as a bitmap image, 2) the data extraction service sends results to a separate visualization service for rendering, which in turn sends results to the client as a bitmap image, and 3) the client itself renders images locally. The first two options allow for large visualizations in a distributed and collaborative environment, but limit the interactivity of the client. To address this problem we are investigating the advantages of the JOGL OpenGL library [3] for performing renderings on the client side, using the client's hardware for increased performance. We will present benchmarking results to ascertain the relative advantage of the three aforementioned techniques as a function of data size and visualization task.
[1] The NaradaBrokering Project, http://www.naradabrokering.org
[2] gSOAP: C/C++ Web

  14. Applying for Jobs Online: Examining the Legality of Internet-based Application Forms.

    ERIC Educational Resources Information Center

    Wallace, J. Craig; Tye, Mary G.; Vodanovich, Stephen J.

    2000-01-01

    Results of an examination of 41 Internet-based state job application forms indicated that 97.5 percent possessed at least one inadvisable question especially related to past salary, age, and driver's license information. States with larger populations had more problematic questions than less populated states. (JOW)

  15. Community Based Organizations. The Challenges of the Job Training Partnership Act.

    ERIC Educational Resources Information Center

    Brown, Larry

    The advent of the Job Training Partnership Act (JTPA) has not been favorable to community-based organizations (CBOs) serving unemployed young people. The overall decline in the amount of money available for employment training is one reason for the reduction in services, but it is not the sole reason. The transition to the new act itself is also…

  16. Organizational Culture's Role in the Relationship between Power Bases and Job Stress

    ERIC Educational Resources Information Center

    Erkutlu, Hakan; Chafra, Jamel; Bumin, Birol

    2011-01-01

    The purpose of this research is to examine the moderating role of organizational culture in the relationship between leader's power bases and subordinate's job stress. Totally 622 lecturers and their superiors (deans) from 13 state universities chosen by random method in Ankara, Istanbul, Izmir, Antalya, Samsun, Erzurum and Gaziantep in 2008-2009…

  17. Examining Reactions to Employer Information Using a Simulated Web-Based Job Fair

    ERIC Educational Resources Information Center

    Highhouse, Scott; Stanton, Jeffrey M.; Reeve, Charlie L.

    2004-01-01

    The approach taken in the present investigation was to examine reactions to positive and negative employer information by eliciting online (i.e., moment-to-moment) reactions in a simulated computer-based job fair. Reactions to positive and negative information commonly reveal a negatively biased asymmetry. Positively biased asymmetries have been…

  18. Characteristics of the Community-Based Job Training Grant (CBJTG) Program

    ERIC Educational Resources Information Center

    Eyster, Lauren; Stanczyk, Alexandra; Nightingale, Demetra Smith; Martinson, Karin; Trutko, John

    2009-01-01

    This is the first report from the evaluation of the Community-Based Job Training Grants (CBJTG) being conducted by the Urban Institute, with its partners Johns Hopkins University and Capital Research Corporation. The CBJTG program focuses on building the capacity of community colleges to provide training to workers for high-growth, high-demand…

  19. Academic Job Placements in Library and Information Science Field: A Case Study Performed on ALISE Web-Based Postings

    ERIC Educational Resources Information Center

    Abouserie, Hossam Eldin Mohamed Refaat

    2010-01-01

    The study investigated and analyzed the state of academic web-based job announcements in the Library and Information Science field. The purpose of the study was to gain an in-depth understanding of the main characteristics and trends of the academic job market in the Library and Information Science field. The study focused on the web-based version of announcements as it was…

  20. Workforce Strategy Center Comments on President Bush's Call for Community-Based Job Training Grants. Policy Brief

    ERIC Educational Resources Information Center

    Workforce Strategy Center, 2004

    2004-01-01

    After careful review Workforce Strategy Center has articulated a policy assessment of President George W. Bush's Community-Based Job Training Grants initiative. WSC called the $250 million Community-Based Job Training Grants plan right on track by emphasizing: A Strong Role of Community Colleges in Workforce Development: All available research…

  1. Grid technology in tissue-based diagnosis: fundamentals and potential developments.

    PubMed

    Görtler, Jürgen; Berghoff, Martin; Kayser, Gian; Kayser, Klaus

    2006-01-01

    Tissue-based diagnosis still remains the most reliable and specific diagnostic medical procedure. It is involved in all technological developments in medicine and biology and incorporates tools of quite different applications. These range from molecular genetics to image acquisition and recognition algorithms (for image analysis), and from tissue culture to electronic communication services. Grid technology seems to possess all the features needed to efficiently target the specific constellation of an individual patient and obtain a detailed and accurate diagnosis, providing all relevant information and references. Grid technology can be briefly explained as so-called nodes that are linked together and share certain communication rules using open standards. The number of nodes can vary, as can their functionality, depending on the needs of a specific user at a given point in time. In the beginning of Grid technology, the nodes were used as supercomputers, combining and enhancing computational power. At present, at least five different Grid functions can be distinguished, comprising 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. The general structures and functions of a Grid are described, and their potential implementation in virtual tissue-based diagnosis is analyzed. As a result, Grid technology offers a new dimension for accessing distributed information and knowledge and for improving the quality of tissue-based diagnosis, and thereby medical quality.

  2. The Particle Physics Data Grid. Final Report

    SciTech Connect

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
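
The report's key conclusion is the need for a reliable, recoverable manager of large collections of interdependent jobs, as provided by DAGMan. A minimal sketch of that idea follows: run a job only after its parents have succeeded, and retry transient failures. This is an illustration of the concept, not DAGMan's actual interface; the job names and retry policy are invented.

```python
# Minimal sketch of managing interdependent jobs as a DAG, in the spirit of
# DAGMan: a job runs only after its parents succeed; failures are retried.

def run_dag(jobs, parents, run, retries=1):
    """jobs: list of names; parents: name -> set of prerequisite names;
    run: callable(name) -> bool (True on success). Returns execution order."""
    done, order = set(), []
    pending = list(jobs)
    while pending:
        progressed = False
        for name in list(pending):
            if parents.get(name, set()) <= done:   # all parents finished
                attempts = 0
                while not run(name):
                    attempts += 1
                    if attempts > retries:
                        raise RuntimeError(f"job {name} failed permanently")
                done.add(name)
                order.append(name)
                pending.remove(name)
                progressed = True
        if not progressed:
            raise RuntimeError("cycle or unsatisfiable dependency")
    return order

# Stage-in -> process -> stage-out, with one transient failure in "process".
fail_once = {"process": 1}
def run(name):
    if fail_once.get(name, 0) > 0:
        fail_once[name] -= 1
        return False
    return True

print(run_dag(["stage_out", "process", "stage_in"],
              {"process": {"stage_in"}, "stage_out": {"process"}}, run))
# ['stage_in', 'process', 'stage_out']
```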

  3. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    PubMed Central

    Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification. PMID:25152923
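
Scoring an attack route from per-hop vulnerability scores can be sketched as below. The paper combines a network vulnerability score with an end-to-end security score; the aggregation used here (average hop score damped by route length) and all hop values are assumed stand-ins, not the AVQS formula.

```python
# Schematic route scoring from per-hop vulnerability scores. The aggregation
# rule and the example scores are invented for illustration, not AVQS itself.
import math

def route_score(hop_scores):
    """hop_scores: CVSS-like values in [0, 10] for each hop on the route."""
    if not hop_scores:
        return 0.0
    # Assumed rule: average hop vulnerability damped by route length, so a
    # short direct route through a weak component scores as most vulnerable.
    avg = sum(hop_scores) / len(hop_scores)
    return round(avg / math.sqrt(len(hop_scores)), 2)

# Two candidate routes into an AMI head-end (hypothetical scores):
direct = [7.5]                # one weak meter-to-collector link
long_route = [4.0, 3.5, 5.0]  # three moderately hardened hops
print(route_score(direct), route_score(long_route))   # 7.5 2.41
```

The point the abstract makes survives even in this toy form: a score that ignores network connectivity (e.g. the maximum single-component score) would rank these routes differently from one that accounts for the route.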

  4. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    PubMed

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.

  6. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project addressing ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure for the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, while Grid-oriented technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing both the basic and the extended features of the two technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications across multiple fields but especially in Earth observation research. Because of the huge volumes of data available in the geospatial domain and the additional issues they introduce (data management, secure data transfer, data distribution and data computation), an infrastructure capable of managing all these problems becomes an important requirement. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for communication and message transfer based on certificates. This presentation analyses and discusses the most significant use cases for enabling the interoperability of OGC Web services with the Grid environment, and focuses on the description and implementation of the most promising one.
In these use cases we give special attention to issues such as the relations between computational grid and

  7. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large-scale applications in a heterogeneous distributed environment. It is likely that the large-scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data are in different unit systems, and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to perform poorly when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation, which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes, some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from the available data. Even if a code is to be developed from scratch, a modular approach will likely be followed, in that standard scientific packages will be used for the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues, there exist operational issues such as platform stability and resource management.

  8. Job Stress of School-Based Speech-Language Pathologists

    ERIC Educational Resources Information Center

    Harris, Stephanie Ferney; Prater, Mary Anne; Dyches, Tina Taylor; Heath, Melissa Allen

    2009-01-01

    Stress and burnout contribute significantly to the shortages of school-based speech-language pathologists (SLPs). At the request of the Utah State Office of Education, the researchers measured the stress levels of 97 school-based SLPs using the "Speech-Language Pathologist Stress Inventory." Results indicated that participants' emotional-fatigue…

  9. A Computer-Based, Interactive Videodisc Job Aid and Expert System for Electron Beam Lithography Integration and Diagnostic Procedures.

    ERIC Educational Resources Information Center

    Stevenson, Kimberly

    This master's thesis describes the development of an expert system and interactive videodisc computer-based instructional job aid used for assisting in the integration of electron beam lithography devices. Comparable to all comprehensive training, expert system and job aid development require a criterion-referenced systems approach treatment to…

  10. A Correlational Study of Telework Frequency, Information Communication Technology, and Job Satisfaction of Home-Based Teleworkers

    ERIC Educational Resources Information Center

    Webster-Trotman, Shana P.

    2010-01-01

    In 2008, 33.7 million Americans teleworked from home. The Telework Enhancement Act (S. 707) and the Telework Improvements Act (H.R. 1722) of 2009 were designed to increase the number of teleworkers. The research problem addressed was the lack of understanding of factors that influence home-based teleworkers' job satisfaction. Job dissatisfaction…

  11. Burnout in Medical Residents: A Study Based on the Job Demands-Resources Model

    PubMed Central

    2014-01-01

    Purpose. Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job. The purpose of our cross-sectional study was to estimate the burnout rates among medical residents in the largest Greek hospital in 2012 and identify factors associated with it, based on the job demands-resources model (JD-R). Method. Job demands were examined via a 17-item questionnaire assessing 4 characteristics (emotional demands, intellectual demands, workload, and home-work demands' interface) and job resources were measured via a 14-item questionnaire assessing 4 characteristics (autonomy, opportunities for professional development, support from colleagues, and supervisor's support). The Maslach Burnout Inventory (MBI) was used to measure burnout. Results. Of the 290 eligible residents, 90.7% responded. In total 14.4% of the residents were found to experience burnout. Multiple logistic regression analysis revealed that each increased point in the JD-R questionnaire score regarding home-work interface was associated with an increase in the odds of burnout by 25.5%. Conversely, each increased point for autonomy, opportunities in professional development, and each extra resident per specialist were associated with a decrease in the odds of burnout by 37.1%, 39.4%, and 59.0%, respectively. Conclusions. Burnout among medical residents is associated with home-work interface, autonomy, professional development, and resident to specialist ratio. PMID:25531003
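
The reported effect sizes are percentage changes in the odds of burnout; they convert directly to odds ratios and logistic-regression coefficients. A pure arithmetic illustration using the abstract's numbers:

```python
# Converting the reported percentage changes in odds into odds ratios (OR)
# and logistic coefficients: an x% change in odds means OR = 1 + x/100,
# and the coefficient is beta = ln(OR). Arithmetic illustration only.
import math

def odds_ratio(pct_change):
    return 1 + pct_change / 100.0

def beta(pct_change):
    return math.log(odds_ratio(pct_change))

print(round(odds_ratio(25.5), 3))    # 1.255  (home-work interface, per point)
print(round(odds_ratio(-37.1), 3))   # 0.629  (autonomy, per point)
print(round(beta(25.5), 3))          # 0.227
```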

  12. Multigrid-based grid-adaptive solution of the Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Michelsen, Jess

    A finite volume scheme for solution of the incompressible Navier-Stokes equations in two dimensions and axisymmetry is described. Solutions are obtained on nonorthogonal, solution adaptive BFC grids, based on the Brackbill-Saltzman generator. Adaptivity is achieved by the use of a single control function based on the local kinetic energy production. Nonstaggered allocation of pressure and Cartesian velocity components avoids the introduction of curvature terms associated with the use of a grid-direction vector-base. A special interpolation of the pressure correction equation in the SIMPLE algorithm ensures firm coupling between velocity and pressure field. Steady-state solutions are accelerated by a full approximation multigrid scheme working on the decoupled grid-flow problem, while an algebraic multigrid scheme is employed for the pressure correction equation.

  13. Microcontroller based spectrophotometer using compact disc as diffraction grid

    NASA Astrophysics Data System (ADS)

    Bano, Saleha; Altaf, Talat; Akbar, Sunila

    2010-12-01

    This paper describes the design and implementation of a portable, low-cost spectrophotometer. The device uses compact disc (CD) media as a diffraction grid and a 60 W bulb as a light source. It further employs a moving slit driven by a stepper motor to obtain monochromatic light, a photocell with spectral sensitivity in the visible region to determine the light intensity, a very high-gain amplifier, and an advanced virtual RISC (AVR) microcontroller, the ATmega32, as the control unit. The device was successfully applied to determine the absorbance and transmittance of KMnO4 and, with the help of a calibration curve, an unknown concentration of KMnO4. A commercial spectrophotometer was used for comparison; there are no significant differences between the absorbance and transmittance values estimated by the two instruments. Furthermore, good results are obtained at all visible wavelengths. The designed instrument therefore offers an economically feasible alternative for spectrophotometric sample analysis in small routine, research and teaching laboratories, because its components are cheap and easy to acquire.

  14. An Interoperable Grid Workflow Management System

    NASA Astrophysics Data System (ADS)

    Mirto, Maria; Passante, Marco; Epicoco, Italo; Aloisio, Giovanni

    A WorkFlow Management System (WFMS) is a fundamental component enabling the integration of data, applications and a wide set of project resources. Although a number of scientific WFMSs support this task, many analysis pipelines require large-scale Grid computing infrastructures to cope with their high compute and storage requirements. Such scientific workflows complicate the management of resources, especially when the resources are offered by several providers and managed by different Grid middleware, since resource access must be synchronised in advance to allow reliable workflow execution. Different types of Grid middleware, such as gLite, Unicore and Globus, are used around the world, and interoperability issues arise when applications involve two or more of them. In this paper we describe the ProGenGrid Workflow Management System, whose main goal is to provide interoperability among these different Grid middleware when executing workflows. It allows the composition of batch, parameter-sweep and MPI-based jobs. The ProGenGrid engine implements the logic to execute such jobs by using an OGF-compliant standard language, JSDL, which has been extended for this purpose. Currently, we are testing our system on some bioinformatics case studies in the International Laboratory of Bioinformatics (LIBI) Project (www.libi.it).

  15. Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY

    NASA Astrophysics Data System (ADS)

    Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan

    2014-06-01

    The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite middleware. In this framework, a monitoring system is designed for the H1 Experiment to identify and recognize within the GRID the most suitable resources for execution of CPU-intensive Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computing Elements (CEs), Storage Elements (SEs), WMS servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE, and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues, submitted through various WMSs as well as directly to the CREAM-CEs. Real H1 MC production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed, and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in the H1 MC production. The monitoring system allows for identification of problems in the GRID sites and reacts to them promptly (for example by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl with a few shell scripts. In addition to the test monitoring system we use information from real production jobs to monitor the availability and quality of the GRID resources. The monitoring tools register the number of job resubmissions, the percentage of failed and finished jobs relative to all jobs on the CEs and determine the average values of waiting and running time for the
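    The resource-selection step described above can be sketched in miniature. This is a hypothetical Python sketch, not the H1 monitoring code (the class and function names are illustrative): test-job outcomes are aggregated per CE, CEs with any failed test job are discarded, and the remaining CEs are ranked by average waiting time.

```python
from dataclasses import dataclass

@dataclass
class TestJobResult:
    """Outcome of one short test job on a Computing Element (CE) queue.
    All names here are illustrative, not the actual H1 monitoring code."""
    ce: str           # CE queue the job was submitted to
    submitted: float  # submission time (epoch seconds)
    started: float    # time the job left the queue and started running
    finished: float   # completion time
    succeeded: bool   # whether output retrieval and analysis succeeded

    @property
    def waiting_time(self) -> float:
        return self.started - self.submitted

def rank_ces(results, max_wait=3600.0):
    """Keep CEs whose test jobs all succeeded and whose average waiting
    time is below max_wait; return them sorted, shortest wait first."""
    by_ce = {}
    for r in results:
        by_ce.setdefault(r.ce, []).append(r)
    suitable = []
    for ce, rs in by_ce.items():
        if all(r.succeeded for r in rs):
            avg_wait = sum(r.waiting_time for r in rs) / len(rs)
            if avg_wait < max_wait:
                suitable.append((avg_wait, ce))
    return [ce for _, ce in sorted(suitable)]
```

    A ranking like this would feed the automatically generated configuration files mentioned above.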

  16. The Utility of Job Dimensions Based on Form B of the Position Analysis Questionnaire (PAQ) in a Job Component Validation Model. Report No. 5.

    ERIC Educational Resources Information Center

    Marquardt, Lloyd D.; McCormick, Ernest J.

    The study involved the use of a structured job analysis instrument called the Position Analysis Questionnaire (PAQ) as the direct basis for the establishment of the job component validity of aptitude tests (that is, a procedure for estimating the aptitude requirements for jobs strictly on the basis of job analysis data). The sample of jobs used…

  17. Design of a nonlinear backstepping control strategy of grid interconnected wind power system based PMSG

    NASA Astrophysics Data System (ADS)

    Errami, Y.; Obbadi, A.; Sahnoun, S.; Benhmida, M.; Ouassaid, M.; Maaroufi, M.

    2016-07-01

    This paper presents nonlinear backstepping control for a Wind Power Generation System (WPGS) based on a Permanent Magnet Synchronous Generator (PMSG) connected to the utility grid. The block diagram of the WPGS with the PMSG and the grid-side back-to-back converter is established in the dq frame of axes. This control scheme emphasises the regulation of the dc-link voltage and the control of the power factor under changing wind speed. Besides, the proposed control strategy for the WPGS provides Maximum Power Point Tracking (MPPT) and pitch control. The stability of the regulators is assured by employing Lyapunov analysis. The proposed control strategy has been validated by MATLAB simulations under varying wind velocity and grid fault conditions. In addition, a comparison of simulation results based on the proposed backstepping strategy and conventional Vector Control is provided.

  18. Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function

    NASA Astrophysics Data System (ADS)

    LIU, Liang; WEI, Ping; LIAO, Hong Shu

    Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional methods. However, the performance of compressive sensing (CS)-based estimation methods decreases when the true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, under the large-array condition, we search for an approximately convex range around the true DOAs within which the DML function is guaranteed to be convex. Based on this convexity, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for large arrays and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.

  19. A methodology toward manufacturing grid-based virtual enterprise operation platform

    NASA Astrophysics Data System (ADS)

    Tan, Wenan; Xu, Yicheng; Xu, Wei; Xu, Lida; Zhao, Xianhua; Wang, Li; Fu, Liuliu

    2010-08-01

    Virtual enterprises (VEs) have become one of the main forms of organisation in the manufacturing sector, through which consortium companies organise their manufacturing activities. To be competitive, a VE relies on the complementary core competences of its members through resource sharing and agile manufacturing capacity. Manufacturing grid (M-Grid) is a platform on which production resources can be shared. In this article, an M-Grid-based VE operation platform (MGVEOP) is presented, as it enables the sharing of production resources among geographically distributed enterprises. The performance management system of the MGVEOP is based on the balanced scorecard and has the capacity for self-learning. The study shows that an MGVEOP can make a semi-automated process possible for a VE, and the proposed MGVEOP is efficient and agile.

  20. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.

  1. Global 3D-Grids Based on Great Circle Arc QTM Sphere Octree and Its Application

    NASA Astrophysics Data System (ADS)

    Wang, J. X.; Li, Y. H.; Zheng, Y. S.; Liu, J. N.

    2013-10-01

    With the development of computers, network communications, scientific computing, mapping, remote sensing and geographic information technologies, Discrete Global Grids (DGGs) and the Earth System Spatial Grid (ESSG) have become integrated spatial data models for large- and global-scale problems and complex geo-computation. This paper first discusses the properties and characteristics of global spatial data. It then introduces the grid division system based on the great circle arc QTM octree and compares this scheme with the degenerate octree scheme. Finally, it describes the application of the scheme to modelling land-surface, underground and aerial geographic entities. The study suggests that the grid division system based on the great circle arc QTM octree has the potential to integrate spatial data across the different layers of geospace, and that it has broad application prospects in complex large-scale geographic computing.

  2. An adaptive grid for graph-based segmentation in retinal OCT

    PubMed Central

    Lang, Andrew; Carass, Aaron; Calabresi, Peter A.; Ying, Howard S.; Prince, Jerry L.

    2016-01-01

    Graph-based methods for retinal layer segmentation have proven popular due to their efficiency and accuracy. These methods build a graph with nodes at each voxel location and use edges connecting nodes to encode the hard constraints on each layer's thickness and smoothness. In this work, we explore deforming the regular voxel grid to allow adjacent vertices in the graph to more closely follow the natural curvature of the retina. This deformed grid is constructed by fixing node locations based on a regression model of each layer's thickness relative to the overall retinal thickness, thus generating a subject-specific grid. Graph vertices need not lie at voxel locations, which allows control over the resolution that the graph represents. By incorporating soft constraints between adjacent nodes, segmentation on this grid favors smoothly varying surfaces consistent with the shape of the retina. Our final segmentation method then follows our previous work: boundary probabilities are estimated using a random forest classifier, followed by an optimal graph search algorithm on the new adaptive grid to produce a final segmentation. Our method is shown to produce a more consistent segmentation with an overall accuracy of 3.38 μm across all boundaries.
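    The core of an optimal graph search for a layer boundary can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a hypothetical dynamic-programming analogue for a single boundary on a 2D cost image, where the hard smoothness constraint (adjacent columns may differ by at most max_jump rows) plays the role of the graph edges.

```python
def segment_boundary(cost, max_jump=1):
    """Find one row index per column such that the summed cost is minimal
    and adjacent columns differ by at most max_jump rows (the hard
    smoothness constraint that graph edges encode)."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    # acc[r][c]: minimal total cost of a boundary ending at row r, column c
    acc = [[cost[r][0]] + [0.0] * (cols - 1) for r in range(rows)]
    back = [[0] * cols for _ in range(rows)]
    for c in range(1, cols):
        for r in range(rows):
            best, arg = INF, r
            for dr in range(-max_jump, max_jump + 1):
                p = r + dr
                if 0 <= p < rows and acc[p][c - 1] < best:
                    best, arg = acc[p][c - 1], p
            acc[r][c] = cost[r][c] + best
            back[r][c] = arg
    # trace back from the cheapest endpoint in the last column
    r = min(range(rows), key=lambda r: acc[r][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

    In the paper's 3D setting the same search runs over surfaces on the deformed, subject-specific grid rather than over a single 2D path.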

  3. Are health workers motivated by income? Job motivation of Cambodian primary health workers implementing performance-based financing

    PubMed Central

    Khim, Keovathanak

    2016-01-01

    Background Financial incentives are widely used in performance-based financing (PBF) schemes, but their contribution to health workers’ incomes and job motivation is poorly understood. Cambodia undertook health sector reform from the middle of 2009 and PBF was employed as a part of the reform process. Objective This study examines job motivation for primary health workers (PHWs) under PBF reform in Cambodia and assesses the relationship between job motivation and income. Design A cross-sectional self-administered survey was conducted on 266 PHWs, from 54 health centers in the 15 districts involved in the reform. The health workers were asked to report all sources of income from public sector jobs and provide answers to 20 items related to job motivation. Factor analysis was conducted to identify the latent variables of job motivation. Factors associated with motivation were identified through multivariable regression. Results PHWs reported multiple sources of income and an average total income of US$190 per month. Financial incentives under the PBF scheme account for 42% of the average total income. PHWs had an index motivation score of 4.9 (on a scale from one to six), suggesting they had generally high job motivation that was related to a sense of community service, respect, and job benefits. Regression analysis indicated that income and the perception of a fair distribution of incentives were both statistically significant in association with higher job motivation scores. Conclusions Financial incentives used in the reform formed a significant part of health workers’ income and influenced their job motivation. Improving job motivation requires fixing payment mechanisms and increasing the size of incentives. PBF is more likely to succeed when income, training needs, and the desire for a sense of community service are addressed and institutionalized within the health system. PMID:27319575

  4. A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids.

    PubMed

    Boschitsch, Alexander H; Fenley, Marcia O

    2011-05-10

    An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent - analytical solutions are available for this case, thus allowing rigorous
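    The hierarchical grid construction can be illustrated with a minimal octree sketch. The following is a simplified, hypothetical Python example (not the ACG solver's code), in which cells containing point charges are recursively subdivided down to a minimum size, yielding a grid that is fine near charge sites and coarse in the bulk solvent.

```python
class Cell:
    """A cubic cell of the hierarchical grid (an octree node)."""
    def __init__(self, center, size):
        self.center = center   # (x, y, z) of the cell center
        self.size = size       # edge length
        self.children = []     # empty for leaf cells

def contains(cell, q):
    """True if point charge q = (x, y, z) lies inside the cell."""
    return all(abs(q[i] - cell.center[i]) <= cell.size / 2 for i in range(3))

def refine(cell, charges, min_size):
    """Recursively split every cell containing a charge until min_size,
    so the grid is fine near charge sites and coarse elsewhere."""
    if cell.size <= min_size or not any(contains(cell, q) for q in charges):
        return
    h = cell.size / 4
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child = Cell((cell.center[0] + dx,
                              cell.center[1] + dy,
                              cell.center[2] + dz), cell.size / 2)
                cell.children.append(child)
                refine(child, charges, min_size)

def leaves(cell):
    """Collect the leaf cells, i.e. the actual grid cells."""
    if not cell.children:
        return [cell]
    return [leaf for child in cell.children for leaf in leaves(child)]
```

    A uniform lattice at the finest resolution would need 64 cells in this toy case; the adaptive refinement produces far fewer, which is the effect the abstract describes at much larger scale.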

  5. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  6. Elliptic Curve Cryptography-Based Authentication with Identity Protection for Smart Grids.

    PubMed

    Zhang, Liping; Tang, Shanyu; Luo, He

    2016-01-01

    In a smart grid, the power service provider enables the expected power generation amount to be measured according to current power consumption, thus stabilizing the power system. However, the data transmitted over smart grids are not protected and thus suffer from several types of security threats and attacks. A robust and efficient authentication protocol is therefore needed to strengthen the security of smart grid networks. As the Supervisory Control and Data Acquisition system provides the security protection between the control center and substations in most smart grid environments, we focus on how to secure the communications between the substations and smart appliances. Existing security approaches fail to address the performance-security balance. In this study, we suggest a mitigation authentication protocol based on Elliptic Curve Cryptography with privacy protection by using a tamper-resistant device at the smart appliance side to achieve a delicate balance between the performance and security of smart grids. The proposed protocol provides some attractive features such as identity protection, mutual authentication and key agreement. Finally, we demonstrate the completeness of the proposed protocol using the Gong-Needham-Yahalom logic. PMID:27007951
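    The key-agreement feature rests on the standard elliptic-curve group operations. As an illustration only (a toy classroom curve over F_17, far too small for real security, and not the protocol proposed in the paper), an ECDH-style shared secret can be computed as follows.

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17 with generator G = (5, 1);
# this is a textbook example, NOT a curve for real cryptography.
P, A = 17, 2
G = (5, 1)

def add(p, q):
    """Elliptic-curve point addition; None is the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p == q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, p):
    """Scalar multiplication k*p by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = add(result, p)
        p = add(p, p)
        k >>= 1
    return result

# ECDH-style key agreement: both sides derive the same shared point.
alice_priv, bob_priv = 3, 7
alice_pub, bob_pub = mul(alice_priv, G), mul(bob_priv, G)
shared = mul(alice_priv, bob_pub)   # equals mul(bob_priv, alice_pub)
```

    Real smart-grid deployments would use standardized curves and authenticated exchanges; this sketch only shows why both parties arrive at the same point.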

  8. Information Security Risk Assessment of Smart Grid Based on Absorbing Markov Chain and SPA

    NASA Astrophysics Data System (ADS)

    Jianye, Zhang; Qinshun, Zeng; Yiyang, Song; Cunbin, Li

    2014-12-01

    To assess and prevent smart grid information security risks more effectively, this paper provides a quantitative risk-index calculation method based on an absorbing Markov chain, overcoming the deficiencies of earlier studies, in which the links between system components were not taken into consideration and evaluation was mostly static. The method avoids the shortcomings of traditional expert scoring, with its significant subjective factors, and also considers the links between information system components, which makes the risk index system closer to reality. A smart grid information security risk assessment model was then established on the basis of set pair analysis (SPA) improved by the Markov chain. Using the identity, discrepancy and contradiction of the connection degree to dynamically reflect the trend of smart grid information security risk, and combining this with the Markov chain to calculate the connection degree of the next period, the model implements smart grid information security risk assessment comprehensively and dynamically. Finally, this paper shows that the established model is scientific, effective and feasible for dynamically evaluating smart grid information security risks.
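    The absorbing-Markov-chain ingredient of such a risk index can be made concrete. In the sketch below (a generic textbook computation, not the paper's model), Q holds transition probabilities among transient risk states and R the transitions into absorbing states; the fundamental matrix N = (I - Q)^(-1) then gives the absorption probabilities B = NR.

```python
from fractions import Fraction as F

def mat_inv(M):
    """Invert a square matrix exactly via Gauss-Jordan elimination."""
    n = len(M)
    A = [list(row) + [F(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [x / d for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def mat_mul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def absorption_probabilities(Q, R):
    """B = N R with fundamental matrix N = (I - Q)^(-1):
    B[i][k] is the probability that a chain started in transient
    state i is eventually absorbed in absorbing state k."""
    n = len(Q)
    I_minus_Q = [[F(int(i == j)) - Q[i][j] for j in range(n)]
                 for i in range(n)]
    return mat_mul(mat_inv(I_minus_Q), R)
```

    Each row of B sums to one, as every trajectory is eventually absorbed; a risk index can then weight the absorbing (outcome) states by severity.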

  9. Elliptic Curve Cryptography-Based Authentication with Identity Protection for Smart Grids

    PubMed Central

    Zhang, Liping; Tang, Shanyu; Luo, He

    2016-01-01

    In a smart grid, the power service provider enables the expected power generation amount to be measured according to current power consumption, thus stabilizing the power system. However, the data transmitted over smart grids are not protected and thus suffer from several types of security threats and attacks. A robust and efficient authentication protocol is therefore needed to strengthen the security of smart grid networks. As the Supervisory Control and Data Acquisition system provides the security protection between the control center and substations in most smart grid environments, we focus on how to secure the communications between the substations and smart appliances. Existing security approaches fail to address the performance-security balance. In this study, we suggest a mitigation authentication protocol based on Elliptic Curve Cryptography with privacy protection by using a tamper-resistant device at the smart appliance side to achieve a delicate balance between the performance and security of smart grids. The proposed protocol provides some attractive features such as identity protection, mutual authentication and key agreement. Finally, we demonstrate the completeness of the proposed protocol using the Gong-Needham-Yahalom logic. PMID:27007951

  10. Cygrid: A fast Cython-powered convolution-based gridding module for Python

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

    Context. Data gridding is a common task in astronomy and many other science disciplines. It refers to the resampling of irregularly sampled data to a regular grid. Aims: We present cygrid, a library module for the general-purpose programming language Python. Cygrid can be used to resample data to any collection of target coordinates, although its typical application involves FITS maps or data cubes. The FITS world coordinate system standard is supported. Methods: The regridding algorithm is based on the convolution of the original samples with a kernel of arbitrary shape. We introduce a lookup table scheme that allows us to parallelize the gridding and combine it with the HEALPix tessellation of the sphere for fast neighbor searches. Results: We show that for n input data points, cygrid's runtime scales between O(n) and O(n log n), and we analyze the performance gain that is achieved using multiple CPU cores. We also compare the gridding speed with other techniques, such as nearest-neighbor, and linear and cubic spline interpolation. Conclusions: Cygrid is a very fast and versatile gridding library that significantly outperforms other third-party Python modules, such as the linear and cubic spline interpolation provided by SciPy. https://github.com/bwinkel/cygrid
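    The principle of convolution-based gridding can be conveyed with a short pure-Python sketch. This is only an illustration, not cygrid's code (cygrid is Cython-based and uses HEALPix lookup tables for speed): each output cell accumulates kernel-weighted input samples and is normalized by the total weight.

```python
import math

def grid_map(samples, xs, ys, sigma, truncate=3.0):
    """Convolution-based gridding with a truncated Gaussian kernel:
    each output cell (x, y) accumulates kernel-weighted input samples
    (sx, sy, value) and is normalized by the summed weights.  Cells
    with no sample inside the kernel support stay None."""
    r2_max = (truncate * sigma) ** 2
    out = [[None] * len(xs) for _ in ys]
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            wsum, vsum = 0.0, 0.0
            for sx, sy, value in samples:
                d2 = (sx - x) ** 2 + (sy - y) ** 2
                if d2 <= r2_max:
                    w = math.exp(-d2 / (2.0 * sigma ** 2))
                    wsum += w
                    vsum += w * value
            if wsum > 0.0:
                out[j][i] = vsum / wsum
    return out
```

    The brute-force loop over all samples per cell is what the lookup table scheme described above avoids, reducing the cost from O(n * cells) toward O(n) or O(n log n).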

  11. HPM-Based Dynamic Sparse Grid Approach for Perona-Malik Equation

    PubMed Central

    Mei, Shu-Li; Zhu, De-Hai

    2014-01-01

    The Perona-Malik equation is a famous edge-preserving image denoising model, represented as a nonlinear 2-dimensional partial differential equation. Based on the homotopy perturbation method (HPM) and multiscale interpolation theory, a dynamic sparse grid method for the Perona-Malik equation was constructed in this paper. Compared with traditional multiscale numerical techniques, the proposed method is independent of the basis function. In this method, a dynamic choice scheme for the external grid points is proposed to eliminate the artifacts introduced by the partitioning technique. In order to decrease the computation introduced by the change of the external grid points, the Newton interpolation technique is employed instead of the traditional Lagrange interpolation operator, and the condition number of the discretized matrix of the differential equations is taken into account in the choice of the external grid points. Using the new numerical scheme, the time complexity of the sparse grid method for image denoising is decreased from O(4^(3J)) to O(4^(J+2j)), (j ≪ J). The experimental results show that the dynamic choice scheme for the external grid points can eliminate the boundary effect effectively and that efficiency is also greatly improved compared with classical interval wavelet numerical methods. PMID:25050394
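    For reference, the Perona-Malik diffusion that such schemes discretize can be written as a single explicit time step on a pixel grid. The sketch below is a plain finite-difference illustration (not the HPM/sparse-grid method of the paper), using the common edge-stopping function g(s) = exp(-(s/k)^2).

```python
import math

def perona_malik_step(u, k=2.0, dt=0.2):
    """One explicit time step of Perona-Malik diffusion
    u_t = div(g(|grad u|) grad u) with g(s) = exp(-(s/k)^2),
    so smoothing is suppressed across strong edges.  Boundary
    pixels are left unchanged for simplicity."""
    rows, cols = len(u), len(u[0])
    g = lambda s: math.exp(-(s / k) ** 2)
    out = [row[:] for row in u]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # finite differences toward the four neighbors
            dn = u[i - 1][j] - u[i][j]
            ds = u[i + 1][j] - u[i][j]
            de = u[i][j + 1] - u[i][j]
            dw = u[i][j - 1] - u[i][j]
            out[i][j] = u[i][j] + dt * (g(abs(dn)) * dn + g(abs(ds)) * ds
                                        + g(abs(de)) * de + g(abs(dw)) * dw)
    return out
```

    Small intensity differences (noise) diffuse quickly while large ones (edges) are preserved, which is what makes the model edge-preserving.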

  12. The design and implementation of a remote sensing image processing system based on grid middleware

    NASA Astrophysics Data System (ADS)

    Zhong, Liang; Ma, Hongchao; Xu, Honggen; Ding, Yi

    2009-10-01

    In this article, a remote sensing image processing system is established to address the significant scientific problem of processing and distributing massive Earth-observation data quantitatively, intelligently and with high efficiency under the Condor environment. This system includes remote task submission, Grid middleware for mass image processing, and quick distribution of the remote-sensing images. The application of this Grid-based system shows it to be an effective way to solve the present problems of fast processing, quick distribution and sharing of massive remote-sensing images.

  13. Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh

    2001-01-01

    A parallel version of an unstructured-grid based Navier-Stokes solver, USM3Dns, previously developed for efficient operation on a variety of parallel computers, has been enhanced to incorporate upgrades made to the serial version. The resultant parallel code has been extensively tested on a variety of problems of aerospace interest and on two sets of parallel computers to understand and document its characteristics. An innovative grid renumbering construct and use of non-blocking communication are shown to produce superlinear computing performance. Preliminary results from parallelization of a recently introduced "porous surface" boundary condition are also presented.

  14. Implementation of fuzzy-sliding mode based control of a grid connected photovoltaic system.

    PubMed

    Menadi, Abdelkrim; Abdeddaim, Sabrina; Ghamri, Ahmed; Betka, Achour

    2015-09-01

    The present work describes the optimal operation of a small-scale photovoltaic system connected to a micro-grid, based on both sliding mode and fuzzy logic control. Real-time implementation is done through a dSPACE 1104 single board, controlling a boost chopper on the PV array side and a voltage source inverter (VSI) on the grid side. The sliding mode controller permanently tracks the maximum power of the PV array regardless of atmospheric condition variations, while the fuzzy logic controller (FLC) regulates the DC-link voltage and, via current control of the VSI, ensures a quasi-total transfer of the extracted PV power to the grid under unity power factor operation. Simulation results, carried out with the Matlab-Simulink package, were confirmed by experiment, showing the effectiveness of the proposed control techniques. PMID:26243440

  15. An unstructured grid, three-dimensional model based on the shallow water equations

    USGS Publications Warehouse

    Casulli, V.; Walters, R.A.

    2000-01-01

    A semi-implicit finite difference model based on the three-dimensional shallow water equations is modified to use unstructured grids. There are obvious advantages in using unstructured grids in problems with a complicated geometry. In this development, the concept of unstructured orthogonal grids is introduced and applied to this model. The governing differential equations are discretized by means of a semi-implicit algorithm that is robust, stable and very efficient. The resulting model is relatively simple, conserves mass, can fit complicated boundaries and yet is sufficiently flexible to permit local mesh refinements in areas of interest. Moreover, the simulation of the flooding and drying is included in a natural and straightforward manner. These features are illustrated by a test case for studies of convergence rates and by examples of flooding on a river plain and flow in a shallow estuary. Copyright © 2000 John Wiley & Sons, Ltd.

  16. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
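    The probability-based idea of weighting operating cost over renewable-output scenarios can be shown in miniature. The toy example below is purely illustrative (names, costs and scenarios are invented; real market management formulations are large stochastic programs): it picks the conventional commitment that minimizes expected cost over probability-weighted wind scenarios, with a penalty for unserved demand.

```python
def expected_cost(g, demand, scenarios, c_gen=30.0, c_short=100.0):
    """Expected cost of committing g MW of conventional generation:
    generation cost plus a penalty for unserved demand, averaged over
    probability-weighted wind scenarios (prob, wind_mw)."""
    total = 0.0
    for prob, wind in scenarios:
        shortfall = max(0.0, demand - g - wind)
        total += prob * (c_gen * g + c_short * shortfall)
    return total

def best_commitment(demand, scenarios, step=1.0):
    """Brute-force search over commitment levels for the minimum
    expected cost (a stand-in for the stochastic optimization)."""
    candidates = [i * step for i in range(int(demand / step) + 1)]
    return min(candidates, key=lambda g: expected_cost(g, demand, scenarios))
```

    With likely wind available, the expected-cost optimum commits less conventional generation than a deterministic worst-case plan would, which is the ratepayer saving the abstract alludes to.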

  17. Implementation of fuzzy-sliding mode based control of a grid connected photovoltaic system.

    PubMed

    Menadi, Abdelkrim; Abdeddaim, Sabrina; Ghamri, Ahmed; Betka, Achour

    2015-09-01

    The present work describes the optimal operation of a small-scale photovoltaic system connected to a micro-grid, based on both sliding mode and fuzzy logic control. Real-time implementation is done through a dSPACE 1104 single board, controlling a boost chopper on the PV array side and a voltage source inverter (VSI) on the grid side. The sliding mode controller permanently tracks the maximum power of the PV array regardless of atmospheric condition variations, while the fuzzy logic controller (FLC) regulates the DC-link voltage and, via current control of the VSI, ensures a quasi-total transfer of the extracted PV power to the grid under unity power factor operation. Simulation results, carried out with the Matlab-Simulink package, were confirmed by experiment, showing the effectiveness of the proposed control techniques.
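    The maximum-power-point tracking idea can be roughed out as a sign-based hill climb driving the surface S = dP/dV toward zero. This is far simpler than the paper's sliding mode controller, and the module parameters below are invented.

```python
import math

def pv_power(v, il=8.0, io=1e-9, n=1.3, vt=0.0257, cells=36):
    """Power of an idealized PV module: single-diode model without
    series/shunt resistance (illustrative parameters, not the paper's)."""
    i = il - io * math.expm1(v / (n * vt * cells))
    return v * max(i, 0.0)

def mppt_track(v0=10.0, step=0.05, iters=400):
    """Crude sign-based hill climb on S = dP/dV, in the spirit of (but much
    simpler than) a sliding mode MPPT law: step the operating voltage in
    the direction that increases power."""
    v = v0
    for _ in range(iters):
        dpdv = (pv_power(v + 1e-4) - pv_power(v - 1e-4)) / 2e-4
        v += step * (1 if dpdv > 0 else -1)
    return v

v_hat = mppt_track()
# Reference maximum found by a fine scan of the power curve
v_mpp = max((k / 1000 for k in range(0, 25000)), key=pv_power)
print(round(v_hat, 2), round(v_mpp, 2))
```

    As with any fixed-step law, the operating point chatters around the maximum within one step size, which is the usual accuracy/speed trade-off.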

  18. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  19. A Grid-resolved Analysis of Base Flowfield for a Four-Engine Clustered Nozzle Configuration

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1993-01-01

    The objective of this study is to propose a computational methodology that can effectively anchor the base flowfield of a four-engine clustered nozzle configuration. This computational methodology is based on a three-dimensional, viscous flow, pressure-based computational fluid dynamics (CFD) formulation. For efficient CFD calculation, a Prandtl-Meyer solution treatment is applied to the algebraic grid lines for initial plume expansion resolution. As the solution evolves, the computational grid is adapted to the pertinent flow gradients. The CFD model employs an upwind scheme in which second- and fourth-order central differencing schemes with artificial dissipation are used. The computed quantitative base flow properties such as the radial base pressure distributions, model centerline static pressure, Mach number and impact pressure variations, and base pressure characteristic curve agreed reasonably well with the measurements.
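    The Prandtl-Meyer treatment mentioned above rests on the standard Prandtl-Meyer function, which relates Mach number to the turning angle of an isentropic expansion and can be evaluated directly:

```python
import math

def prandtl_meyer(mach, gamma=1.4):
    """Prandtl-Meyer function nu(M) in degrees: the angle through which a
    sonic flow must turn to expand isentropically to Mach number M."""
    if mach < 1.0:
        raise ValueError("defined for supersonic Mach numbers only")
    gp, gm = gamma + 1.0, gamma - 1.0
    t = math.sqrt(mach * mach - 1.0)
    nu = math.sqrt(gp / gm) * math.atan(math.sqrt(gm / gp) * t) - math.atan(t)
    return math.degrees(nu)

print(prandtl_meyer(1.0), prandtl_meyer(2.0))  # 0.0 and about 26.38 degrees
```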

  20. The CrossGrid project

    NASA Astrophysics Data System (ADS)

    Kunze, M.; CrossGrid Collaboration

    2003-04-01

    There are many large-scale problems that require new approaches to computing, such as earth observation, environmental management, biomedicine, industrial and scientific modeling. The CrossGrid project addresses realistic problems in medicine, environmental protection, flood prediction, and physics analysis and is oriented towards specific end-users: Medical doctors, who could obtain new tools to help them to obtain correct diagnoses and to guide them during operations; industries, that could be advised on the best timing for some critical operations involving risk of pollution; flood crisis teams, that could predict the risk of a flood on the basis of historical records and actual hydrological and meteorological data; physicists, who could optimize the analysis of massive volumes of data distributed across countries and continents. Corresponding applications will be based on Grid technology and could be complex and difficult to use: the CrossGrid project aims at developing several tools that will make the Grid more friendly for average users. Portals for specific applications will be designed, that should allow for easy connection to the Grid, create a customized work environment, and provide users with all necessary information to get their job done.

  1. Simulating Runoff from a Grid Based Mercury Model: Flow Comparisons

    EPA Science Inventory

    Several mercury cycling models, including general mass balance approaches, mixed-batch reactors in streams or lakes, or regional process-based models, exist to assess the ecological exposure risks associated with anthropogenically increased atmospheric mercury (Hg) deposition, so...

  2. Experience with Remote Job Execution

    SciTech Connect

    Lynch, Vickie E; Cobb, John W; Green, Mark L; Kohl, James Arthur; Miller, Stephen D; Ren, Shelly; Smith, Bradford C; Vazhkudai, Sudharshan S

    2008-01-01

    The Neutron Science Portal at Oak Ridge National Laboratory submits jobs to the TeraGrid for remote job execution. The TeraGrid is a network of high performance computers supported by the US National Science Foundation. There are eleven partner facilities with over a petaflop of peak computing performance and sixty petabytes of long-term storage. Globus is installed on a local machine and used for job submission. The graphical user interface is produced by Java code that reads an XML file. After submission, the status of the job is displayed in a Job Information Service window, which queries Globus for the status. The output folder produced in the scratch directory of the TeraGrid machine is returned to the portal with the globus-url-copy command, which uses the GridFTP servers on the TeraGrid machines. This folder is copied from the stage-in directory of the community account to the user's results directory, where the output can be plotted using the portal's visualization services. The primary problem with remote job execution is diagnosing execution problems. We run daily tests submitting multiple remote jobs from the portal. When these jobs fail on a computer, it is difficult to diagnose the problem from the Globus output. Successes and problems will be presented.
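    The stage-out step described above can be sketched as command construction for globus-url-copy (a real Globus command; the host, paths and the recursive flag shown here are illustrative and should be checked against the installed toolkit version):

```python
def stage_out_command(host, scratch_dir, job_id, local_results):
    """Build the globus-url-copy invocation that pulls a finished job's
    output folder back from a remote GridFTP server (hypothetical paths)."""
    src = f"gsiftp://{host}{scratch_dir}/{job_id}/"
    dst = f"file://{local_results}/{job_id}/"
    # -r: recurse into the output folder
    return ["globus-url-copy", "-r", src, dst]

cmd = stage_out_command("gridftp.example.org", "/scratch/portal", "job42",
                        "/portal/results")
print(" ".join(cmd))
```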

  3. Agent-based simulation of building evacuation using a grid graph-based model

    NASA Astrophysics Data System (ADS)

    Tan, L.; Lin, H.; Hu, M.; Che, W.

    2014-02-01

    Shifting from macroscopic to microscopic models, the agent-based approach has been widely used to model crowd evacuation as more attention is paid to individualized behaviour. Since indoor evacuation behaviour is closely related to the spatial features of the building, effective representation of indoor space is essential for the simulation of building evacuation. The traditional cell-based representation has limitations in reflecting spatial structure and is not suitable for topology analysis. Aiming to incorporate the powerful topology analysis functions of GIS to facilitate agent-based simulation of building evacuation, we used a grid graph-based model in this study to represent the indoor space. Such a model allows us to establish an evacuation network at a micro level. Potential escape routes from each node can thus be analysed through GIS network analysis functions, considering both the spatial structure and route capacity. This better supports agent-based modelling of evacuees' behaviour, including route choice and local movements. As a case study, we conducted a simulation of emergency evacuation from the second floor of an office building, using Agent Analyst as the simulation platform. The results demonstrate the feasibility of the proposed method, as well as the potential of GIS in visualizing and analysing simulation results.
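    Route search over such a grid graph reduces to shortest paths on a lattice with blocked cells. A minimal Dijkstra sketch over a hypothetical floor plan with unit edge costs (no capacities):

```python
import heapq

def shortest_path(grid, start, exit_):
    """Dijkstra over a 4-connected grid graph; cells marked 1 are blocked
    (walls). Returns the cells of a shortest escape route."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == exit_:
            break
        if d > dist.get(cell, float("inf")):
            continue                       # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [exit_], exit_
    while cell != start:                   # walk predecessors back to start
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

floor = [[0, 0, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 0]]
route = shortest_path(floor, (0, 0), (2, 0))
print(route)
```

    A capacity-aware router would additionally penalize congested edges, which is where the route-capacity consideration mentioned above enters.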

  4. BaBar MC production on the Canadian grid using a web services approach

    NASA Astrophysics Data System (ADS)

    Agarwal, A.; Armstrong, P.; Desmarais, R.; Gable, I.; Popov, S.; Ramage, S.; Schaffer, S.; Sobie, C.; Sobie, R.; Sulivan, T.; Vanderster, D.; Mateescu, G.; Podaima, W.; Charbonneau, A.; Impey, R.; Viswanathan, M.; Quesnel, D.

    2008-07-01

    The present paper highlights the approach used to design and implement a web services based BaBar Monte Carlo (MC) production grid using Globus Toolkit version 4. The grid integrates the resources of two clusters at the University of Victoria, using the ClassAd mechanism provided by the Condor-G metascheduler. Each cluster uses the Portable Batch System (PBS) as its local resource management system (LRMS). Resource brokering is provided by the Condor matchmaking process, whereby the job and resource attributes are expressed as ClassAds. The important features of the grid are automatic registration of resource ClassAds to the central registry, ClassAd extraction from the registry to the metascheduler for matchmaking, and the incorporation of input/output file staging. Web-based monitoring is employed to track the status of grid resources and jobs for efficient operation of the grid. The performance of this new grid for BaBar jobs is found to be consistent with that of the existing Canadian computational grid (GridX1), which is based on Globus Toolkit version 2.
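    The matchmaking step can be mimicked in a few lines: a job ad supplies a Requirements predicate and a Rank expression, and resource ads are filtered then ordered. The attribute names below are illustrative, not Condor's full ClassAd language:

```python
def matchmake(job, resources):
    """Toy ClassAd-style matchmaking: keep resources whose attributes
    satisfy the job's Requirements expression, then sort by the job's
    Rank expression, highest first."""
    matches = [r for r in resources if job["Requirements"](r)]
    return sorted(matches, key=job["Rank"], reverse=True)

resources = [
    {"Name": "uvic-pbs-1", "Arch": "x86_64", "MemoryMB": 2048, "FreeSlots": 4},
    {"Name": "uvic-pbs-2", "Arch": "x86_64", "MemoryMB": 8192, "FreeSlots": 1},
    {"Name": "old-cluster", "Arch": "i686", "MemoryMB": 1024, "FreeSlots": 9},
]
job = {
    "Requirements": lambda r: r["Arch"] == "x86_64" and r["MemoryMB"] >= 2000,
    "Rank": lambda r: r["FreeSlots"],
}
best = matchmake(job, resources)
print([r["Name"] for r in best])  # uvic-pbs-1 ranked first (more free slots)
```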

  5. Grid occupancy estimation for environment perception based on belief functions and PCR6

    NASA Astrophysics Data System (ADS)

    Moras, Julien; Dezert, Jean; Pannetier, Benjamin

    2015-05-01

    In this contribution, we propose to improve the grid map occupancy estimation method developed so far based on belief function modeling and the classical Dempster's rule of combination. A grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role in the safety (obstacle avoidance) of the next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the area surrounding the robot, must first be estimated from sensor measurements (typically LIDAR or camera), and then classified into different classes in order to get a complete and precise perception of the dynamic environment where the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework with an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of available information is low and the sources of information appear conflicting. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination with the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
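    For two sources, PCR6 coincides with PCR5, and both differ from Dempster's rule only in where the conflicting mass goes: Dempster renormalizes it away, PCR5/6 redistributes it back to the conflicting focal elements in proportion to their masses. A minimal sketch on the frame {occupied, free}, with invented sensor masses:

```python
from itertools import product

O, F, TH = frozenset("O"), frozenset("F"), frozenset("OF")  # occupied / free / unknown

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination, conflict renormalized away."""
    m, k = {}, 0.0
    for (b, x), (c, y) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            m[inter] = m.get(inter, 0.0) + x * y
        else:
            k += x * y                      # total conflicting mass
    return {a: v / (1.0 - k) for a, v in m.items()}

def pcr5(m1, m2):
    """PCR5 (identical to PCR6 for two sources): each conflicting product
    x*y is split back between the two conflicting focal elements,
    proportionally to their masses."""
    m = {}
    for (b, x), (c, y) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            m[inter] = m.get(inter, 0.0) + x * y
        else:
            m[b] = m.get(b, 0.0) + x * x * y / (x + y)
            m[c] = m.get(c, 0.0) + y * x * y / (x + y)
    return m

m_lidar  = {O: 0.6, TH: 0.4}   # says "occupied", somewhat unsure
m_camera = {F: 0.7, TH: 0.3}   # says "free" -> highly conflicting sources
d = dempster(m_lidar, m_camera)
p = pcr5(m_lidar, m_camera)
print(d)
print(p)
```

    Under high conflict, PCR5/6 keeps the combined masses more committed to the original hypotheses instead of inflating the unknown.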

  6. Variogram-based approach for selection of grid size in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Mohammadi, Zargham

    2013-09-01

    In this paper a geostatistical approach based on the variogram concept is used for decisions about the size of a grid network applicable to a groundwater model. One of the important properties of the variogram function is the range of influence, which is interpreted as a measure of similarity and correlation distance between spatial phenomena. Taking the concept of range into account, several available spatial variables of the aquifer under study were used to plot the variogram in different directions. The study area is an unconfined aquifer (Boushkan Plain in southwest Iran) with an average thickness of 50 m and an area of 100 km². The variables used for computing the variogram include aquifer thickness (D), groundwater pumping rate (Q) and electrical resistivity (R) of the aquifer material. The range of influence was estimated to be 2,000, 2,500, and 2,000 m for D, Q and R, respectively. Comparisons of statistical parameters of the spatial variables over grid sizes ranging from 1,000 to 5,000 m were made to confirm the grid size suggested by the variogram plots. The bounded area of each cell could be considered a homogeneous medium according to the spatial variation of the variables used. The optimum size of the grid network was selected according to the minimum of the variance and coefficient of variation over each cell size. Results suggest a 2,500 m × 2,500 m grid size for the modeling process in the studied aquifer. The results emphasize the role of the variogram function in the selection of grid size for a groundwater model.
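    The experimental semivariogram underlying the range-of-influence estimate can be computed directly. The sketch below uses a synthetic field rather than the Boushkan Plain data: nearby points are similar, so the semivariance grows with lag until it levels off near the range.

```python
import math, random

def semivariogram(points, values, bin_width=500.0, n_bins=8):
    """Experimental semivariogram: gamma(h) = half the mean squared
    difference over point pairs whose separation falls in each lag bin."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            h = math.dist(points[i], points[j])
            b = int(h // bin_width)
            if b < n_bins:
                sums[b] += 0.5 * (values[i] - values[j]) ** 2
                counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Synthetic aquifer-thickness-like field: smooth trend plus small noise,
# so close "wells" agree and distant ones do not.
random.seed(7)
pts = [(random.uniform(0, 4000), random.uniform(0, 4000)) for _ in range(120)]
vals = [50 + 10 * math.sin(x / 1500) + 10 * math.cos(y / 1500)
        + random.gauss(0, 1) for x, y in pts]
gamma = semivariogram(pts, vals)
print([round(g, 1) for g in gamma if g is not None])
```

    The lag at which gamma flattens is read off as the range of influence, and the paper uses that distance to bound the model cell size.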

  7. AstroGrid-D: Grid technology for astronomical science

    NASA Astrophysics Data System (ADS)

    Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve

    2011-02-01

    We present the status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines through a set of commands as well as software interfaces. It allows simple use of compute and storage facilities and the scheduling and monitoring of compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). These examples show how grid execution improves, e.g., the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 focuses on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.

  8. Locating and navigation mechanism based on place-cell and grid-cell models.

    PubMed

    Yan, Chuankui; Wang, Rubin; Qu, Jingyi; Chen, Guanrong

    2016-08-01

    Extensive experiments on rats have shown that environmental cues play an important role in goal locating and navigation. Major studies of locating and navigation have been carried out based only on place cells. Nevertheless, it is known that navigation may also rely on grid cells. Therefore, we model locating and navigation based on both, developing a novel grid-cell model from which the firing fields of grid cells can be obtained. We formulate a continuous-time dynamical system to describe learning and direction selection. In our simulation experiments, consistent with results from physiology experiments, we successfully reproduce the place fields of place cells and the firing fields of grid cells. We analyzed the factors affecting locating accuracy. Results show that the learning rate, firing threshold and cell number can influence the outcomes of various tasks. We used our system model to perform a goal navigation task and showed that paths, which changed on every run early in an experiment, converged to a stable path after several runs. PMID:27468322
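    Grid-cell firing fields are often idealized as a sum of three plane-wave cosines 60° apart. This textbook construction (not the authors' dynamical network model) reproduces the hexagonal firing lattice seen in entorhinal recordings:

```python
import math

def grid_cell_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Idealized grid-cell firing map: three plane-wave cosines with wave
    vectors 60 degrees apart. Peaks form a hexagonal lattice with the
    given spacing; `phase` shifts the lattice origin."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)   # lattice wave number
    total = 0.0
    for theta in (-math.pi / 3, 0.0, math.pi / 3):
        kx, ky = k * math.cos(theta), k * math.sin(theta)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return (total + 1.5) / 4.5   # rescale from [-1.5, 3] to [0, 1]

print(grid_cell_rate(0.0, 0.0))   # 1.0 at a firing-field centre
print(round(grid_cell_rate(0.1, 0.07), 3))
```

    Because the three wave vectors sum pairwise to one another, the cosine sum is bounded below by −1.5, which the rescaling maps to a zero firing rate.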

  10. Job burnout: toward an integration of two dominant resource-based models.

    PubMed

    Akhtar, Syed; Lee, Jenny S Y

    2010-08-01

    The goal of this study was to integrate the job demands-resources model and the conservation of resources model of job burnout into a unified theoretical framework. The data were collected through a mail questionnaire survey among nurses holding managerial positions in the Hospital Authority of Hong Kong. From a computer-generated random sample of nurses, 543 (84.3% women) returned usable surveys, amounting to a response rate of 24.2%. Structural equation modeling was used to test the proposed paths originating from job demands and job resources to the core job burnout dimensions, namely, emotional exhaustion and depersonalization. Results supported the integrated model, indicating that job demands and job resources had differing effects on the burnout dimensions. The effect of job demands was stronger and partially mediated the effect of job resources. Implications of the results from this study on management practices were discussed.

  11. A priori parameter estimates for a distributed, grid-based Xinanjiang model using geographically based information

    NASA Astrophysics Data System (ADS)

    Yao, Cheng; Li, Zhijia; Yu, Zhongbo; Zhang, Ke

    2012-10-01

    An improved form of the spatially distributed Grid-Xinanjiang model (GXM), which integrates features of a well-tested conceptual rainfall-runoff model and a physically based flow routing model, has been proposed for simulating hydrologic processes and forecasting flood events in watersheds. The digital elevation model (DEM) is utilized in the GXM to derive computational flow directions, routing sequencing, and hillslope and channel slopes. The processes in the model include canopy interception, direct channel precipitation, evapotranspiration, as well as runoff generation via a saturation-excess mechanism. A two-step finite difference solution of the diffusion wave approximation of the St. Venant equations with second-order accuracy is used in the model to simulate the flow routed along the hillslope and channel on a cell basis, with consideration of upstream inflow and flow partition to the channels. A physically and empirically based approach using geographically based information such as topography, soil data and land use/land cover data is employed for estimating spatially varied parameters. GXM is applied at a 1-km grid scale to a nested watershed located in Anhui province, China. The parent Tunxi watershed, with a drainage area of 2692.7 km², contains five internal points with available observed streamflow data, allowing us to evaluate the model's ability to simulate the hydrologic processes within the watershed. Calibration and verification of the proposed GXM are carried out for both daily and hourly time scales using daily rainfall-runoff data and hourly streamflow data. Model performance is assessed by comparing simulated and observed flows at the watershed outlet and interior gauging stations. Initial tests indicate that the parameter estimation approach is efficient and that the developed model can satisfactorily simulate not only the streamflow at the parent watershed outlet, but also the flood hydrographs at the interior gauging points without model recalibration.
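    The saturation-excess runoff generation at the heart of the Xinanjiang family of models follows the tension-water capacity curve. A sketch of its textbook form, with invented parameter values (this is the standard single-cell formula, not the paper's gridded implementation):

```python
def xaj_runoff(pe, w0, wm=100.0, b=0.3):
    """Saturation-excess runoff from the Xinanjiang tension-water capacity
    curve. pe: net rainfall P - E (mm); w0: initial areal mean storage (mm);
    wm: areal mean tension water capacity (mm); b: curve shape exponent."""
    if pe <= 0.0:
        return 0.0
    wmm = wm * (1.0 + b)                    # maximum point capacity
    # a: ordinate on the capacity curve corresponding to the initial storage
    a = wmm * (1.0 - (1.0 - w0 / wm) ** (1.0 / (1.0 + b)))
    if pe + a < wmm:                        # only part of the basin saturates
        return pe - (wm - w0) + wm * (1.0 - (pe + a) / wmm) ** (1.0 + b)
    return pe - (wm - w0)                   # whole basin saturated

print(round(xaj_runoff(50.0, 60.0), 2))    # partial-area runoff
print(xaj_runoff(50.0, 100.0))             # saturated basin: all of pe runs off
```

    The curve makes runoff rise smoothly with wetness: a dry basin yields almost nothing, a saturated one converts all net rainfall to runoff.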

  12. Analysis of the Multi Strategy Goal Programming for Micro-Grid Based on Dynamic ant Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Qiu, J. P.; Niu, D. X.

    Micro-grids are one of the key technologies of future energy supply. Taking the economic planning, reliability, and environmental protection of a micro-grid as a basis, we analyze a multi-strategy goal programming problem for a micro-grid containing wind power, solar power, battery storage and a micro gas turbine. We establish mathematical models of each generator's output characteristics and energy dissipation, and convert the multi-objective micro-grid planning function under different operating strategies into a single-objective model based on the AHP method. An example analysis shows that, combined with a dynamic ant mixed genetic algorithm, the optimal power output of this model can be obtained.
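    The AHP step that collapses the multiple objectives into one can be sketched as a principal-eigenvector weight computation; the pairwise judgments below are invented for illustration:

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priority weights: the principal eigenvector of the pairwise
    comparison matrix, normalized to sum to one."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)                  # Perron vector, fixed sign
    return w / w.sum()

# Hypothetical judgments: economy vs reliability vs emissions
pairwise = [[1.0, 3.0, 5.0],
            [1 / 3, 1.0, 2.0],
            [1 / 5, 1 / 2, 1.0]]
w = ahp_weights(pairwise)
print(np.round(w, 3))

# The weights then collapse the multi-objective planning problem into a
# single scalar objective, e.g. J = w[0]*cost + w[1]*outage_risk + w[2]*emissions
```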

  13. Supporting grid-based clinical trials in Scotland.

    PubMed

    Sinnott, R O; Stell, A J; Ajayi, O

    2008-06-01

    A computational infrastructure to underpin complex clinical trials and medical population studies is highly desirable. This should allow access to a range of distributed clinical data sets; support the efficient processing and analysis of the data obtained; have security at its heart; and ensure that authorized individuals are able to see privileged data and no more. Each clinical trial has its own requirements on data sets and how they are used; hence a reusable and flexible framework offers many advantages. The MRC funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) is a collaborative project involving several UK universities specifically to explore this space. This article presents the experiences of developing the Scottish component of this nationwide infrastructure, by the National e-Science Centre (NeSC) based at the University of Glasgow, and the issues inherent in accessing and using the clinical data sets in a flexible, dynamic and secure manner. PMID:18477596

  15. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    PubMed Central

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized on a specific test bed. A mathematical model of a small-scale PV system, intended mainly for residential use, has been developed and simulated. The proposed PV model is based on three parameters: the photocurrent IL, the reverse diode saturation current Io, and the diode ideality factor n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with higher accuracy than the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems under fluctuating climatic conditions. PMID:27035575
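    With series and shunt resistances neglected, the three-parameter single-diode model is explicit in both current and open-circuit voltage. A sketch with invented parameter values (not the paper's fitted ones):

```python
import math

def pv_current(v, il=5.0, io=1e-9, n=1.2, cells=60, vt=0.0257):
    """Terminal current of the three-parameter single-diode PV model
    (photocurrent il, saturation current io, ideality factor n). Series
    and shunt resistances are neglected, so I(V) is explicit."""
    return il - io * math.expm1(v / (n * cells * vt))

def open_circuit_voltage(il=5.0, io=1e-9, n=1.2, cells=60, vt=0.0257):
    """Solve I(Voc) = 0 in closed form."""
    return n * cells * vt * math.log(il / io + 1.0)

isc = pv_current(0.0)        # short-circuit current equals the photocurrent
voc = open_circuit_voltage()
print(isc, round(voc, 2))
```

    Fitting IL, Io and n to a measured I-V curve is then a three-parameter regression, which is what the benchmark comparisons in the abstract evaluate.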

  17. A Grid-Based Architecture for Coupling Hydro-Meteorological Models

    NASA Astrophysics Data System (ADS)

    Schiffers, Michael; Straube, Christian; gentschen Felde, Nils; Clematis, Andrea; Galizia, Antonella; D'Agostino, Daniele; Danovaro, Emanuele

    2014-05-01

    Computational hydro-meteorological research (HMR) requires the execution of various meteorological, hydrological, hydraulic, and impact models, either standalone or as well-orchestrated chains (workflows). While the former approach is straightforward, the latter is not, because consecutive models may depend on different execution environments, on organizational constraints, and on separate data formats and semantics that must be bridged. Consequently, in order to gain the most benefit from HMR model chains, it is of paramount interest (a) to seamlessly couple heterogeneous models; (b) to access models and data across administrative domains; and (c) to execute models on the most appropriate resources available at the right time. In this contribution we present our experience using a Grid-based computing infrastructure for HMR. We first explore various coupling mechanisms, and then specify an enabling Grid infrastructure to support dynamic model chains. Using the DRIHM project as an example, we report on implementation details, especially in the context of the European Grid Infrastructure (EGI). Finally, we apply the architecture to hydro-meteorological disaster management and elaborate on the opportunities the Grid infrastructure approach offers in a worldwide context.
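    The coupling problem of bridging formats and semantics between consecutive chain members can be sketched as adapters between stand-in models; all names, units and numbers below are invented:

```python
def meteo_model(forcing):
    """Stand-in NWP step: returns rainfall in mm/h on its own grid."""
    return {"rain_mmh": [1.2, 3.4, 0.0], "grid": "meteo-2km"}

def to_hydro_units(meteo_out):
    """Adapter bridging formats/semantics between chain members: renames
    fields and converts mm/h to the m/s expected downstream."""
    return {"rain_ms": [r / 3.6e6 for r in meteo_out["rain_mmh"]],
            "grid": meteo_out["grid"]}

def hydro_model(hydro_in):
    """Stand-in hydrological step: a crude total inflow in m^3/s."""
    return sum(hydro_in["rain_ms"]) * 4e6   # illustrative cell area, m^2

def run_chain(forcing, steps):
    """Run a model chain: each member consumes the previous one's output."""
    out = forcing
    for step in steps:
        out = step(out)
    return out

inflow = run_chain(None, [meteo_model, to_hydro_units, hydro_model])
print(round(inflow, 3))
```

    In a real workflow each `step` would be a remote Grid job and the adapters would also regrid between meshes; the chain-of-callables shape stays the same.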

  18. A Current Sensor Based on the Giant Magnetoresistance Effect: Design and Potential Smart Grid Applications

    PubMed Central

    Ouyang, Yong; He, Jinliang; Hu, Jun; Wang, Shan X.

    2012-01-01

    Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A⁻¹, linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C⁻¹ with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids. PMID:23202221
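    Using the reported sensitivity and thermal drift figures, converting a sensor reading back to a current is a linear inversion. The offset handling below is a stand-in, since a real device needs its own calibration:

```python
def current_from_voltage(v_out, temp_c=25.0,
                         sens_25=0.028, alpha=0.000335, v_offset=0.0):
    """Invert the sensor's linear response V = sens(T) * I + offset,
    using the reported 28 mV/A sensitivity and 0.0335 %/degC amplitude
    drift (v_offset is a hypothetical calibration constant)."""
    sens = sens_25 * (1.0 + alpha * (temp_c - 25.0))
    return (v_out - v_offset) / sens

# Round trip at 40 degC: synthesize the reading for 3 A, then invert it
sens_40 = 0.028 * (1.0 + 0.000335 * 15.0)
v = sens_40 * 3.0
print(current_from_voltage(v, temp_c=40.0))  # ~3.0 A
```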

  19. Solving large-scale real-world telecommunication problems using a grid-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Luna, Francisco; Nebro, Antonio; Alba, Enrique; Durillo, Juan

    2008-11-01

    This article analyses the use of a grid-based genetic algorithm (GrEA) to solve a real-world instance of a problem from the telecommunication domain. The problem, known as automatic frequency planning (AFP), is used in a global system for mobile communications (GSM) networks to assign a number of fixed frequencies to a set of GSM transceivers located in the antennae of a cellular phone network. Real data instances of the AFP are very difficult to solve owing to the NP-hard nature of the problem, so combining grid computing and metaheuristics turns out to be a way to provide satisfactory solutions in a reasonable amount of time. GrEA has been deployed on a grid with up to 300 processors to solve an AFP instance of 2612 transceivers. The results not only show that significant running time reductions are achieved, but that the search capability of GrEA clearly outperforms that of the equivalent non-grid algorithm.
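    The master-worker pattern behind grid-distributed fitness evaluation can be sketched with a thread pool standing in for the grid workers, on a toy channel-separation objective (not the real AFP cost function; all sizes are illustrative):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(plan):
    """Toy AFP-like objective: count channel separations of at least 2
    that are respected between consecutive transceivers (higher is better)."""
    return sum(abs(a - b) >= 2 for a, b in zip(plan, plan[1:]))

def evolve(pop_size=40, genes=30, channels=6, gens=60, seed=1):
    """Tiny elitist genetic algorithm whose fitness evaluations are farmed
    out to a worker pool, mimicking the grid-distributed evaluation step."""
    rng = random.Random(seed)
    pop = [[rng.randrange(channels) for _ in range(genes)]
           for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        for _ in range(gens):
            scores = list(pool.map(fitness, pop))        # distributed step
            ranked = [p for _, p in
                      sorted(zip(scores, pop), key=lambda t: -t[0])]
            elite = ranked[: pop_size // 2]              # keep the best half
            pop = elite + [
                [g if rng.random() > 0.1 else rng.randrange(channels)
                 for g in rng.choice(elite)]             # mutate elite clones
                for _ in range(pop_size - len(elite))]
        best = max(pop, key=fitness)
    return fitness(best)

print(evolve())
```

    On a grid, `pool.map` becomes a dispatch of evaluation jobs to remote processors, which is where the reported running-time reductions come from.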

  2. Hierarchical Grid-based Multi-People Tracking-by-Detection With Global Optimization.

    PubMed

    Chen, Lili; Wang, Wei; Panin, Giorgio; Knoll, Alois

    2015-11-01

    We present a hierarchical grid-based, globally optimal tracking-by-detection approach to track an unknown number of targets in complex and dense scenarios, particularly addressing the challenges of complex interaction and mutual occlusion. Frame-by-frame detection is performed by hierarchical likelihood grids, matching shape templates through a fast oriented distance transform. To allow recovery from misdetections, common heuristics such as nonmaxima suppression within observations is eschewed. Within a discretized state-space, the data association problem is formulated as a grid-based network flow model, resulting in a convex problem casted into an integer linear programming form, giving a global optimal solution. In addition, we show how a behavior cue (body orientation) can be integrated into our association affinity model, providing valuable hints for resolving ambiguities between crossing trajectories. Unlike traditional motion-based approaches, we estimate body orientation by a hybrid methodology, which combines the merits of motion-based and 3D appearance-based orientation estimation, thus being capable of dealing also with still-standing or slowly moving targets. The performance of our method is demonstrated through experiments on a large variety of benchmark video sequences, including both indoor and outdoor scenarios.

  3. Navigation in Grid Space with the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a navigational tool for computational grids. The navigational process is based on measuring the grid characteristics with the NAS Grid Benchmarks (NGB) and using the measurements to assign tasks of a grid application to the grid machines. The tool allows the user to explore the grid space and to navigate the execution of a grid application to minimize its turnaround time. We introduce the notion of gridscape as a user view of the grid and show how it can be measured by NGB. Then we demonstrate how the gridscape can be used with two different schedulers to navigate a grid application through a rudimentary grid.

  4. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGESBeta

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
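The "traditional hierarchical surplus based strategies" the paper improves on can be illustrated with a minimal 1-D sketch (function names and the dyadic-midpoint setup are illustrative, not the paper's algorithm): the surplus at a candidate node is the difference between the true function value and the current interpolant, and refinement targets where it is largest.

```python
import numpy as np

def surplus_refine(f, n_levels=4):
    """Toy 1-D surplus-driven dyadic refinement on [0, 1].

    At each level, candidate midpoints are scored by the hierarchical
    surplus f(x) - I(x), where I is the current piecewise-linear
    interpolant. Returns the max |surplus| per level, which decays as
    the grid resolves f; the paper replaces this indicator with
    adjoint-based a posteriori error estimates.
    """
    nodes = np.array([0.0, 1.0])
    values = f(nodes)
    history = []
    for _ in range(n_levels):
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        surplus = f(mids) - np.interp(mids, nodes, values)
        history.append(float(np.max(np.abs(surplus))))
        nodes = np.sort(np.concatenate([nodes, mids]))
        values = f(nodes)
    return history

print(surplus_refine(lambda x: np.sin(np.pi * x)))
```

For a smooth function such as sin(πx) the maximum surplus shrinks level by level, which is exactly the convergence indicator the adjoint-based estimates refine.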

  5. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  6. Adult Competency Education Kit. Basic Skills in Speaking, Math, and Reading for Employment. Part J. ACE Competency Based Job Descriptions: Sales Core Job Description; #36--Sales, Automotive Parts; #37--Sales, Retail; #38--Salesperson, Garden & Housewares; #39--Salesperson, Women's Garments.

    ERIC Educational Resources Information Center

    San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.

    This seventh of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Salesperson, Automotive Parts; Sales Clerk, Retail; Salesperson, Garden and Housewares; and Salesperson, Women's Garments. Each begins with a fact sheet that includes this information: occupational title,…

  7. PLL Based Energy Efficient PV System with Fuzzy Logic Based Power Tracker for Smart Grid Applications.

    PubMed

    Rohini, G; Jamuna, V

    2016-01-01

    This work aims at improving the dynamic performance of the available photovoltaic (PV) system and maximizing the power obtained from it by the use of cascaded converters with intelligent control techniques. Fuzzy logic based maximum power point technique is embedded on the first conversion stage to obtain the maximum power from the available PV array. The cascading of second converter is needed to maintain the terminal voltage at grid potential. The soft-switching region of three-stage converter is increased with the proposed phase-locked loop based control strategy. The proposed strategy leads to reduction in the ripple content, rating of components, and switching losses. The PV array is mathematically modeled and the system is simulated and the results are analyzed. The performance of the system is compared with the existing maximum power point tracking algorithms. The authors have endeavored to accomplish maximum power and improved reliability for the same insolation of the PV system. Hardware results of the system are also discussed to prove the validity of the simulation results.
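The abstract does not spell out the fuzzy rule base, so as a hedged stand-in, the sketch below shows the classic perturb-and-observe hill-climbing MPPT that fuzzy-logic trackers refine (a fuzzy tracker plays the same role but adapts the step size from fuzzy rules). The PV curve parameters are entirely illustrative:

```python
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV power curve (illustrative parameters, not from the paper):
    current falls off exponentially as voltage nears open circuit."""
    if v >= v_oc:
        return 0.0
    i = i_sc * (1.0 - math.exp((v - v_oc) / 3.0))
    return max(i, 0.0) * v

def perturb_and_observe(v=20.0, dv=0.2, steps=200):
    """Hill-climbing MPPT: perturb the operating voltage and keep the
    direction that increased power; reverse when power drops."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(v_mpp, pv_power(v_mpp))  # settles near the maximum power point
```

The fixed step dv causes the steady-state oscillation around the maximum power point that adaptive (e.g. fuzzy) trackers are designed to reduce.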

  8. PLL Based Energy Efficient PV System with Fuzzy Logic Based Power Tracker for Smart Grid Applications.

    PubMed

    Rohini, G; Jamuna, V

    2016-01-01

    This work aims at improving the dynamic performance of the available photovoltaic (PV) system and maximizing the power obtained from it by the use of cascaded converters with intelligent control techniques. Fuzzy logic based maximum power point technique is embedded on the first conversion stage to obtain the maximum power from the available PV array. The cascading of second converter is needed to maintain the terminal voltage at grid potential. The soft-switching region of three-stage converter is increased with the proposed phase-locked loop based control strategy. The proposed strategy leads to reduction in the ripple content, rating of components, and switching losses. The PV array is mathematically modeled and the system is simulated and the results are analyzed. The performance of the system is compared with the existing maximum power point tracking algorithms. The authors have endeavored to accomplish maximum power and improved reliability for the same insolation of the PV system. Hardware results of the system are also discussed to prove the validity of the simulation results. PMID:27294189

  9. PLL Based Energy Efficient PV System with Fuzzy Logic Based Power Tracker for Smart Grid Applications

    PubMed Central

    Rohini, G.; Jamuna, V.

    2016-01-01

    This work aims at improving the dynamic performance of the available photovoltaic (PV) system and maximizing the power obtained from it by the use of cascaded converters with intelligent control techniques. Fuzzy logic based maximum power point technique is embedded on the first conversion stage to obtain the maximum power from the available PV array. The cascading of second converter is needed to maintain the terminal voltage at grid potential. The soft-switching region of three-stage converter is increased with the proposed phase-locked loop based control strategy. The proposed strategy leads to reduction in the ripple content, rating of components, and switching losses. The PV array is mathematically modeled and the system is simulated and the results are analyzed. The performance of the system is compared with the existing maximum power point tracking algorithms. The authors have endeavored to accomplish maximum power and improved reliability for the same insolation of the PV system. Hardware results of the system are also discussed to prove the validity of the simulation results. PMID:27294189

  10. Spatial services grid

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Li, Qi; Cheng, Jicheng

    2005-10-01

    This paper discusses the concept, key technologies and main applications of the Spatial Services Grid. The technologies of grid computing and Web services are playing a revolutionary role in the study of spatial information services. The concept of the SSG (Spatial Services Grid) is put forward based on the SIG (Spatial Information Grid) and OGSA (Open Grid Services Architecture). Firstly, grid computing is reviewed, and the key technologies of SIG and their main applications are reviewed. Secondly, grid computing and three kinds of SIG (in the broad sense)--SDG (spatial data grid), SIG (spatial information grid) and SSG (spatial services grid)--and their relationships are proposed. Thirdly, the key technologies of the SSG (spatial services grid) are put forward. Finally, three representative applications of SSG (spatial services grid) are discussed. The first application is the urban location based services grid, which is a typical spatial services grid and can be constructed on OGSA (Open Grid Services Architecture) and a digital city platform. The second application is the region sustainable development grid, which is key to urban development. The third application is the region disaster and emergency management services grid.

  11. A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations.

    PubMed

    Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao

    2015-01-01

    In this paper we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the situation of multiple measurement vectors (MMV). A novel sparse DOA estimation method which changes MMV problem to SMV is proposed. This method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem. MMV problem can be changed to single measurement vector (SMV) problem by using the linear combination of eigenvectors of array covariance matrix in signal subspace as a new SMV for sparse solution calculation. So the complexity of this proposed algorithm is smaller than other DOA estimation algorithms of MMV. Meanwhile, it can overcome the limitation of the conventional sparsity-based DOA estimation approaches that the unknown directions belong to a predefined discrete angular grid, so it can further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated based on SRBWEV algorithm. In this proposed algorithm, the largest and sub-largest inner products between the measurement vector or its residual and the atoms in the dictionary are utilized to further modify DOA estimation according to the principle of Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by grid effect with lower complexity. PMID:26610521

  12. A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations

    PubMed Central

    Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao

    2015-01-01

    In this paper we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the situation of multiple measurement vectors (MMV). A novel sparse DOA estimation method which changes MMV problem to SMV is proposed. This method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem. MMV problem can be changed to single measurement vector (SMV) problem by using the linear combination of eigenvectors of array covariance matrix in signal subspace as a new SMV for sparse solution calculation. So the complexity of this proposed algorithm is smaller than other DOA estimation algorithms of MMV. Meanwhile, it can overcome the limitation of the conventional sparsity-based DOA estimation approaches that the unknown directions belong to a predefined discrete angular grid, so it can further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated based on SRBWEV algorithm. In this proposed algorithm, the largest and sub-largest inner products between the measurement vector or its residual and the atoms in the dictionary are utilized to further modify DOA estimation according to the principle of Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by grid effect with lower complexity. PMID:26610521

  13. Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations

    SciTech Connect

    Tesfa, Teklu K.; Leung, Lai-Yung R.; Huang, Maoyi; Li, Hongyi; Voisin, Nathalie; Wigmosta, Mark S.

    2014-03-27

    This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., abilities to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5° and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the “reference”, statistical skill metrics are calculated and compared across simulations at 0.25°, 0.5° and 1° spatial resolutions of each modeling approach at basin and topographic region levels. Results suggest a significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to improved scalability of the rain-driven saturated surface runoff component, particularly during winter. Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications to surface heat fluxes in coupled land

  14. Improved halftoning method for autostereoscopic display based on float grid-division multiplexing.

    PubMed

    Chen, Duo; Sang, Xinzhu; Yu, Xunbo; Chen, Zhidong; Wang, Peng; Gao, Xin; Guo, Nan; Xie, Songlin

    2016-08-01

    Autostereoscopic printing is one of the most common ways to achieve three-dimensional display, because it can present finer results by printing at higher dots per inch (DPI). However, current methods have two problems. First, errors caused by dislocation between integer grids and non-customized lenticular lenses severely degrade visual quality. Second, the view number and gray level cannot be set arbitrarily. In this paper, an improved halftoning method for autostereoscopic printing based on float grid-division multiplexing (fGDM) is proposed. fGDM effectively addresses the above two problems, and a GPU-based implementation computes the result very quickly. Films with lenticular lens arrays were used in experiments to verify the effectiveness of the proposed method, which provides improved three-dimensional performance compared with AM screening and random screening. PMID:27505777

  15. Grid-based International Network for Flu observation (g-INFO).

    PubMed

    Doan, Trung-Tung; Bernard, Aurélien; Da-Costa, Ana Lucia; Bloch, Vincent; Le, Thanh-Hoa; Legre, Yannick; Maigne, Lydia; Salzemann, Jean; Sarramia, David; Nguyen, Hong-Quang; Breton, Vincent

    2010-01-01

    The 2009 H1N1 outbreak has demonstrated that continuing vigilance, planning, and strong public health research capability are essential defenses against emerging health threats. Molecular epidemiology of influenza virus strains provides scientists with clues about the temporal and geographic evolution of the virus. In the present paper, researchers from France and Vietnam are proposing a global surveillance network based on grid technology: the goal is to federate influenza data servers and deploy automatically molecular epidemiology studies. A first prototype based on AMGA and the WISDOM Production Environment extracts daily from NCBI influenza H1N1 sequence data which are processed through a phylogenetic analysis pipeline deployed on EGEE and AuverGrid e-infrastructures. The analysis results are displayed on a web portal (http://g-info.healthgrid.org) for epidemiologists to monitor H1N1 pandemics.

  16. Creating analytically divergence-free velocity fields from grid-based data

    NASA Astrophysics Data System (ADS)

    Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.

    2016-10-01

    We present a method, based on B-splines, to calculate a C2 continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories, which in turn yield more accurate identification of Lagrangian coherent structures.
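The core identity the method exploits, that a field obtained by analytic differentiation of a potential is exactly divergence-free, can be checked in a 2-D analogue (an illustrative sketch, not the paper's B-spline construction): with a stream function ψ, u = ∂ψ/∂y and v = -∂ψ/∂x give ∂u/∂x + ∂v/∂y = ψ_xy - ψ_yx = 0; in 3-D the curl of the vector potential plays the same role.

```python
import numpy as np

def velocity(x, y):
    """Velocity from the stream function psi = sin(pi x) sin(pi y),
    with the derivatives taken analytically, so div(u, v) = 0 exactly."""
    u = np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = -np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

# check the divergence numerically with central differences
h = 1e-3
x0, y0 = 0.3, 0.7
dudx = (velocity(x0 + h, y0)[0] - velocity(x0 - h, y0)[0]) / (2 * h)
dvdy = (velocity(x0, y0 + h)[1] - velocity(x0, y0 - h)[1]) / (2 * h)
print(abs(dudx + dvdy))  # effectively zero (round-off level)
```

Differentiating the potential numerically instead would leave an O(h²) divergence error, which is why the paper builds a C2 analytic potential and takes its curl exactly.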

  17. Grid-based modeling for land use planning and environmental resource mapping.

    SciTech Connect

    Kuiper, J. A.

    1999-08-04

    Geographic Information System (GIS) technology is used by land managers and natural resource planners for examining resource distribution and conducting project planning, often by visually interpreting spatial data representing environmental or regulatory variables. Frequently, many variables influence the decision-making process, and modeling can improve results with even a small investment of time and effort. Presented are several grid-based GIS modeling projects, including: (1) land use optimization under environmental and regulatory constraints; (2) identification of suitable wetland mitigation sites; and (3) predictive mapping of prehistoric cultural resource sites. As different as the applications are, each follows a similar process of problem conceptualization, implementation of a practical grid-based GIS model, and evaluation of results.

  18. A computational-grid based system for continental drainage network extraction using SRTM digital elevation models

    NASA Technical Reports Server (NTRS)

    Curkendall, David W.; Fielding, Eric J.; Pohl, Josef M.; Cheng, Tsan-Huei

    2003-01-01

    We describe a new effort for the computation of elevation derivatives using the Shuttle Radar Topography Mission (SRTM) results. Jet Propulsion Laboratory's (JPL) SRTM has produced a near global database of highly accurate elevation data. The scope of this database enables computing precise stream drainage maps and other derivatives on Continental scales. We describe a computing architecture for this computationally very complex task based on NASA's Information Power Grid (IPG), a distributed high performance computing network based on the GLOBUS infrastructure. The SRTM data characteristics and unique problems they present are discussed. A new algorithm for organizing the conventional extraction algorithms [1] into a cooperating parallel grid is presented as an essential component to adapt to the IPG computing structure. Preliminary results are presented for a Southern California test area, established for comparing SRTM and its results against those produced using the USGS National Elevation Data (NED) model.

  19. Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris

    2012-01-01

    A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.

  20. A comprehensive WSN-based approach to efficiently manage a Smart Grid.

    PubMed

    Martinez-Sandoval, Ruben; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan; Flynn, David

    2014-10-10

    The Smart Grid (SG) is conceived as the evolution of the current electrical grid representing a big leap in terms of efficiency, reliability and flexibility compared to today's electrical network. To achieve this goal, the Wireless Sensor Networks (WSNs) are considered by the scientific/engineering community to be one of the most suitable technologies to apply SG technology to due to their low-cost, collaborative and long-standing nature. However, the SG has posed significant challenges to utility operators-mainly very harsh radio propagation conditions and the lack of appropriate systems to empower WSN devices-making most of the commercial widespread solutions inadequate. In this context, and as a main contribution, we have designed a comprehensive ad-hoc WSN-based solution for the Smart Grid (SENSED-SG) that focuses on specific implementations of the MAC, the network and the application layers to attain maximum performance and to successfully deal with any arising hurdles. Our approach has been exhaustively evaluated by computer simulations and mathematical analysis, as well as validation within real test-beds deployed in controlled environments. In particular, these test-beds cover two of the main scenarios found in a SG; on one hand, an indoor electrical substation environment, implemented in a High Voltage AC/DC laboratory, and, on the other hand, an outdoor case, deployed in the Transmission and Distribution segment of a power grid. The results obtained show that SENSED-SG performs better and is more suitable for the Smart Grid than the popular ZigBee WSN approach.

  1. A comprehensive WSN-based approach to efficiently manage a Smart Grid.

    PubMed

    Martinez-Sandoval, Ruben; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan; Flynn, David

    2014-01-01

    The Smart Grid (SG) is conceived as the evolution of the current electrical grid representing a big leap in terms of efficiency, reliability and flexibility compared to today's electrical network. To achieve this goal, the Wireless Sensor Networks (WSNs) are considered by the scientific/engineering community to be one of the most suitable technologies to apply SG technology to due to their low-cost, collaborative and long-standing nature. However, the SG has posed significant challenges to utility operators-mainly very harsh radio propagation conditions and the lack of appropriate systems to empower WSN devices-making most of the commercial widespread solutions inadequate. In this context, and as a main contribution, we have designed a comprehensive ad-hoc WSN-based solution for the Smart Grid (SENSED-SG) that focuses on specific implementations of the MAC, the network and the application layers to attain maximum performance and to successfully deal with any arising hurdles. Our approach has been exhaustively evaluated by computer simulations and mathematical analysis, as well as validation within real test-beds deployed in controlled environments. In particular, these test-beds cover two of the main scenarios found in a SG; on one hand, an indoor electrical substation environment, implemented in a High Voltage AC/DC laboratory, and, on the other hand, an outdoor case, deployed in the Transmission and Distribution segment of a power grid. The results obtained show that SENSED-SG performs better and is more suitable for the Smart Grid than the popular ZigBee WSN approach. PMID:25310468

  2. A Comprehensive WSN-Based Approach to Efficiently Manage a Smart Grid

    PubMed Central

    Martinez-Sandoval, Ruben; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan; Flynn, David

    2014-01-01

    The Smart Grid (SG) is conceived as the evolution of the current electrical grid representing a big leap in terms of efficiency, reliability and flexibility compared to today's electrical network. To achieve this goal, the Wireless Sensor Networks (WSNs) are considered by the scientific/engineering community to be one of the most suitable technologies to apply SG technology to due to their low-cost, collaborative and long-standing nature. However, the SG has posed significant challenges to utility operators—mainly very harsh radio propagation conditions and the lack of appropriate systems to empower WSN devices—making most of the commercial widespread solutions inadequate. In this context, and as a main contribution, we have designed a comprehensive ad-hoc WSN-based solution for the Smart Grid (SENSED-SG) that focuses on specific implementations of the MAC, the network and the application layers to attain maximum performance and to successfully deal with any arising hurdles. Our approach has been exhaustively evaluated by computer simulations and mathematical analysis, as well as validation within real test-beds deployed in controlled environments. In particular, these test-beds cover two of the main scenarios found in a SG; on one hand, an indoor electrical substation environment, implemented in a High Voltage AC/DC laboratory, and, on the other hand, an outdoor case, deployed in the Transmission and Distribution segment of a power grid. The results obtained show that SENSED-SG performs better and is more suitable for the Smart Grid than the popular ZigBee WSN approach. PMID:25310468

  3. Effects of a Peer Assessment System Based on a Grid-Based Knowledge Classification Approach on Computer Skills Training

    ERIC Educational Resources Information Center

    Hsu, Ting-Chia

    2016-01-01

    In this study, a peer assessment system using the grid-based knowledge classification approach was developed to improve students' performance during computer skills training. To evaluate the effectiveness of the proposed approach, an experiment was conducted in a computer skills certification course. The participants were divided into three…

  4. Grid Computing

    NASA Astrophysics Data System (ADS)

    Foster, Ian

    2001-08-01

    The term "Grid Computing" refers to the use, for computational purposes, of emerging distributed Grid infrastructures: that is, network and middleware services designed to provide on-demand and high-performance access to all important computational resources within an organization or community. Grid computing promises to enable both evolutionary and revolutionary changes in the practice of computational science and engineering based on new application modalities such as high-speed distributed analysis of large datasets, collaborative engineering and visualization, desktop access to computation via "science portals," rapid parameter studies and Monte Carlo simulations that use all available resources within an organization, and online analysis of data from scientific instruments. In this article, I examine the status of Grid computing circa 2000, briefly reviewing some relevant history, outlining major current Grid research and development activities, and pointing out likely directions for future work. I also present a number of case studies, selected to illustrate the potential of Grid computing in various areas of science.

  5. Detection of Power Grid Harmonic Pollution Sources based on Upgraded Power Meters

    NASA Astrophysics Data System (ADS)

    Petković, Predrag; Stevanović, Dejan

    2014-05-01

    The paper suggests a new and efficient method for locating nonlinear loads on a grid. It is based on measuring distortion power. The paper reviews different definitions of distortion power and proves that the method is feasible independently of the particular definition. The obtained simulation and measurement results confirm the effectiveness and applicability of the method. The proposed solution is suitable for a software update of existing electronic power meters or can be implemented as a hardware upgrade.
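Since the abstract says the method works under several definitions of distortion power, the sketch below uses just one common (Budeanu-style) choice, D = √(S² − P² − Q₁²), computed from sampled waveforms; the function name and test signals are illustrative. A nonlinear load injects harmonic current, which shows up as nonzero D:

```python
import numpy as np

def distortion_power(v, i, fs, f0=50.0):
    """Budeanu-style distortion power from sampled v(t), i(t).

    S is apparent power, P active power, and Q1 the reactive power of
    the fundamental, extracted by correlating against f0 references.
    Assumes the record spans an integer number of cycles.
    """
    t = np.arange(len(v)) / fs
    s = np.sqrt(np.mean(v**2) * np.mean(i**2))   # apparent power S
    p = np.mean(v * i)                           # active power P
    ref_c = np.cos(2 * np.pi * f0 * t)
    ref_s = np.sin(2 * np.pi * f0 * t)
    V1 = 2 * (np.mean(v * ref_c) - 1j * np.mean(v * ref_s))  # phasors
    I1 = 2 * (np.mean(i * ref_c) - 1j * np.mean(i * ref_s))
    q1 = (V1 * np.conj(I1)).imag / 2             # fundamental reactive power
    return float(np.sqrt(max(s**2 - p**2 - q1**2, 0.0)))

fs = 5000.0
t = np.arange(1000) / fs                          # 10 cycles at 50 Hz
v = 325.0 * np.cos(2 * np.pi * 50 * t)
i_lin = 10.0 * np.cos(2 * np.pi * 50 * t - 0.5)   # linear (phase-shifted) load
i_nl = i_lin + 3.0 * np.cos(2 * np.pi * 150 * t)  # nonlinear: 3rd harmonic
print(distortion_power(v, i_lin, fs), distortion_power(v, i_nl, fs))
```

For the linear load D ≈ 0 despite the phase shift (Q₁ absorbs it), while the 3rd-harmonic load gives D = Vrms · I3,rms ≈ 487.5 var, which is the signature a distortion-power-based locator would flag.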

  6. An Analysis for an Internet Grid to Support Space Based Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Currently, and in the past, dedicated communication circuits and "network services" with very stringent performance requirements have been used to support manned and unmanned mission critical ground operations at GSFC, JSC, MSFC, KSC and other NASA facilities. Because of the evolution of network technology, it is time to investigate other approaches to providing mission services for space ground and flight operations. In various scientific disciplines, effort is under way to develop network/computing grids. These grids, consisting of networks and computing equipment, are enabling lower cost science. Specifically, earthquake research is headed in this direction. With a standard for network and computing interfaces using a grid, a researcher would not be required to develop and engineer NASA/DoD specific interfaces with the attendant increased cost. Use of the Internet Protocol (IP), the CCSDS packet specification, Reed-Solomon coding for satellite error correction, etc., can be adopted/standardized to provide these interfaces. Generally, most interfaces are developed at least to some degree end to end. This study would investigate the feasibility of using existing standards and protocols necessary to implement a SpaceOps Grid. New interface definitions, or adoption/modification of existing ones, are required for the various space operational services: voice (both space based and ground), video, telemetry, commanding and planning may play a role to some undefined level. Security will be a separate focus of the study, since security is such a large issue in using public networks. This SpaceOps Grid would be transparent to users. It would be analogous to the Ethernet protocol's ease of use, in that a researcher would plug in their experiment or instrument at one end and would be connected to the appropriate host or server without further intervention. Free flyers would be in this category as well. They would be launched and would transmit without any further intervention with the researcher or

  7. An Introduction to Competency-Based Employment and Training Programming for Youth under the Job Training Partnership Act.

    ERIC Educational Resources Information Center

    Druian, Greg; Spill, Rick

    This guide provides an introduction to competency-based employment and training under the Job Training Partnership Act (JTPA). The guide describes in general terms the steps service delivery areas should take to implement competency-based employment and training systems for youth. The content is based on the experiences of practitioners, and it is…

  8. Enabling Campus Grids with Open Science Grid Technology

    NASA Astrophysics Data System (ADS)

    Weitzel, Derek; Bockelman, Brian; Fraser, Dan; Pordes, Ruth; Swanson, David

    2011-12-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  9. Novel grid-based optical Braille conversion: from scanning to wording

    NASA Astrophysics Data System (ADS)

    Yoosefi Babadi, Majid; Jafari, Shahram

    2011-12-01

Grid-based optical Braille conversion (GOBCO) is explained in this article. The grid-fitting technique involves processing scanned images taken from old hard-copy Braille manuscripts, recognising the dot patterns and converting them into English ASCII text documents inside a computer. The resulting words are verified against the relevant dictionary to produce the final output. The algorithms employed in this article can be easily modified for other visual pattern recognition systems and text-extraction applications. This technique has several advantages, including simplicity of the algorithm, high speed of execution, the ability to help visually impaired and blind people work with fax machines and the like, and the ability to help sighted people with no prior knowledge of Braille understand hard-copy Braille manuscripts.
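The final grid-to-text step of such a pipeline can be illustrated with a small sketch. The cell layout follows the standard Braille dot numbering (dots 1-3 in the left column, 4-6 in the right); the function names and the reduced a-e lookup table are illustrative, not GOBCO's actual implementation:

```python
# Hypothetical sketch of the last step of a Braille conversion pipeline:
# once dot presence has been detected on the fitted grid, each 3x2 cell is
# a binary pattern that maps to a character.

# Partial lookup table (letters a-e only) keyed by the set of raised dots.
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode_cell(cell):
    """Decode a 3x2 binary matrix (rows of [left, right]) into a character."""
    dots = set()
    for row in range(3):
        if cell[row][0]:
            dots.add(row + 1)   # dots 1-3: left column, top to bottom
        if cell[row][1]:
            dots.add(row + 4)   # dots 4-6: right column, top to bottom
    return BRAILLE_TO_CHAR.get(frozenset(dots), "?")

# 'd' is dots 1, 4 and 5: top-left, top-right and middle-right raised.
print(decode_cell([[1, 1], [0, 1], [0, 0]]))  # -> d
```

Dictionary verification, as in the abstract, would then run over the decoded character stream.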

  10. Coupling ensemble weather predictions based on TIGGE database with Grid-Xinanjiang model for flood forecast

    NASA Astrophysics Data System (ADS)

    Bao, H.-J.; Zhao, L.-N.; He, Y.; Li, Z.-J.; Wetterhall, F.; Cloke, H. L.; Pappenberger, F.; Manful, D.

    2011-02-01

The incorporation of numerical weather predictions (NWP) into a flood forecasting system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient as it involves considerable non-predictable uncertainties and leads to a high number of false alarms. The availability of global ensemble numerical weather prediction systems through the THORPEX Interactive Grand Global Ensemble (TIGGE) offers a new opportunity for flood forecasting. The Grid-Xinanjiang distributed hydrological model, which is based on the Xinanjiang model theory and the topographical information of each grid cell extracted from the Digital Elevation Model (DEM), is coupled with ensemble weather predictions based on the TIGGE database (CMC, CMA, ECMWF, UKMO, NCEP) for flood forecasting. This paper presents a case study using the coupled flood forecasting model on the Xixian catchment (a drainage area of 8826 km2) located in Henan province, China. A probabilistic discharge is provided as the end product of the flood forecast. Results show that the combination of the Grid-Xinanjiang model and the TIGGE database provides a promising tool for early warning of flood events several days ahead.
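The probabilistic end product mentioned above is typically obtained by running the hydrological model once per ensemble member and summarizing the member discharges. A minimal sketch of that aggregation step (the discharge values and threshold are toy numbers, not TIGGE-driven model output):

```python
# Sketch: turn an ensemble of forecast discharges into an exceedance
# probability for a flood warning threshold.

def exceedance_probability(member_discharges, threshold):
    """Fraction of ensemble members whose forecast discharge exceeds the
    flood threshold."""
    hits = sum(1 for q in member_discharges if q > threshold)
    return hits / len(member_discharges)

# Toy discharges (m3/s) from eight hypothetical ensemble members:
forecasts = [820.0, 950.0, 1100.0, 760.0, 1300.0, 990.0, 1240.0, 870.0]
print(exceedance_probability(forecasts, threshold=1000.0))  # -> 0.375
```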

  11. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
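The cost argument above (full tensor-product grids suffer the curse of dimensionality, sparse grids mitigate it) can be made concrete by counting points. The sketch below counts the points of a regular sparse grid without boundary points; this is the standard construction, not the authors' dimension-adaptive code:

```python
from itertools import product

def full_grid_size(level, dim):
    """Points of a full tensor-product grid with 2^level - 1 points per axis."""
    return (2 ** level - 1) ** dim

def sparse_grid_size(level, dim):
    """Points of a regular sparse grid without boundary points: sum, over all
    level multi-indices l with |l|_1 <= level + dim - 1, of the hierarchical
    point count 2^(l_k - 1) contributed per direction."""
    total = 0
    for levels in product(range(1, level + 1), repeat=dim):
        if sum(levels) <= level + dim - 1:
            pts = 1
            for lk in levels:
                pts *= 2 ** (lk - 1)
            total += pts
    return total

# At level 5 in 10 dimensions the full grid is astronomically larger
# than the sparse grid:
print(full_grid_size(5, 10), sparse_grid_size(5, 10))
```

In one dimension the two counts coincide; the gap opens up as the dimension grows.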

  12. Grid digital elevation model based algorithms for determination of hillslope width functions through flow distance transforms

    NASA Astrophysics Data System (ADS)

    Liu, Jintao; Chen, Xi; Zhang, Xingnan; Hoagland, Kyle D.

    2012-04-01

Recently developed hillslope storage dynamics theory can represent the essential physical behavior of a natural system by accounting explicitly for the plan shape of a hillslope in an elegant and simple way. As a result, this theory is promising for improving catchment-scale hydrologic modeling. In this study, grid digital elevation model (DEM) based algorithms for determination of hillslope geometric characteristics (e.g., hillslope units and width functions in hillslope storage dynamics models) are presented. This study further develops a method for hillslope partitioning, established by Fan and Bras (1998), by applying it to a grid network. On the basis of hillslope unit derivation, a flow distance transforms method (TD∞) is proposed in order to decrease the systematic error of grid DEM-based flow distance calculation caused by approximating streamlines with discrete flow directions. Hillslope width transfer functions are then derived to convert the probability density functions of flow distance into hillslope width functions. These algorithms are applied and evaluated on five abstract hillslopes, and detailed tests and analyses are carried out by comparing the derived results with theoretical width functions. The results demonstrate that the TD∞ improves estimation of the flow distance and thus of the hillslope width function. As the proposed procedures are further applied in a natural catchment, we find that the natural hillslope width function can be well fitted by the Gaussian function. This finding is very important for applying the newly developed hillslope storage dynamics models in a real catchment.
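A grid-DEM flow distance computation of the kind discussed above can be sketched with plain D8 steepest descent; the paper's TD∞ transform refines exactly this kind of calculation to reduce the flow-direction approximation error, so the sketch below is only the baseline method, with illustrative names and a toy DEM:

```python
import math

# D8 neighbour offsets; diagonal steps are longer by a factor of sqrt(2).
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_distance(dem, outlet):
    """Flow distance (in cell units) from each cell to `outlet` along D8
    steepest descent. Cells that do not drain to the outlet are omitted."""
    nrows, ncols = len(dem), len(dem[0])

    def downstream(r, c):
        best, best_drop = None, 0.0
        for dr, dc in NEIGHBOURS:
            rr, cc = r + dr, c + dc
            if 0 <= rr < nrows and 0 <= cc < ncols:
                drop = (dem[r][c] - dem[rr][cc]) / math.hypot(dr, dc)
                if drop > best_drop:
                    best, best_drop = (rr, cc), drop
        return best  # None at a pit or on flat ground

    distances = {outlet: 0.0}
    for r in range(nrows):
        for c in range(ncols):
            path, steps, cell = [], [], (r, c)
            while cell is not None and cell not in distances:
                nxt = downstream(*cell)
                if nxt is not None:
                    path.append(cell)
                    steps.append(math.hypot(cell[0] - nxt[0], cell[1] - nxt[1]))
                cell = nxt
            if cell is None:
                continue  # ended in a pit, not at the outlet
            d = distances[cell]
            for p, s in zip(reversed(path), reversed(steps)):
                d += s
                distances[p] = d
    return distances

dem = [[2.0, 1.0],
       [1.0, 0.0]]
dist = d8_flow_distance(dem, outlet=(1, 1))
```

A histogram of these distances over a hillslope unit is the discrete analogue of the flow-distance density that the width transfer functions operate on.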

  13. First-principles calculation method for electron transport based on the grid Lippmann-Schwinger equation

    NASA Astrophysics Data System (ADS)

    Egami, Yoshiyuki; Iwase, Shigeru; Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji

    2015-09-01

    We develop a first-principles electron-transport simulator based on the Lippmann-Schwinger (LS) equation within the framework of the real-space finite-difference scheme. In our fully real-space-based LS (grid LS) method, the ratio expression technique for the scattering wave functions and the Green's function elements of the reference system is employed to avoid numerical collapse. Furthermore, we present analytical expressions and/or prominent calculation procedures for the retarded Green's function, which are utilized in the grid LS approach. In order to demonstrate the performance of the grid LS method, we simulate the electron-transport properties of semiconductor-oxide interfaces sandwiched between semi-infinite jellium electrodes. The results confirm that the leakage current through the (001)Si-SiO2 model becomes much larger when the dangling-bond state is induced by a defect in the oxygen layer, while that through the (001)Ge-GeO2 model is insensitive to the dangling-bond state.

  14. New insights in quantum chemical topology studies using numerical grid-based analyses.

    PubMed

    Kozlowski, David; Pilmé, Julien

    2011-11-30

    New insights in Quantum Chemical Topology of one-electron density functions are proposed here by using a recent grid-based algorithm (Tang et al., J Phys Condens Matter 2009, 21, 084204), initially designed for the decomposition of the electron density. Beyond the charge analysis, we show that this algorithm is suitable for different scalar functions exhibiting a more complex topology, that is, the Laplacian of the electron density, the electron localization function (ELF), and the molecular electrostatic potential (MEP). This algorithm makes use of a robust methodology enabling the numerical assignment of the data points of three-dimensional grids to basin volumes, and it has the advantage of requiring only the values of the scalar function, without details on the wave function used to build the grid. Our implementation is briefly outlined (program named TopChem), its capabilities are examined, and technical aspects in terms of CPU requirements and accuracy of the results are discussed. Illustrative examples for individual molecules and crystalline solids obtained with Gaussian and plane-wave-based density functional theory calculations are presented. Special attention is given to the MEP because its topological analysis is complex and scarce in the literature.
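The grid-based basin assignment described above can be illustrated with a steepest-ascent sketch: every grid point is assigned to the basin of the local maximum it climbs to, using only the scalar values on the grid. This is a simplified stand-in for the cited algorithm (names are illustrative, and the published near-grid correction for off-lattice ascent directions is omitted):

```python
def assign_basins(grid):
    """Label every grid point with the local maximum reached by on-grid
    steepest ascent over a 2D scalar field."""
    nrows, ncols = len(grid), len(grid[0])
    labels = {}

    def uphill(r, c):
        best, best_val = None, grid[r][c]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrows and 0 <= cc < ncols and grid[rr][cc] > best_val:
                    best, best_val = (rr, cc), grid[rr][cc]
        return best  # None when (r, c) is a local maximum

    for r in range(nrows):
        for c in range(ncols):
            path, cell = [], (r, c)
            while cell not in labels:
                nxt = uphill(*cell)
                if nxt is None:
                    break  # reached a local maximum
                path.append(cell)
                cell = nxt
            label = labels.get(cell, cell)
            labels[cell] = label
            for p in path:
                labels[p] = label
    return labels

# A 1D ridge profile with maxima at columns 1 and 5: two basins emerge.
basins = assign_basins([[1, 3, 2, 1, 2, 4, 1]])
```

Summing charge-density values over each basin's points would give the basin populations used in the charge analysis.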

  15. Creative Engineering Based Education with Autonomous Robots Considering Job Search Support

    NASA Astrophysics Data System (ADS)

    Takezawa, Satoshi; Nagamatsu, Masao; Takashima, Akihiko; Nakamura, Kaeko; Ohtake, Hideo; Yoshida, Kanou

    The Robotics Course in our Mechanical Systems Engineering Department offers “Robotics Exercise Lessons” as one of its Problem-Solution Based Specialized Subjects. This is intended to motivate students' learning, to help them acquire fundamental knowledge and skills in mechanical engineering, and to improve their understanding of basic robotics theory. Our current curriculum was established to accomplish these objectives based on two pieces of research conducted in 2005: an evaluation questionnaire on the education of our Mechanical Systems Engineering Department given to graduates, and a survey of the kind of human resources companies are seeking and their expectations for our department. This paper reports the academic results and reflections on job search support in recent years as inherited and developed from the previous curriculum.

  16. RGLite, an interface between ROOT and gLite—proof on the grid

    NASA Astrophysics Data System (ADS)

    Malzacher, P.; Manafov, A.; Schwarz, K.

    2008-07-01

    Using the gLitePROOF package it is possible to perform PROOF-based distributed data analysis on the gLite Grid. The LHC experiments have managed to run globally distributed Monte Carlo productions on the Grid; now the development of tools for data analysis is in the foreground. To grant access, interfaces must be provided. The ROOT/PROOF framework is used as a starting point. Using abstract ROOT classes (TGrid, ...), interfaces can be implemented via which Grid access from ROOT can be accomplished. A concrete implementation exists for the ALICE Grid environment AliEn. Within the D-Grid project, an interface to gLite, the common Grid middleware of all LHC experiments, has been created. It is therefore possible to query Grid file catalogues from ROOT for the location of the data to be analysed, to submit Grid jobs into a gLite-based Grid, to query the status of those jobs, and to retrieve their results.

  17. How do people differentiate between jobs: and how do they define a good job?

    PubMed

    Jones, Wendy; Haslam, Roger; Haslam, Cheryl

    2012-01-01

    Employed individuals from a range of jobs (n=18) were interviewed using a repertory grid technique, to explore the criteria they used to distinguish between different jobs. The concepts of 'a good job' and 'a job good for health' were also discussed. Interactions with others and the job itself were the most commonly used criteria and were also the most common features of a 'good job'. Pay and security were mentioned frequently but were less important when comparing jobs and when defining a 'good job'. Physical activity was rarely associated by interviewees with a 'good job' but was frequently associated with a 'job good for health'. A comprehensive definition of a 'good job' needs to take all these factors into account. PMID:22316822

  18. SoilGrids1km — Global Soil Information Based on Automated Mapping

    PubMed Central

    Hengl, Tomislav; de Jesus, Jorge Mendes; MacMillan, Robert A.; Batjes, Niels H.; Heuvelink, Gerard B. M.; Ribeiro, Eloi; Samuel-Rosa, Alessandro; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Gonzalez, Maria Ruiperez

    2014-01-01

    Background: Soils are widely recognized as a non-renewable natural resource and as biophysical carbon sinks. As such, there is a growing requirement for global soil information. Although several global soil information systems already exist, these tend to suffer from inconsistencies and limited spatial detail. Methodology/Principal Findings: We present SoilGrids1km — a global 3D soil information system at 1 km resolution — containing spatial predictions for a selection of soil properties (at six standard depths): soil organic carbon (g kg−1), soil pH, sand, silt and clay fractions (%), bulk density (kg m−3), cation-exchange capacity (cmol+/kg), coarse fragments (%), soil organic carbon stock (t ha−1), depth to bedrock (cm), World Reference Base soil groups, and USDA Soil Taxonomy suborders. Our predictions are based on global spatial prediction models which we fitted, per soil variable, using a compilation of major international soil profile databases (ca. 110,000 soil profiles) and a selection of ca. 75 global environmental covariates representing soil forming factors. Results of regression modeling indicate that the most useful covariates for modeling soils at the global scale are climatic and biomass indices (based on MODIS images), lithology, and taxonomic mapping units derived from conventional soil survey (Harmonized World Soil Database). Prediction accuracies assessed using 5-fold cross-validation were between 23% and 51%. Conclusions/Significance: SoilGrids1km provides an initial set of examples of soil spatial data for input into global models at a resolution and consistency not previously available. Some of the main limitations of the current version of SoilGrids1km are: (1) weak relationships between soil properties/classes and explanatory variables due to scale mismatches, (2) difficulty obtaining covariates that capture soil forming factors, and (3) low sampling density and spatial clustering of soil profile locations. However, as the SoilGrids

  19. Construction of the Fock Matrix on a Grid-Based Molecular Orbital Basis Using GPGPUs.

    PubMed

    Losilla, Sergio A; Watson, Mark A; Aspuru-Guzik, Alán; Sundholm, Dage

    2015-05-12

    We present a GPGPU implementation of the construction of the Fock matrix in the molecular orbital basis using the fully numerical, grid-based bubbles representation. For a test set of molecules containing up to 90 electrons, the total Hartree-Fock energies obtained from reference GTO-based calculations are reproduced within 10^-4 Eh to 10^-8 Eh for most of the molecules studied. Despite the very large number of arithmetic operations involved, the high performance obtained made the calculations possible on a single Nvidia Tesla K40 GPGPU card.

  20. Observation-based gridded runoff estimates for Europe (E-RUN version 1.1)

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2016-07-01

    River runoff is an essential climate variable as it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, there are to date no pan-European observation-based runoff estimates available. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid in Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are subsequently tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 12) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950-December 2015) on a 0.5° × 0.5° grid. The performance of the newly derived runoff estimates is assessed by cross-validation. The paper closes with example applications illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring. The newly derived data are made publicly available at doi:10.1594/PANGAEA.861371.
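The gridded-estimation step described above (relate gauged runoff to gridded atmospheric predictors, then predict at ungauged cells) can be sketched with ordinary least squares standing in for the paper's machine-learning model; all predictor names and numbers below are toy assumptions chosen so the fit is exactly recoverable:

```python
def fit_ols(X, y):
    """Least-squares coefficients via the normal equations (X^T X) b = X^T y,
    solved with Gaussian elimination and partial pivoting."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * p
    for i in reversed(range(p)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, p))) / A[i][i]
    return coef

# Toy gauged cells: predictors are [intercept, precipitation, temperature];
# runoff here is exactly 2 + 0.5*P - 1*T, so the fit should recover that.
X = [[1, 50, 5], [1, 80, 6], [1, 120, 4], [1, 30, 8], [1, 100, 7]]
y = [22.0, 36.0, 58.0, 9.0, 45.0]
coef = fit_ols(X, y)

# Estimate runoff for an ungauged grid cell with P=90, T=5:
prediction = sum(c * v for c, v in zip(coef, [1, 90, 5]))
```

Repeating the prediction for every cell of the atmospheric grid yields the gridded runoff field; cross-validation then measures how well held-out gauges are reproduced.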

  1. Research and implementation of MODIS L1B data process based on grid computing

    NASA Astrophysics Data System (ADS)

    Tao, Liang; Chen, Deqing; Meng, Lingkui; Li, Jiyuan; Wang, Zhanfeng

    2008-12-01

    MODIS (Moderate Resolution Imaging Spectroradiometer) is a major remote sensing sensor carried on the EOS series of satellites of the United States. MODIS data are a new generation of satellite remote sensing information source with broad application prospects in ecological research, environmental monitoring, global climate change and agricultural resource surveys, among other studies. MODIS data are characterized by large volume and complex processing. In this paper, Grid Computing technology is applied to the processing of MODIS L1B data in order to improve efficiency. First, the paper gives a brief introduction to MODIS L1B data and its application status, and to grid computing. Then the logical structure of MODIS L1B Data Process Based on Grid Computing (MLDPGRID) is given, and the functions of its three tiers are explained. In the implementation section, the receiving of MODIS L1B data, the Grid platform, the software environment and network architecture, the processing of MODIS L1B data and the portal of MLDPGRID are all discussed. Finally, the paper gives an evaluation and conclusion of MLDPGRID, and discusses the optimization strategy and future work.

  2. Structuring Job Related Information on the Intranet: An Experimental Comparison of Task vs. an Organization-Based Approach

    ERIC Educational Resources Information Center

    Cozijn, Reinier; Maes, Alfons; Schackman, Didie; Ummelen, Nicole

    2007-01-01

    In this article, we present a usability experiment in which participants were asked to make intensive use of information on an intranet in order to execute job-related tasks. Participants had to work with one of two versions of an intranet: one with an organization-based hyperlink structure, and one with a task-based hyperlink structure.…

  3. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND...-based learning opportunities? Yes, a center operator may authorize a student to participate in...

  4. 20 CFR 670.520 - Are students permitted to hold jobs other than work-based learning opportunities?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Are students permitted to hold jobs other than work-based learning opportunities? 670.520 Section 670.520 Employees' Benefits EMPLOYMENT AND...-based learning opportunities? Yes, a center operator may authorize a student to participate in...

  5. Grid-based methods for biochemical ab initio quantum chemical applications

    SciTech Connect

    Colvin, M.E.; Nelson, J.S.; Mori, E.

    1997-01-01

    Ab initio quantum chemical methods are seeing increased application in a large variety of real-world problems, including biomedical applications ranging from drug design to the understanding of environmental mutagens. The vast majority of these quantum chemical methods are "spectral", that is, they describe the charge distribution around the nuclear framework in terms of a fixed analytic basis set. Despite the additional complexity they bring, methods involving grid representations of the electron or solvent charge can provide more efficient schemes for evaluating spectral operators, inexpensive methods for calculating electron correlation, and methods for treating the electrostatic energy of solvation in polar solvents. The advantage of mixed or "pseudospectral" methods is that they allow individual non-linear operators in the partial differential equations, such as Coulomb operators, to be calculated in the most appropriate regime. Moreover, these molecular grids can be used to integrate empirical functionals of the electron density. These so-called density functional theory (DFT) methods are an extremely promising alternative to conventional post-Hartree-Fock quantum chemical methods. The introduction of a grid at the molecular solvent-accessible surface allows a very sophisticated treatment of a polarizable continuum solvent model (PCM). Where most PCM approaches use a truncated expansion of the solute's electric multipole expansion, e.g. net charge (Born model) or dipole moment (Onsager model), such a grid-based boundary-element method (BEM) yields a nearly exact treatment of the solute's electric field. This report describes the use of both DFT and BEM methods in several biomedical chemical applications.

  6. High-Efficiency Food Production in a Renewable Energy Based Micro-Grid Power System

    NASA Technical Reports Server (NTRS)

    Bubenheim, David; Meiners, Dennis

    2016-01-01

    Controlled Environment Agriculture (CEA) systems can be used to produce high-quality, desirable food year round, and the fresh produce can positively contribute to the health and well-being of residents in communities with difficult supply logistics. While CEA has many positive outcomes for a remote community, the associated high electric demands have prohibited widespread implementation in what is typically an already fully subscribed power generation and distribution system. Recent advances in CEA technologies as well as renewable power generation, storage, and micro-grid management are increasing system efficiency and expanding the possibilities for enhancing community-supporting infrastructure without increasing demands for outside-supplied fuels. We present examples of how new lighting, nutrient delivery, and energy management and control systems can enable significant increases in food production efficiency while maintaining high yields in CEA. Examples from Alaskan communities, where initial incorporation of renewable power generation, energy storage and grid management techniques has already reduced diesel fuel consumption for electric generation by more than 40% and expanded grid capacity, will be presented. We discuss how renewable power generation, efficient grid management to extract maximum community service per kW, and novel energy storage approaches can expand food production, water supply, waste treatment, sanitation and other community support services without the traditional increases in consumable fuels supplied from outside the community. These capabilities offer communities a range of choices for enhancing local infrastructure. The examples represent a synergy of technology advancement efforts to develop sustainable community support systems for future space-based human habitats and practical implementation of infrastructure components to increase efficiency and enhance health and well-being in remote communities today and tomorrow.

  7. Incentive-compatible demand-side management for smart grids based on review strategies

    NASA Astrophysics Data System (ADS)

    Xu, Jie; van der Schaar, Mihaela

    2015-12-01

    Demand-side load management is able to significantly improve the energy efficiency of smart grids. Since the electricity production cost depends on the aggregate energy usage of multiple consumers, an important incentive problem emerges: self-interested consumers want to increase their own utilities by consuming more than the socially optimal amount of energy during peak hours since the increased cost is shared among the entire set of consumers. To incentivize self-interested consumers to take the socially optimal scheduling actions, we design a new class of protocols based on review strategies. These strategies work as follows: first, a review stage takes place in which a statistical test is performed based on the daily prices of the previous billing cycle to determine whether or not the other consumers schedule their electricity loads in a socially optimal way. If the test fails, the consumers trigger a punishment phase in which, for a certain time, they adjust their energy scheduling in such a way that everybody in the consumer set is punished due to an increased price. Using a carefully designed protocol based on such review strategies, consumers then have incentives to take the socially optimal load scheduling to avoid entering this punishment phase. We rigorously characterize the impact of deploying protocols based on review strategies on the system's as well as the users' performance and determine the optimal design (optimal billing cycle, punishment length, etc.) for various smart grid deployment scenarios. Even though this paper considers a simplified smart grid model, our analysis provides important and useful insights for designing incentive-compatible demand-side management schemes based on aggregate energy usage information in a variety of practical scenarios.
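The review-and-punish protocol described above can be sketched as follows; the test statistic, thresholds and phase lengths are illustrative placeholders, not the paper's parameterization:

```python
# Sketch of a review strategy: each billing cycle ends with a review stage,
# where a statistical test on the cycle's daily prices decides whether some
# consumer over-consumed. A failed test triggers a punishment phase of fixed
# length during which everyone schedules so that the price stays high.

def review_test(daily_prices, fair_price, tolerance):
    """Pass the review when the cycle's mean price stays within `tolerance`
    of the socially optimal price level."""
    mean_price = sum(daily_prices) / len(daily_prices)
    return mean_price <= fair_price + tolerance

def run_protocol(price_cycles, fair_price, tolerance, punishment_cycles):
    """Return, per billing cycle, which phase the consumers are in."""
    phases, punishing = [], 0
    for prices in price_cycles:
        if punishing > 0:
            phases.append("punish")
            punishing -= 1
            continue
        phases.append("cooperate")
        if not review_test(prices, fair_price, tolerance):
            punishing = punishment_cycles
    return phases

cycles = [
    [1.0, 1.1, 0.9],   # near the fair price: review passes
    [1.6, 1.7, 1.8],   # someone over-consumed: review fails
    [1.0, 1.0, 1.0],   # punishment phase regardless of prices
    [1.0, 1.0, 1.0],
    [1.0, 1.0, 1.0],
]
print(run_protocol(cycles, fair_price=1.0, tolerance=0.2, punishment_cycles=2))
# -> ['cooperate', 'cooperate', 'punish', 'punish', 'cooperate']
```

The threat of entering the punishment phase is what gives self-interested consumers an incentive to schedule optimally during cooperative cycles.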

  8. Production of BaBar Skimmed Analysis Datasets Using the Grid

    SciTech Connect

    Brew, C.A.J.; Wilson, F.F.; Castelli, G.; Adye, T.; Roethel, W.; Luppi, E.; Andreotti, D.; Smith, D.; Khan, A.; Barrett, M.; Barlow, R.; Bailey, D.; /Manchester U.

    2011-11-10

    The BABAR Collaboration, based at Stanford Linear Accelerator Center (SLAC), Stanford, US, has been performing physics reconstruction, simulation studies and data analysis for 8 years using a number of compute farms around the world. Recent developments in Grid technologies could provide a way to manage the distributed resources in a single coherent structure. We describe enhancements to the BABAR experiment's distributed skimmed dataset production system to make use of European Grid resources and present the results with regard to BABAR's latest cycle of skimmed dataset production. We compare the benefits of a local and Grid-based systems, the ease with which the system is managed and the challenges of integrating the Grid with legacy software. We compare job success rates and manageability issues between Grid and non-Grid production.

  9. New gridded daily climatology of Finland: Permutation-based uncertainty estimates and temporal trends in climate

    NASA Astrophysics Data System (ADS)

    Aalto, Juha; Pirinen, Pentti; Jylhä, Kirsti

    2016-04-01

    Long-term time series of key climate variables with a relevant spatiotemporal resolution are essential for environmental science. Moreover, such spatially continuous data, based on weather observations, are commonly used in, e.g., downscaling and bias correcting of climate model simulations. Here we conducted a comprehensive spatial interpolation scheme where seven climate variables (daily mean, maximum, and minimum surface air temperatures, daily precipitation sum, relative humidity, sea level air pressure, and snow depth) were interpolated over Finland at the spatial resolution of 10 × 10 km2. More precisely, (1) we produced daily gridded time series (FMI_ClimGrid) of the variables covering the period of 1961-2010, with a special focus on evaluation and permutation-based uncertainty estimates, and (2) we investigated temporal trends in the climate variables based on the gridded data. National climate station observations were supplemented by records from the surrounding countries, and kriging interpolation was applied to account for topography and water bodies. For daily precipitation sum and snow depth, a two-stage interpolation with a binary classifier was deployed for an accurate delineation of areas with no precipitation or snow. A robust cross-validation indicated a good agreement between the observed and interpolated values especially for the temperature variables and air pressure, although the effect of seasons was evident. Permutation-based analysis suggested increased uncertainty toward northern areas, thus identifying regions with suboptimal station density. Finally, several variables had a statistically significant trend indicating a clear but locally varying signal of climate change during the last five decades.
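The cross-validation used above to evaluate the interpolation can be sketched in a few lines, here with inverse-distance weighting standing in for the kriging interpolator (station coordinates and values are toy assumptions, not FMI data):

```python
import math

def idw(stations, target, power=2):
    """Inverse-distance-weighted estimate at `target` from (x, y, value) stations."""
    num = den = 0.0
    for x, y, v in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v  # exact hit on a station
        w = d ** -power
        num += w * v
        den += w
    return num / den

def loo_rmse(stations):
    """Leave-one-out cross-validation: predict each station from the others
    and report the root-mean-square error."""
    errors = []
    for i, (x, y, v) in enumerate(stations):
        others = stations[:i] + stations[i + 1:]
        errors.append(idw(others, (x, y)) - v)
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Toy station network: four corner stations and one in the middle.
obs = [(0, 0, 5.0), (1, 0, 6.0), (0, 1, 6.0), (1, 1, 7.0), (0.5, 0.5, 6.0)]
rmse = loo_rmse(obs)
```

The same held-out comparison, run per variable and season, is what underlies the agreement statistics reported in the abstract; the permutation-based uncertainty analysis repeats it over resampled station subsets.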

  10. QoS Differential Scheduling in Cognitive-Radio-Based Smart Grid Networks: An Adaptive Dynamic Programming Approach.

    PubMed

    Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun

    2016-02-01

    As the next-generation power grid, smart grid will be integrated with a variety of novel communication technologies to support the explosive data traffic and the diverse requirements of quality of service (QoS). Cognitive radio (CR), which has the favorable ability to improve the spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in the CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee the differential QoS, the SGUs are assigned to have different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channels allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. By the online network training, the HDP can learn from the activities of primary users and SGUs, and adjust the scheduling decision to achieve the purpose of transmission delay minimization. Simulation results illustrate that the proposed priority policy ensures the low transmission delay of high priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing the differential QoS in smart grid.
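The QoS-differential priority policy described above can be illustrated with a greedy priority-queue sketch; the paper's actual scheduler solves a semi-Markov decision process with heuristic dynamic programming, and the request names and priority numbers below are invented for illustration:

```python
import heapq

def allocate_channels(requests, n_channels):
    """Serve transmission requests in priority order (lower number = higher
    priority, e.g. emergency SGU data first); ties broken by arrival order."""
    queue = [(prio, order, name) for order, (name, prio) in enumerate(requests)]
    heapq.heapify(queue)
    served = [heapq.heappop(queue)[2] for _ in range(min(n_channels, len(queue)))]
    deferred = [name for _, _, name in sorted(queue)]
    return served, deferred

# Four pending requests but only two free channels this slot:
requests = [("meter-readout", 3), ("outage-alarm", 1), ("billing", 4), ("control-cmd", 2)]
served, deferred = allocate_channels(requests, n_channels=2)
print(served)  # -> ['outage-alarm', 'control-cmd']
```

In the paper the priorities themselves are adapted online to the SGUs' roles and situations; here they are fixed inputs.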

  11. General rectangular grid based time-space domain high-order finite-difference methods for modeling scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Hanming; Zhou, Hui; Sheng, Shanbo

    2016-10-01

    We develop general rectangular-grid-discretization-based time-space domain high-order staggered-grid finite-difference (SGFD) methods for modeling three-dimensional (3D) scalar wave propagation. The two proposed high-order SGFD schemes can achieve arbitrary even-order accuracy in space, and fourth- and sixth-order accuracy in time, respectively. We derive the analytical expression of the high-order FD coefficients based on a general rectangular grid discretization with different grid spacings along the axial directions. The general rectangular grid discretization makes our time-space domain SGFD schemes more flexible than the existing ones developed on a cubic grid with the same grid spacing in all axial directions. Theoretical analysis indicates that our time-space domain SGFD schemes have better stability and higher accuracy than the traditional temporal second-order SGFD scheme. Our time-space domain SGFD schemes allow larger time steps than the traditional SGFD scheme for attaining similar accuracy, and thus are more efficient. Numerical examples further confirm the superior accuracy, stability, and efficiency of our time-space domain SGFD schemes.
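    How spatial FD coefficients of arbitrary even order are derived can be sketched by Taylor-series matching on a 1-D stencil; the paper's time-space domain derivation additionally matches temporal dispersion terms on 3-D rectangular grids, which this minimal version omits.

    ```python
    import math
    from fractions import Fraction

    def fd_weights(offsets, deriv):
        """Exact FD weights on integer stencil `offsets` for the `deriv`-th
        derivative, from Taylor-series matching: solve the Vandermonde system
        sum_j w_j * o_j^k = deriv! * delta(k, deriv) with exact rationals."""
        n = len(offsets)
        a = [[Fraction(o) ** k for o in offsets] for k in range(n)]
        b = [Fraction(0)] * n
        b[deriv] = Fraction(math.factorial(deriv))
        # Gauss-Jordan elimination with partial pivoting
        for col in range(n):
            piv = next(r for r in range(col, n) if a[r][col] != 0)
            a[col], a[piv] = a[piv], a[col]
            b[col], b[piv] = b[piv], b[col]
            inv = a[col][col]
            a[col] = [x / inv for x in a[col]]
            b[col] /= inv
            for r in range(n):
                if r != col and a[r][col] != 0:
                    f = a[r][col]
                    a[r] = [x - f * y for x, y in zip(a[r], a[col])]
                    b[r] -= f * b[col]
        return b

    # classic 4th-order central weights for the second derivative
    print(fd_weights([-2, -1, 0, 1, 2], 2))
    ```

    Widening the stencil raises the spatial order by two per added point pair, which is how "arbitrary even-order accuracy in space" is obtained.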

  12. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU-accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on the GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained on GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on an E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting shared-memory-based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes achieve a 2x speedup on the GT9800 and 18x on the Tesla C2050, which demonstrates that running the cell-based AMR method in parallel on the GPU is feasible and efficient. Our results also indicate that the new developments in GPU architecture benefit fluid dynamics computing significantly.

  13. The Cluster Analysis of Jobs Based on Data from the Position Analysis Questionnaire (PAQ). Report No. 7.

    ERIC Educational Resources Information Center

    DeNisi, Angelo S.; McCormick, Ernest J.

    The Position Analysis Questionnaire (PAQ) is a structured job analysis procedure that provides for the analysis of jobs in terms of each of 187 job elements, these job elements being grouped into six divisions: information input, mental processes, work output, relationships with other persons, job context, and other job characteristics. Two…

  14. Web-based interactive visualization in a Grid-enabled neuroimaging application using HTML5.

    PubMed

    Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar

    2012-01-01

    Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To allow such interaction in the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web browser plugins. The possibilities of web-based visualization are presented along the FreeSurfer pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging. PMID:22942008

  15. Cygrid: Cython-powered convolution-based gridding module for Python

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

    The Python module Cygrid grids (resamples) data to any collection of spherical target coordinates, although its typical application involves FITS maps or data cubes. The module supports the FITS world coordinate system (WCS) standard; its underlying algorithm is based on the convolution of the original samples with a 2D Gaussian kernel. A lookup table scheme allows parallelization of the code and is combined with the HEALPix tessellation of the sphere for fast neighbor searches. Cygrid's runtime scales between O(n) and O(n log n), with n being the number of input samples.
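    The core idea, each output cell as a Gaussian-kernel-weighted average of nearby samples, can be sketched in plain Python. Cygrid itself works in spherical coordinates with WCS support and HEALPix-accelerated neighbor lookup; this planar brute-force version only illustrates the convolution step, and the sample values are invented.

    ```python
    import math

    def grid_samples(samples, grid_x, grid_y, sigma):
        """Resample scattered (x, y, value) samples onto a regular grid by
        convolving with a 2-D Gaussian kernel: each cell is the kernel-weighted
        average of all samples (brute force; Cygrid prunes neighbors instead)."""
        out = []
        for gy in grid_y:
            row = []
            for gx in grid_x:
                num = den = 0.0
                for x, y, v in samples:
                    w = math.exp(-((x - gx) ** 2 + (y - gy) ** 2)
                                 / (2 * sigma ** 2))
                    num += w * v
                    den += w
                row.append(num / den if den > 0 else float("nan"))
            out.append(row)
        return out

    # a constant field must grid to a constant map, whatever the sampling
    samples = [(0.1, 0.2, 3.0), (0.9, 0.8, 3.0), (0.5, 0.5, 3.0)]
    m = grid_samples(samples, [0.0, 0.5, 1.0], [0.0, 0.5, 1.0], sigma=0.3)
    print(m[1][1])
    ```

    The O(n)-to-O(n log n) scaling quoted in the record comes from replacing the inner loop over all samples with a lookup of only the samples near each cell.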

  16. Grid Collector: Facilitating Efficient Selective Access from DataGrids

    SciTech Connect

    Wu, Kesheng; Gu, Junmin; Lauret, Jerome; Poskanzer, Arthur M.; Shoshani, Arie; Sim, Alexander; Zhang, Wei-Ming

    2005-05-17

    The Grid Collector is a system that facilitates the effective analysis and spontaneous exploration of scientific data. It combines an efficient indexing technology with a Grid file management technology to speed up common analysis jobs on high-energy physics data and to enable some previously impractical analysis jobs. To analyze a set of high-energy collision events, one typically specifies the files containing the events of interest, reads all the events in the files, and filters out unwanted ones. Since most analysis jobs filter out a significant number of events, a considerable amount of time is wasted reading the unwanted events. The Grid Collector removes this inefficiency by allowing users to specify more precisely which events are of interest and to read only the selected events. This speeds up most analysis jobs. In existing analysis frameworks, the responsibility of bringing files from tertiary storage or remote sites to local disks falls on the users. This forces most analysis jobs to be performed at centralized computer facilities where commonly used files are kept on large shared file systems. The Grid Collector automates file management tasks and eliminates labor-intensive manual file transfers. This makes it much easier to perform analyses that require data files on tertiary storage and remote sites. It also makes more computer resources available for analysis jobs since they are no longer bound to the centralized facilities.
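    The gain from indexing can be sketched with a toy in-memory index; the Grid Collector actually uses compressed bitmap indexes over event attributes and couples them to Grid file management, and the attributes and values below are invented.

    ```python
    # hypothetical event catalog; in reality events live in files on tape/Grid
    events = [
        {"id": 0, "n_tracks": 12,  "trigger": "minbias"},
        {"id": 1, "n_tracks": 450, "trigger": "central"},
        {"id": 2, "n_tracks": 30,  "trigger": "minbias"},
        {"id": 3, "n_tracks": 512, "trigger": "central"},
    ]

    def build_index(events, key):
        """Map each attribute value to the ids of events carrying it, so a
        query touches only matching events instead of reading every file."""
        idx = {}
        for ev in events:
            idx.setdefault(ev[key], []).append(ev["id"])
        return idx

    idx = build_index(events, "trigger")
    selected = idx.get("central", [])
    print(selected)   # only these event ids need to be fetched and read
    ```

    Filtering at the index level is what turns "read everything, discard most of it" into "read only what the analysis asked for."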

  17. CDF GlideinWMS usage in grid computing of high energy physics

    SciTech Connect

    Zvada, Marian; Benjamin, Doug; Sfiligoi, Igor; /Fermilab

    2010-01-01

    Many members of large science collaborations already have specialized grids available to advance their research, driven by the need for more computing resources for data analysis. This has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment increasingly relies on glidein-based computing pools for data reconstruction, Monte Carlo production, and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor is designed as a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting Grid computing by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for data reconstruction on the FNAL campus Grid and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and setup, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10000 running jobs at a time.

  18. CDF GlideinWMS usage in Grid computing of high energy physics

    NASA Astrophysics Data System (ADS)

    Zvada, Marian; Benjamin, Doug; Sfiligoi, Igor

    2010-04-01

    Many members of large science collaborations already have specialized grids available to advance their research, driven by the need for more computing resources for data analysis. This has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment increasingly relies on glidein-based computing pools for data reconstruction, Monte Carlo production, and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor is designed as a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting Grid computing by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for data reconstruction on the FNAL campus Grid and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and setup, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10000 running jobs at a time.

  19. Unstructured hexahedral mesh generation of complex vascular trees using a multi-block grid-based approach.

    PubMed

    Bols, Joris; Taelman, L; De Santis, G; Degroote, J; Verhegghe, B; Segers, P; Vierendeels, J

    2016-01-01

    The trend towards realistic numerical models of (pathologic) patient-specific vascular structures brings along larger computational domains and more complex geometries, increasing both the computation time and the operator time. Hexahedral grids effectively lower the computational run time and the required computational infrastructure, but at high cost in terms of operator time and minimal cell quality, especially when the computational analyses are targeting complex geometries such as aneurysm necks, severe stenoses and bifurcations. Moreover, such grids generally do not allow local refinements. As an attempt to overcome these limitations, a novel approach to hexahedral meshing is proposed in this paper, which combines the automated generation of multi-block structures with a grid-based method. The robustness of the novel approach is tested on common complex geometries, such as tree-like structures (including trifurcations), stenoses, and aneurysms. Additionally, the performance of the generated grid is assessed using two numerical examples. In the first example, a grid sensitivity analysis is performed for blood flow simulated in an abdominal mouse aorta and compared to tetrahedral grids with a prismatic boundary layer. In the second example, the fluid-structure interaction in a model of an aorta with aortic coarctation is simulated and the effect of local grid refinement is analyzed. PMID:26208183

  20. Unstructured hexahedral mesh generation of complex vascular trees using a multi-block grid-based approach.

    PubMed

    Bols, Joris; Taelman, L; De Santis, G; Degroote, J; Verhegghe, B; Segers, P; Vierendeels, J

    2016-01-01

    The trend towards realistic numerical models of (pathologic) patient-specific vascular structures brings along larger computational domains and more complex geometries, increasing both the computation time and the operator time. Hexahedral grids effectively lower the computational run time and the required computational infrastructure, but at high cost in terms of operator time and minimal cell quality, especially when the computational analyses are targeting complex geometries such as aneurysm necks, severe stenoses and bifurcations. Moreover, such grids generally do not allow local refinements. As an attempt to overcome these limitations, a novel approach to hexahedral meshing is proposed in this paper, which combines the automated generation of multi-block structures with a grid-based method. The robustness of the novel approach is tested on common complex geometries, such as tree-like structures (including trifurcations), stenoses, and aneurysms. Additionally, the performance of the generated grid is assessed using two numerical examples. In the first example, a grid sensitivity analysis is performed for blood flow simulated in an abdominal mouse aorta and compared to tetrahedral grids with a prismatic boundary layer. In the second example, the fluid-structure interaction in a model of an aorta with aortic coarctation is simulated and the effect of local grid refinement is analyzed.

  1. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the efficiently received traffic data in a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform the fixed-parameter (sensing time and transmission time) algorithms, and the power cost is reduced efficiently.
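    The sensing/transmission trade-off behind the joint optimisation can be illustrated with a toy objective: longer sensing improves detection but shortens the time left to transmit, and the best sensing time can be found by a simple grid search. The functional form and all constants below are illustrative, not the article's model.

    ```python
    import math

    def effective_rate(tau, frame=0.1, k=200.0, rate=2.0):
        """Toy objective for a frame of length `frame` seconds: detection
        quality (1 - e^(-k*tau)) grows with sensing time tau, while the
        transmission fraction (frame - tau)/frame shrinks."""
        return (frame - tau) / frame * (1.0 - math.exp(-k * tau)) * rate

    # one-dimensional grid search over the sensing time within a 100 ms frame
    taus = [i * 0.001 for i in range(1, 100)]
    best = max(taus, key=effective_rate)
    print(best, effective_rate(best))
    ```

    The interior maximum is the point of the joint optimisation: both extremes (almost no sensing, almost no transmission) waste the frame, which is why fixed-parameter schemes lose to the optimised ones.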

  2. The Construction of Job Families Based on the Component and Overall Dimensions of the PAQ.

    ERIC Educational Resources Information Center

    Taylor, L. R.

    1978-01-01

    Seventy-six insurance company jobs were analyzed by 203 raters in an effort to assess the potential usefulness of the Position Analysis Questionnaire (PAQ) as a job analysis device to be employed in a more extensive, company-wide research program. (Editor/RK)

  3. Predicting Teacher Job Satisfaction Based on Principals' Instructional Supervision Behaviours: A Study of Turkish Teachers

    ERIC Educational Resources Information Center

    Ilgan, Abdurrahman; Parylo, Oksana; Sungu, Hilmi

    2015-01-01

    This quantitative research examined instructional supervision behaviours of school principals as a predictor of teacher job satisfaction through the analysis of Turkish teachers' perceptions of principals' instructional supervision behaviours. There was a statistically significant difference found between the teachers' job satisfaction level and…

  4. The CMS integration grid testbed

    SciTech Connect

    Graham, Gregory E.

    2004-08-26

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Grid-wide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent-based MonALISA. Domain-specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two-month span in the fall of 2002, over 1 million official CMS GEANT-based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.

  5. Faculty in Faith-Based Institutions: Participation in Decision-Making and Its Impact on Job Satisfaction

    ERIC Educational Resources Information Center

    Metheny, Glen A.; West, G. Bud; Winston, Bruce E.; Wood, J. Andy

    2015-01-01

    This study examined full-time faculty in Christian, faith-based colleges and universities and investigated the type of impact their participation in the decision-making process had on job satisfaction. Previous studies have examined relationships among faculty at state universities and community colleges, yet little research has been examined in…

  6. Facilitating Integration of Electron Beam Lithography Devices with Interactive Videodisc, Computer-Based Simulation and Job Aids.

    ERIC Educational Resources Information Center

    Von Der Linn, Robert Christopher

    A needs assessment of the Grumman E-Beam Systems Group identified the requirement for additional skill mastery for the engineers who assemble, integrate, and maintain devices used to manufacture integrated circuits. Further analysis of the tasks involved led to the decision to develop interactive videodisc, computer-based job aids to enable…

  7. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm in that it is based on unstructured and irregular grids. The robustness of the generalized Yee-algorithm lies in the fact that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.
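    The staggered leapfrog update that the paper generalizes can be shown in its classic structured 1-D form (normalized units). The generalized algorithm replaces these fixed index offsets with surface integrals over the faces of dual polyhedral cells; the pulse width, step counts, and boundary treatment here are illustrative.

    ```python
    import math

    def fdtd_step(ez, hy, courant=0.5):
        """One leapfrog update of the 1-D Yee scheme: H is advanced from the
        curl of E (Faraday), then E from the curl of H (Ampere), with E and H
        living on mutually staggered grids. End cells of ez stay 0 (PEC walls)."""
        for i in range(len(hy)):
            hy[i] += courant * (ez[i + 1] - ez[i])      # Faraday's law
        for i in range(1, len(ez) - 1):
            ez[i] += courant * (hy[i] - hy[i - 1])      # Ampere's law

    # a Gaussian pulse splits into two counter-propagating halves; with a
    # Courant number <= 1 the fields stay bounded (stability)
    n = 201
    ez = [math.exp(-0.01 * (i - 100) ** 2) for i in range(n)]
    hy = [0.0] * (n - 1)
    for _ in range(100):
        fdtd_step(ez, hy)
    print(max(ez))
    ```

    The staggering is what makes both curl approximations centered and hence second-order accurate, the property the paper proves it preserves on unstructured dual grids.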

  8. A Development of Lightweight Grid Interface

    NASA Astrophysics Data System (ADS)

    Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.

    2011-12-01

    In order to support the rapid development of Grid/Cloud-aware applications, we have developed an API to abstract distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures, such as Grid, Cloud, and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionalities. The CLIs of UGAPI offer the typical functionality required by end users for job management and file access on the different distributed computing infrastructures as well as local computing resources. We have also built a web interface for a particle therapy simulation and demonstrated large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.

  9. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.

    PubMed

    Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using a smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. The system is formed by a photovoltaic array, wind turbines, storage batteries, and a diesel generator as a backup source of energy. Demand profile shaping, as one of the smart grid applications, is introduced in this paper using load shifting based on load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
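    The PSO loop at the heart of such a sizing algorithm can be sketched as follows. The cost function is a toy two-component sizing problem with a heavy shortfall penalty standing in for the reliability requirement; it is not the paper's PV/wind/battery/diesel model, and all constants are invented.

    ```python
    import random

    def pso(cost, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
        """Minimal particle swarm optimization: each particle mixes its inertia,
        its personal best, and the swarm's global best into a velocity update."""
        rng = random.Random(seed)
        dim = len(bounds)
        xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]
        pcost = [cost(x) for x in xs]
        g = min(range(n_particles), key=lambda i: pcost[i])
        gbest, gcost = pbest[g][:], pcost[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vs[i][d] = (w * vs[i][d]
                                + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                                + c2 * rng.random() * (gbest[d] - xs[i][d]))
                    xs[i][d] = min(max(xs[i][d] + vs[i][d],
                                       bounds[d][0]), bounds[d][1])
                c = cost(xs[i])
                if c < pcost[i]:
                    pbest[i], pcost[i] = xs[i][:], c
                    if c < gcost:
                        gbest, gcost = xs[i][:], c
        return gbest, gcost

    # toy sizing: choose (pv_kw, batt_kwh) to cover a 10 kW demand at minimum
    # cost; the penalty term plays the role of the reliability constraint
    demand = 10.0
    def sizing_cost(x):
        supply = 0.6 * x[0] + 0.2 * x[1]
        return 1.0 * x[0] + 0.5 * x[1] + 100.0 * max(0.0, demand - supply)

    best, best_cost = pso(sizing_cost, [(0.0, 40.0), (0.0, 40.0)])
    print(best, best_cost)
    ```

    Replacing `sizing_cost` with a full techno-economic model (component costs plus a loss-of-power-supply penalty) recovers the structure of the paper's algorithm.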

  10. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.

    PubMed

    Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.

  11. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems

    PubMed Central

    Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000

  12. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization (VO) formation algorithm is proposed. It restrains bad performers effectively by considering the global experience of the evaluator, and it evaluates the direct trust relation between two grid nodes accurately by rationally consulting the previous trust value. It also improves the reputation evaluation process of the PathTrust model by taking account of the inter-organizational trust relationship and combining it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake-transaction attacks and badmouthing attacks effectively. It provides a clear advantage in the design of a VO infrastructure.
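    The weighted combination of direct and recommended trust can be sketched as follows; the weighting form, the alpha value, and all trust numbers are illustrative, not the published formula.

    ```python
    def combined_trust(direct, recommended, org_trust, alpha=0.6):
        """Blend direct experience with recommendations, discounting each
        recommendation by the inter-organizational trust in the recommender.
        A low-trust (possibly colluding) organization thus contributes little."""
        rec = sum(org_trust[o] * r for o, r in recommended.items()) \
              / sum(org_trust[o] for o in recommended)
        return alpha * direct + (1 - alpha) * rec

    # orgB badmouths the node, but orgB itself has low inter-org trust,
    # so the aggregate stays close to the direct experience
    t = combined_trust(
        direct=0.9,
        recommended={"orgA": 0.8, "orgB": 0.1},
        org_trust={"orgA": 0.9, "orgB": 0.1},
    )
    print(round(t, 3))
    ```

    Weighting recommendations by the recommender's own standing is the mechanism that blunts badmouthing and collusion in this family of reputation schemes.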

  13. Spectrum survey for reliable communications of cognitive radio based smart grid network

    NASA Astrophysics Data System (ADS)

    Farah Aqilah, Wan; Jayavalan, Shanjeevan; Mohd Aripin, Norazizah; Mohamad, Hafizal; Ismail, Aiman

    2013-06-01

    The smart grid (SG) system is expected to involve huge amounts of data with different levels of priority for different applications or users. The traditional grid, which tends to deploy proprietary networks with limited coverage and bandwidth, is not sufficient to support a large-scale SG network. Cognitive radio (CR) is a promising communication platform for SG networks, utilizing potentially all available spectrum resources, subject to interference constraints. In order to develop a reliable communication framework for a CR-based SG network, thorough investigations of the current radio spectrum are required. This paper presents the spectrum utilization in Malaysia, specifically in the UHF/VHF bands, cellular (GSM 900, GSM 1800 and 3G), WiMAX, ISM and LTE bands. The goal is to determine the potential spectrum that can be exploited by CR users in the SG network. Measurements were conducted for 24 hours to quantify the average spectrum usage and the amount of available bandwidth. The findings in this paper are important to provide insight into actual spectrum utilization prior to developing a reliable communication platform for a CR-based SG network.
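    The basic quantity such a survey produces, the per-channel duty cycle from threshold-based energy detection, can be computed as in this sketch; the threshold and the readings are synthetic, not the Malaysian measurement data.

    ```python
    def duty_cycle(power_dbm, threshold_dbm=-95.0):
        """Fraction of measurements in which a channel is judged occupied,
        using simple energy detection against a fixed threshold."""
        occupied = sum(1 for p in power_dbm if p > threshold_dbm)
        return occupied / len(power_dbm)

    # synthetic stand-in for 24 h of readings on one channel (dBm)
    readings = [-110, -108, -90, -85, -111, -109] * 4
    print(duty_cycle(readings))   # fraction of the day the band was busy
    ```

    Channels with a low duty cycle are the "potential spectrum" a CR-based SG network could opportunistically exploit.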

  14. Climate Simulations based on a different-grid nested and coupled model

    NASA Astrophysics Data System (ADS)

    Li, Dan; Ji, Jinjun; Li, Yinpeng

    2002-05-01

    An atmosphere-vegetation interaction model (AVIM) has been coupled with a nine-layer General Circulation Model (GCM) of the Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (IAP/LASG), which is rhomboidally truncated at zonal wave number 15, to simulate global climatic mean states. AVIM is a model with inter-feedback between land surface processes and eco-physiological processes on land. As a first step toward fully coupling land with atmosphere, the physiological processes are fixed and only the physical part (generally named the SVAT (soil-vegetation-atmosphere-transfer) scheme) of AVIM is nested into the IAP/LASG L9R15 GCM. The ocean part of the GCM is prescribed and its monthly sea surface temperature (SST) is the climatic mean value. Given the low resolution of the GCM, i.e., each grid cell spanning 7.5° longitude and 4.5° latitude, the vegetation is given a higher resolution of 1.5° by 1.5° to nest and couple the fine land grid cells with the coarse atmosphere grid cells. The coupled model has been integrated for 15 years and the mean of its last ten years of output was chosen for analysis. Compared with observed data and the NCEP reanalysis, the coupled model simulates the main characteristics of global atmospheric circulation and the fields of temperature and moisture. In particular, the simulated precipitation and surface air temperature show sound results. This work creates a solid base for coupling climate models with the biosphere.

  15. Model atmospheres for M (sub)dwarf stars. 1: The base model grid

    NASA Technical Reports Server (NTRS)

    Allard, France; Hauschildt, Peter H.

    1995-01-01

    We have calculated a grid of more than 700 model atmospheres valid for a wide range of parameters encompassing the coolest known M dwarfs, M subdwarfs, and brown dwarf candidates: 1500 ≤ T_eff ≤ 4000 K, 3.5 ≤ log g ≤ 5.5, and -4.0 ≤ [M/H] ≤ +0.5. Our equation of state includes 105 molecules and up to 27 ionization stages of 39 elements. In the calculations of the base grid of model atmospheres presented here, we include over 300 molecular bands of four molecules (TiO, VO, CaH, FeH) in the JOLA approximation, the water opacity of Ludwig (1971), collision-induced opacities, b-f and f-f atomic processes, as well as about 2 million spectral lines selected from a list with more than 42 million atomic and 24 million molecular (H2, CH, NH, OH, MgH, SiH, C2, CN, CO, SiO) lines. High-resolution synthetic spectra are obtained using an opacity sampling method. The model atmospheres and spectra are calculated with the generalized stellar atmosphere code PHOENIX, assuming LTE, plane-parallel geometry, energy (radiative plus convective) conservation, and hydrostatic equilibrium. The model spectra give close agreement with observations of M dwarfs across a wide spectral range from the blue to the near-IR, with one notable exception: the fit to the water bands. We discuss several practical applications of our model grid, e.g., broadband colors derived from the synthetic spectra. In light of current efforts to identify genuine brown dwarfs, we also show how low-resolution spectra of cool dwarfs vary with surface gravity, and how the high-resolution line profile of the Li I resonance doublet depends on the Li abundance.

  16. A brief comparison between grid based real space algorithms andspectrum algorithms for electronic structure calculations

    SciTech Connect

    Wang, Lin-Wang

    2006-12-01

    Quantum mechanical ab initio calculation constitutes the biggest portion of the computer time in material science and chemical science simulations. For a computer center like NERSC to better serve these communities, it would be very useful to have a prediction of the future trends of ab initio calculations in these areas. Such a prediction can help us decide which future computer architectures will be most useful for these communities, and what should be emphasized in future supercomputer procurements. As the size of the computers and of the simulated physical systems increases, there is a renewed interest in using the real space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation because of its limited communication requirements, compared with the spectrum method, where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N)-scaling approaches become more favorable than the traditional direct O(N{sup 3})-scaling methods. These O(N) methods are usually based on localized orbitals in real space, which can be described more naturally by a real space basis. In this report, the author compares the real space methods with the traditional plane wave (PW) spectrum methods, examining their technical pros and cons and possible future trends. For the real space methods, the author focuses on the regular-grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in material science simulation. As for chemical science, the predominant methods are still the Gaussian basis method, and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on the density functional theory (DFT), which is the
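
The FD-versus-plane-wave trade-off described above can be illustrated on a toy periodic grid, where both discretizations of the kinetic (Laplacian) operator take a few lines of NumPy. This is only a sketch; the grid size and test function are invented, not taken from the report.

```python
import numpy as np

n, L = 64, 2 * np.pi
x = np.arange(n) * (L / n)
h = L / n
f = np.sin(3 * x)                          # smooth test function

# Real-space route: 2nd-order central finite differences on the grid
lap_fd = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / h**2

# Spectrum route: exact Laplacian in the plane-wave (Fourier) basis
k = 2 * np.pi * np.fft.fftfreq(n, d=h)
lap_pw = np.real(np.fft.ifft(-(k**2) * np.fft.fft(f)))

# Exact answer is f'' = -9 sin(3x): the spectral result is exact to machine
# precision, while FD carries an O(h^2) discretization error
err_fd = np.max(np.abs(lap_fd + 9 * f))
err_pw = np.max(np.abs(lap_pw + 9 * f))
```

The FD stencil only touches nearest neighbors (limited communication), whereas the spectral route requires a global FFT — the parallelization trade-off the report discusses.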

  17. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
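
The default time integrator named in the abstract, the 3-stage/3rd-order strong-stability-preserving Runge-Kutta scheme, is the classic Shu-Osher form; a minimal sketch (the test equation and step size are invented for illustration):

```python
import numpy as np

def ssp_rk3_step(u, t, dt, rhs):
    """One step of the 3-stage, 3rd-order strong-stability-preserving
    Runge-Kutta scheme (Shu-Osher form) for du/dt = rhs(u, t)."""
    u1 = u + dt * rhs(u, t)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, t + dt))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2, t + 0.5 * dt))

# Invented test problem: du/dt = -u, exact solution exp(-t)
u, t, dt = np.array([1.0]), 0.0, 0.01
for _ in range(100):
    u = ssp_rk3_step(u, t, dt, lambda v, s: -v)
    t += dt
# u[0] is now close to exp(-1), with 3rd-order accuracy
```

Each stage is a convex combination of forward-Euler steps, which is what gives the scheme its strong-stability-preserving property.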

  18. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu

    2014-08-31

    Serial Windows-based programs are widely used in power utilities. For applications that require high-volume simulations, the single-CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low-cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of HPC computers are still Linux-based, and many HPC applications have been custom-developed external to the core simulation engine without consideration for ease of use. This has created a technical gap in applying HPC-based tools to today's power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of a generic HPC platform for running Windows-based power grid programs in a Linux-based HPC environment. The preliminary results show that the runtime can be reduced from weeks to hours, improving work efficiency.

  19. Uncertainties in asteroseismic grid-based estimates of stellar ages. SCEPtER: Stellar CharactEristics Pisa Estimation gRid

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2015-03-01

    Context. Stellar age determination by means of grid-based techniques that adopt asteroseismic constraints is a well-established method nowadays. However, some theoretical aspects of the systematic and statistical errors affecting these age estimates still have to be investigated. Aims: We study the impact on stellar age determination of the uncertainty in the radiative opacity, the initial helium abundance, the mixing-length value, the convective core overshooting, and the microscopic diffusion efficiency adopted in stellar model computations. Methods: We extended our SCEPtER grid to include stars with masses in the range [0.8; 1.6] M⊙ and evolutionary stages from the zero-age main sequence to central hydrogen depletion. For the age estimation we adopted the same maximum-likelihood technique described in our previous work. To quantify the systematic errors arising from the current uncertainty in model computations, many synthetic grids of stellar models with perturbed input were adopted. Results: We found that the current typical uncertainty in the observations accounts for a 1σ statistical relative error in age determination that on average ranges from about -35% to +42%, depending on the mass. However, owing to the strong dependence on the evolutionary phase, the relative error in age can be higher than 120% for stars near the zero-age main sequence, while it is typically of the order of 20% or lower in the advanced main-sequence phase. The systematic bias in age determination due to a variation of ±1 in the helium-to-metal enrichment ratio ΔY/ΔZ is about one-fourth of the statistical error in the first 30% of the evolution, while it is negligible for more evolved stages. The maximum bias due to the presence of convective core overshooting is -7% and -13% for the mild and strong overshooting scenarios, respectively. For all the examined models, the impact of a variation of ±5% in the radiative opacity was found to be negligible. The most important source
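
Grid-based maximum-likelihood estimation of this kind reduces, in its simplest form, to a chi-square minimization over a precomputed model grid. Below is a toy sketch with invented observables and a random stand-in grid — not the actual SCEPtER grid or likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented model grid: (Teff [K], log g, [Fe/H]) plus an age per grid point
grid_obs = rng.uniform([5500.0, 4.2, -0.2], [6500.0, 4.6, 0.3], size=(5000, 3))
grid_age = rng.uniform(0.5, 10.0, size=5000)            # ages in Gyr

star_obs = np.array([6000.0, 4.4, 0.05])                # observed values
sigma = np.array([70.0, 0.05, 0.08])                    # 1-sigma errors

# Gaussian likelihood => the best-fitting model minimizes chi^2
chi2 = np.sum(((grid_obs - star_obs) / sigma) ** 2, axis=1)
best = int(np.argmin(chi2))
age_est = float(grid_age[best])
```

Repeating the estimate against grids computed with perturbed physics (opacity, ΔY/ΔZ, overshooting) is what exposes the systematic biases the paper quantifies.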

  20. Thread Group Multithreading: Accelerating the Computation of an Agent-Based Power System Modeling and Simulation Tool -- GridLAB-D

    SciTech Connect

    Jin, Shuangshuang; Chassin, David P.

    2014-01-06

    GridLAB-D™ is an open-source, next-generation, agent-based smart-grid simulator that provides unprecedented capability to model the performance of smart grid technologies. Over the past few years, GridLAB-D has been used to conduct important analyses of smart grid concepts, but it is still quite limited by its computational performance. In order to break through this performance bottleneck and meet the need for large-scale power grid simulations, we developed a thread group mechanism to implement highly granular multithreaded computation in GridLAB-D. We achieve close-to-linear speedups with the multithreaded version against the single-threaded version of the same code running on general-purpose multi-core commodity hardware for a simple benchmark house model. The performance of the multithreaded code shows favorable scalability properties and resource utilization, and much shorter execution times for large-scale power grid simulations.
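
The thread-group idea — a static partition of the agent population, with every group updated in parallel each timestep — can be sketched as follows. This is a Python stand-in for the C/C++ implementation; in CPython a real speedup would additionally require the per-agent work to release the GIL:

```python
from concurrent.futures import ThreadPoolExecutor

def update_agent(state):
    """Stand-in for one house model's per-timestep computation."""
    return state + 1

def step(groups, pool):
    """Update every group in parallel; the partition is fixed across steps."""
    return list(pool.map(lambda grp: [update_agent(a) for a in grp], groups))

agents = list(range(1000))              # invented agent states
n_groups = 4
groups = [agents[i::n_groups] for i in range(n_groups)]   # static partition

with ThreadPoolExecutor(max_workers=n_groups) as pool:
    for _ in range(10):                 # 10 simulation timesteps
        groups = step(groups, pool)

total = sum(sum(g) for g in groups)     # every agent advanced by 10
```

Partitioning once up front (rather than per timestep) keeps the per-step synchronization cost low, which is where the fine-grained scalability comes from.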

  1. Differential Evolution Based IDWNN Controller for Fault Ride-Through of Grid-Connected Doubly Fed Induction Wind Generators.

    PubMed

    Manonmani, N; Subbiah, V; Sivakumar, L

    2015-01-01

    The key objective of wind turbine development is to ensure that output power is continuously increased. Wind turbines (WTs) are required to supply the necessary reactive power to the grid during and after faults to support the grid voltage. In this context, this paper introduces a novel heuristic-based controller module employing differential evolution and a neural network architecture to improve the low-voltage ride-through capability of grid-connected wind turbines equipped with doubly fed induction generators (DFIGs). Traditional crowbar-based systems were applied to protect the rotor-side converter during grid faults. This traditional controller does not satisfy the desired requirement, since the DFIG, while the crowbar is connected, acts like a squirrel-cage machine and absorbs reactive power from the grid. This limitation is addressed in this paper by introducing heuristic controllers that remove the use of the crowbar and ensure that wind turbines supply the necessary reactive power to the grid during faults. The controller is designed to enhance the DFIG converter operation during grid faults and handles the ride-through fault without employing any other hardware modules. The paper introduces a double wavelet neural network controller that is appropriately tuned employing differential evolution. To validate the proposed controller module, a case study of a wind farm with 1.5 MW wind turbines connected to a 25 kV distribution system exporting power to a 120 kV grid through a 30 km, 25 kV feeder is carried out by simulation.

  2. Differential Evolution Based IDWNN Controller for Fault Ride-Through of Grid-Connected Doubly Fed Induction Wind Generators

    PubMed Central

    Manonmani, N.; Subbiah, V.; Sivakumar, L.

    2015-01-01

    The key objective of wind turbine development is to ensure that output power is continuously increased. Wind turbines (WTs) are required to supply the necessary reactive power to the grid during and after faults to support the grid voltage. In this context, this paper introduces a novel heuristic-based controller module employing differential evolution and a neural network architecture to improve the low-voltage ride-through capability of grid-connected wind turbines equipped with doubly fed induction generators (DFIGs). Traditional crowbar-based systems were applied to protect the rotor-side converter during grid faults. This traditional controller does not satisfy the desired requirement, since the DFIG, while the crowbar is connected, acts like a squirrel-cage machine and absorbs reactive power from the grid. This limitation is addressed in this paper by introducing heuristic controllers that remove the use of the crowbar and ensure that wind turbines supply the necessary reactive power to the grid during faults. The controller is designed to enhance the DFIG converter operation during grid faults and handles the ride-through fault without employing any other hardware modules. The paper introduces a double wavelet neural network controller that is appropriately tuned employing differential evolution. To validate the proposed controller module, a case study of a wind farm with 1.5 MW wind turbines connected to a 25 kV distribution system exporting power to a 120 kV grid through a 30 km, 25 kV feeder is carried out by simulation. PMID:26516636

  4. 3D inversion based on multi-grid approach of magnetotelluric data from Northern Scandinavia

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Smirnov, M.; Korja, T. J.; Egbert, G. D.

    2012-12-01

    In this work we investigate the geoelectrical structure of the cratonic margin of the Fennoscandian Shield by means of magnetotelluric (MT) measurements carried out in northern Norway and Sweden during the summers of 2011-2012. The project Magnetotellurics in the Scandes (MaSca) focuses on the investigation of the crust, upper mantle and lithospheric structure in the transition zone from a stable Precambrian cratonic interior to a passive continental margin beneath the Caledonian Orogen and the Scandes Mountains in western Fennoscandia. Recent MT profiles in the central and southern Scandes indicated a large contrast in resistivity between the Caledonides and the Precambrian basement. These profiles reveal the alum shales as highly conductive layers between the resistive Precambrian basement and the overlying Caledonian nappes. Additional measurements in the northern Scandes were therefore required. Altogether, data from 60 synchronous long-period (LMT) and about 200 broadband (BMT) sites were acquired. The array stretches from Lofoten and Bodø (Norway) in the west to Kiruna and Skellefteå (Sweden) in the east, covering an area of 500 x 500 km. LMT sites were occupied for about two months, while most of the BMT sites were measured during one day. We have used a new multi-grid approach for 3D electromagnetic (EM) inversion and modelling. Our approach is based on the OcTree discretization, where the spatial domain is represented by rectangular cells, each of which may be subdivided (recursively) into eight sub-cells. In this simplified implementation the grid is refined only in the horizontal direction, uniformly within each vertical layer. Using the multi-grid we can keep a high grid resolution near the surface (for instance, to deal with galvanic distortions) and a lower resolution at greater depth, as the EM fields decay in the Earth according to the diffusion equation. We also benefit in computational cost, as the number of unknowns decreases. The multi-grid forward
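
The layer-wise horizontal refinement described above can be illustrated by counting unknowns when each vertical layer is coarsened by a power of two relative to the surface layer (grid sizes and coarsening levels are invented for illustration):

```python
def unknowns(nx, ny, coarsening):
    """Total horizontal cell count when layer k is coarsened by 2**coarsening[k]."""
    total = 0
    for c in coarsening:
        f = 2 ** c
        total += (nx // f) * (ny // f)
    return total

nx = ny = 256                           # invented surface grid
levels = [0, 0, 1, 1, 2, 2, 3, 3]       # finer near the surface, coarser at depth
multi = unknowns(nx, ny, levels)
uniform = unknowns(nx, ny, [0] * len(levels))
savings = 1.0 - multi / uniform         # fraction of unknowns removed
```

Even this mild coarsening schedule removes roughly two-thirds of the unknowns relative to a uniform grid, which is the computational benefit the abstract refers to.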

  5. Power system voltage stability and agent based distribution automation in smart grid

    NASA Astrophysics Data System (ADS)

    Nguyen, Cuong Phuc

    2011-12-01

    Our interconnected electric power system is presently facing many challenges that it was not originally designed and engineered to handle. The increased inter-area power transfers, aging infrastructure, and old technologies, have caused many problems including voltage instability, widespread blackouts, slow control response, among others. These problems have created an urgent need to transform the present electric power system to a highly stable, reliable, efficient, and self-healing electric power system of the future, which has been termed "smart grid". This dissertation begins with an investigation of voltage stability in bulk transmission networks. A new continuation power flow tool for studying the impacts of generator merit order based dispatch on inter-area transfer capability and static voltage stability is presented. The load demands are represented by lumped load models on the transmission system. While this representation is acceptable in traditional power system analysis, it may not be valid in the future smart grid where the distribution system will be integrated with intelligent and quick control capabilities to mitigate voltage problems before they propagate into the entire system. Therefore, before analyzing the operation of the whole smart grid, it is important to understand the distribution system first. The second part of this dissertation presents a new platform for studying and testing emerging technologies in advanced Distribution Automation (DA) within smart grids. Due to the key benefits over the traditional centralized approach, namely flexible deployment, scalability, and avoidance of single-point-of-failure, a new distributed approach is employed to design and develop all elements of the platform. A multi-agent system (MAS), which has the three key characteristics of autonomy, local view, and decentralization, is selected to implement the advanced DA functions. The intelligent agents utilize a communication network for cooperation and

  6. Occupational stressors and hypertension: a multi-method study using observer-based job analysis and self-reports in urban transit operators.

    PubMed

    Greiner, Birgit A; Krause, Niklas; Ragland, David; Fisher, June M

    2004-09-01

    This multi-method study aimed to disentangle objective and subjective components of job stressors and determine the role of each for hypertension risk. Because research on job stressors and hypertension has been exclusively based on self-reports of stressors, the tendency of some individuals to use denial and repressive coping might be responsible for the inconclusive results in previous studies. Stressor measures with different degrees of objectivity were contrasted, including (1) an observer-based measure of stressors (job barriers, time pressure) obtained from experts, (2) self-reported frequency and appraised intensity of job problems and time pressures averaged per workplace (group level), (3) self-reported frequency of job problems and time pressures at the individual level, and (4) self-reported appraised intensity of job problems and time pressures at the individual level. The sample consisted of 274 transit operators working on 27 different transit lines and four different vehicle types. Objective stressors (job barriers and time pressure) were each significantly associated with hypertension (casual blood pressure readings and/or currently taking anti-hypertensive medication) after adjustment for age, gender and seniority. Self-reported stressors at the individual level were positively but not significantly associated with hypertension. At the group level, only appraisal of job problems significantly predicted hypertension. In a composite regression model, both observer-based job barriers and self-reported intensity of job problems were independently and significantly associated with hypertension. Associations between self-reported job problems (individual level) and hypertension were dependent on the level of objective stressors. When observer-based stressor level was low, the association between self-reported frequency of stressors and hypertension was high. When the observer-based stressor level was high the association was inverse; this might be

  7. Creative Job Search Technique

    ERIC Educational Resources Information Center

    Canadian Vocational Journal, 1974

    1974-01-01

    Creative Job Search Technique is based on the premise that most people have never learned how to systematically look for a job. A person who is unemployed can be helped to take a hard look at his acquired skills and relate those skills to an employer's needs. (Author)

  8. Job Placement Handbook.

    ERIC Educational Resources Information Center

    Los Angeles Unified School District, CA. Div. of Career and Continuing Education.

    Designed to serve as a guide for job placement personnel, this handbook is written from the point of view of a school or job preparation facility, based on methodology applicable to the placement function in any setting. Factors identified as critical to a successful placement operation are utilization of a systems approach, establishment of…

  9. Planning for distributed workflows: constraint-based coscheduling of computational jobs and data placement in distributed environments

    NASA Astrophysics Data System (ADS)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2015-05-01

    When running data-intensive applications on distributed computational resources, long I/O overheads may be observed as access to remotely stored data is performed. Latencies and bandwidth can become the major limiting factor for the overall computation performance and can reduce the CPU/WallTime ratio through excessive I/O wait. Building on our previous research, we propose a constraint-programming-based planner that schedules computational jobs and data placements (transfers) in a distributed environment in order to optimize resource utilization and reduce the overall processing completion time. The optimization is achieved by ensuring that none of the resources (network links, data storages and CPUs) are oversaturated at any moment of time, and either (a) that the data is pre-placed at the site where the job runs or (b) that the jobs are scheduled where the data is already present. Such an approach eliminates the idle CPU cycles that occur while a job waits for I/O from a remote site and would have wide application in the community. Our planner was evaluated and simulated based on data extracted from log files of the batch and data management systems of the STAR experiment. The results of the evaluation and the estimated performance improvements are discussed in this paper.
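
The planner's core constraints — no CPU or link oversaturated, and each job either running where its data resides or getting a transfer scheduled first — can be sketched with a toy greedy assignment. Sites, capacities, and jobs are invented; the actual planner solves this with constraint programming over time:

```python
sites = {"A": {"cpus": 2}, "B": {"cpus": 2}}    # invented CPU capacities
link_capacity = 1                               # concurrent A<->B transfers

jobs = [{"id": 1, "data_at": "A"},
        {"id": 2, "data_at": "A"},
        {"id": 3, "data_at": "A"},
        {"id": 4, "data_at": "B"}]

cpu_load = {s: 0 for s in sites}
transfers = 0
plan = {}

for job in jobs:
    home = job["data_at"]
    if cpu_load[home] < sites[home]["cpus"]:
        plan[job["id"]] = (home, "local")       # (b) run where the data is
        cpu_load[home] += 1
    else:
        # (a) pre-place the data at a site with a free CPU, if the link allows
        for s in sites:
            if s != home and cpu_load[s] < sites[s]["cpus"] and transfers < link_capacity:
                plan[job["id"]] = (s, "transfer")
                cpu_load[s] += 1
                transfers += 1
                break
```

Job 3 cannot run at its home site A (both CPUs busy), so a transfer to B is scheduled instead of leaving a CPU idle on I/O wait.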

  10. Calcium-based multi-element chemistry for grid-scale electrochemical energy storage

    PubMed Central

    Ouchi, Takanari; Kim, Hojong; Spatocco, Brian L.; Sadoway, Donald R.

    2016-01-01

    Calcium is an attractive material for the negative electrode in a rechargeable battery due to its low electronegativity (high cell voltage), double valence, earth abundance and low cost; however, the use of calcium has historically eluded researchers due to its high melting temperature, high reactivity and unfavorably high solubility in molten salts. Here we demonstrate a long-cycle-life calcium-metal-based rechargeable battery for grid-scale energy storage. By deploying a multi-cation binary electrolyte in concert with an alloyed negative electrode, calcium solubility in the electrolyte is suppressed and operating temperature is reduced. These chemical mitigation strategies also engage another element in energy storage reactions resulting in a multi-element battery. These initial results demonstrate how the synergistic effects of deploying multiple chemical mitigation strategies coupled with the relaxation of the requirement of a single itinerant ion can unlock calcium-based chemistries and produce a battery with enhanced performance. PMID:27001915

  11. Calcium-based multi-element chemistry for grid-scale electrochemical energy storage.

    PubMed

    Ouchi, Takanari; Kim, Hojong; Spatocco, Brian L; Sadoway, Donald R

    2016-01-01

    Calcium is an attractive material for the negative electrode in a rechargeable battery due to its low electronegativity (high cell voltage), double valence, earth abundance and low cost; however, the use of calcium has historically eluded researchers due to its high melting temperature, high reactivity and unfavorably high solubility in molten salts. Here we demonstrate a long-cycle-life calcium-metal-based rechargeable battery for grid-scale energy storage. By deploying a multi-cation binary electrolyte in concert with an alloyed negative electrode, calcium solubility in the electrolyte is suppressed and operating temperature is reduced. These chemical mitigation strategies also engage another element in energy storage reactions resulting in a multi-element battery. These initial results demonstrate how the synergistic effects of deploying multiple chemical mitigation strategies coupled with the relaxation of the requirement of a single itinerant ion can unlock calcium-based chemistries and produce a battery with enhanced performance.

  12. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    NASA Technical Reports Server (NTRS)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  14. Calcium-based multi-element chemistry for grid-scale electrochemical energy storage

    NASA Astrophysics Data System (ADS)

    Ouchi, Takanari; Kim, Hojong; Spatocco, Brian L.; Sadoway, Donald R.

    2016-03-01

    Calcium is an attractive material for the negative electrode in a rechargeable battery due to its low electronegativity (high cell voltage), double valence, earth abundance and low cost; however, the use of calcium has historically eluded researchers due to its high melting temperature, high reactivity and unfavorably high solubility in molten salts. Here we demonstrate a long-cycle-life calcium-metal-based rechargeable battery for grid-scale energy storage. By deploying a multi-cation binary electrolyte in concert with an alloyed negative electrode, calcium solubility in the electrolyte is suppressed and operating temperature is reduced. These chemical mitigation strategies also engage another element in energy storage reactions resulting in a multi-element battery. These initial results demonstrate how the synergistic effects of deploying multiple chemical mitigation strategies coupled with the relaxation of the requirement of a single itinerant ion can unlock calcium-based chemistries and produce a battery with enhanced performance.

  15. 3D Continuum Radiative Transfer. An adaptive grid construction algorithm based on the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Niccolini, G.; Alcolea, J.

    Solving the radiative transfer problem is a common challenge in many fields of astrophysics. With the increasing angular resolution of space- and ground-based telescopes (VLTI, HST), and with the next decade's instruments (NGST, ALMA, ...), astrophysical objects reveal, and will increasingly reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program, using a new method for the construction of an adaptive spatial grid based on the Monte Carlo method. With this tool, one can solve the continuum radiative transfer problem (e.g. for a dusty medium), compute the temperature structure of the considered medium, and obtain the flux of the object (SED and images).
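
A Monte-Carlo-driven adaptive grid of the kind described can be sketched in 1D: cells that collect more than a threshold of sampled interaction points are split recursively, so resolution follows the density of the medium (distribution and thresholds invented for illustration):

```python
import random

random.seed(1)
# Invented "interaction points": clustered near x ≈ 0.2
points = [random.betavariate(2, 8) for _ in range(2000)]

def build(a, b, pts, max_hits=100, depth=0, max_depth=12):
    """Bisect [a, b] recursively until each cell holds <= max_hits samples."""
    if len(pts) <= max_hits or depth >= max_depth:
        return [(a, b)]
    m = 0.5 * (a + b)
    left = [p for p in pts if p < m]
    right = [p for p in pts if p >= m]
    return (build(a, m, left, max_hits, depth + 1, max_depth)
            + build(m, b, right, max_hits, depth + 1, max_depth))

cells = build(0.0, 1.0, points)
widths = [b - a for a, b in cells]      # small cells where the density is high
```

The cells tile the domain exactly, and the dense region near 0.2 ends up with much finer cells than the nearly empty right half.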

  16. A Selective Vision and Landmark based Approach to Improve the Efficiency of Position Probability Grid Localization

    NASA Astrophysics Data System (ADS)

    Loukianov, Andrey A.; Sugisaka, Masanori

    This paper presents a vision- and landmark-based approach to improve the efficiency of probability-grid Markov localization for mobile robots. The proposed approach uses visual landmarks that can be detected by a rotating video camera on the robot. We assume that the visual landmark positions in the map are known and that each landmark can be assigned to a certain landmark class. The method uses the classes of observed landmarks and their relative arrangement to select regions in the robot posture space where the location probability density function is to be updated. Subsequent computations are performed only in these selected update regions, so the computational workload is significantly reduced. The probabilistic landmark-based localization method and the details of the map and robot perception are discussed. A technique to compute the update regions and their parameters for selective computation is introduced. Simulation results are presented to show the effectiveness of the approach.

  17. A new algorithm for grid-based hydrologic analysis by incorporating stormwater infrastructure

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Yi, Huiuk; Park, Hyeong-Dong

    2011-08-01

    We developed a new algorithm, the Adaptive Stormwater Infrastructure (ASI) algorithm, to incorporate ancillary data sets related to stormwater infrastructure into the grid-based hydrologic analysis. The algorithm simultaneously considers the effects of the surface stormwater collector network (e.g., diversions, roadside ditches, and canals) and underground stormwater conveyance systems (e.g., waterway tunnels, collector pipes, and culverts). The surface drainage flows controlled by the surface runoff collector network are superimposed onto the flow directions derived from a DEM. After examining the connections between inlets and outfalls in the underground stormwater conveyance system, the flow accumulation and delineation of watersheds are calculated based on recursive computations. Application of the algorithm to the Sangdong tailings dam in Korea revealed superior performance to that of a conventional D8 single-flow algorithm in terms of providing reasonable hydrologic information on watersheds with stormwater infrastructure.
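
For reference, the conventional D8 baseline that the ASI algorithm extends — steepest-descent flow direction followed by recursive flow accumulation — can be sketched on a toy DEM (elevation values invented):

```python
import numpy as np

dem = np.array([[9.0, 8.0, 7.0],
                [8.0, 6.0, 5.0],
                [7.0, 5.0, 3.0]])   # invented elevations; outlet at (2, 2)

nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_downstream(r, c):
    """Steepest-descent D8 neighbor of (r, c), or None at a pit/outlet."""
    best, best_drop = None, 0.0
    for dr, dc in nbrs:
        rr, cc = r + dr, c + dc
        if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
            drop = (dem[r, c] - dem[rr, cc]) / (dr * dr + dc * dc) ** 0.5
            if drop > best_drop:
                best, best_drop = (rr, cc), drop
    return best

def accumulation(r, c, _cache={}):
    """Cells (including itself) draining through (r, c), computed recursively."""
    if (r, c) not in _cache:
        acc = 1
        for rr in range(dem.shape[0]):
            for cc in range(dem.shape[1]):
                if (rr, cc) != (r, c) and d8_downstream(rr, cc) == (r, c):
                    acc += accumulation(rr, cc)
        _cache[(r, c)] = acc
    return _cache[(r, c)]

outlet_acc = accumulation(2, 2)     # the lowest cell drains the whole grid
```

The ASI algorithm's contribution is to superimpose surface collectors and underground conveyance links on top of these DEM-derived directions before the accumulation step.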

  18. Lambda Station: On-demand flow based routing for data intensive Grid applications over multitopology networks

    SciTech Connect

    Bobyshev, A.; Crawford, M.; DeMar, P.; Grigaliunas, V.; Grigoriev, M.; Moibenko, A.; Petravick, D.; Rechenmacher, R.; Newman, H.; Bunn, J.; Van Lingen, F.; Nae, D.; Ravot, S.; Steenberg, C.; Su, X.; Thomas, M.; Xia, Y.; /Caltech

    2006-08-01

    Lambda Station is an ongoing project of Fermi National Accelerator Laboratory and the California Institute of Technology. The goal of this project is to design, develop and deploy network services for path selection, admission control and flow based forwarding of traffic among data-intensive Grid applications such as are used in High Energy Physics and other communities. Lambda Station deals with the last-mile problem in local area networks, connecting production clusters through a rich array of wide area networks. Selective forwarding of traffic is controlled dynamically at the demand of applications. This paper introduces the motivation of this project, design principles and current status. Integration of Lambda Station client API with the essential Grid middleware such as the dCache/SRM Storage Resource Manager is also described. Finally, the results of applying Lambda Station services to development and production clusters at Fermilab and Caltech over advanced networks such as DOE's UltraScience Net and NSF's UltraLight is covered.

  19. Parallel level-set methods on adaptive tree-based grids

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic

    2016-10-01

    We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and use to researchers working with the level-set framework and modeling multi-scale physics in general.
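    The semi-Lagrangian idea that makes the scheme free of time-step restrictions can be illustrated in one dimension. This is a minimal sketch under simplifying assumptions (constant velocity, linear interpolation, clamped boundaries), not the paper's parallel tree-based scheme:

```python
# Hypothetical 1D sketch of a semi-Lagrangian step for the advection
# equation phi_t + u * phi_x = 0: each grid node's characteristic is
# traced back by u*dt and phi is interpolated at the departure point,
# so the update is stable for any dt (no CFL restriction).

def semi_lagrangian_step(phi, u, dx, dt):
    n = len(phi)
    new = [0.0] * n
    for i in range(n):
        # trace the characteristic back from node i
        x_dep = i * dx - u * dt
        # clamp to the domain, then linearly interpolate phi at x_dep
        x_dep = min(max(x_dep, 0.0), (n - 1) * dx)
        j = min(int(x_dep // dx), n - 2)
        t = (x_dep - j * dx) / dx
        new[i] = (1.0 - t) * phi[j] + t * phi[j + 1]
    return new
```

    Note that `dt` may exceed `dx / u`: the departure point simply lands several cells upstream, which an explicit Eulerian scheme would not tolerate.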

  20. Branch-Based Centralized Data Collection for Smart Grids Using Wireless Sensor Networks

    PubMed Central

    Kim, Kwangsoo; Jin, Seong-il

    2015-01-01

    A smart grid is one of the most important applications in smart cities. In a smart grid, a smart meter acts as a sensor node in a sensor network, and a central device collects power usage from every smart meter. This paper focuses on a centralized data collection problem of how to collect every power usage from every meter without collisions in an environment in which the time synchronization among smart meters is not guaranteed. To solve the problem, we divide a tree that a sensor network constructs into several branches. A conflict-free query schedule is generated based on the branches. Each power usage is collected according to the schedule. The proposed method has important features: shortening query processing time and avoiding collisions between a query and query responses. We evaluate this method using the ns-2 simulator. The experimental results show that this method can achieve both collision avoidance and fast query processing at the same time. The success rate of data collection at a sink node executing this method is 100%. Its running time is about 35 percent faster than that of the round-robin method, and its memory size is reduced to about 10% of that of the depth-first search method. PMID:26007734
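    The branch-splitting step can be sketched as follows. The tree encoding and function names are illustrative assumptions, not the paper's protocol: the point is only that the collection tree is cut at the sink into independent branches and queried one branch at a time, so a query never meets responses from elsewhere in the same branch.

```python
# Hypothetical sketch: split the collection tree at the sink into one
# branch per sink child, then concatenate per-branch query orders into
# a single conflict-free schedule.

def branches(tree, sink):
    """tree: dict mapping parent -> list of children."""
    out = []
    for child in tree.get(sink, []):
        branch, stack = [], [child]
        while stack:                      # depth-first walk of one branch
            node = stack.pop()
            branch.append(node)
            stack.extend(tree.get(node, []))
        out.append(branch)
    return out

def schedule(tree, sink):
    """Query one branch at a time; every meter appears exactly once."""
    return [node for branch in branches(tree, sink) for node in branch]
```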

  1. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
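    The two discretization methods compared in the abstract can be contrasted in one dimension. This is an illustrative sketch, not the study's code; the unit cell size, half-width, and subsampling count are assumptions:

```python
# Hypothetical 1D sketch: cell-center sampling evaluates the Gaussian
# density at each cell midpoint, while cell integration averages it over
# the cell. The two diverge when sigma is small relative to cell size.
import math

def gauss_pdf(x, sigma):
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def kernel_cell_center(half_width, sigma):
    k = [gauss_pdf(i, sigma) for i in range(-half_width, half_width + 1)]
    s = sum(k)
    return [v / s for v in k]            # normalise to unit total mass

def kernel_cell_integrated(half_width, sigma, sub=64):
    # approximate the integral over each unit cell by midpoint subsampling
    k = []
    for i in range(-half_width, half_width + 1):
        cell = sum(gauss_pdf(i - 0.5 + (j + 0.5) / sub, sigma)
                   for j in range(sub)) / sub
        k.append(cell)
    s = sum(k)
    return [v / s for v in k]
```

    For a sub-cell sigma (e.g. σ = 0.2 cells) the cell-center kernel piles nearly all mass onto the central cell, while the integrated kernel correctly spills some mass into the neighbours, which is the error mechanism the abstract describes.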

  2. Web-based visualization of gridded datasets using OceanBrowser

    NASA Astrophysics Data System (ADS)

    Barth, Alexander; Watelet, Sylvain; Troupin, Charles; Beckers, Jean-Marie

    2015-04-01

    OceanBrowser is a web-based visualization tool for gridded oceanographic data sets. Those data sets are typically four-dimensional (longitude, latitude, depth and time). OceanBrowser allows one to visualize horizontal sections at a given depth and time to examine the horizontal distribution of a given variable. It also offers the possibility to display the results on an arbitrary vertical section. To study the evolution of the variable in time, the horizontal and vertical sections can also be animated. Vertical sections can also be generated at a fixed distance from the coast or at a fixed ocean depth. The user can customize the plot by changing the color-map, the range of the color-bar, the type of the plot (linearly interpolated color, simple contours, filled contours) and download the current view as a simple image or as a Keyhole Markup Language (KML) file for visualization in applications such as Google Earth. The data products can also be accessed as NetCDF files and through OPeNDAP. Third-party layers from a web map service can also be integrated. OceanBrowser is used in the frame of the SeaDataNet project (http://gher-diva.phys.ulg.ac.be/web-vis/) and EMODNET Chemistry (http://oceanbrowser.net/emodnet/) to distribute gridded data sets interpolated from in situ observations using DIVA (Data-Interpolating Variational Analysis).

  3. Branch-based centralized data collection for smart grids using wireless sensor networks.

    PubMed

    Kim, Kwangsoo; Jin, Seong-il

    2015-05-21

    A smart grid is one of the most important applications in smart cities. In a smart grid, a smart meter acts as a sensor node in a sensor network, and a central device collects power usage from every smart meter. This paper focuses on a centralized data collection problem of how to collect every power usage from every meter without collisions in an environment in which the time synchronization among smart meters is not guaranteed. To solve the problem, we divide a tree that a sensor network constructs into several branches. A conflict-free query schedule is generated based on the branches. Each power usage is collected according to the schedule. The proposed method has important features: shortening query processing time and avoiding collisions between a query and query responses. We evaluate this method using the ns-2 simulator. The experimental results show that this method can achieve both collision avoidance and fast query processing at the same time. The success rate of data collection at a sink node executing this method is 100%. Its running time is about 35 percent faster than that of the round-robin method, and its memory size is reduced to about 10% of that of the depth-first search method.

  4. Grid-cell-based crop water accounting for the famine early warning system

    USGS Publications Warehouse

    Verdin, J.; Klaver, R.

    2002-01-01

    Rainfall monitoring is a regular activity of food security analysts for sub-Saharan Africa due to the potentially disastrous impact of drought. Crop water accounting schemes are used to track rainfall timing and amounts relative to phenological requirements, to infer water limitation impacts on yield. Unfortunately, many rain gauge reports are available only after significant delays, and the gauge locations leave large gaps in coverage. As an alternative, a grid-cell-based formulation for the water requirement satisfaction index (WRSI) was tested for maize in Southern Africa. Grids of input variables were obtained from remote sensing estimates of rainfall, meteorological models, and digital soil maps. The spatial WRSI was computed for the 1996-97 and 1997-98 growing seasons. Maize yields were estimated by regression and compared with a limited number of reports from the field for the 1996-97 season in Zimbabwe. Agreement at a useful level (r = 0.80) was observed. This is comparable to results from traditional analysis with station data. The findings demonstrate the complementary role that remote sensing, modelling, and geospatial analysis can play in an era when field data collection in sub-Saharan Africa is suffering an unfortunate decline. Published in 2002 by John Wiley & Sons, Ltd.
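    The per-cell accounting behind an index like the WRSI can be sketched in a heavily simplified form. The function below is a hypothetical illustration, not the operational FEWS formulation: it ignores soil moisture carry-over and evapotranspiration modelling and simply compares supplied water against the crop's phenological requirement, per time step (dekad), for one grid cell.

```python
# Hypothetical, highly simplified water requirement satisfaction index:
# the season total of water actually available to the crop, as a
# percentage of its total phenological requirement.

def wrsi(rainfall, requirement):
    """rainfall, requirement: per-dekad lists for one grid cell (mm).
    Water beyond the dekad's requirement is treated as lost (runoff)."""
    supplied = sum(min(r, q) for r, q in zip(rainfall, requirement))
    demanded = sum(requirement)
    return 100.0 * supplied / demanded
```

    A cell whose rainfall meets the requirement in every dekad scores 100; shortfalls in any dekad pull the index down in proportion to the missing water.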

  5. Toward a Career-Based Theory of Job Involvement: A Study of Scientists and Engineers

    ERIC Educational Resources Information Center

    McKelvey, Bill; Sekaran, Uma

    1977-01-01

    Multiple regression analyses are used to determine the relative importance of 49 factors to job involvement in a study of 441 scientists and engineers. Of particular importance are career and personality factors. (Author)

  6. Prediction of Job Satisfaction Based on Workplace Facets for Adjunct Business Faculty at Four-Year Universities

    ERIC Educational Resources Information Center

    Lewis, Vance Johnson

    2012-01-01

    The purpose of this study was to examine the job satisfaction of adjuncts in the curriculum area of business at four-year universities and to determine the roles that individual job facets play in creating overall job satisfaction. To explore which job facets and demographics predict job satisfaction for the population, participants were asked to…

  7. Job descriptions and job matching.

    PubMed

    Pirie, Susan

    2004-10-01

    As the date for national roll-out and the implementation for Agenda for Change draws near, many of you will be involved in the job matching process. This is designed to measure your job against a national job profile, thus establishing which pay band you will be placed in and so determining your salary.

  8. Adult Competency Education Kit. Basic Skills in Speaking, Math, and Reading for Employment. Part G. ACE Competency Based Job Descriptions: #22--Refrigerator Mechanic; #24--Motorcycle Repairperson.

    ERIC Educational Resources Information Center

    San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.

    This fourth of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Refrigerator Mechanic and Motorcycle Repairperson. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T. general educational…

  9. Computer-Based Video Instruction to Teach Young Adults with Moderate Intellectual Disabilities to Perform Multiple Step, Job Tasks in a Generalized Setting

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Ortega-Hurndon, Fanny

    2007-01-01

    This study evaluated the effectiveness of computer-based video instruction (CBVI) to teach three young adults with moderate intellectual disabilities to perform complex, multiple step, job tasks in a generalized setting. A multiple probe design across three job tasks and replicated across three students was used to evaluate the effectiveness of…

  10. Associations between five-factor model traits and perceived job strain: a population-based study.

    PubMed

    Törnroos, Maria; Hintsanen, Mirka; Hintsa, Taina; Jokela, Markus; Pulkki-Råback, Laura; Hutri-Kähönen, Nina; Keltikangas-Järvinen, Liisa

    2013-10-01

    This study examined the association between Five-Factor Model personality traits and perceived job strain. The sample consisted of 758 women and 614 men (aged 30-45 years in 2007) participating in the Young Finns study. Personality was assessed with the Neuroticism, Extraversion, Openness, Five-Factor Inventory (NEO-FFI) questionnaire and work stress according to Karasek's demand-control model of job strain. The associations between personality traits and job strain and its components were measured by linear regression analyses where the traits were first entered individually and then simultaneously. The results for the associations between individually entered personality traits showed that high neuroticism, low extraversion, low openness, low conscientiousness, and low agreeableness were associated with high job strain. High neuroticism, high openness, and low agreeableness were related to high demands, whereas high neuroticism, low extraversion, low openness, low conscientiousness, and low agreeableness were associated with low control. In the analyses for the simultaneously entered traits, high neuroticism, low openness, and low conscientiousness were associated with high job strain. In addition, high neuroticism was related to high demands and low control, whereas low extraversion was related to low demands and low control. Low openness and low conscientiousness were also related to low control. This study suggests that personality is related to perceived job strain. Perceptions of work stressors and decision latitude are not only indicators of structural aspects of work but also indicate that there are individual differences in how individuals experience their work environment.

  11. FermiGrid - experience and future plans

    SciTech Connect

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Timm, S.; Yocum, D.; /Fermilab

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems.

  12. An Optimization-oriented Simulation-based Job Shop Scheduling Method with Four Parameters Using Pattern Search

    NASA Astrophysics Data System (ADS)

    Arakawa, Masahiro; Fuyuki, Masahiko; Inoue, Ichiro

    Aiming at the elimination of tardy jobs in a job shop production schedule, an optimization-oriented simulation-based scheduling (OSBS) method incorporating a capacity adjustment function is proposed. In order to determine the pertinent additional capacities and to control job allocations simultaneously, the proposed method incorporates the parameter-space search improvement (PSSI) method into the scheduling procedure. In previous papers, we have introduced four parameters; two of them are used to control the upper limit to the additional capacity and the balance of the capacity distribution among machines, while the others are used to control the job allocation procedure. We found that a 'direct' optimization procedure which uses the enumeration method produces the best solution with practical significance, but it takes too much computation time for practical use. In this paper, we propose a new method which adopts a pattern search method in the schedule generation procedure to obtain an approximate optimal solution. It is found that the computation time becomes short enough for practical use. Moreover, the extension of the parameter domain yields an approximate optimal solution which is better than the best solution obtained by the 'direct' optimization.
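    The pattern-search idea that replaces the enumeration can be sketched generically. This is a textbook coordinate pattern search under assumed settings, not the paper's PSSI procedure; the quadratic objective is a stand-in for the simulation-based tardiness measure evaluated over the four parameters:

```python
# Hypothetical sketch of pattern search over a 4-parameter space: probe
# each coordinate in both directions, move to any improving point, and
# halve the step when no probe improves, until the step is small.

def pattern_search(objective, x0, step=1.0, tol=1e-3):
    x, fx = list(x0), objective(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = objective(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0                   # refine the pattern
    return x, fx
```

    Each outer iteration costs at most 2 x 4 objective evaluations, which is why such a search can be far cheaper than enumerating the parameter grid.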

  13. QoS Differential Scheduling in Cognitive-Radio-Based Smart Grid Networks: An Adaptive Dynamic Programming Approach.

    PubMed

    Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun

    2016-02-01

    As the next-generation power grid, smart grid will be integrated with a variety of novel communication technologies to support the explosive data traffic and the diverse requirements of quality of service (QoS). Cognitive radio (CR), which has the favorable ability to improve the spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in the CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee the differential QoS, the SGUs are assigned to have different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channels allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. By the online network training, the HDP can learn from the activities of primary users and SGUs, and adjust the scheduling decision to achieve the purpose of transmission delay minimization. Simulation results illustrate that the proposed priority policy ensures the low transmission delay of high priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing the differential QoS in smart grid. PMID:25910254
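    The priority policy (though not the HDP learner itself) can be sketched in a few lines. The tuple layout and tie-breaking rule are assumptions for illustration; the paper's scheduler additionally learns from primary-user activity, which is omitted here:

```python
# Hypothetical sketch of the QoS-aware priority policy only: pending
# smart grid users (SGUs) are ordered by priority (lower value = more
# important), with longer-waiting users first within a priority class,
# and the available CR channels go to the head of that order.

def allocate(requests, n_channels):
    """requests: list of (priority, wait_time, user) tuples.
    Returns the users granted a channel in this scheduling slot."""
    order = sorted(requests, key=lambda r: (r[0], -r[1]))
    return [user for _, _, user in order[:n_channels]]
```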

  14. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  15. Empowering the Older Job Seeker: Experimental Evaluation of the Older Worker Job Club.

    ERIC Educational Resources Information Center

    Gray, Denis

    Because older job seekers have been shown to exhibit less job search motivation and competence than other groups, a job club program based on learning and self help principles was developed to empower the older job seeker. Of persons (N=48) who requested assistance from a local area agency on aging, half entered the job club program and half were…

  16. Long Range Debye-Hückel Correction for Computation of Grid-based Electrostatic Forces Between Biomacromolecules

    SciTech Connect

    Mereghetti, Paolo; Martinez, M.; Wade, Rebecca C.

    2014-06-17

    Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme.
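    The correction idea can be sketched as follows. The screened point-charge form and the parameter values (relative permittivity, inverse Debye length) are illustrative assumptions, not the SDA implementation, which works with full interaction grids:

```python
# Hypothetical sketch of the long-range correction: inside the
# precomputed grid the tabulated potential is used; beyond the grid
# edge the potential is continued analytically with the screened
# Debye-Hueckel form phi(r) = q * exp(-kappa*r) / (4*pi*eps*r).
import math

def debye_hueckel(q, r, eps=78.5 * 8.854e-12, kappa=1.0e9):
    """Screened Coulomb potential of a point charge (SI units);
    kappa is the inverse Debye length of the ionic solution."""
    return q * math.exp(-kappa * r) / (4.0 * math.pi * eps * r)

def potential(q, r, grid_edge, grid_lookup, kappa=1.0e9):
    """Tabulated grid value inside the grid, analytic tail outside it."""
    if r <= grid_edge:
        return grid_lookup(r)
    return debye_hueckel(q, r, kappa=kappa)
```

    The analytic tail removes the finite-size error at the grid boundary: without it, the potential would drop abruptly to zero once a molecule pair moves beyond the grid extent.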

  17. A grid-based implementation of XDS-I as a part of a metropolitan EHR in Shanghai

    NASA Astrophysics Data System (ADS)

    Zhang, Jianguo; Zhang, Chenghao; Sun, Jianyong, Sr.; Yang, Yuanyuan; Jin, Jin; Yu, Fenghai; He, Zhenyu; Zheng, Xichuang; Qin, Huanrong; Feng, Jie; Zhang, Guozheng

    2007-03-01

    A number of hospitals in Shanghai are piloting the development of an EHR solution based on a grid concept with a service-oriented architecture (SOA). The first phase of the project targets the Diagnostic Imaging domain and allows seamless sharing of images and reports across the multiple hospitals. The EHR solution is fully aligned with the IHE XDS-I integration profile and consists of the components of the XDS-I Registry, Repository, Source and Consumer actors. By using SOA, the solution uses ebXML over secured HTTP for all transactions within the grid. However, communication with the PACS and RIS uses DICOM and HL7 v3.x. The solution was installed in three hospitals and one data center in Shanghai and tested for performance of data publication, user query and image retrieval. The results are extremely positive and demonstrate that the EHR solution based on SOA with a grid concept can scale effectively to serve a regional implementation.

  18. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
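    The parametric cubic interpolation used to generate refined grid points can be illustrated in one dimension. This is a generic Catmull-Rom sketch under assumed uniform parameterization, not OVERFLOW's implementation (which adds curvature- and stretching-based one-sided biasing):

```python
# Hypothetical sketch: refine one coordinate of a curvilinear grid line
# by inserting a parametric cubic (Catmull-Rom) point between each pair
# of original points, following the curve rather than the chord.

def catmull_rom(p0, p1, p2, p3, t):
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def refine(points):
    """Insert one cubically interpolated point between each pair;
    endpoints are clamped by repeating the boundary point."""
    out = []
    n = len(points)
    for i in range(n - 1):
        p0 = points[max(i - 1, 0)]
        p3 = points[min(i + 2, n - 1)]
        out.append(points[i])
        out.append(catmull_rom(p0, points[i], points[i + 1], p3, 0.5))
    out.append(points[-1])
    return out
```

    On uniformly spaced interior points the cubic reproduces the linear midpoint exactly; the benefit appears on curved or stretched grid lines, where the chord midpoint would fall off the surface.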

  19. Department 1824 Job Card System: A new web-based business tool

    SciTech Connect

    Brangan, J.R.

    1998-02-01

    The Analytical Chemistry Department uses a system of job cards to control and monitor the work through the organization. In the past, many different systems have been developed to allow each laboratory to monitor their individual work and report data. Unfortunately, these systems were separate and unique, which caused difficulty in ascertaining any overall picture of the Department's workload. To overcome these shortcomings, a new Job Card System was developed on Lotus Notes/Domino™ for tracking the work through the laboratory. This application is groupware/database software and is located on the Sandia Intranet, which allows users of any type of computer running a network browser to access the system. Security is provided through the use of logons and passwords for users who must add and/or modify information on the system. Customers may view the jobs in process by entering the system as an anonymous user. An overall view of the work in the department can be obtained by selecting from a variety of on screen reports. This enables the analysts, customers, customer contacts, and the Department Manager to quickly evaluate the work in process, the resources required, and the availability of equipment. On-line approval of the work and e-mail messaging of completed jobs have been provided to streamline the review and approval cycle. This paper provides a guide for the use of the Job Card System and information on maintenance of the system.

  20. Analysis and Validation of Grid DEM Generation Based on Gaussian Markov Random Field

    NASA Astrophysics Data System (ADS)

    Aguilar, F. J.; Aguilar, M. A.; Blanco, J. L.; Nemmaoui, A.; García Lorca, A. M.

    2016-06-01

    Digital Elevation Models (DEMs) are considered as one of the most relevant geospatial data to carry out land-cover and land-use classification. This work deals with the application of a mathematical framework based on a Gaussian Markov Random Field (GMRF) to interpolate grid DEMs from scattered elevation data. The performance of the GMRF interpolation model was tested on a set of LiDAR data (0.87 points/m2) provided by the Spanish Government (PNOA Programme) over a complex working area mainly covered by greenhouses in Almería, Spain. The original LiDAR data was decimated by randomly removing different fractions of the original points (from 10% up to 99% of points removed). In every case, the remaining points (scattered observed points) were used to obtain a 1 m grid spacing GMRF-interpolated Digital Surface Model (DSM) whose accuracy was assessed by means of the set of previously extracted checkpoints. The GMRF accuracy results were compared with those provided by the widely known Triangulation with Linear Interpolation (TLI). Finally, the GMRF method was applied to a real-world case consisting of filling the LiDAR-derived DSM gaps after manually filtering out non-ground points to obtain a Digital Terrain Model (DTM). Both GMRF and TLI produced visually pleasing results with similar vertical accuracy. As an added bonus, the GMRF mathematical framework makes it possible both to retrieve the estimated uncertainty for every interpolated elevation point (the DEM uncertainty) and to include break lines or terrain discontinuities between adjacent cells to produce higher quality DTMs.

  1. Old and Unemployable? How Age‐Based Stereotypes Affect Willingness to Hire Job Candidates

    PubMed Central

    Swift, Hannah J.; Drury, Lisbeth

    2016-01-01

    Across the world, people are required, or want, to work until an increasingly old age. But how might prospective employers view job applicants who have skills and qualities that they associate with older adults? This article draws on social role theory, age stereotypes and research on hiring biases, and reports three studies using age‐diverse North American participants. These studies reveal that: (1) positive older age stereotype characteristics are viewed less favorably as criteria for job hire, (2) even when the job role is low‐status, a younger stereotype profile tends to be preferred, and (3) an older stereotype profile is only considered hirable when the role is explicitly cast as subordinate to that of a candidate with a younger age profile. Implications for age‐positive selection procedures and ways to reduce the impact of implicit age biases are discussed. PMID:27635102

  3. Adapting a commercial power system simulator for smart grid based system study and vulnerability assessment

    NASA Astrophysics Data System (ADS)

    Navaratne, Uditha Sudheera

    The smart grid is the future of the power grid. Smart meters and the associated network play a major role in the distributed system of the smart grid. Advanced Metering Infrastructure (AMI) can enhance the reliability of the grid, generate efficient energy management opportunities and many innovations around the future smart grid. These innovations involve intense research not only on the AMI network itself but also on the influence an AMI network can have upon the rest of the power grid. This research describes a smart meter testbed with hardware in the loop that can facilitate future research in an AMI network. The smart meters in the testbed were developed such that their functionality can be customized to simulate any given scenario, such as integrating new hardware components into a smart meter or developing new encryption algorithms in firmware. These smart meters were integrated into the power system simulator to simulate the power flow variation in the power grid under different AMI activities. Each smart meter in the network also provides a communication interface to the home area network. This research delivers a testbed for emulating AMI activities and monitoring their effect on the smart grid.

  4. Using a representative sample of workers for constructing the SUMEX French general population based job-exposure matrix

    PubMed Central

    Gueguen, A; Goldberg, M; Bonenfant, S; Martin, J

    2004-01-01

    Background: Job-exposure matrices (JEMs) applicable to the general population are usually constructed by using only the expertise of specialists. Aims: To construct a population based JEM for chemical agents from data based on a sample of French workers for surveillance purposes. Methods: The SUMEX job-exposure matrix was constructed from data collected via a cross-sectional survey of a sample of French workers representative of the main economic sectors through the SUMER-94 survey: 1205 occupational physicians questioned 48 156 workers, and inventoried exposure to 102 chemicals. The companies' economic activities and the workers' occupations were coded according to the official French nomenclatures. A segmentation method was used to construct job groups that were homogeneous for exposure prevalence to chemical agents. The matrix was constructed in two stages: consolidation of occupations according to exposure prevalence; and establishment of exposure indices based on individual data from all the subjects in the sample. Results: An agent specific matrix could be constructed for 80 of the chemicals. The quality of the classification obtained for each was variable: globally, the performance of the method was better for less specific and therefore more easy to assess agents, and for exposures specific to certain occupations. Conclusions: Software has been developed to enable the SUMEX matrix to be used by occupational physicians and other prevention professionals responsible for surveillance of the health of the workforce in France. PMID:15208374

  5. glideinWMS - A generic pilot-based Workload Management System

    SciTech Connect

    Sfiligoi, Igor; /Fermilab

    2007-09-01

    Grid resources are distributed among hundreds of independent Grid sites, requiring a higher-level Workload Management System (WMS) if they are to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share, and just-in-time resource matching. GlideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool in which the Condor daemons (startds) are started by pilot jobs, and real jobs are vanilla, standard, or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains a structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.
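The division of labour described above (Frontends watch user jobs, Factories submit pilots) can be sketched in a few lines. This is an illustrative model only, assuming a simple per-site pressure rule; the function name and the capped-deficit formula are invented for the sketch, not glideinWMS's actual API:

```python
# Hypothetical sketch of the VO Frontend decision loop in a glideinWMS-like
# system: for each site, request enough pilot jobs ("glideins") to cover the
# idle user jobs not yet matched by a running pilot, capped by a pressure limit.
# The formula and all names are illustrative assumptions, not glideinWMS code.

def pilots_to_request(idle_jobs, running_pilots, max_pending=10):
    """Number of new pilots a frontend would ask a Glidein Factory to submit."""
    deficit = idle_jobs - running_pilots      # jobs not yet covered by a pilot
    return max(0, min(deficit, max_pending))  # never request a negative amount

sites = {
    "Site_A": {"idle_jobs": 25, "running_pilots": 20},
    "Site_B": {"idle_jobs": 3,  "running_pilots": 8},
}

requests = {name: pilots_to_request(**s) for name, s in sites.items()}
print(requests)   # Site_A needs 5 more pilots; Site_B is already over-provisioned
```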

  6. glideinWMS—a generic pilot-based workload management system

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.

    2008-07-01

    Grid resources are distributed among hundreds of independent Grid sites, requiring a higher-level Workload Management System (WMS) if they are to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share, and just-in-time resource matching. glideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool in which the Condor daemons (startds) are started by pilot jobs, and real jobs are vanilla, standard, or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains a structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.

  7. Software Based Barriers To Integration Of Renewables To The Future Distribution Grid

    SciTech Connect

    Stewart, Emma; Kiliccote, Sila

    2014-06-01

    The future distribution grid has complex analysis needs, which may not be met by existing processes and tools. In addition, a growing number of measured and grid-model data sources is becoming available. For these sources to be useful, they must be accurate and interpreted correctly. Data accuracy is a key barrier to the growth of the future distribution grid. A key goal for California, and for the United States, is increasing renewable penetration on the distribution grid. To increase this penetration, measured and modeled representations of generation must be accurate and validated, giving distribution planners and operators confidence in their performance. This study reviews the current state of these software and modeling barriers and the opportunities for the future distribution grid.

  8. A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies

    SciTech Connect

    Eça, L.; Hoekstra, M.

    2014-04-01

    This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well known Grid Convergence Index. Examples of application of the procedure are included. - Highlights: • Estimation of the numerical uncertainty of any integral or local flow quantity. • Least squares fits to power series expansions to handle noisy data. • Excellent results obtained for manufactured solutions. • Consistent results obtained for practical CFD calculations. • Reduces to the well known Grid Convergence Index for well-behaved data sets.
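The core of such a procedure can be illustrated with a minimal sketch: fit the one-term error model phi(h) = phi0 + alpha*h^p to solutions on systematically refined grids in the least-squares sense, then convert the error estimate into an uncertainty with a safety factor. This shows only the mechanics of the idea on manufactured data, not the authors' full method (which uses four expansion types and a safety factor that depends on the fit quality):

```python
import numpy as np

def fit_error_model(h, phi, orders=np.linspace(0.5, 4.0, 71)):
    """Least-squares fit of phi(h) = phi0 + alpha * h**p, scanning the order p."""
    best_resid, best = np.inf, None
    for p in orders:
        A = np.column_stack([np.ones_like(h), h ** p])  # unknowns: phi0, alpha
        coef, *_ = np.linalg.lstsq(A, phi, rcond=None)
        resid = np.linalg.norm(A @ coef - phi)
        if resid < best_resid:
            best_resid, best = resid, (coef[0], coef[1], p)
    return best

h = np.array([1.0, 0.5, 0.25, 0.125])    # typical cell sizes of refined grids
phi = 1.0 + 0.3 * h ** 2                 # manufactured data: exact value 1, order 2
phi0, alpha, p = fit_error_model(h, phi)

error = phi[-1] - phi0                   # estimated error on the finest grid
U = 1.25 * abs(error)                    # uncertainty with a safety factor of 1.25
print(round(phi0, 4), round(p, 2))       # → 1.0 2.0
```

On noise-free manufactured data the fit recovers the exact value and the expected observed order; on noisy practical data the least-squares residual is what drives the choice of error estimate and safety factor.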

  9. The impact of job crafting on job demands, job resources, and well-being.

    PubMed

    Tims, Maria; Bakker, Arnold B; Derks, Daantje

    2013-04-01

    This longitudinal study examined whether employees can impact their own well-being by crafting their job demands and resources. Based on the job demands-resources model, we hypothesized that employee job crafting would have an impact on work engagement, job satisfaction, and burnout through changes in job demands and job resources. Data was collected in a chemical plant at three time points with one month in between the measurement waves (N = 288). The results of structural equation modeling showed that employees who crafted their job resources in the first month of the study showed an increase in their structural and social resources over the course of the study (2 months). This increase in job resources was positively related to employee well-being (increased engagement and job satisfaction, and decreased burnout). Crafting job demands did not result in a change in job demands, but results revealed direct effects of crafting challenging demands on increases in well-being. We conclude that employee job crafting has a positive impact on well-being and that employees therefore should be offered opportunities to craft their own jobs.

  10. Active Job Monitoring in Pilots

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-12-01

    Recent developments in high energy physics (HEP), including multi-core jobs and multi-core pilots, require data centres to gain a deep understanding of the system in order to monitor, design, and upgrade computing clusters. Networking is a critical component. Especially the increased usage of data federations, for example in diskless computing centres or as a fallback solution, relies on WAN connectivity and availability. The specific demands of different experiments and communities, but also the need to identify misbehaving batch jobs, require active monitoring. Existing monitoring tools are not capable of measuring fine-grained information at batch job level. This complicates network-aware scheduling and optimisations. In addition, pilots add another layer of abstraction. They behave like batch systems themselves by managing and executing payloads of jobs internally. The number of real jobs being executed is unknown, as the original batch system has no access to internal information about the scheduling process inside the pilots. Therefore, the comparability of jobs and pilots for predicting run-time behaviour or network performance cannot be ensured. Hence, identifying the actual payload is important. At the GridKa Tier 1 centre a specific tool is in use that allows the monitoring of network traffic information at batch job level. This contribution presents the current monitoring approach and discusses recent efforts to identify pilots and their substructures inside the batch system, and why this identification matters. It also shows how to determine monitoring data of specific jobs from identified pilots. Finally, the approach is evaluated.
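One ingredient of attributing traffic to payloads hidden inside a pilot is walking the pilot's process subtree. A hedged, self-contained sketch follows (the data layout and all numbers are invented for the demo; the GridKa tool's actual mechanism is not described in the abstract):

```python
from collections import defaultdict

# Sketch: given (pid, ppid) pairs and per-process traffic counters, attribute
# all traffic of a pilot's process subtree to that pilot, so payload jobs
# hidden inside the pilot can still be accounted for.  Invented demo data.

def subtree_traffic(procs, traffic, pilot_pid):
    children = defaultdict(list)
    for pid, ppid in procs:
        children[ppid].append(pid)
    total, stack = 0, [pilot_pid]
    while stack:                          # walk the pilot's process subtree
        pid = stack.pop()
        total += traffic.get(pid, 0)
        stack.extend(children[pid])
    return total

procs = [(100, 1), (200, 100), (201, 100), (300, 1)]  # 100 = pilot, 300 = unrelated
traffic = {100: 5, 200: 40, 201: 15, 300: 99}         # bytes per process, say
print(subtree_traffic(procs, traffic, 100))           # → 60
```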

  11. DISTRIBUTED GRID-CONNECTED PHOTOVOLTAIC POWER SYSTEM EMISSION OFFSET ASSESSMENT: STATISTICAL TEST OF SIMULATED- AND MEASURED-BASED DATA

    EPA Science Inventory

    This study assessed the pollutant emission offset potential of distributed grid-connected photovoltaic (PV) power systems. Computer-simulated performance results were utilized for 211 PV systems located across the U.S. The PV systems' monthly electrical energy outputs were based ...

  12. Using Grid Benchmarks for Dynamic Scheduling of Grid Applications

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert

    2003-01-01

    Navigation, or dynamic scheduling, of applications on computational grids can be improved through the use of an application-specific characterization of grid resources. Current grid information systems provide a description of the resources, but do not contain any application-specific information. We define the GridScape as the dynamic state of the grid resources. We measure the dynamic performance of these resources using the grid benchmarks. Then we use the GridScape for automatic assignment of the tasks of a grid application to grid resources. The scalability of the system is achieved by limiting the navigation overhead to a few percent of the application resource requirements. Our task submission and assignment protocol guarantees that the navigation system does not cause grid congestion. On a synthetic data mining application we demonstrate that GridScape-based task assignment reduces the application turnaround time.
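The benchmark-driven assignment idea can be illustrated with a toy scheduler: summarize each resource's dynamic state by a measured benchmark score, then greedily assign tasks to the resource with the earliest projected finish time. The names, the scores, and the greedy rule are assumptions for illustration, not the paper's actual protocol:

```python
import heapq

def assign(tasks, benchmark):
    """tasks: {name: work units}; benchmark: {resource: measured units/sec}."""
    heap = [(0.0, r) for r in benchmark]  # (projected free time, resource)
    heapq.heapify(heap)
    schedule = {}
    for name, work in sorted(tasks.items(), key=lambda t: -t[1]):  # big tasks first
        free_at, r = heapq.heappop(heap)
        finish = free_at + work / benchmark[r]    # benchmark predicts run time
        schedule[name] = r
        heapq.heappush(heap, (finish, r))
    return schedule

sched = assign({"t1": 100, "t2": 10, "t3": 10},
               {"fast": 10.0, "slow": 1.0})
print(sched)   # → {'t1': 'fast', 't2': 'slow', 't3': 'fast'}
```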

  13. Job enrichment in job design.

    PubMed

    Bobeng, B J

    1977-03-01

    For optimal operation in labor-intensive industries, such as foodservice, not only scientific management principles but also behavioral aspects (the people) must be considered in designing job content. Three psychologic states--work that is meaningful, responsibility for outcomes, and knowledge of outcomes--are critical in motivating people. These, in turn, encompass the core dimensions of skill variety, task identity, task significance, autonomy, and feedback. Job enrichment and job enlargement--related but not identical means of expanding job content--when combined, offer the likelihood of jobs redesigned along the core dimensions. Effective implementation of a job enrichment program hinges on diagnosing problems in the work system, actual changes in the work, and systematic evaluation of the changes. The importance of the contribution of the behavioral sciences to management cannot be neglected.

  14. Smart grid initialization reduces the computational complexity of multi-objective image registration based on a dual-dynamic transformation model to account for large anatomical differences

    NASA Astrophysics Data System (ADS)

    Bosman, Peter A. N.; Alderliesten, Tanja

    2016-03-01

    We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume, with and without a multi-resolution scheme, and find a substantial benefit of using smart grid initialization.
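A one-dimensional analogue conveys the image-feature-based initialization idea: place more control points where the image changes rapidly, by equidistributing "feature mass" (gradient magnitude). This is a sketch of the principle only, not the authors' 2-D procedure:

```python
import numpy as np

# 1-D analogue of feature-based smart grid initialization: allocate control
# points so each interval carries roughly equal gradient-magnitude mass,
# concentrating points where the image is expected to deform.  Demo data only.

def feature_based_knots(signal, n_points):
    grad = np.abs(np.gradient(signal)) + 1e-9   # feature density (avoid zeros)
    cdf = np.cumsum(grad)
    cdf /= cdf[-1]                              # normalized cumulative mass
    targets = np.linspace(0.0, 1.0, n_points)
    return np.searchsorted(cdf, targets)        # indices of control points

x = np.linspace(0, 1, 200)
signal = np.where(x < 0.5, 0.0, 1.0)            # one sharp edge at x = 0.5
knots = feature_based_knots(signal, 5)
print(knots)  # interior points cluster around index ~100, where the edge is
```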

  15. Development of Smart Grid for Community and Cyber based Landslide Hazard Monitoring and Early Warning System

    NASA Astrophysics Data System (ADS)

    Karnawati, D.; Wilopo, W.; Fathani, T. F.; Fukuoka, H.; Andayani, B.

    2012-12-01

    A Smart Grid is a cyber-based tool that facilitates a network of sensors for monitoring and communicating landslide hazard and providing early warning. The sensors are designed both as electronic sensors installed in the existing monitoring and early-warning instruments, and as human sensors: selected, committed people in the local community, such as local surveyors, local observers, members of the local task force for disaster risk reduction, and any person in the local community who has registered to report landslide symptoms observed in their living environment. This tool is designed to be capable of receiving up to thousands of reports at the same time through the electronic sensors, text messages (mobile phone), an on-line participatory web, and various social media such as Twitter and Facebook. The information recorded or reported by the sensors relates to the parameters of landslide symptoms, for example the progress of crack occurrence, ground subsidence, or ground deformation. Within 10 minutes, the tool can automatically elaborate and analyse the reported symptoms to predict the landslide hazard and risk levels. The predicted hazard/risk level can be sent back to the network of electronic and human sensors as early warning information. The key parameters indicating the symptoms of landslide hazard were recorded and monitored by the electronic and human sensors. Those parameters were identified based on investigation of the geological and geotechnical conditions, supported by laboratory analysis. The cause and triggering mechanism of landslides in the study area was also analysed in order to define the critical condition for launching the early warning. However, not only the technical but also the social system was developed to raise community awareness and commitment to serve the mission as human sensors, which will

  16. Improved mask-based CD uniformity for gridded-design-rule lithography

    NASA Astrophysics Data System (ADS)

    Faivishevsky, Lev; Khristo, Sergey; Sagiv, Amir; Mangan, Shmoolik

    2009-03-01

    The difficulties encountered during lithography of state-of-the-art 2D patterns are formidable, and originate from the fact that deep sub-wavelength features are being printed. This results in a practical limit of k1 >= 0.4 as well as a multitude of complex restrictive design rules, in order to mitigate or minimize lithographic hot spots. An alternative approach, which is gradually attracting the lithographic community's attention, restricts the design of critical layers to straight, dense lines (a 1D grid) that can be relatively easily printed using current lithographic technology. This is then followed by subsequent, less critical trimming stages to obtain circuit functionality. Thus, the 1D gridded approach allows hotspot-free, proximity-effect-free lithography of ultra-low-k1 features. These advantages must be supported by a stable CD control mechanism. One of the overriding parameters impacting CDU performance is photomask quality. Previous publications have demonstrated that IntenCD™ - a novel, mask-based CDU mapping technology running on Applied Materials' Aera2™ aerial imaging mask inspection tool - is ideally suited for detecting mask-based CDU issues in 1D (L&S) patterned masks for memory production. Owing to the aerial nature of image formation, IntenCD directly probes the CD as it is printed on the wafer. In this paper we suggest that IntenCD is naturally suited for detecting mask-based CDU issues in 1D GDR masks. We then study a novel method of recovering and quantifying the physical source of printed CDU, using a novel implementation of the IntenCD technology. We demonstrate that additional, simple measurements, which can be readily performed on board the Aera2™ platform with minimal throughput penalty, may complement IntenCD and allow a robust estimation of the specific nature and strength of a mask error source, such as pattern width variation or phase variation, which leads to CDU issues on the printed wafer. We finally discuss the roles played by

  17. The Impact of Diagnosis on Job Retention: A Danish Registry-Based Cohort Study.

    PubMed

    Espersen, Rasmus; Jensen, Vibeke; Berg Johansen, Martin; Fonager, Kirsten

    2015-01-01

    Background. In 1998, Denmark introduced the flex job scheme to ensure employment of people with a permanently reduced work capacity. This study investigated the association between select diagnoses and the risk of disability pension among persons eligible for the scheme. Methods. Using the national DREAM database we identified all persons eligible for the flex job scheme from 2001 to 2008. This information was linked to the hospital discharge registry. Selected participants were followed for 5 years. Results. From the 72,629 persons identified, our study included 329 patients with rheumatoid arthritis, 10,120 patients with spine disorders, 2179 patients with ischemic heart disease, and 1765 patients with functional disorders. A reduced risk of disability pension was found in the group with rheumatoid arthritis (hazard ratio = 0.69 (0.53-0.90)) compared to the group with spine disorders. No differences were found when comparing ischemic heart disease and functional disorders. Employment during the first 3 months of the flex job scheme increased the degree of employment for all groups. Conclusion. Differences in the risk of disability pension were identified only in patients with rheumatoid arthritis. This study demonstrates the importance of obtaining employment immediately after allocation to the flex job scheme, regardless of diagnosis. PMID:26697223

  18. A Gender Based Study on Job Satisfaction among Higher Secondary School Heads in Khyber Pakhtunkhwa, (Pakistan)

    ERIC Educational Resources Information Center

    Mumtaz, Safina; Suleman, Qaiser; Ahmad, Zubair

    2016-01-01

    The purpose of the study was to analyze and compare the job satisfaction with twenty dimensions of male and female higher secondary school heads in Khyber Pakhtunkhwa. A total of 108 higher secondary school heads were selected from eleven districts as sample through multi-stage sampling technique in which 66 were male and 42 were female. The study…

  19. Beginning Teachers' Job Satisfaction: The Impact of School-Based Factors

    ERIC Educational Resources Information Center

    Lam, Bick-har; Yan, Hoi-fai

    2011-01-01

    Using a longitudinal design, the job satisfaction and career development of beginning teachers are explored in the present study. Beginning teachers were initially interviewed after graduation from the teacher training programme and then after gaining a two-year teaching experience. The results are presented in a fourfold typology in which the…

  20. Performance-Based Certification in Georgia. Teaching Field Criterion-Referenced Tests Development. On-the-Job Assessment Development. Schedule for Implementation of Performance-Based Certification.

    ERIC Educational Resources Information Center

    Georgia State Dept. of Education, Atlanta. Office of Planning and Development.

    Performance-based teacher certification in Georgia is centered on: criterion-referenced tests, and on-the-job assessment procedures for student teachers and beginning teachers. A state-funded contract was awarded to develop teacher certification tests, resulting in a 250-item pool and 15 criterion-referenced tests for 32 teaching fields. Cutting…

  1. Safe Grid

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)

    2003-01-01

    The biggest users of Grid technologies come from the science and technology communities: government, industry, and academia (national and international). The NASA Grid is moving to a higher technology readiness level (TRL) today; as a joint effort among these leaders within government, academia, and industry, the NASA Grid plans to extend availability so that scientists and engineers across these geographical boundaries can collaborate to solve important problems facing the world in the 21st century. In order for NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, Grid penetration across different firewalls is a concern for center security staff. This is why some IPG resources have been separated from the NASA center network. Also, because of center network security and ITAR concerns, a NASA IPG resource owner may not have full control over who can access the resource remotely from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different Grid applications, the SAFE "micro security domain" provides a large number of professionally managed "micro firewalls" that allow NASA centers to accept remote IPG access without the risk of damaging other center resources.
The SAFE policy-driven capability-based federated security mechanism can enable joint organizational and resource owner approved remote

  2. Sound Source Localization for HRI Using FOC-Based Time Difference Feature and Spatial Grid Matching.

    PubMed

    Li, Xiaofei; Liu, Hong

    2013-08-01

    In human-robot interaction (HRI), speech sound source localization (SSL) is a convenient and efficient way to obtain the relative position between a speaker and a robot. However, implementing an SSL system based on the TDOA method encounters many problems, such as noise in real environments, the solution of nonlinear equations, and the switch between far field and near field. In this paper, the fourth-order cumulant spectrum is derived, based on which a time delay estimation (TDE) algorithm is proposed that is applicable to speech signals and immune to spatially correlated Gaussian noise. Furthermore, the time difference feature of a sound source and its spatial distribution are analyzed, and a spatial grid matching (SGM) algorithm is proposed for the localization step, which effectively handles several problems that geometric positioning methods face. A valid-feature detection algorithm and a decision tree method are also suggested to improve localization performance and reduce computational complexity. Experiments are carried out in real environments on a mobile robot platform, in which thousands of sets of speech data with noise, collected by four microphones, are tested in 3D space. The effectiveness of our TDE method and SGM algorithm is verified. PMID:26502430
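The spatial grid matching step can be sketched as a nearest-feature search: precompute the theoretical time-difference feature for every cell of a candidate grid, then pick the cell that best matches the measured TDOAs. The geometry, grid, and noise-free "measurement" below are invented for the demo, and the paper's FOC-based TDE step is not reproduced:

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def tdoa_feature(src, mics):
    """Time differences of arrival w.r.t. microphone 0 (the TD feature)."""
    d = np.linalg.norm(mics - src, axis=1)
    return (d[1:] - d[0]) / C

def locate(measured, grid, mics):
    """Spatial grid matching: return the grid cell whose feature fits best."""
    feats = np.array([tdoa_feature(p, mics) for p in grid])
    errs = np.linalg.norm(feats - measured, axis=1)
    return grid[np.argmin(errs)]

mics = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)  # toy 2-D array
grid = np.array([[x, y] for x in np.arange(0, 5.01, 0.25)
                         for y in np.arange(0, 5.01, 0.25)])
true_src = np.array([2.0, 3.0])
measured = tdoa_feature(true_src, mics)     # noise-free for the demo
print(locate(measured, grid, mics))         # → [2. 3.]
```

No nonlinear equations are solved: localization reduces to evaluating the feature on the grid, which is one reason a grid-matching step sidesteps problems of geometric positioning.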

  3. High energy IED measurements with MEMS-based Si grid technology inside a 300 mm Si wafer

    NASA Astrophysics Data System (ADS)

    Funk, Merritt

    2012-10-01

    The measurement of ion energy at the wafer surface for commercial equipment and process development, without extensive modification of the reactor geometry, has been an industry challenge. High energy, a wide frequency range, tolerance to process gases, freedom from contamination, and accurate ion energy measurement are the base requirements. In this work we report on the complete system developed to achieve these requirements. The system includes: a reusable silicon ion energy analyzer (IEA) wafer, signal feed-through, RF confinement, and high-voltage measurement and control. The IEA wafer design required careful understanding of the relationships between the plasma Debye length, the number of grids, intergrid charge exchange (spacing), capacitive coupling, materials, and dielectric flashover constraints. RF confinement with measurement transparency was addressed so as not to disturb the chamber plasma, wafer sheath, and DC self-bias, as well as to achieve spectral accuracy. The experimental results were collected using a commercial parallel-plate etcher powered by dual frequencies (VHF + LF). Modeling and simulation also confirmed the details captured in the IED.

  4. Sound Source Localization for HRI Using FOC-Based Time Difference Feature and Spatial Grid Matching.

    PubMed

    Li, Xiaofei; Liu, Hong

    2013-08-01

    In human-robot interaction (HRI), speech sound source localization (SSL) is a convenient and efficient way to obtain the relative position between a speaker and a robot. However, implementing an SSL system based on the TDOA method encounters many problems, such as noise in real environments, the solution of nonlinear equations, and the switch between far field and near field. In this paper, the fourth-order cumulant spectrum is derived, based on which a time delay estimation (TDE) algorithm is proposed that is applicable to speech signals and immune to spatially correlated Gaussian noise. Furthermore, the time difference feature of a sound source and its spatial distribution are analyzed, and a spatial grid matching (SGM) algorithm is proposed for the localization step, which effectively handles several problems that geometric positioning methods face. A valid-feature detection algorithm and a decision tree method are also suggested to improve localization performance and reduce computational complexity. Experiments are carried out in real environments on a mobile robot platform, in which thousands of sets of speech data with noise, collected by four microphones, are tested in 3D space. The effectiveness of our TDE method and SGM algorithm is verified.

  5. Information Theoretically Secure, Enhanced Johnson Noise Based Key Distribution over the Smart Grid with Switched Filters

    PubMed Central

    2013-01-01

    We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions. PMID:23936164

  6. Information theoretically secure, enhanced Johnson noise based key distribution over the smart grid with switched filters.

    PubMed

    Gonzalez, Elias; Kish, Laszlo B; Balog, Robert S; Enjeti, Prasad

    2013-01-01

    We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions.

  8. Grid adaption based on modified anisotropic diffusion equations formulated in the parametric domain

    SciTech Connect

    Hagmeijer, R.

    1994-11-01

    A new grid-adaption algorithm for problems in computational fluid dynamics is presented. The basic equations are derived from a variational problem formulated in the parametric domain of the mapping that defines the existing grid. Modification of the basic equations provides desirable properties in boundary layers. The resulting modified anisotropic diffusion equations are solved for the computational coordinates as functions of the parametric coordinates and these functions are numerically inverted. Numerical examples show that the algorithm is robust, that shocks and boundary layers are well-resolved on the adapted grid, and that the flow solution becomes a globally smooth function of the computational coordinates.
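A 1-D analogue shows the spirit of solution-adaptive gridding (though not the paper's anisotropic diffusion formulation in the parametric domain): redistribute grid points so that each cell carries equal monitor-function mass, which clusters points in shocks and boundary layers. The monitor function below is a common textbook choice, assumed here for illustration:

```python
import numpy as np

# 1-D grid adaption by equidistributing the arc-length monitor function
# w = sqrt(1 + (du/dx)^2): points concentrate where the solution is steep.

def adapt_grid(x, u, n):
    w = np.sqrt(1.0 + np.gradient(u, x) ** 2)       # monitor function
    # cumulative monitor mass via the trapezoidal rule
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, W[-1], n)            # equal mass per cell
    return np.interp(targets, W, x)                 # invert the cumulative map

x = np.linspace(0.0, 1.0, 400)
u = np.tanh(50.0 * (x - 0.5))                       # steep interior layer at x = 0.5
xa = adapt_grid(x, u, 21)
# well over half of the 21 adapted points fall inside the thin layer
print(int(np.sum(np.abs(xa - 0.5) < 0.1)))
```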

  9. ReSS: A Resource Selection Service for the Open Science Grid

    SciTech Connect

    Garzoglio, Gabriele; Levshina, Tanya; Mhashilkar, Parag; Timm, Steve; /Fermilab

    2008-01-01

    The Open Science Grid offers access to hundreds of computing and storage resources via standard Grid interfaces. Before the deployment of an automated resource selection system, users had to submit jobs directly to these resources: they would manually select a resource and specify all relevant attributes in the job description prior to submitting the job. The necessity of human intervention in resource selection and attribute specification hinders automated job management components from accessing OSG resources, and it is inconvenient for users. The Resource Selection Service (ReSS) project addresses these shortcomings. The system integrates Condor technology, for the core matchmaking service, with the gLite CEMon component, for gathering and publishing resource information in the Glue Schema format. Each of these components communicates over secure protocols via web services interfaces. The system is currently used in production on OSG by the DZero Experiment, the Engagement Virtual Organization, and the Dark Energy Survey. It is also the resource selection service for the Fermilab Campus Grid, FermiGrid. ReSS is considered a lightweight solution to push-based workload management. This paper describes the architecture, performance, and typical usage of the system.
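The core matchmaking idea can be sketched without Condor itself: each resource publishes an ad of attributes, and a job's Requirements expression is evaluated against every ad. The attribute names and values below are invented for the sketch; real ReSS uses Condor ClassAds populated from Glue Schema information:

```python
# ClassAd-style matchmaking sketch.  Each resource "ad" is a dict of published
# attributes; a job's Requirements is modelled as a predicate over one ad.
# All attribute names and values are hypothetical.

resources = [
    {"Name": "site1", "FreeCPUs": 120, "MaxWallTimeMin": 2880, "VO": ["dzero", "engage"]},
    {"Name": "site2", "FreeCPUs": 4,   "MaxWallTimeMin": 720,  "VO": ["des"]},
]

def matching(resources, requirements):
    """Return the names of all resource ads satisfying the job's Requirements."""
    return [r["Name"] for r in resources if requirements(r)]

# analogue of: Requirements = FreeCPUs > 10 && MaxWallTimeMin >= 1440 && member("dzero", VO)
req = lambda r: r["FreeCPUs"] > 10 and r["MaxWallTimeMin"] >= 1440 and "dzero" in r["VO"]
print(matching(resources, req))   # → ['site1']
```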

  10. Grid-based estimates of stellar ages in binary systems. SCEPtER: Stellar CharactEristics Pisa Estimation gRid

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2015-07-01

    Aims: We investigate the performance of grid-based techniques in estimating the age of stars in detached eclipsing binary systems. We evaluate the precision of the estimates due to the uncertainty in the observational constraints - masses, radii, effective temperatures, and [Fe/H] - and the systematic bias caused by the uncertainty in convective core overshooting, element diffusion, mixing-length value, and initial helium content. Methods: We adopted the SCEPtER grid, which includes stars with mass in the range [0.8; 1.6] M⊙ and evolutionary stages from the zero-age main sequence to the central hydrogen depletion. Age estimates have been obtained by a generalisation of the maximum likelihood technique described in our previous work. Results: We showed that the typical 1σ random error in age estimates - due only to the uncertainty affecting the observational constraints - is about ± 7%, which is nearly independent of the masses of the two stars. However, such an error strongly depends on the evolutionary phase and becomes larger and asymmetric for stars near the zero-age main sequence, where it ranges from about + 90% to -25%. The systematic bias due to including convective core overshooting - for mild and strong overshooting scenarios - is about 50% and 120%, respectively, of the error due to observational uncertainties. A variation of ± 1 in the helium-to-metal enrichment ratio ΔY/ΔZ accounts for about ± 150% of the random error. The neglect of microscopic diffusion accounts for a bias of about 60% of the error due to observational uncertainties. We also introduced a statistical test of the expected difference in the recovered age of two coeval stars in a binary system. We find that random fluctuations within the current observational uncertainties can lead genuine coeval binary components to appear non-coeval, with a difference in age as high as 60%. Appendix A is available in electronic form at http://www.aanda.org
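The grid-based maximum-likelihood recovery itself is simple to sketch: precompute observables on a grid of models, then select the grid point that minimizes a Gaussian chi-square given the observed values and their errors. The stellar "model" below is a fake linear relation, purely to show the estimator's mechanics; it is not SCEPtER's grid or physics:

```python
import numpy as np

# Toy grid of models: axes = (mass, age); one observable = radius(mass, age).
mass = np.linspace(0.8, 1.6, 81)
age = np.linspace(0.5, 10.0, 96)
M, A = np.meshgrid(mass, age)
radius = 0.9 * M + 0.02 * A      # hypothetical relation, not real stellar physics

def estimate_age(obs_mass, obs_radius, sig_mass, sig_radius):
    """Grid-based maximum likelihood = minimum Gaussian chi-square."""
    chi2 = ((M - obs_mass) / sig_mass) ** 2 + ((radius - obs_radius) / sig_radius) ** 2
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return A[i, j]

true_mass, true_age = 1.0, 4.0
obs_radius = 0.9 * true_mass + 0.02 * true_age   # noise-free observation
print(estimate_age(true_mass, obs_radius, 0.01, 0.01))  # ≈ 4.0
```

Perturbing the observations within their errors and repeating the estimate is how the random error quoted in the abstract can be propagated; biases arise when the grid's input physics differs from the star's.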

  11. Securing smart grid technology

    NASA Astrophysics Data System (ADS)

    Chaitanya Krishna, E.; Kosaleswara Reddy, T.; Reddy, M. YogaTeja; Reddy G. M., Sreerama; Madhusudhan, E.; AlMuhteb, Sulaiman

    2013-03-01

    In developing countries, electrical energy is vital for all-round development: money saved on energy can be invested in other sectors. The hierarchical, centrally controlled grid of the 20th century is not sufficient for growing power needs. To produce and deliver power effectively to industry and homes, we need smarter electrical grids that address the challenges of the existing power grid. The smart grid can be considered a modern electric power grid infrastructure with enhanced efficiency and reliability through automated control, high-power converters, a modern communications infrastructure along with modern IT services, sensing and metering technologies, and modern energy management techniques based on the optimization of demand, energy and network availability. The main objective of this paper is to provide a contemporary look at the current state of the art in smart grid communications, as well as critical issues in smart grid technologies, primarily information and communication technology (ICT) issues such as security and efficiency at the communications layer. In this paper we propose a new model for security in smart grid technology that contains a Security Module (SM) along with DEM, which will enhance security in the grid. It is expected that this paper will provide a better understanding of the technologies, potential advantages and research challenges of the smart grid and provoke interest among the research community to further explore this promising research area.

  12. Motivating medical information system performance by system quality, service quality, and job satisfaction for evidence-based practice

    PubMed Central

    2012-01-01

    Background No previous studies have addressed the integrated relationships among system quality, service quality, job satisfaction, and system performance; this study attempts to bridge such a gap with an evidence-based practice study. Methods The convenience sampling method was applied to the information system users of three hospitals in southern Taiwan. A total of 500 copies of questionnaires were distributed, and 283 returned copies were valid, suggesting a valid response rate of 56.6%. SPSS 17.0 and AMOS 17.0 (structural equation modeling) statistical software packages were used for data analysis and processing. Results The findings are as follows: System quality has a positive influence on service quality (γ11= 0.55), job satisfaction (γ21= 0.32), and system performance (γ31= 0.47). Service quality (β31= 0.38) and job satisfaction (β32= 0.46) will positively influence system performance. Conclusions It is thus recommended that the information office of hospitals and developers take enhancement of service quality and user satisfaction into consideration in addition to placing emphasis on system quality and information quality when designing, developing, or purchasing an information system, in order to improve benefits and gain more achievements generated by hospital information systems. PMID:23171394

  13. Initial Study on the Predictability of Real Power on the Grid based on PMU Data

    SciTech Connect

    Ferryman, Thomas A.; Tuffner, Francis K.; Zhou, Ning; Lin, Guang

    2011-03-23

    Operations on the electric power grid provide highly reliable power to the end users. These operations involve hundreds of human operators and automated control schemes. However, the operations process can often take several minutes to complete. During these several minutes, the operations are often evaluated on a past state of the power system. Proper prediction methods could change this to make the operations evaluate the state of the power grid minutes in advance. Such information allows proactive, rather than reactive, actions on the power system and aids in improving the efficiency and reliability of the power grid as a whole. A successful demonstration of this prediction framework is necessary to evaluate the feasibility of utilizing such predicted states in grid operations.

  14. Grid-based precision aim system and method for disrupting suspect objects

    DOEpatents

    Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.

    2014-06-10

    A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.

  15. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady-state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  16. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies.

    PubMed

    Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui

    2009-01-01

    The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
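    The core of such a population spatialization step is dasymetric redistribution: a region's census total is spread over its grid cells in proportion to land-use weights. A minimal sketch, where the weight table and cell layout are invented for illustration (the PSM's fitted natural and socio-economic coefficients are not reproduced here):

    ```python
    import numpy as np

    def spatialize_population(total_pop, landuse_codes, weights):
        """Redistribute a region's census total onto its grid cells.

        total_pop     : census population of the region.
        landuse_codes : 2-D array of land-use class codes, one per cell.
        weights       : dict mapping land-use code -> relative population
                        density weight (illustrative values only).
        Returns an array of the same shape whose entries sum to total_pop.
        """
        w = np.vectorize(weights.get)(landuse_codes).astype(float)
        if w.sum() == 0:
            raise ValueError("no populated land-use classes in region")
        return total_pop * w / w.sum()
    ```

    Updating the gridded database then reduces to re-deriving `landuse_codes` from fresh MODIS classifications and re-running the redistribution per census region.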

  17. Long range Debye-Hückel correction for computation of grid-based electrostatic forces between biomacromolecules

    PubMed Central

    2014-01-01

    Background Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. Results We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. Conclusions An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials. PMID:25045516
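    The correction amounts to switching from the tabulated grid potential inside the grid extent to an analytic screened (Debye-Hückel) Coulomb form beyond it. A minimal sketch with placeholder units and a hypothetical grid lookup; SDA's actual implementation differs in detail:

    ```python
    import numpy as np

    def dh_potential(q_eff, r, kappa, eps=78.5):
        """Screened Coulomb potential of an effective charge q_eff at
        distance r, with inverse Debye length kappa (1/Angstrom)."""
        coulomb = 332.06  # kcal*Angstrom/(mol*e^2), vacuum Coulomb constant
        return coulomb * q_eff * np.exp(-kappa * r) / (eps * r)

    def potential(q_eff, r, kappa, grid_edge, grid_lookup):
        """Use the precomputed grid inside its extent, the analytic
        Debye-Hückel tail beyond it (grid_lookup is a stand-in for the
        trilinear grid interpolation)."""
        if r < grid_edge:
            return grid_lookup(r)
        return dh_potential(q_eff, r, kappa)
    ```

    The screening factor exp(-kappa*r) is why the correction matters most at low ionic strength, where kappa is small and the tail decays slowly.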

  18. Grid-based Molecular Footprint Comparison Method for Docking and De Novo Design: Application to HIVgp41

    PubMed Central

    Mukherjee, Sudipto; Rizzo, Robert C.

    2014-01-01

    Scoring functions are a critically important component of computer-aided screening methods for the identification of lead compounds during early stages of drug discovery. Here, we present a new multi-grid implementation of the footprint similarity (FPS) scoring function that was recently developed in our laboratory which has proven useful for identification of compounds which bind to a protein on a per-residue basis in a way that resembles a known reference. The grid-based FPS method is much faster than its Cartesian-space counterpart which makes it computationally tractable for on-the-fly docking, virtual screening, or de novo design. In this work, we establish that: (i) relatively few grids can be used to accurately approximate Cartesian space footprint similarity, (ii) the method yields improved success over the standard DOCK energy function for pose identification across a large test set of experimental co-crystal structures, for crossdocking, and for database enrichment, and (iii) grid-based FPS scoring can be used to tailor construction of new molecules to have specific properties, as demonstrated in a series of test cases targeting the viral protein HIVgp41. The method will be made available in the program DOCK6. PMID:23436713
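    The footprint idea is a comparison of per-residue interaction-energy vectors between a candidate pose and a reference. A Cartesian-space sketch using a Euclidean metric (one of several FPS variants; the paper's grid-based version approximates the same per-residue sums on a small set of precomputed energy grids):

    ```python
    import numpy as np

    def footprint_similarity(fp_ref, fp_pose):
        """Euclidean distance between two per-residue energy footprints.

        fp_ref, fp_pose : arrays of per-residue interaction energies
        (e.g. van der Waals or electrostatic terms) for the reference
        ligand and the candidate pose. Smaller values mean the pose
        engages the protein residues more like the reference does.
        """
        fp_ref = np.asarray(fp_ref, dtype=float)
        fp_pose = np.asarray(fp_pose, dtype=float)
        return float(np.sqrt(np.sum((fp_ref - fp_pose) ** 2)))
    ```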

  19. GNARE: an environment for Grid-based high-throughput genome analysis.

    SciTech Connect

    Sulakhe, D.; Rodriguez, A.; D'Souza, M.; Wilde, M.; Nefedova, V.; Foster, I.; Maltsev, N.; Mathematics and Computer Science; Univ. of Chicago

    2005-01-01

    Recent progress in genomics and experimental biology has brought exponential growth of the biological information available for computational analysis in public genomics databases. However, applying the potentially enormous scientific value of this information to the understanding of biological systems requires computing and data storage technology of an unprecedented scale. The grid, with its aggregated and distributed computational and storage infrastructure, offers an ideal platform for high-throughput bioinformatics analysis. To leverage this we have developed the Genome Analysis Research Environment (GNARE) - a scalable computational system for the high-throughput analysis of genomes, which provides an integrated database and computational backend for data-driven bioinformatics applications. GNARE efficiently automates the major steps of genome analysis including acquisition of data from multiple genomic databases; data analysis by a diverse set of bioinformatics tools; and storage of results and annotations. High-throughput computations in GNARE are performed using distributed heterogeneous grid computing resources such as Grid2003, TeraGrid, and the DOE science grid. Multi-step genome analysis workflows involving massive data processing, the use of application-specific tools and algorithms and updating of an integrated database to provide interactive Web access to results are all expressed and controlled by a 'virtual data' model which transparently maps computational workflows to distributed grid resources. This paper describes how Grid technologies such as Globus, Condor, and the GriPhyN virtual data system were applied in the development of GNARE. It focuses on our approach to Grid resource allocation and to the use of GNARE as a computational framework for the development of bioinformatics applications.

  20. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the influence factors of power grid investment capacity, an investment capacity analysis model is built, taking depreciation cost, sales price and sales quantity, net profit, financing, and GDP of the secondary industry as variables. After carrying out a Kolmogorov-Smirnov test, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.
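    The Monte Carlo step can be sketched as follows: each influence factor gets a fitted distribution, samples are drawn, and the capacity model is evaluated per draw. The normal distributions and the linear capacity model below are illustrative stand-ins, not the paper's KS-fitted distributions or its actual model:

    ```python
    import random
    import statistics

    def simulate_capacity(n=50_000, seed=1):
        """Propagate input uncertainty through a toy investment-capacity
        model by Monte Carlo; returns (mean, standard deviation)."""
        rng = random.Random(seed)
        samples = []
        for _ in range(n):
            sales = rng.gauss(100.0, 10.0)        # sales revenue
            depreciation = rng.gauss(20.0, 2.0)   # depreciation cost
            financing = rng.gauss(30.0, 5.0)      # financing
            samples.append(sales - depreciation + financing)
        return statistics.mean(samples), statistics.stdev(samples)
    ```

    With independent normal inputs the output spread here is just the root-sum-square of the input spreads; the value of the simulation is that it works equally well for the empirical, non-normal distributions a KS test would select.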

  1. AIRS Observations Based Evaluation of Relative Climate Feedback Strengths on a GCM Grid-Scale

    NASA Astrophysics Data System (ADS)

    Molnar, G. I.; Susskind, J.

    2012-12-01

    Climate feedback strengths, especially those associated with moist processes, still have a rather wide range in GCMs, the primary tools to predict future climate changes associated with man's ever increasing influences on our planet. Here, we make use of the first 10 years of AIRS observations to evaluate interrelationships/correlations of atmospheric moist parameter anomalies computed from AIRS Version 5 Level-3 products, and demonstrate their usefulness to assess relative feedback strengths. Although one may argue about the possible usability of shorter-term, observed climate parameter anomalies for estimating the strength of various (mostly moist processes related) feedbacks, recent works, in particular analyses by Dessler [2008, 2010], have demonstrated their usefulness in assessing global water vapor and cloud feedbacks. First, we create AIRS-observed monthly anomaly time-series (ATs) of outgoing longwave radiation, water vapor, clouds and temperature profile over a 10-year long (Sept. 2002 through Aug. 2012) period using 1x1 degree resolution (a common GCM grid-scale). Next, we evaluate the interrelationships of ATs of the above parameters with the corresponding 1x1 degree, as well as global surface temperature ATs. The latter provides insight comparable with more traditional climate feedback definitions (e. g., Zelinka and Hartmann, 2012) whilst the former is related to a new definition of "local (in surface temperature too) feedback strengths" on a GCM grid-scale. Comparing the correlation maps generated provides valuable new information on the spatial distribution of relative climate feedback strengths. We argue that for GCMs to be trusted for predicting longer-term climate variability, they should be able to reproduce these observed relationships/metrics as closely as possible. For this time period the main climate "forcing" was associated with the El Niño/La Niña variability (e. g., Dessler, 2010), so these assessments may not be descriptive of longer
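    The anomaly-correlation machinery is straightforward to sketch: remove the mean annual cycle from each monthly series, then correlate the residuals. This is a generic illustration of the metric; in the study it is applied per 1x1 degree grid cell against local or global surface-temperature anomalies:

    ```python
    import numpy as np

    def monthly_anomalies(series):
        """Remove the mean annual cycle from a monthly series whose
        length is a whole number of years (12 * n)."""
        x = np.asarray(series, dtype=float).reshape(-1, 12)
        return (x - x.mean(axis=0)).ravel()

    def anomaly_correlation(a, b):
        """Pearson correlation between the anomaly time series of two
        monthly records of equal length."""
        return float(np.corrcoef(monthly_anomalies(a),
                                 monthly_anomalies(b))[0, 1])
    ```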

  2. Grid-based Thomas-Fermi-Amaldi equation with the molecular cusp condition

    NASA Astrophysics Data System (ADS)

    Kim, Min Sung; Youn, Sung-Kie; Kang, Jeung Ku

    2006-03-01

    First, the Thomas-Fermi-Amaldi (TFA) equation was formulated with a newly derived condition to remove the singularities at the nuclei, which coincided with the molecular cusp condition. Next, the collocation method was applied to the TFA equation using the grid-based density functional theory. In this paper, the electron densities and the radial probabilities for specific atoms (He, Be, Ne, Mg, Ar, Ca) were found to agree with those from the Thomas-Fermi-Dirac (TFD) method. Total energies for specific atoms (He, Ne, Ar, Kr, Xe, Rn) and molecules (H2,CH4) were also found to be close to those from the Hartree-Fock method using the Pople basis set 6-311G relative to the TFD method. In addition, the computational expense to determine the electron density and its corresponding energy for a large scale structure, such as a carbon nanotube, is shown to be much more efficient compared to the conventional Hartree-Fock method using the 6-31G Pople basis set.

  3. Implementation of nonlinear registration of brain atlas based on piecewise grid system

    NASA Astrophysics Data System (ADS)

    Liu, Rong; Gu, Lixu; Xu, Jianrong

    2007-12-01

    In this paper, a multi-step registration method of brain atlas and clinical Magnetic Resonance Imaging (MRI) data based on Thin-Plate Splines (TPS) and Piecewise Grid System (PGS) is presented. The method can help doctors to determine the corresponding anatomical structure between patient image and the brain atlas by piecewise nonlinear registration. Since doctors mostly pay attention to particular Region of Interest (ROI), and a global nonlinear registration is quite time-consuming which is not suitable for real-time clinical application, we propose a novel method to conduct linear registration in global area before nonlinear registration is performed in selected ROI. The homogenous feature points are defined to calculate the transform matrix between patient data and the brain atlas to conclude the mapping function. Finally, we integrate the proposed approach into an application of neurosurgical planning and guidance system which lends great efficiency in both neuro-anatomical education and guiding of neurosurgical operations. The experimental results reveal that the proposed approach can keep an average registration error of 0.25mm in near real-time manner.

  4. Optimal RTP Based Power Scheduling for Residential Load in Smart Grid

    NASA Astrophysics Data System (ADS)

    Joshi, Hemant I.; Pandya, Vivek J.

    2015-12-01

    To match supply and demand, shifting of load from the peak period to the off-peak period is one of the effective solutions. Presently, a flat-rate tariff is used in a major part of the world. This type of tariff doesn't give incentives to customers if they use electrical energy during the off-peak period. If a real-time pricing (RTP) tariff is used, consumers can be encouraged to use energy during the off-peak period. Due to advancement in information and communication technology, two-way communication is possible between consumers and the utility. To implement this technique in the smart grid, a home energy controller (HEC), smart meters, a home area network (HAN) and a communication link between consumers and the utility are required. The HEC interacts automatically by running an algorithm to find the optimal energy consumption schedule for each consumer. However, not all consumers are allowed to shift their load simultaneously to the off-peak period, to avoid a rebound peak condition. The peak-to-average ratio (PAR) is considered while carrying out the minimization problem. The linear programming problem (LPP) method is used for minimization. The simulation results of this work show the effectiveness of the minimization method adopted. The hardware work is in progress, and the program based on the method described here will be applied to solve real problems.
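    For a single aggregated shiftable load, the peak-minimizing linear program (min t subject to base[h] + x[h] <= t, sum x = energy, x >= 0) has a water-filling solution that can be sketched without an LP solver. A generic illustration of the PAR-lowering idea, not the paper's exact formulation:

    ```python
    def schedule_shiftable_load(base, energy, iters=200):
        """Spread a shiftable energy requirement across hours so the
        peak hourly load is minimized.

        base   : fixed (non-shiftable) load per hour.
        energy : total energy the shiftable appliances must consume.
        Returns (per-hour shiftable allocation, resulting peak load).
        """
        lo, hi = min(base), max(base) + energy
        for _ in range(iters):              # bisect on the water level t
            t = (lo + hi) / 2.0
            if sum(max(0.0, t - b) for b in base) < energy:
                lo = t
            else:
                hi = t
        x = [max(0.0, t - b) for b in base]  # fill the valleys up to t
        return x, max(t, max(base))
    ```

    With base load [5, 1, 1, 5] and 4 units of shiftable energy, the energy fills the two off-peak hours and the peak stays at 5, which is exactly the load-shifting behaviour the tariff is meant to induce.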

  5. Modeling and assessment of civil aircraft evacuation based on finer-grid

    NASA Astrophysics Data System (ADS)

    Fang, Zhi-Ming; Lv, Wei; Jiang, Li-Xue; Xu, Qing-Feng; Song, Wei-Guo

    2016-04-01

    Studying the civil aircraft emergency evacuation process with a computer model is an effective approach. In this study, the evacuation of an Airbus A380 is simulated using a Finer-Grid Civil Aircraft Evacuation (FGCAE) model. The model accounts for the effect of the seat area and other factors on the escape process and for pedestrians' "hesitation" before leaving exits, and defines an optimized rule of exit choice. Simulations reproduce typical characteristics of aircraft evacuation, such as the movement synchronization between adjacent pedestrians and route choice, and indicate that evacuation efficiency is determined by pedestrians' "preference" and "hesitation". Based on the model, an assessment procedure for aircraft evacuation safety is presented. The assessment, and a comparison with an actual evacuation test, demonstrate that the available-exit setting of "one exit from each exit pair" used in the practical demonstration test is not the worst scenario; the scenario in which all exits at one end of the cabin are unavailable is the worst one, which should be paid more attention and even adopted in the certification test. The model and method presented in this study could be useful for assessing, validating, and improving the evacuation performance of aircraft.

  6. Location-Aware Dynamic Session-Key Management for Grid-Based Wireless Sensor Networks

    PubMed Central

    Chen, Chin-Ling; Lin, I-Hsien

    2010-01-01

    Security is a critical issue for sensor networks used in hostile environments. When wireless sensor nodes in a wireless sensor network are distributed in an insecure hostile environment, the sensor nodes must be protected: a secret key must be used to protect the nodes transmitting messages. If the nodes are not protected and become compromised, many types of attacks against the network may result. Such is the case with existing schemes, which are vulnerable to attacks because they mostly provide a hop-by-hop paradigm, which is insufficient to defend against known attacks. We propose a location-aware dynamic session-key management protocol for grid-based wireless sensor networks. The proposed protocol improves the security of a secret key. The proposed scheme also includes a key that is dynamically updated. This dynamic update can lower the probability of the key being guessed correctly, so currently known attacks can be defended against. By utilizing the local information, the proposed scheme can also limit the flooding region in order to reduce the energy that is consumed in discovering routing paths. PMID:22163606
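    The "dynamically updated key" idea can be illustrated generically with a one-way hash ratchet: each update mixes the current key with a fresh nonce, so a key captured at time t reveals neither earlier keys nor future keys derived with unseen nonces. This is a sketch of the general mechanism only; the paper's protocol additionally binds keys to the node's grid location:

    ```python
    import hashlib

    def update_session_key(key: bytes, nonce: bytes) -> bytes:
        """Derive the next session key from the current key and a fresh
        nonce via SHA-256; the update is deterministic for both ends
        but one-way, so old keys cannot be recovered from a new one."""
        return hashlib.sha256(key + nonce).digest()
    ```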

  7. Wide field-of-view Talbot grid-based microscopy for multicolor fluorescence imaging

    PubMed Central

    Pang, Shuo; Han, Chao; Erath, Jessey; Rodriguez, Ana; Yang, Changhuei

    2013-01-01

    The capability to perform multicolor, wide field-of-view (FOV) fluorescence microscopy imaging is important in screening and pathology applications. We developed a microscopic slide-imaging system that can achieve multicolor, wide FOV, fluorescence imaging based on the Talbot effect. In this system, a light-spot grid generated by the Talbot effect illuminates the sample. By tilting the excitation beam, the Talbot-focused spot scans across the sample. The images are reconstructed by collecting the fluorescence emissions that correspond to each focused spot with a relay optics arrangement. The prototype system achieved an FOV of 12 × 10 mm2 at an acquisition time as fast as 23 s for one fluorescence channel. The resolution is fundamentally limited by spot size, with a demonstrated full-width at half-maximum spot diameter of 1.2 μm. The prototype was used to image green fluorescent beads, double-stained human breast cancer SK-BR-3 cells, Giardia lamblia cysts, and the Cryptosporidium parvum oocysts. This imaging method is scalable and simple for implementation of high-speed wide FOV fluorescence microscopy. PMID:23787643

  8. Response CDF sensitivity and its solution based on sparse grid integration

    NASA Astrophysics Data System (ADS)

    Zhou, Chang-Cong; Lu, Zhen-Zhou; Hu, Ji-Xiang; Yuan, Ming

    2016-02-01

    The sensitivity of the cumulative distribution function (CDF) of the response with respect to the input parameters is studied in this work, to quantify how the model output is affected by input uncertainty. To solve the response CDF sensitivity more efficiently, a novel method based on the sparse grid integration (SGI) is proposed. The response CDF sensitivity is transformed into expressions involving probability moments, which can be efficiently estimated by the SGI technique. Once the response CDF sensitivity at one percentile level of the response is obtained, the sensitivity values at any other percentile level can be immediately obtained with no further call to the performance function. The proposed method finds a good balance between the computational burden and accuracy, and is applicable for engineering problems involving implicit performance functions. The characteristics and effectiveness of the proposed method are demonstrated by several engineering examples. Discussions on these examples have also validated the significance of the response CDF sensitivity for the purpose of variable screening and ranking.
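    The moment-estimation building block behind sparse grid integration can be sketched in one dimension with Gauss-Hermite quadrature, which is what a Smolyak sparse grid combines across dimensions. A sketch only; a full SGI implementation tensorizes these nodes with Smolyak coefficients rather than using them directly:

    ```python
    import numpy as np

    def normal_moments(g, n_nodes=7):
        """Estimate E[g(X)] and Var[g(X)] for X ~ N(0, 1) by
        Gauss-Hermite quadrature (exact for polynomial g of degree up
        to 2*n_nodes - 1)."""
        nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
        x = np.sqrt(2.0) * nodes        # rescale to the standard normal
        w = weights / np.sqrt(np.pi)    # normalize the Gaussian weight
        m1 = np.sum(w * g(x))
        m2 = np.sum(w * g(x) ** 2)
        return float(m1), float(m2 - m1 ** 2)
    ```

    A handful of nodes per dimension, pruned by the sparse-grid construction, is what keeps the number of performance-function calls low compared to Monte Carlo moment estimation.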

  9. Optimal file-bundle caching algorithms for data-grids

    SciTech Connect

    Otoo, Ekow; Rotem, Doron; Romosan, Alexandru

    2004-04-24

    The file-bundle caching problem arises frequently in scientific applications where jobs need to process several files simultaneously. Consider a host system in a data-grid that maintains a staging disk or disk cache for servicing jobs of file requests. In this environment, a job can only be serviced if all its file requests are present in the disk cache. Files must be admitted into the cache or replaced in sets of file-bundles, i.e. the set of files that must all be processed simultaneously. In this paper we show that traditional caching algorithms based on file popularity measures do not perform well in such caching environments since they are not sensitive to the inter-file dependencies and may hold in the cache non-relevant combinations of files. We present and analyze a new caching algorithm for maximizing the throughput of jobs and minimizing data replacement costs to such data-grid hosts. We tested the new algorithm using a disk cache simulation model under a wide range of conditions such as file request distributions, relative cache size, file size distribution, etc. In all these cases, the results show significant improvement as compared with traditional caching algorithms.
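    The bundle constraint, that a job runs only if every file of its bundle is cached, is what popularity-based caching misses. A greedy bundle-aware admission sketch (a simple illustration of the constraint, not the paper's optimal algorithm):

    ```python
    def admit_bundles(jobs, cache_size, file_sizes):
        """Admit whole file-bundles into a cache of limited capacity.

        jobs       : list of bundles (sets of file names), ordered most
                     valuable first (e.g. by request rate per byte).
        cache_size : capacity in the same units as file_sizes values.
        Only the files not already cached count against capacity, so
        bundles sharing files are cheap to add together.
        """
        cached, used = set(), 0
        for bundle in jobs:
            extra = sum(file_sizes[f] for f in bundle - cached)
            if used + extra <= cache_size:
                cached |= bundle
                used += extra
        return cached
    ```

    A popularity-only policy could instead fill the cache with individually popular files that never form a complete bundle, servicing no jobs at all.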

  10. ASCI Grid Services summary report.

    SciTech Connect

    Hiebert-Dodd, Kathie L.

    2004-03-01

    The ASCI Grid Services (initially called Distributed Resource Management) project was started under DisCom{sup 2} when distant and distributed computing was identified as a technology critical to the success of the ASCI Program. The goals of the Grid Services project have been, and continue to be, to provide easy, consistent access to all the ASCI hardware and software resources across the nuclear weapons complex using computational grid technologies; to increase the usability of ASCI hardware and software resources by providing interfaces for resource monitoring, job submission, job monitoring, and job control; and to enable the effective use of high-end computing capability through complex-wide resource scheduling and brokering. In order to increase acceptance of the new technology, the goals included providing these services in both the unclassified and the classified user environments. This paper summarizes the many accomplishments and lessons learned over approximately five years of the ASCI Grid Services Project. It also provides suggestions on how to renew/restart the effort for grid services capability when the situation is right for that need.

  11. An on-the-job mindfulness-based intervention for pediatric ICU nurses: a pilot.

    PubMed

    Gauthier, Tina; Meyer, Rika M L; Grefe, Dagmar; Gold, Jeffrey I

    2015-01-01

    The feasibility of a 5-minute mindfulness meditation for PICU nurses before each work shift was explored, investigating changes in nursing stress, burnout, self-compassion, mindfulness, and job satisfaction. Thirty-eight nurses completed measures (Nursing Stress Scale, Maslach Burnout Inventory, Mindfulness Attention Awareness Scale and Self-Compassion Scale) at baseline, post-intervention, and 1 month after. The intervention was found to be feasible for nurses on the PICU. A repeated-measures ANOVA revealed significant decreases in stress from baseline to post-intervention, maintained 1 month following the intervention. Findings may inform future interventions that support on-the-job self-care and stress reduction within a critical care setting.

  12. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    SciTech Connect

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.; SCHULTZ,CRAIG A.; SHEPHERD,ELLEN; YOUNG,CHRISTOPHER J.

    1999-10-01

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.

  13. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage and Access.

    SciTech Connect

    Hipp, J. R.; Young, C. J.; Moore, S. G.; Shepherd, E. R.; Schultz, C. A.; Myers, S. C.

    1999-10-01

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.

  14. Job Attitudes of Workers with Two Jobs

    ERIC Educational Resources Information Center

    Zickar, Michael J.; Gibby, Robert E.; Jenny, Tim

    2004-01-01

    This article examines the job attitudes of people who hold more than one job. Satisfaction, stress, and organizational (continuance and affective) commitment were assessed for both primary and secondary jobs for 83 full-time workers who held two jobs concurrently. Consistency between job constructs across jobs was negligible, except for…

  15. Comparison of two expert-based assessments of diesel exhaust exposure in a case-control study: Programmable decision rules versus expert review of individual jobs

    PubMed Central

    Pronk, Anjoeka; Stewart, Patricia A.; Coble, Joseph B.; Katki, Hormuzd A.; Wheeler, David C.; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Johnson, Alison; Waddell, Richard; Verrill, Castine; Cherala, Sai; Silverman, Debra T.; Friesen, Melissa C.

    2012-01-01

    Objectives: Professional judgment is necessary to assess occupational exposure in population-based case-control studies; however, the assessments lack transparency and are time-consuming to perform. To improve transparency and efficiency, we systematically applied decision rules to the questionnaire responses to assess diesel exhaust exposure in the New England Bladder Cancer Study, a population-based case-control study. Methods: 2,631 participants reported 14,983 jobs; 2,749 jobs were administered questionnaires (‘modules’) with diesel-relevant questions. We applied decision rules to assign exposure metrics based solely on the occupational history responses (OH estimates) and based on the module responses (module estimates); we combined the separate OH and module estimates (OH/module estimates). Each job was also reviewed one at a time to assign exposure (one-by-one review estimates). We evaluated the agreement between the OH, OH/module, and one-by-one review estimates. Results: The proportion of exposed jobs was 20–25% for all jobs, depending on approach, and 54–60% for jobs with diesel-relevant modules. The OH/module and one-by-one review had moderately high agreement for all jobs (κw=0.68–0.81) and for jobs with diesel-relevant modules (κw=0.62–0.78) for the probability, intensity, and frequency metrics. For exposed subjects, the Spearman correlation statistic was 0.72 between the cumulative OH/module and one-by-one review estimates. Conclusions: The agreement seen here may represent an upper level of agreement because the algorithm and one-by-one review estimates were not fully independent. This study shows that applying decision-based rules can reproduce a one-by-one review, increase transparency and efficiency, and provide a mechanism to replicate exposure decisions in other studies. PMID:22843440
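    As a toy illustration of rule-based exposure assignment from questionnaire responses: the rules, field names, and cut-points below are invented for illustration only; the study's actual decision rules are far more detailed and module-specific.

    ```python
    # Hypothetical decision rules mapping one reported job to exposure categories.
    def assign_diesel_exposure(job):
        """Return (probability, intensity) categories for one reported job."""
        title = job.get("occupation", "").lower()
        hours = job.get("hours_near_diesel_per_week", 0)

        # Probability rule: job title first, then any reported diesel contact.
        if "truck driver" in title or "mechanic" in title:
            probability = "probable"
        elif hours > 0:
            probability = "possible"
        else:
            probability = "unexposed"

        # Intensity rule: driven by weekly hours near diesel sources.
        if probability == "unexposed":
            intensity = "none"
        elif hours >= 20:
            intensity = "high"
        else:
            intensity = "low"
        return probability, intensity

    print(assign_diesel_exposure(
        {"occupation": "Truck driver", "hours_near_diesel_per_week": 30}))
    # → ('probable', 'high')
    ```

    The appeal of this style, as the abstract notes, is that every assignment is reproducible and auditable, unlike an unrecorded expert judgment.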

  16. Individual Skills Based Volunteerism and Life Satisfaction among Healthcare Volunteers in Malaysia: Role of Employer Encouragement, Self-Esteem and Job Performance, A Cross-Sectional Study

    PubMed Central

    Veerasamy, Chanthiran; Sambasivan, Murali; Kumar, Naresh

    2013-01-01

    The purpose of this paper is to analyze two important outcomes of individual skills-based volunteerism (ISB-V) among healthcare volunteers in Malaysia. The outcomes are: job performance and life satisfaction. This study has empirically tested the impact of individual dimensions of ISB-V along with their inter-relationships in explaining life satisfaction and job performance. Besides, the effects of employer encouragement to the volunteers, demographic characteristics of volunteers, and self-esteem of volunteers on job performance and life satisfaction have been studied. The data were collected through a questionnaire distributed to 1000 volunteers of St. John Ambulance in Malaysia. Three hundred and sixty-six volunteers responded by giving their feedback. The model was tested using Structural Equation Modeling (SEM). The main results of this study are: (1) volunteer duration and nature of contact affect life satisfaction, (2) volunteer frequency has an impact on volunteer duration, (3) self-esteem of volunteers has significant relationships with volunteer frequency, job performance and life satisfaction, (4) job performance of volunteers affects their life satisfaction and (5) current employment level has significant relationships with duration of volunteering, self-esteem, employer encouragement and job performance of volunteers. The model in this study has been able to explain 39% of the variance in life satisfaction and 45% of the variance in job performance. The current study adds significantly to the body of knowledge on healthcare volunteerism. PMID:24194894

  17. ReSS: Resource Selection Service for National and Campus Grid Infrastructure

    SciTech Connect

    Mhashilkar, Parag; Garzoglio, Gabriele; Levshina, Tanya; Timm, Steve; /Fermilab

    2009-05-01

    The Open Science Grid (OSG) offers access to around one hundred Compute Elements (CEs) and Storage Elements (SEs) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor, used as a brokering service, and gLite CEMon, used for gathering and publishing resource information in the GLUE Schema format. ReSS is used in OSG by Virtual Organizations (VO) such as Dark Energy Survey (DES), DZero and Engagement VO. ReSS is also used as a Resource Selection Service for Campus Grids, such as FermiGrid. VOs use ReSS to automate the resource selection in their workload management system to run jobs over the grid. In the past year, the system has been enhanced to enable publication and selection of storage resources and of any special software or software libraries (like MPI libraries) installed at computing resources. In this paper, we discuss the Resource Selection Service, its typical usage on the two scales of a National Cyber Infrastructure Grid, such as OSG, and of a campus Grid, such as FermiGrid.

  18. ReSS: Resource Selection Service for National and Campus Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Mhashilkar, Parag; Garzoglio, Gabriele; Levshina, Tanya; Timm, Steve

    2010-04-01

    The Open Science Grid (OSG) offers access to around one hundred Compute Elements (CEs) and Storage Elements (SEs) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor, used as a brokering service, and gLite CEMon, used for gathering and publishing resource information in the GLUE Schema format. ReSS is used in OSG by Virtual Organizations (VO) such as Dark Energy Survey (DES), DZero and Engagement VO. ReSS is also used as a Resource Selection Service for Campus Grids, such as FermiGrid. VOs use ReSS to automate the resource selection in their workload management system to run jobs over the grid. In the past year, the system has been enhanced to enable publication and selection of storage resources and of any special software or software libraries (like MPI libraries) installed at computing resources. In this paper, we discuss the Resource Selection Service, its typical usage on the two scales of a National Cyber Infrastructure Grid, such as OSG, and of a campus Grid, such as FermiGrid.
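    The push-based matchmaking idea can be sketched schematically: resources advertise attribute sets (in ReSS, GLUE Schema information republished as Condor ClassAds) and a job's requirements predicate is evaluated against each advertisement. The attribute names below are illustrative, not actual GLUE or ClassAd attributes.

    ```python
    # Three advertised resources with invented attribute sets.
    resources = [
        {"name": "site_A", "free_cpus": 120, "memory_mb": 4096, "has_mpi": True},
        {"name": "site_B", "free_cpus": 8,   "memory_mb": 2048, "has_mpi": False},
        {"name": "site_C", "free_cpus": 40,  "memory_mb": 8192, "has_mpi": True},
    ]

    def select(resources, requirements):
        """Return the advertised resources that satisfy a job's predicate."""
        return [r["name"] for r in resources if requirements(r)]

    # A job that needs MPI libraries and at least 4 GB of memory:
    job_req = lambda r: r["has_mpi"] and r["memory_mb"] >= 4096
    print(select(resources, job_req))  # → ['site_A', 'site_C']
    ```

    In the real system the brokering happens inside Condor's matchmaker; the publication-of-special-software enhancement mentioned above corresponds to adding attributes like the MPI flag to each advertisement.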

  19. MAGNETIC GRID

    DOEpatents

    Post, R.F.

    1960-08-01

    An electronic grid is designed employing magnetic forces for controlling the passage of charged particles. The grid is particularly applicable to use in gas-filled tubes such as ignitrons, thyratrons, etc., since the magnetic grid action is impartial to the polarity of the charged particles and, accordingly, the sheath effects encountered with electrostatic grids are not present. The grid comprises a conductor having sections spaced apart and extending in substantially opposite directions in the same plane, the ends of the conductor being adapted for connection to a current source.

  20. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  1. Automatic Integration Testbeds validation on Open Science Grid

    NASA Astrophysics Data System (ADS)

    Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.

    2011-12-01

    A recurring challenge in deploying high-quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests, in particular tests that resemble as closely as possible the actual job workflows used by the experiments, exercising job scheduling at the compute element (CE), use of the worker node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports on performance and reliability.

  2. Delay grid multiplexing: simple time-based multiplexing and readout method for silicon photomultipliers

    NASA Astrophysics Data System (ADS)

    Won, Jun Yeon; Ko, Guen Bae; Lee, Jae Sung

    2016-10-01

    In this paper, we propose a fully time-based multiplexing and readout method that uses the principle of the global positioning system. Time-based multiplexing allows simplifying the multiplexing circuits: only the innate traces that connect the signal pins of the silicon photomultiplier (SiPM) channels to the readout channels are used as the multiplexing circuit. Every SiPM channel is connected to the delay grid that consists of the traces on a printed circuit board, and the inherent transit times from each SiPM channel to the readout channels encode the position information uniquely. Thus, the position of each SiPM can be identified using the time difference of arrival (TDOA) measurements. The proposed multiplexing can also allow simplification of the readout circuit using the time-to-digital converter (TDC) implemented in a field-programmable gate array (FPGA), where the time-over-threshold (ToT) is used to extract the energy information after multiplexing. In order to verify the proposed multiplexing method, we built a positron emission tomography (PET) detector that consisted of an array of 4 × 4 LGSO crystals, each with a dimension of 3 × 3 × 20 mm³, and one-to-one coupled SiPM channels. We first employed the waveform sampler as an initial study, and then replaced the waveform sampler with an FPGA-TDC to further simplify the readout circuits. The 16 crystals were clearly resolved using only the time information obtained from the four readout channels. The coincidence resolving times (CRTs) were 382 and 406 ps FWHM when using the waveform sampler and the FPGA-TDC, respectively. The proposed simple multiplexing and readout methods can be useful for time-of-flight (TOF) PET scanners.
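    A toy sketch of the TDOA decoding: each channel's pair of known trace delays to the two readout ends yields a unique arrival-time difference, so a measured difference identifies the firing channel. The delay values and one-dimensional layout below are invented for illustration, not taken from the paper.

    ```python
    import numpy as np

    n = 16                                    # 4 x 4 SiPM array, flattened to 1D
    trace_delay_ns = 0.5                      # assumed per-step trace delay
    delay_to_end1 = trace_delay_ns * np.arange(n)              # channel -> end 1
    delay_to_end2 = trace_delay_ns * (n - 1 - np.arange(n))    # channel -> end 2
    expected_tdoa = delay_to_end1 - delay_to_end2              # unique per channel

    def decode(t1, t2):
        """Identify the firing channel from two measured arrival times."""
        measured = t1 - t2
        return int(np.argmin(np.abs(expected_tdoa - measured)))

    # Event on channel 5: arrivals are an unknown common offset plus known delays.
    t0 = 12.0                                 # common time of flight, cancels out
    print(decode(t0 + delay_to_end1[5], t0 + delay_to_end2[5]))  # → 5
    ```

    Note how the common offset t0 cancels in the difference, which is why only relative timing (not an absolute trigger) is needed to recover the position.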

  3. Geometric grid generation

    NASA Technical Reports Server (NTRS)

    Ives, David

    1995-01-01

    This paper presents a highly automated hexahedral grid generator based on extensive geometrical and solid modeling operations, developed in response to a vision of a designer-driven, one-day-turnaround CFD process, which implies a designer-driven, one-hour grid generation process.

  4. Internet 2 Access Grid.

    ERIC Educational Resources Information Center

    Simco, Greg

    2002-01-01

    Discussion of the Internet 2 Initiative, which is based on collaboration among universities, businesses, and government, focuses on the Access Grid, a Computational Grid that includes interactive multimedia within high-speed networks to provide resources to enable remote collaboration among the research community. (Author/LRW)

  5. Low-bit-rate representation of cylindrical volume grids using Chebyshev bases: direct section computation, synthesis, and reconstruction

    NASA Astrophysics Data System (ADS)

    Desai, Ranjit P.; Menon, Jai P.

    1998-12-01

    A large class of high-speed visualization applications use image acquisition and 3D volume reconstruction techniques in cylindrical sampling grids; these include real-time 3D medical reconstruction and reverse engineering. This paper presents the novel use of Chebyshev bases in such cylindrical grid-based volume applications, to allow efficient computation of cross-sectional planes of interest and partial volumes without the computationally expensive step of volume rendering, for subsequent transmission in constrained-bitrate environments. This has important consequences for low-bitrate applications such as video-conferencing and internet-based visualization environments, where interaction and fusion between independently sampled heterogeneous data streams (images, video and 3D volumes) from multiple sources is beginning to play an important part. Volumes often embody widely varying physical signals, such as those acquired by X-ray and ultrasound sensors in addition to standard CCD cameras. Several benefits of Chebyshev expansions, such as fast convergence, bounded error, computational efficiency, and their optimality for cylindrical grids, are taken into account. In addition, our method exploits knowledge about the sampling strategy (e.g. position and trajectory of the sensor) used to acquire the original ensemble of images, which in turn makes the overall approach very amenable to internet-based low-bitrate applications.
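    The fast convergence and bounded truncation error of Chebyshev expansions can be demonstrated in a few lines with NumPy's Chebyshev module. This is a generic illustration; the test profile is arbitrary and unrelated to the paper's data.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Sample a smooth "scan line" profile at Chebyshev nodes in [-1, 1].
    x = np.cos(np.pi * (np.arange(64) + 0.5) / 64)
    f = np.exp(-x**2) * np.cos(3 * x)

    # Truncated (degree-20) least-squares Chebyshev fit of the samples.
    coeffs = C.chebfit(x, f, deg=20)

    # Maximum reconstruction error on a dense evaluation grid.
    xs = np.linspace(-1, 1, 500)
    err = np.max(np.abs(C.chebval(xs, coeffs) - np.exp(-xs**2) * np.cos(3 * xs)))
    print(err)  # tiny: the expansion converges rapidly for smooth data
    ```

    For smooth signals the coefficients decay geometrically, so a short coefficient vector (rather than the full sample set) can be transmitted, which is the bitrate advantage the abstract alludes to.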

  6. An expert-based job exposure matrix for large scale epidemiologic studies of primary hip and knee osteoarthritis: The Lower Body JEM

    PubMed Central

    2014-01-01

    Background When conducting large scale epidemiologic studies, it is a challenge to obtain quantitative exposure estimates, which do not rely on self-report where estimates may be influenced by symptoms and knowledge of disease status. In this study we developed a job exposure matrix (JEM) for use in population studies of the work-relatedness of hip and knee osteoarthritis. Methods Based on all 2227 occupational titles in the Danish version of the International Standard Classification of Occupations (D-ISCO 88), we constructed 121 job groups comprising occupational titles with expected homogeneous exposure patterns in addition to a minimally exposed job group, which was not included in the JEM. The job groups were allocated the mean value of five experts’ ratings of daily duration (hours/day) of standing/walking, kneeling/squatting, and whole-body vibration as well as total load lifted (kg/day), and frequency of lifting loads weighing ≥20 kg (times/day). Weighted kappa statistics were used to evaluate inter-rater agreement on rankings of the job groups for four of these exposures (whole-body vibration could not be evaluated due to few exposed job groups). Two external experts checked the face validity of the rankings of the mean values. Results A JEM was constructed and English ISCO codes were provided where possible. The experts’ ratings showed fair to moderate agreement with respect to rankings of the job groups (mean weighted kappa values between 0.36 and 0.49). The external experts agreed on 586 of the 605 rankings. Conclusion The Lower Body JEM based on experts’ ratings was established. Experts agreed on rankings of the job groups, and rankings based on mean values were in accordance with the opinion of external experts. PMID:24927760

  7. Global Renewable Energy-Based Electricity Generation and Smart Grid System for Energy Security

    PubMed Central

    Islam, M. A.; Hasanuzzaman, M.; Rahim, N. A.; Nahar, A.; Hosenuzzaman, M.

    2014-01-01

    Energy is an indispensable factor for the economic growth and development of a country. Energy consumption is rapidly increasing worldwide. To fulfill this energy demand, alternative energy sources and efficient utilization are being explored. Various sources of renewable energy and their efficient utilization are comprehensively reviewed and presented in this paper. Also the trend in research and development for the technological advancement of energy utilization and smart grid system for future energy security is presented. Results show that renewable energy resources are becoming more prevalent as more electricity generation becomes necessary and could provide half of the total energy demands by 2050. To satisfy the future energy demand, the smart grid system can be used as an efficient system for energy security. The smart grid also delivers significant environmental benefits by conservation and renewable generation integration. PMID:25243201

  8. Global renewable energy-based electricity generation and smart grid system for energy security.

    PubMed

    Islam, M A; Hasanuzzaman, M; Rahim, N A; Nahar, A; Hosenuzzaman, M

    2014-01-01

    Energy is an indispensable factor for the economic growth and development of a country. Energy consumption is rapidly increasing worldwide. To fulfill this energy demand, alternative energy sources and efficient utilization are being explored. Various sources of renewable energy and their efficient utilization are comprehensively reviewed and presented in this paper. Also the trend in research and development for the technological advancement of energy utilization and smart grid system for future energy security is presented. Results show that renewable energy resources are becoming more prevalent as more electricity generation becomes necessary and could provide half of the total energy demands by 2050. To satisfy the future energy demand, the smart grid system can be used as an efficient system for energy security. The smart grid also delivers significant environmental benefits by conservation and renewable generation integration. PMID:25243201

  9. Global renewable energy-based electricity generation and smart grid system for energy security.

    PubMed

    Islam, M A; Hasanuzzaman, M; Rahim, N A; Nahar, A; Hosenuzzaman, M

    2014-01-01

    Energy is an indispensable factor for the economic growth and development of a country. Energy consumption is rapidly increasing worldwide. To fulfill this energy demand, alternative energy sources and efficient utilization are being explored. Various sources of renewable energy and their efficient utilization are comprehensively reviewed and presented in this paper. Also the trend in research and development for the technological advancement of energy utilization and smart grid system for future energy security is presented. Results show that renewable energy resources are becoming more prevalent as more electricity generation becomes necessary and could provide half of the total energy demands by 2050. To satisfy the future energy demand, the smart grid system can be used as an efficient system for energy security. The smart grid also delivers significant environmental benefits by conservation and renewable generation integration.

  10. Performance Evaluation of a Microchannel Plate based X-ray Camera with a Reflecting Grid

    NASA Astrophysics Data System (ADS)

    Visco, A.; Drake, R. P.; Harding, E. C.; Rathore, G. K.

    2006-10-01

    Microchannel plates (MCPs) are used in a variety of imaging systems as a means of amplifying the incident radiation. Using a microchannel plate mount recently developed at the University of Michigan, the effects of a metal reflecting grid are explored. Employing the reflecting grid, we create a potential difference above the MCP input surface that forces ejected electrons back into the pores, which may prove to increase the quantum efficiency of the camera. We investigate the changes in the pulse height distribution, modulation transfer function, and quantum efficiency of MCPs caused by the introduction of the reflecting grid. Work supported by the Naval Research Laboratory, National Nuclear Security Administration under the Stewardship Science Academic Alliances program through DOE Research Grant DE-FG52-03NA00064, and through DE FG53 2005 NA26014, and Livermore National Laboratory.

  11. A gridded hourly rainfall dataset for the UK applied to a national physically-based modelling system

    NASA Astrophysics Data System (ADS)

    Lewis, Elizabeth; Blenkinsop, Stephen; Quinn, Niall; Freer, Jim; Coxon, Gemma; Woods, Ross; Bates, Paul; Fowler, Hayley

    2016-04-01

    An hourly gridded rainfall product has great potential for use in many hydrological applications that require high temporal resolution meteorological data. One important example of this is flood risk management, with flooding in the UK highly dependent on sub-daily rainfall intensities amongst other factors. Knowledge of sub-daily rainfall intensities is therefore critical to designing hydraulic structures or flood defences to appropriate levels of service. Sub-daily rainfall rates are also essential inputs for flood forecasting, allowing for estimates of peak flows and stage for flood warning and response. In addition, an hourly gridded rainfall dataset has significant potential for practical applications such as better representation of extremes and pluvial flash flooding, validation of high resolution climate models and improving the representation of sub-daily rainfall in weather generators. A new 1km gridded hourly rainfall dataset for the UK has been created by disaggregating the daily Gridded Estimates of Areal Rainfall (CEH-GEAR) dataset using comprehensively quality-controlled hourly rain gauge data from over 1300 observation stations across the country. Quality control measures include identification of frequent tips, daily accumulations and dry spells, comparison of daily totals against the CEH-GEAR daily dataset, and nearest neighbour checks. The quality control procedure was validated against historic extreme rainfall events and the UKCP09 5km daily rainfall dataset. General use of the dataset has been demonstrated by testing the sensitivity of a physically-based hydrological modelling system for Great Britain to the distribution and rates of rainfall and potential evapotranspiration. Of the sensitivity tests undertaken, the largest improvements in model performance were seen when an hourly gridded rainfall dataset was combined with potential evapotranspiration disaggregated to hourly intervals, with 61% of catchments showing an increase in NSE between…

  12. A SUNTANS-based unstructured grid local exact particle tracking model

    NASA Astrophysics Data System (ADS)

    Liu, Guangliang; Chua, Vivien P.

    2016-07-01

    A parallel particle tracking model, which employs the local exact integration method to achieve high accuracy, has been developed and embedded in an unstructured-grid coastal ocean model, Stanford Unstructured Nonhydrostatic Terrain-following Adaptive Navier-Stokes Simulator (SUNTANS). The particle tracking model is verified and compared with traditional numerical integration methods, such as the Runge-Kutta fourth-order method, using several test cases. In two-dimensional linear steady rotating flow, the local exact particle tracking model is able to track particles along the circular streamline accurately, while Runge-Kutta fourth-order methods produce trajectories that deviate from the streamlines. In periodically varying double-gyre flow, the trajectories produced by the local exact particle tracking model with a time step of 1.0 × 10⁻² s are similar to those trajectories obtained from the numerical integration methods with reduced time steps of 1.0 × 10⁻⁴ s. In three-dimensional steady Arnold-Beltrami-Childress (ABC) flow, the trajectories integrated with the local exact particle tracking model compare well with the approximated true path. The trajectories spiral upward and their projection on the x-y plane is a periodic ellipse. The trajectories derived with the Runge-Kutta fourth-order method deviate from the approximated true path, and their projections on the x-y plane are unclosed ellipses with growing long and short axes. The spatial and temporal resolution needs to be chosen carefully before particle tracking models are applied. Our results show that the developed local exact particle tracking model is accurate and suitable for marine Lagrangian (trajectory-based) research.
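    The first test case can be reproduced in miniature: in the solid-body rotation field u = (−y, x), the exact update is a rotation matrix (a simple instance of locally exact integration for a linear velocity field), so the particle radius is preserved, while RK4 with a deliberately coarse time step accumulates radial drift. This is a sketch of the verification idea, not the SUNTANS implementation.

    ```python
    import numpy as np

    def velocity(p):
        # 2D linear steady rotating flow: u = -y, v = x
        return np.array([-p[1], p[0]])

    def step_exact(p, dt):
        # Exact update: rotate the position by angle dt.
        c, s = np.cos(dt), np.sin(dt)
        return np.array([c * p[0] - s * p[1], s * p[0] + c * p[1]])

    def step_rk4(p, dt):
        # Classical fourth-order Runge-Kutta step.
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    dt, steps = 0.5, 1000               # deliberately coarse time step
    p_exact = p_rk4 = np.array([1.0, 0.0])
    for _ in range(steps):
        p_exact = step_exact(p_exact, dt)
        p_rk4 = step_rk4(p_rk4, dt)

    print(abs(np.linalg.norm(p_exact) - 1.0))  # ~0: stays on the streamline
    print(abs(np.linalg.norm(p_rk4) - 1.0))    # noticeable radial drift
    ```

    Shrinking dt recovers RK4 accuracy, which mirrors the paper's observation that RK4 needs a time step two orders of magnitude smaller to match the locally exact trajectories.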

  13. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools to obtain high-density, high-accuracy and significantly detailed surface information of terrain and surface objects within a short time, from which a Digital Elevation Model of high quality can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to separate the terrain points from disorganized points, followed by a procedure of interpolating the selected points to turn them into DEM data. The whole procedure takes a long time and huge computing resources due to the high point density, a problem that a number of studies have focused on. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was utilized as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid, and that the non-Hadoop implementation can achieve high performance when memory is big enough, but the Hadoop implementation can achieve a higher performance-cost ratio when the point set is of vast quantities.
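    The Map/Reduce pattern the paper applies can be imitated in a single process: map each ground point to its DEM cell key, then reduce by averaging the elevations per cell. Hadoop distributes exactly these two phases; the grid spacing and points below are invented.

    ```python
    from collections import defaultdict

    # Toy (x, y, z) ground points and an assumed DEM grid spacing.
    points = [(12.3, 4.1, 101.25), (12.9, 4.4, 100.75), (25.0, 9.7, 97.5)]
    cell = 10.0

    def map_phase(points):
        # Map: emit (cell_key, elevation) pairs.
        for x, y, z in points:
            yield (int(x // cell), int(y // cell)), z

    def reduce_phase(pairs):
        # Reduce: group by cell key and average elevations per cell.
        cells = defaultdict(list)
        for key, z in pairs:
            cells[key].append(z)
        return {key: sum(zs) / len(zs) for key, zs in cells.items()}

    dem = reduce_phase(map_phase(points))
    print(dem)  # → {(1, 0): 101.0, (2, 0): 97.5}
    ```

    In the real workflow the reduce step would run a proper interpolation (e.g. IDW or TIN-based) over the points landing in and around each cell rather than a plain mean.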

  14. Comparisons Between SPH and Grid-Based Simulations of the Common Envelope Phase

    NASA Astrophysics Data System (ADS)

    Passy, Jean-Claude; Fryer, C. L.; Diehl, S.; De Marco, O.; Mac Low, M.; Herwig, F.; Oishi, J. S.

    2011-01-01

    The common envelope (CE) interaction between a giant star and a lower-mass companion provides a formation channel leading eventually to Type Ia supernovae, sdB stars and bipolar PNe. More broadly, it is an essential ingredient for any population synthesis study including binaries, e.g. cataclysmic variables. Occurring on a short time scale, typically between one and ten years, the CE interaction itself has so far never been observed with certainty, but the existence of companions in close orbits around evolved stars, whose precursor's radius was larger than today's orbital separation, vouches for such interactions taking place frequently. Via a detailed study of the energetics and the use of stellar evolution models, we derived in our previous paper the efficiency α of the CE interaction from a carefully selected and statistically analyzed sample of systems thought to be outcomes of a CE interaction. We deduced the initial configuration of those systems using stellar models, and derived a possible inverse dependence of α on the companion-to-primary mass ratio. Here, we compare these predictions to numerical simulations with two different codes. Enzo is a 3D adaptive mesh refinement grid-based code. For our stellar problem we have modified the way gravity and boundary conditions are treated in this code. The SNSPH code is a 3D hydrodynamics SPH code using tree gravity. The results from both codes for different companion masses and different types of primary stars are consistent with each other. Those results include a resolution study of a 0.88 M⊙ red giant interacting with a 0.9, 0.6 and 0.3 M⊙ white dwarf, respectively. Those systems reach final separations of 25, 18 and 10 R⊙, respectively. In this contribution, we present and discuss those results and compare them to our predictions. This research was funded by NSF grant 0607111.

  15. Verification & Validation of High-Order Short-Characteristics-Based Deterministic Transport Methodology on Unstructured Grids

    SciTech Connect

    Azmy, Yousry; Wang, Yaqi

    2013-12-20

    The research team has developed a practical, high-order, discrete-ordinates, short characteristics neutron transport code for three-dimensional configurations represented on unstructured tetrahedral grids that can be used for realistic reactor physics applications at both the assembly and core levels. This project will perform a comprehensive verification and validation of this new computational tool against both a continuous-energy Monte Carlo simulation (e.g. MCNP) and experimentally measured data, an essential prerequisite for its deployment in reactor core modeling. Verification is divided into three phases. The team will first conduct spatial mesh and expansion order refinement studies to monitor convergence of the numerical solution to reference solutions. This is quantified by convergence rates that are based on integral error norms computed from the cell-by-cell difference between the code’s numerical solution and its reference counterpart. The latter is either analytic or very fine-mesh numerical solutions from independent computational tools. For the second phase, the team will create a suite of code-independent benchmark configurations to enable testing the theoretical order of accuracy of any particular discretization of the discrete ordinates approximation of the transport equation. For each tested case (i.e. mesh and spatial approximation order), researchers will execute the code and compare the resulting numerical solution to the exact solution on a per-cell basis to determine the distribution of the numerical error. The final activity comprises a comparison to continuous-energy Monte Carlo solutions for zero-power critical configuration measurements at Idaho National Laboratory’s Advanced Test Reactor (ATR). Results of this comparison will allow the investigators to distinguish between modeling errors and the above-listed discretization errors introduced by the deterministic method, and to separate the sources of uncertainty.
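    The mesh-refinement convergence study described above reduces to the standard two-grid estimate of the observed order of accuracy, p = log(e_coarse / e_fine) / log(h_coarse / h_fine). A minimal sketch with synthetic error norms:

    ```python
    import math

    def observed_order(e_coarse, e_fine, h_coarse, h_fine):
        """Observed order of accuracy from error norms on two mesh sizes."""
        return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

    # Synthetic norms that behave like a second-order scheme:
    # halving the mesh size quarters the integral error norm.
    p = observed_order(e_coarse=4.0e-3, e_fine=1.0e-3, h_coarse=0.2, h_fine=0.1)
    print(round(p, 6))  # → 2.0
    ```

    Comparing p against the theoretical order of the chosen spatial discretization is precisely the pass/fail criterion in the second verification phase.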

  16. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in industry and the research community, it also attracts more attention at the consumer level. The large number of users and the high frequency of job requests in the consumer market make this challenging: current Client/Server (C/S)-based architectures will become infeasible for supporting large-scale Grid applications because of their poor scalability and fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture is proposed to realize a highly scalable and flexible platform for Grids. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  17. The Application of Structured Job Analysis Information Based on the Position Analysis Questionnaire (PAQ). Final Report No. 9.

    ERIC Educational Resources Information Center

    McCormick, Ernest J.

    The Position Analysis Questionnaire (PAQ) is a job analysis instrument consisting of 187 job elements organized into six divisions. The PAQ was used in the eight studies summarized in this final report. The studies were: (1) ratings of the attribute requirements of PAQ job elements, (2) a series of principal components analyses of these attribute…

  18. Job Satisfaction of NAIA Head Coaches at Small Faith-Based Colleges: The Teacher-Coach Model

    ERIC Educational Resources Information Center

    Stiemsma, Craig L.

    2010-01-01

    The head coaches at smaller colleges usually have other job responsibilities that include teaching, along with the responsibilities of coaching, recruiting, scheduling, and other coaching-related jobs. There is often a dual role involved for these coaches who try to juggle two different jobs that sometimes require different skill sets and involve…

  19. Adventures in Computational Grids

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Sometimes one supercomputer is not enough. Or your local supercomputers are busy, or not configured for your job. Or you don't have any supercomputers. You might be trying to simulate worldwide weather changes in real time, requiring more compute power than you could get from any one machine. Or you might be collecting microbiological samples on an island, and need to examine them with a special microscope located on the other side of the continent. These are the times when you need a computational grid.

  20. Job Task Analysis.

    ERIC Educational Resources Information Center

    Clemson Univ., SC.

    This publication consists of job task analyses for jobs in textile manufacturing. Information provided for each job in the greige and finishing plants includes job title, job purpose, and job duties with related educational objectives, curriculum, assessment, and outcome. These job titles are included: yarn manufacturing head overhauler, yarn…

  1. An Extensible Scientific Computing Resources Integration Framework Based on Grid Service

    NASA Astrophysics Data System (ADS)

    Cui, Binge; Chen, Xin; Song, Pingjian; Liu, Rongjie

    Scientific computing resources (e.g., components, dynamically linkable libraries) are very valuable assets for scientific research. However, for historical reasons, most computing resources cannot be shared by other people. The emergence of Grid computing provides a turning point for solving this problem. Legacy applications can be abstracted and encapsulated into Grid services, which may be found and invoked on the Web using SOAP messages. The Grid service is loosely coupled with the external JAR or DLL, which builds a bridge from users to computing resources. We defined an XML schema to describe the functions and interfaces of the applications. Users can acquire this information by invoking the “getCapabilities” operation of the Grid service. We also proposed the concept of a class pool to eliminate the memory leaks that arise when invoking external JARs via reflection. The experiment shows that the class pool not only avoids PermGen space waste and Tomcat server exceptions, but also significantly improves application speed. The integration framework has been implemented successfully in a real project.
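
    The paper's class pool targets Java reflection and the PermGen space; as a language-neutral illustration of the same caching idea, here is a Python sketch in which dynamically loaded classes are resolved once and reused (the Java/Tomcat specifics are not reproduced):

```python
import importlib

class ClassPool:
    """Minimal sketch of a class pool: classes loaded dynamically by
    name are cached, so repeated service invocations reuse the same
    class object instead of re-resolving it every time."""
    def __init__(self):
        self._cache = {}

    def get(self, module_name, class_name):
        key = (module_name, class_name)
        if key not in self._cache:
            module = importlib.import_module(module_name)
            self._cache[key] = getattr(module, class_name)
        return self._cache[key]

pool = ClassPool()
cls_a = pool.get("collections", "OrderedDict")
cls_b = pool.get("collections", "OrderedDict")
print(cls_a is cls_b)  # True: the cached class object is reused
```

    The second lookup is a dictionary hit, avoiding the repeated dynamic loading that the paper identifies as the source of leaks.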

  2. GridCell: a stochastic particle-based biological system simulator

    PubMed Central

    Boulianne, Laurier; Al Assaad, Sevin; Dumontier, Michel; Gross, Warren J

    2008-01-01

    Background Realistic biochemical simulators aim to improve our understanding of many biological processes that would be otherwise very difficult to monitor in experimental studies. Increasingly accurate simulators may provide insights into the regulation of biological processes due to stochastic or spatial effects. Results We have developed GridCell as a three-dimensional simulation environment for investigating the behaviour of biochemical networks under a variety of spatial influences including crowding, recruitment and localization. GridCell enables the tracking and characterization of individual particles, leading to insights on the behaviour of low copy number molecules participating in signaling networks. The simulation space is divided into a discrete 3D grid that provides ideal support for particle collisions without distance calculation and particle search. SBML support enables existing networks to be simulated and visualized. The user interface provides intuitive navigation that facilitates insights into species behaviour across spatial and temporal dimensions. We demonstrate the effect of crowding on a Michaelis-Menten system. Conclusion GridCell is an effective stochastic particle simulator designed to track the progress of individual particles in a three-dimensional space in which spatial influences such as crowding, co-localization and recruitment may be investigated.  PMID:18651956
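
    The abstract's key performance point, collision handling without distance calculation or particle search, can be sketched with a discrete occupancy grid (a toy random walk, not GridCell's actual algorithm):

```python
import random

def step(positions, size, rng):
    """One diffusion step on a discrete 3D grid: each particle tries to
    move to a random neighbouring cell; the move is rejected if the cell
    is occupied, so a collision test is a set lookup, with no distance
    computation and no particle search."""
    occupied = set(positions.values())
    for pid, (x, y, z) in positions.items():
        dx, dy, dz = rng.choice([(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                 (0, -1, 0), (0, 0, 1), (0, 0, -1)])
        new = ((x + dx) % size, (y + dy) % size, (z + dz) % size)
        if new not in occupied:        # cell free: accept the move
            occupied.discard((x, y, z))
            occupied.add(new)
            positions[pid] = new
    return positions

rng = random.Random(0)
particles = {0: (0, 0, 0), 1: (1, 0, 0), 2: (5, 5, 5)}
for _ in range(100):
    step(particles, size=10, rng=rng)
print(len(set(particles.values())))  # 3: no two particles share a cell
```

    Because moves are only accepted into unoccupied cells, the invariant "one particle per cell" holds throughout the simulation.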

  3. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    NASA Technical Reports Server (NTRS)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination of the necessary resources, facilities, and special personnel to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology is examined, including the coordination of CFD activities between government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.

  4. Wiki-Based Rapid Prototyping for Teaching-Material Design in E-Learning Grids

    ERIC Educational Resources Information Center

    Shih, Wen-Chung; Tseng, Shian-Shyong; Yang, Chao-Tung

    2008-01-01

    Grid computing environments with abundant resources can support innovative e-Learning applications, and are promising platforms for e-Learning. To support individualized and adaptive learning, teachers are encouraged to develop various teaching materials according to different requirements. However, traditional methodologies for designing teaching…

  5. PATH: a work sampling-based approach to ergonomic job analysis for construction and other non-repetitive work.

    PubMed

    Buchholz, B; Paquet, V; Punnett, L; Lee, D; Moir, S

    1996-06-01

    A high prevalence and incidence of work-related musculoskeletal disorders have been reported in construction work. Unlike industrial production-line activity, construction work, as well as work in many other occupations (e.g. agriculture, mining), is non-repetitive in nature; job tasks are non-cyclic, or consist of long or irregular cycles. PATH (Posture, Activity, Tools and Handling), a work sampling-based approach, was developed to characterize the ergonomic hazards of construction and other non-repetitive work. The posture codes in the PATH method are based on the Ovako Work Posture Analysing System (OWAS), with other codes included for describing worker activity, tool use, loads handled and grasp type. For heavy highway construction, observations are stratified by construction stage and operation, using a taxonomy developed specifically for this purpose. Observers can code the physical characteristics of the job reliably after about 30 h of training. A pilot study of six construction laborers during four road construction operations suggests that laborers spend large proportions of time in nonneutral trunk postures and spend approximately 20% of their time performing manual material handling tasks. These results demonstrate how the PATH method can be used to identify specific construction operations and tasks that are ergonomically hazardous.
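
    Work sampling, the statistical basis of PATH, estimates the proportion of time spent in a posture or activity from random spot observations. A minimal sketch with hypothetical counts (the ~20% manual-handling figure above is used only as an illustration; the observation counts are invented):

```python
import math

def work_sampling_estimate(n_observed, n_total, z=1.96):
    """Estimate the proportion of time spent in an activity from random
    work-sampling observations, with a normal-approximation 95%
    confidence interval."""
    p = n_observed / n_total
    half = z * math.sqrt(p * (1 - p) / n_total)
    return p, (p - half, p + half)

# Hypothetical data: manual material handling observed in 80 of 400
# random observations of a labourer, i.e. about 20% of the time.
p, (lo, hi) = work_sampling_estimate(80, 400)
print(round(p, 2), round(lo, 3), round(hi, 3))
```

    The interval width shows why non-repetitive work needs many observations: halving the margin of error requires roughly four times as many samples.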

  6. Synchrophasor Sensing and Processing based Smart Grid Security Assessment for Renewable Energy Integration

    NASA Astrophysics Data System (ADS)

    Jiang, Huaiguang

    With the evolution of energy and power systems, the emerging Smart Grid (SG) is mainly characterized by distributed renewable energy generation, demand-response control and a huge amount of heterogeneous data sources. Widely distributed synchrophasor sensors, such as phasor measurement units (PMUs) and fault disturbance recorders (FDRs), can record multi-modal signals for power system situational awareness and renewable energy integration. An effective and economical approach is proposed for wide-area security assessment. This approach is based on wavelet analysis for detecting and locating short-term and long-term faults in the SG, using voltage signals collected by distributed synchrophasor sensors. A data-driven approach for fault detection, identification and location is proposed and studied. This approach is based on matching pursuit decomposition (MPD) with a Gaussian atom dictionary, a hidden Markov model (HMM) of real-time frequency and voltage variation features, and fault contour maps generated by machine learning algorithms in SG systems. In addition, considering economic issues, the placement optimization of distributed synchrophasor sensors is studied to reduce the number of sensors without affecting the accuracy and effectiveness of the proposed approach. Furthermore, because natural hazards are a critical issue for power system security, this approach is studied under different types of faults caused by natural hazards. A fast steady-state approach is proposed for voltage security of power systems with a wind power plant connected. The impedance matrix can be calculated from the voltage and current information collected by the PMUs. Based on the impedance matrix, the locations in the SG that have the greatest impact on the voltage at the wind power plant's point of interconnection can be identified. 
Furthermore, because this dynamic voltage security assessment method relies on time-domain simulations of faults at different locations, the proposed approach
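
    Wavelet-based detection of an abrupt voltage change, the core of the first approach above, can be illustrated with first-level Haar detail coefficients on a toy signal (illustrative numbers only, not the paper's full method):

```python
def haar_detail(signal):
    """First-level Haar wavelet detail coefficients of a signal: large
    magnitudes flag abrupt changes such as a fault-induced voltage step."""
    return [(signal[i] - signal[i + 1]) / 2
            for i in range(0, len(signal) - 1, 2)]

# Synthetic voltage magnitude: nominal 1.0 p.u., dropping to 0.6 p.u.
# at sample 7 to mimic a fault (illustrative values).
v = [1.0] * 7 + [0.6] * 9
details = haar_detail(v)
fault_pairs = [i for i, d in enumerate(details) if abs(d) > 0.1]
print(fault_pairs)  # [3]: the sample pair straddling the step stands out
```

    In practice a multi-level wavelet decomposition is used so that both short transients and sustained faults produce distinguishable signatures.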

  7. GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments

    NASA Astrophysics Data System (ADS)

    Chen, Zhanlong; Wu, Xin-cai; Wu, Liang

    2008-12-01

    Computational Grids enable the coordinated sharing of large-scale distributed heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. The integration of multi-source, heterogeneous spatial information, the management of distributed spatial resources, and the sharing of and cooperation on spatial data and Grid services are the key problems to resolve in the development of a Grid GIS. The spatial index mechanism is a key technology of the Grid GIS, and its performance affects the overall performance of a GIS in Grid environments. To improve the efficiency of parallel processing of massive spatial data in a distributed parallel computing Grid environment, this paper presents GSHR-Tree, a parallel spatial index built on a grid-slot hash scheme. Based on a hash table and dynamic spatial slots, it improves the structure of the classical parallel R-tree index, combining the strengths of the R-tree and the hash data structure to meet the needs of parallel Grid computing over massive spatial data in a distributed network. The algorithm splits space into multiple slots and maps these slots to sites in the distributed, parallel system. Each site builds the spatial objects in its slots into an R-tree. On the basis of this tree structure, the index data are distributed among multiple nodes in the Grid network using a large-node R-tree method. Load imbalance during processing can be quickly corrected by a dynamic adjustment algorithm. This tree structure has considered the
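
    The slot-to-node mapping at the heart of such a scheme can be sketched as follows (a simplified 2D illustration; the slot size, hash function and node count are placeholders, and each node would build a local R-tree over the objects in its slots):

```python
def slot_of(x, y, slot_size):
    """Map a point to its spatial slot (grid cell) by integer division."""
    return (int(x // slot_size), int(y // slot_size))

def node_of(slot, n_nodes):
    """Hash a spatial slot to one of the distributed nodes; the node
    then indexes the objects in that slot with a local R-tree."""
    return hash(slot) % n_nodes

points = [(1.5, 2.0), (1.7, 2.1), (80.0, 95.0)]
slots = [slot_of(x, y, slot_size=10.0) for x, y in points]
print(slots)  # nearby points share a slot, distant ones do not
print(node_of(slots[0], n_nodes=4) == node_of(slots[1], n_nodes=4))
```

    Because nearby points hash to the same slot, spatially local queries touch few nodes, while the hash spreads distinct slots across the cluster.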

  8. Thin electrodes based on rolled Pb-Sn-Ca grids for VRLA batteries

    NASA Astrophysics Data System (ADS)

    Caballero, A.; Cruz, M.; Hernán, L.; Morales, J.; Sánchez, L.

    Electrodes 0.5 mm thick (i.e. much thinner than conventional ones) and suitable for lead-acid batteries were prepared by using a special pasting procedure that allows plate thickness to be readily controlled. Novel rolled grids of Pb-Sn-low-Ca alloys (0.35 mm thick) were used as substrates. Preliminary galvanostatic corrosion tests of the grids revealed an increased corrosion rate relative to conventional cast grids of Pb-Sn alloys (1 mm thick). Cells made with these thin electrodes were cycled under different discharge regimes and the active material at different charge/discharge cycling stages was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and chemical analysis. At a depth of discharge (DOD) of 100%, the cell exhibited a premature capacity loss after the fifth cycle and delivered only 20% of its nominal capacity after the 10th. By contrast, cycling performance of the electrode was significantly improved at a DOD of 60%. The capacity loss observed at a DOD of 100% can be ascribed to a rapid growth of PbSO4 crystals reaching several microns in size. Such large crystals tend to deposit onto the grid surface and form an insulating layer that hinders electron transfer at the active material/grid interface. For this reason, the cell fails after a few cycles in spite of the high PbO2 content in the positive active material (PAM). On the other hand, at 60% DOD the submicron particles produced after formation of the PAM retain their small size, thereby ensuring reversibility of the PbO2 ⇔ PbSO4 transformation.

  9. XML-based data model and architecture for a knowledge-based grid-enabled problem-solving environment for high-throughput biological imaging.

    PubMed

    Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif

    2008-03-01

    High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services: the first provides tools for extracting spatiotemporal knowledge from image sets, and the second provides high-level knowledge management and reasoning services. We then present the Cellular Imaging Markup Language, an XML-based language for the modeling of biological images and the representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.

  10. Fibonacci Grids

    NASA Technical Reports Server (NTRS)

    Swinbank, Richard; Purser, James

    2006-01-01

    Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, are what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
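
    A common construction of a Fibonacci grid on the sphere places points along a golden-angle spiral in equal-area latitude bands. A brief sketch of one standard variant (not necessarily the authors' exact formulation):

```python
import math

def fibonacci_sphere(n):
    """Generate n near-uniform points on the unit sphere using the
    golden-angle spiral that underlies Fibonacci grids: equal-area
    bands in z, with longitude advancing by the golden angle."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n       # equal-area spacing in z
        r = math.sqrt(1.0 - z * z)          # radius of the z-band
        theta = golden_angle * i            # spiral longitude
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

pts = fibonacci_sphere(1000)
# every generated point lies on the unit sphere
print(max(abs(x*x + y*y + z*z - 1.0) for x, y, z in pts) < 1e-12)  # True
```

    Each point represents an equal area by construction, which is exactly the property the abstract highlights for equitable work distribution on parallel machines.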

  11. Job submission and management through web services: the experience with the CREAM service

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Fina, S. D.; Ronco, S. D.; Dorigo, A.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Sgaravatto, M.; Verlato, M.; Zangrando, L.; Corvo, M.; Miccio, V.; Sciaba, A.; Cesini, D.; Dongiovanni, D.; Grandi, C.

    2008-07-01

    Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs to a Local Resource Management System. We developed a special component, called ICE (Interface to CREAM Environment) to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for integration of CREAM and ICE in gLite. We also discuss recent work towards enhancing CREAM with a BES and JSDL compliant interface.

  12. Grid enabled Service Support Environment - SSE Grid

    NASA Astrophysics Data System (ADS)

    Goor, Erwin; Paepen, Martine

    2010-05-01

    The SSEGrid project is an ESA/ESRIN project which started in 2009 and is executed by two Belgian companies, Spacebel and VITO, and one Dutch company, Dutch Space. The main project objectives are the introduction of a Grid-based processing-on-demand infrastructure at the Image Processing Centre for earth observation products at VITO and the inclusion of Grid processing services in the Service Support Environment (SSE) at ESRIN. The Grid-based processing-on-demand infrastructure is meant to support a Grid processing-on-demand model for Principal Investigators (PIs) and to allow the design and execution of multi-sensor applications with geographically spread data while minimising the transfer of huge volumes of data. In the first scenario, 'support a Grid processing on demand model for Principal Investigators', we aim to provide processing power close to the EO data at the processing and archiving centres. We will allow a PI (a non-Grid-expert user) to upload his own algorithm, as a process, and his own auxiliary data from the SSE Portal and use them in an earth observation workflow on the SSEGrid infrastructure. The PI can design and submit workflows using his own processes, processes made available by VITO/ESRIN, and possibly processes from other users that are available on the Grid. These activities must be user-friendly and must not require detailed knowledge of the underlying Grid middleware. In the second scenario we aim to design, implement and demonstrate a methodology to set up an earth observation processing facility which uses large volumes of data from various geographically spread sensors. The aim is to provide solutions for problems that we face today, such as wasting bandwidth by copying large volumes of data to one location. We will avoid this by processing the data where they reside. The multi-mission Grid-based processing-on-demand infrastructure will allow developing and executing complex and massive multi-sensor data (re-)processing applications more

  13. A tool for optimization of the production and user analysis on the Grid

    NASA Astrophysics Data System (ADS)

    Grigoras, Costin; Carminati, Federico; Vladimirovna Datskova, Olga; Schreiner, Steffen; Lee, Sehoon; Zhu, Jianlin; Gheata, Mihaela; Gheata, Andrei; Saiz, Pablo; Betev, Latchezar; Furano, Fabrizio; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Bagnasco, Stefano; Peters, Andreas Joachim; Saiz Santos, Maria Dolores

    2011-12-01

    With the LHC and ALICE entering full operation and production mode, the number of simulation, RAW data processing and end-user analysis tasks is increasing. The efficient management of all these tasks, which differ widely in lifecycle, amount of processed data and methods for analyzing the end result, required the development and deployment of new tools in addition to the already existing Grid infrastructure. To facilitate the management of the large-scale simulation and raw data reconstruction tasks, ALICE has developed a production framework called the Lightweight Production Manager (LPM). The LPM automatically submits jobs to the Grid based on triggers and conditions, for example after the completion of a physics run. It follows the evolution of each job and publishes the results on the web for worldwide access by ALICE physicists. This framework is tightly integrated with the ALICE Grid framework AliEn. In addition to publishing job status, LPM also provides a fully authenticated interface to the AliEn Grid catalogue for browsing and downloading files, and in the near future it will provide simple types of data analysis through ROOT plugins. The framework is also being extended to allow management of end-user jobs.
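
    The trigger-driven submission pattern described for the LPM can be sketched generically: on each pass, every newly completed run triggers exactly one job submission (a hypothetical simplification; the real LPM/AliEn interfaces are not modeled here):

```python
def lpm_cycle(completed_runs, submitted, submit):
    """One pass of a Lightweight-Production-Manager-style loop: for each
    newly completed physics run, submit the corresponding reconstruction
    job exactly once and remember it, so re-scanning is idempotent."""
    for run in completed_runs:
        if run not in submitted:
            submit(run)
            submitted.add(run)

submitted, log = set(), []
lpm_cycle([101, 102], submitted, log.append)
lpm_cycle([101, 102, 103], submitted, log.append)  # only 103 is new
print(log)  # [101, 102, 103]
```

    Keeping the "already submitted" state separate from the trigger scan is what lets such a loop be re-run safely after a crash or restart.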

  14. Autonomous, Decentralized Grid Architecture: Prosumer-Based Distributed Autonomous Cyber-Physical Architecture for Ultra-Reliable Green Electricity Networks

    SciTech Connect

    2012-01-11

    GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.

  15. Expanding access to off-grid rural electrification in Africa: An analysis of community-based micro-grids in Kenya

    NASA Astrophysics Data System (ADS)

    Kirubi, Charles Gathu

    Community micro-grids have played a central role in increasing access to off-grid rural electrification (RE) in many regions of the developing world, notably South Asia. However, the promise of community micro-grids in sub-Sahara Africa remains largely unexplored. My study explores the potential and limits of community micro-grids as options for increasing access to off-grid RE in sub-Sahara Africa. Contextualized in five community micro-grids in rural Kenya, my study is framed through theories of collective action and combines qualitative and quantitative methods, including household surveys, electronic data logging and regression analysis. The main contribution of my research is demonstrating the circumstances under which community micro-grids can contribute to rural development and the conditions under which individuals are likely to initiate and participate in such projects collectively. With regard to rural development, I demonstrate that access to electricity enables the use of electric equipment and tools by small and micro-enterprises, resulting in significant improvement in productivity per worker (100--200% depending on the task at hand) and a corresponding growth in income levels in the order of 20--70%, depending on the product made. Access to electricity simultaneously enables and improves delivery of social and business services from a wide range of village-level infrastructure (e.g. schools, markets, water pumps) while improving the productivity of agricultural activities. Moreover, when local electricity users have an ability to charge and enforce cost-reflective tariffs and electricity consumption is closely linked to productive uses that generate incomes, cost recovery is feasible. By their nature---a new technology delivering highly valued services by the elites and other members, limited local experience and expertise, high capital costs---community micro-grids are good candidates for elite-domination. 
Even so, elite control does not necessarily

  16. Grid Inertial Response-Based Probabilistic Determination of Energy Storage System Capacity Under High Solar Penetration

    DOE PAGESBeta

    Yue, Meng; Wang, Xiaoyu

    2015-07-01

    It is well-known that responsive battery energy storage systems (BESSs) are an effective means to improve the grid inertial response to various disturbances including the variability of the renewable generation. One of the major issues associated with its implementation is the difficulty in determining the required BESS capacity mainly due to the large amount of inherent uncertainties that cannot be accounted for deterministically. In this study, a probabilistic approach is proposed to properly size the BESS from the perspective of the system inertial response, as an application of probabilistic risk assessment (PRA). The proposed approach enables a risk-informed decision-making process regarding (1) the acceptable level of solar penetration in a given system and (2) the desired BESS capacity (and minimum cost) to achieve an acceptable grid inertial response with a certain confidence level.
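
    The probabilistic sizing idea, picking the smallest capacity whose inertial-response criterion holds with a target confidence, can be sketched with a Monte Carlo toy model (the response model, thresholds and numbers below are invented for illustration and are not the paper's grid dynamics):

```python
import random

def meets_target(capacity, disturbance, rng):
    """Toy inertial-response criterion: the frequency-nadir drop shrinks
    with BESS capacity and grows with the disturbance size (an invented
    illustrative model, not the paper's)."""
    nadir_drop = disturbance / (1.0 + capacity) + rng.gauss(0.0, 0.02)
    return nadir_drop < 0.5          # acceptable-nadir threshold (p.u.)

def required_capacity(confidence, rng, trials=2000):
    """Smallest candidate capacity whose Monte Carlo success rate over
    random disturbances reaches the requested confidence level."""
    for capacity in (0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
        ok = sum(meets_target(capacity, rng.uniform(0.5, 1.5), rng)
                 for _ in range(trials))
        if ok / trials >= confidence:
            return capacity
    return None

rng = random.Random(42)
cap = required_capacity(0.95, rng)
print(cap)
```

    Sampling the uncertain disturbance rather than fixing a worst case is what makes the result a confidence statement instead of a deterministic bound.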

  17. A bilateral integrative health-care knowledge service mechanism based on 'MedGrid'.

    PubMed

    Liu, Chao; Jiang, Zuhua; Zhen, Lu; Su, Hai

    2008-04-01

    Current health-care organizations are encountering a paucity of medical knowledge. This paper classifies medical knowledge with new scopes. The discovery of the health-care 'knowledge flow' initiates a bilateral integrative health-care knowledge service: we make medical knowledge 'flow' around and gain comprehensive effectiveness through six operations (such as knowledge refreshing...). Seizing on the active demand of the Chinese health-care reform, this paper presents 'MedGrid', a platform with medical ontology and knowledge content services. Each level and its detailed contents are described in the MedGrid info-structure. Moreover, a new diagnosis and treatment mechanism is formed by technically connecting with electronic health-care records (EHRs). PMID:18325488

  18. Grid Inertial Response-Based Probabilistic Determination of Energy Storage System Capacity Under High Solar Penetration

    SciTech Connect

    Yue, Meng; Wang, Xiaoyu

    2015-07-01

    It is well-known that responsive battery energy storage systems (BESSs) are an effective means to improve the grid inertial response to various disturbances including the variability of the renewable generation. One of the major issues associated with its implementation is the difficulty in determining the required BESS capacity mainly due to the large amount of inherent uncertainties that cannot be accounted for deterministically. In this study, a probabilistic approach is proposed to properly size the BESS from the perspective of the system inertial response, as an application of probabilistic risk assessment (PRA). The proposed approach enables a risk-informed decision-making process regarding (1) the acceptable level of solar penetration in a given system and (2) the desired BESS capacity (and minimum cost) to achieve an acceptable grid inertial response with a certain confidence level.

  19. A Transflective Nano-Wire Grid Polarizer Based Fiber-Optic Sensor

    PubMed Central

    Feng, Jing; Zhao, Yun; Lin, Xiao-Wen; Hu, Wei; Xu, Fei; Lu, Yan-Qing

    2011-01-01

    A transflective nano-wire grid polarizer is fabricated on a single-mode fiber tip by focused ion beam machining. In contrast to conventional absorptive in-line polarizers, the wire grids reflect the TE mode while transmitting the TM mode, so that no light power is discarded. A reflection contrast of 13.7 dB and a transmission contrast of 4.9 dB are achieved in the 1,550 nm telecom band using a 200-nm wire-grid fiber polarizer. With the help of an optical circulator, the polarization states of both the transmitted and reflected light in the fiber may be monitored simultaneously. A robust fiber-optic sensor that can withstand light-power variations is thus proposed. To verify the idea, a fiber pressure sensor with a sensitivity of 0.24 rad/N is demonstrated, and the corresponding stress-optic coefficient of the fiber is measured. In addition to pressure sensing, this technology could be applied to detecting any polarization-state change induced by magnetic fields, electric currents and so on. PMID:22163751

  20. Fabrication of a flexible Ag-grid transparent electrode using ac based electrohydrodynamic Jet printing

    NASA Astrophysics Data System (ADS)

    Park, Jaehong; Hwang, Jungho

    2014-10-01

    In the dc voltage-applied electrohydrodynamic (EHD) jet printing of metal nanoparticles, the residual charge of droplets deposited on a substrate changes the electrostatic field distribution and disrupts the subsequent printing behaviour, especially on insulating substrates with slow charge decay rates. In this paper, a sinusoidal ac voltage was used in the EHD jet printing process to switch the charge polarity of droplets containing Ag nanoparticles, thereby neutralizing the charge on a polyethylene terephthalate (PET) substrate. Printed Ag lines with a width of 10 µm were invisible to the naked eye. After sintering lines with a line pitch of 500 µm at 180 °C, a grid-type transparent electrode (TE) with a sheet resistance of ~7 Ω/sq and a dc-to-optical conductivity ratio of ~300 at ~84.2% optical transmittance was obtained, values superior to previously reported results. To evaluate the durability of the TE under bending stresses, the sheet resistance was measured as the number of bending cycles was increased. The sheet resistance of the Ag grid electrode increased only slightly, by less than 20% from its original value, even after 500 cycles. To the best of our knowledge, this is the first time that invisible Ag grid TEs have been fabricated on PET substrates by ac-voltage-applied EHD jet printing.

  1. Job Ready.

    ERIC Educational Resources Information Center

    Easter Seal Society for Crippled Children and Adults of Washington, Seattle.

    Intended for use by employers for assessing how "job-ready" their particular business environment may be, the booklet provides information illustrating what physical changes could be made to allow persons with mobility limitations to enter and conduct business independently in a particular building. Illustrations along with brief explanations are…

  2. Job Burnout.

    ERIC Educational Resources Information Center

    Angerer, John M.

    2003-01-01

    Presents an overview of job burnout, discusses the pioneering research and current theories of the burnout construct, along with the history of the main burnout assessment--the Maslach Burnout Inventory. Concludes that an understanding of the interaction between employee and his or her environment is critical for grasping the origin of burnout.…

  3. Your Job.

    ERIC Educational Resources Information Center

    Torre, Liz; And Others

    Information and accompanying exercises are provided in this learning module to reinforce basic reading, writing, and math skills and, at the same time, introduce personal assessment and job-seeking techniques. The module's first section provides suggestions for assessing personal interests and identifying the assets one has to offer an employer.…

  4. Transforming Power Grid Operations

    SciTech Connect

    Huang, Zhenyu; Guttromson, Ross T.; Nieplocha, Jarek; Pratt, Robert G.

    2007-04-15

    While computation is used to plan, monitor, and control power grids, some of the computational technologies now used are more than a hundred years old, and the complex interactions of power grid components impede real-time operations. Thus it is hard to speed up “state estimation,” the procedure used to estimate the status of the power grid from measured input. State estimation is the core of grid operations, including contingency analysis, automatic generation control, and optimal power flow. The rate at which state estimation and contingency analysis are conducted (currently about every 5 minutes) needs to increase radically so that contingency analysis is comprehensive and runs in real time. Further, traditional state estimation is based on a power flow model and only provides a static snapshot—a tiny piece of the state of a large-scale dynamic machine. Bringing dynamic aspects into real-time grid operations poses an even bigger challenge. Working with the latest, most advanced computing techniques and hardware, researchers at Pacific Northwest National Laboratory (PNNL) intend to transform grid operations by increasing computational speed and improving accuracy. Traditional power grid computation is conducted on single PC hardware platforms. This article shows how traditional power grid computation can be reformulated to take advantage of advanced computing techniques and be converted to high-performance computing platforms (e.g., PC clusters, reconfigurable hardware, scalable multicore shared memory computers, or multithreaded architectures). The improved performance is expected to have a huge impact on how power grids are operated and managed, ultimately leading to higher reliability and better asset utilization for the power industry. New computational capabilities will be tested and demonstrated on the comprehensive grid operations platform in the Electricity Infrastructure Operations Center, which is a newly commissioned PNNL facility for
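
    As a minimal illustration of the state-estimation kernel discussed above, the following sketch solves a weighted-least-squares estimate on a toy linear (DC) 3-bus model; it is not PNNL's implementation, and all matrices and numbers are invented for the example:

```python
import numpy as np

# Weighted-least-squares (WLS) state estimation on a toy 3-bus DC model:
# states x are bus voltage angles (bus 1 is the slack/reference), and the
# measurements are two angle readings plus one flow proportional to an
# angle difference.
H = np.array([[1.0,  0.0],    # angle measurement at bus 2
              [0.0,  1.0],    # angle measurement at bus 3
              [1.0, -1.0]])   # flow measurement on line 2-3
true_x = np.array([0.10, 0.04])
rng = np.random.default_rng(0)
z = H @ true_x + rng.normal(scale=1e-3, size=3)   # noisy measurements
W = np.diag([1e6, 1e6, 1e6])                      # inverse measurement variances

# Normal equations of the WLS estimator: (H^T W H) x = H^T W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(np.round(x_hat, 3))   # close to true_x = [0.10, 0.04]
```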

  5. ATLAS job monitoring in the Dashboard Framework

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Campana, S.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Tuckett, D.

    2012-12-01

    Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. The Experiment Dashboard provides a common job monitoring solution, which is shared by the ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from the PanDA job processing database, the Production system database, and monitoring information from jobs submitted through GANGA to the Workload Management System (WMS) or to local batch systems. Usage of Dashboard-based job monitoring applications will decrease the load on the PanDA database and overcome the scale limitations in PanDA monitoring caused by the short job rotation cycle in the PanDA database. Aggregation of task/job metrics from different sources provides a complete view of job processing activity in the ATLAS scope.

  6. A grid-based distributed flood forecasting model for use with weather radar data: Part 1. Formulation

    NASA Astrophysics Data System (ADS)

    Bell, V. A.; Moore, R. J.

    A practical methodology for distributed rainfall-runoff modelling using grid square weather radar data is developed for use in real-time flood forecasting. The model, called the Grid Model, is configured so as to share the same grid as used by the weather radar, thereby exploiting the distributed rainfall estimates to the full. Each grid square in the catchment is conceptualised as a storage which receives water as precipitation and generates water by overflow and drainage. This water is routed across the catchment using isochrone pathways. These are derived from a digital terrain model assuming two fixed velocities of travel for land and river pathways which are regarded as model parameters to be optimised. Translation of water between isochrones is achieved using a discrete kinematic routing procedure, parameterised through a single dimensionless wave speed parameter, which advects the water and incorporates diffusion effects through the discrete space-time formulation. The basic model routes overflow and drainage separately through a parallel system of kinematic routing reaches, characterised by different wave speeds but using the same isochrone-based space discretisation; these represent fast and slow pathways to the basin outlet, respectively. A variant allows the slow pathway to have separate isochrones calculated using Darcy velocities controlled by the hydraulic gradient as estimated by the local gradient of the terrain. Runoff production within a grid square is controlled by its absorption capacity which is parameterised through a simple linkage function to the mean gradient in the square, as calculated from digital terrain data. This allows absorption capacity to be specified differently for every grid square in the catchment through the use of only two regional parameters and a DTM measurement of mean gradient for each square. An extension of this basic idea to consider the distribution of gradient within the square leads analytically to a Pareto
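
    The discrete kinematic routing described above can be sketched as a simple advection of water between isochrone storages; the function and numbers below are illustrative, not the Grid Model's actual code:

```python
# One-parameter discrete kinematic routing between isochrone bands: each step
# advects a fraction theta = c*dt/dx of the water in band i to band i+1; the
# discretization itself supplies the diffusion mentioned in the abstract.
def route(storages, theta):
    assert 0.0 < theta <= 1.0           # Courant stability condition
    out = [0.0] * len(storages)
    for i, s in enumerate(storages):
        out[i] += (1.0 - theta) * s     # water remaining in band i
        if i + 1 < len(storages):
            out[i + 1] += theta * s     # water advected one band downstream
    outflow = theta * storages[-1]      # water leaving the basin outlet
    return out, outflow

bands = [1.0, 0.0, 0.0]                 # unit pulse in the most remote band
total_outflow = 0.0
for _ in range(10):
    bands, q = route(bands, theta=0.5)
    total_outflow += q
print(round(sum(bands) + total_outflow, 6))  # 1.0: the scheme conserves mass
```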

  7. A Java commodity grid kit.

    SciTech Connect

    von Laszewski, G.; Foster, I.; Gawor, J.; Lane, P.; Mathematics and Computer Science

    2001-07-01

    In this paper we report on the features of the Java Commodity Grid Kit. The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus protocols, allowing the Java CoG Kit also to communicate with the C Globus reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise, and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Globus jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.

  8. Variance-based global sensitivity analysis for multiple scenarios and models with implementation using sparse grid collocation

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Ye, Ming

    2015-09-01

    Sensitivity analysis is a vital tool in hydrological modeling to identify influential parameters for inverse modeling and uncertainty analysis, and variance-based global sensitivity analysis has gained popularity. However, the conventional global sensitivity indices are defined with consideration of only parametric uncertainty. Based on a hierarchical structure of parameter, model, and scenario uncertainties and on recently developed techniques of model- and scenario-averaging, this study derives new global sensitivity indices for multiple models and multiple scenarios. To reduce the computational cost of variance-based global sensitivity analysis, a sparse grid collocation method is used to evaluate the mean and variance terms involved in the analysis. In a simple synthetic case of groundwater flow and reactive transport, it is demonstrated that the global sensitivity indices vary substantially between the four models and three scenarios. Not considering the model and scenario uncertainties might result in biased identification of important model parameters. This problem is resolved by using the new indices defined for multiple models and/or multiple scenarios; this is particularly true when the sensitivity indices and model/scenario probabilities vary substantially. The sparse grid collocation method dramatically reduces the computational cost in comparison with the popular quasi-random sampling method. The new framework of global sensitivity analysis is mathematically general, and can be applied to a wide range of hydrologic and environmental problems.
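
    For readers unfamiliar with variance-based indices, the following sketch estimates first-order Sobol indices by plain Monte Carlo "pick-freeze" sampling for a toy linear model (analytically S1 = 0.2, S2 = 0.8). The paper replaces this kind of brute-force sampling with sparse-grid collocation; the code is illustrative only:

```python
import numpy as np

# First-order Sobol indices via Monte Carlo "pick-freeze" sampling for the toy
# model Y = X1 + 2*X2 with independent U(0,1) inputs (analytic S1=0.2, S2=0.8).
def model(x):
    return x[:, 0] + 2.0 * x[:, 1]

rng = np.random.default_rng(42)
n = 200_000
A = rng.random((n, 2))      # base sample
B = rng.random((n, 2))      # independent resample
var_y = model(A).var()

S = []
for i in range(2):
    AB = B.copy()
    AB[:, i] = A[:, i]      # freeze input i, resample all other inputs
    S.append(np.cov(model(A), model(AB))[0, 1] / var_y)  # Cov(Y, Y_i')/V(Y)

print([round(s, 2) for s in S])
```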

  9. Job-Preference and Job-Matching Assessment Results and Their Association with Job Performance and Satisfaction among Young Adults with Developmental Disabilities

    ERIC Educational Resources Information Center

    Hall, Julie; Morgan, Robert L.; Salzberg, Charles L.

    2014-01-01

    We investigated the effects of preference and degree of match on job performance of four 19 to 20-year-old young adults with developmental disabilities placed in community-based job conditions. We identified high-preference, high-matched and low-preference, low-matched job tasks using a video web-based assessment program. The job matching…

  10. The climate of Europe during the Holocene: a gridded pollen-based reconstruction and its multi-proxy evaluation

    NASA Astrophysics Data System (ADS)

    Mauri, A.; Davis, B. A. S.; Collins, P. M.; Kaplan, J. O.

    2015-03-01

    We present a new gridded climate reconstruction for Europe for the last 12,000 years based on pollen data. The reconstruction is an update of Davis et al. (2003) using the same methodology, but with a greatly expanded fossil and surface-sample dataset and more rigorous quality-control. The modern pollen dataset has been increased by more than 80%, and the fossil pollen dataset by more than 50%, representing almost 60,000 individual pollen samples. The climate parameters reconstructed include summer/winter and annual temperatures and precipitation, as well as a measure of moisture balance, and growing degree-days above 5 °C. Confidence limits were established for the reconstruction based on transfer function and interpolation uncertainties. The reconstruction takes account of post-glacial isostatic readjustment which resulted in a potential warming bias of up to +1-2 °C for parts of Fennoscandia in the early Holocene, as well as changes in palaeogeography resulting from decaying ice sheets and rising post-glacial sea-levels. This new dataset has been evaluated against previously published independent quantitative climate reconstructions from a variety of archives on a site-by-site basis across Europe. The results of this comparison are generally very good; only chironomid-based reconstructions showed substantial differences with our values. Our reconstruction is available for download as gridded maps throughout the Holocene on a 1000-year time-step. The gridded format makes our reconstructions suitable for comparison with climate model output and for other applications such as vegetation and land-use modelling. Our new climate reconstruction suggests that warming in Europe during the mid-Holocene was greater in winter than in summer, an apparent paradox that is not consistent with current climate model simulations and traditional interpretations of Milankovitch theory.

  11. Grid-based versus big region approach for inverting CO emissions using Measurement of Pollution in the Troposphere (MOPITT) data

    NASA Astrophysics Data System (ADS)

    Stavrakou, T.; Müller, J.-F.

    2006-08-01

    The CO columns retrieved by the Measurement of Pollution in the Troposphere (MOPITT) satellite instrument between May 2000 and April 2001 are used together with the Intermediate Model for the Annual and Global Evolution of Species (IMAGES) global chemistry transport model and its adjoint to provide top-down estimates for anthropogenic, biomass burning, and biogenic CO emissions on the global scale, as well as for the biogenic volatile organic compounds (VOC) fluxes, whose oxidation constitutes a major indirect CO source. For this purpose, the big region and grid-based Bayesian inversion methods are presented and compared. In the former setup, the monthly emissions over large geographical regions are quantified. In the grid-based setup, the fluxes are optimized at the spatial resolution of the model and on a monthly basis. Source-specific spatiotemporal correlations among errors on the prior emissions are introduced in order to better constrain the inversion problem. Both inversion techniques bring the model columns much closer to the measurements at all latitudes, but the grid-based analysis achieves a higher reduction of the overall model/data bias. Further comparisons with observed mixing ratios at NOAA Climate Monitoring and Diagnostics Laboratory and Global Atmosphere Watch sites, as well as with airborne measurements are also presented. The inferred emission estimates are weakly dependent on the prior errors and correlations. Our best estimate for the global CO source amounts to 2900 Tg CO/yr in both inversion approaches, about 5% higher than the prior. The global anthropogenic emission estimate is 18% larger than the prior, with the biggest increase for east Asia and a substantial decrease in south Asia. The vegetation fire emission estimates decrease as well, from the prior 467 Tg CO/yr to 450 Tg CO/yr in the grid-based solution and 434 Tg CO/yr in the monthly big region setup, mainly due to a significant reduction of African savanna fire emissions. The
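
    For Gaussian errors and a linear observation operator, a grid-based Bayesian inversion of the kind described reduces to the standard update below. The two-region, three-observation setup is purely illustrative and is not drawn from the paper:

```python
import numpy as np

# Gaussian Bayesian update for a linear flux inversion: observations y depend
# linearly (y ~ H x) on regional emission scale factors x.  H, the error
# covariances, and y are invented for illustration.
H = np.array([[0.8, 0.2],
              [0.5, 0.5],
              [0.1, 0.9]])             # sensitivity of each column observation
x_prior = np.array([1.0, 1.0])         # prior scale factors (1 = prior inventory)
P = np.diag([0.5 ** 2, 0.5 ** 2])      # prior error covariance
R = np.diag([0.05 ** 2] * 3)           # observation error covariance
y = np.array([0.95, 0.80, 0.70])       # "measured" columns

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain matrix
x_post = x_prior + K @ (y - H @ x_prior)       # posterior (optimized) fluxes
print(np.round(x_post, 2))             # second region revised downward
```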

  12. Wireless Communications in Smart Grid

    NASA Astrophysics Data System (ADS)

    Bojkovic, Zoran; Bakmaz, Bojan

    Communication networks play a crucial role in the smart grid, as the intelligence of this complex system is built on information exchange across the power grid. Wireless communications and networking are among the most economical ways to build the essential part of a scalable communication infrastructure for the smart grid. In particular, wireless networks will be deployed widely in the smart grid for automatic meter reading, remote system and customer-site monitoring, and equipment fault diagnosis. With increasing interest from both the academic and industrial communities, this chapter systematically investigates recent advances in wireless communication technology for the smart grid.

  13. A novel multi-model neuro-fuzzy-based MPPT for three-phase grid-connected photovoltaic system

    SciTech Connect

    Chaouachi, Aymen; Kamel, Rashad M.; Nagasaka, Ken

    2010-12-15

    This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three multi-layered feed-forward Artificial Neural Networks (ANN). Inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate ANN for either the training or estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single-neural-network approach, is its distinct generalization ability with regard to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural-network-based multi-model machine learning approach that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations proved that the proposed MPPT method achieved the highest efficiency compared with a conventional single neural network and with the Perturb and Observe (P&O) algorithm. (author)
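
    The Perturb and Observe baseline that the paper compares against can be sketched in a few lines; the PV power curve here is a made-up stand-in for a real panel model:

```python
# Perturb-and-Observe (P&O) MPPT baseline: nudge the operating voltage, keep
# the direction if power rose, reverse it otherwise.  The PV curve is a toy
# parabola with its maximum power point at v = 30 (arbitrary units).
def pv_power(v):
    return max(0.0, v * (60.0 - v)) / 9.0

def perturb_and_observe(v0=20.0, step=0.5, iterations=100):
    v = v0
    p = pv_power(v)
    direction = +1
    for _ in range(iterations):
        v += direction * step
        p_new = pv_power(v)
        if p_new < p:          # power dropped: reverse the perturbation
            direction = -direction
        p = p_new
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp))  # settles into a small oscillation around v = 30
```

The fixed `step` is the classic P&O weakness the neuro-fuzzy approach targets: too large and the operating point oscillates around the maximum, too small and tracking lags rapid irradiance changes.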

  14. A wide field-of-view microscope based on holographic focus grid

    NASA Astrophysics Data System (ADS)

    Wu, Jigang; Cui, Xiquan; Zheng, Guoan; Lee, Lap Man; Yang, Changhuei

    2010-02-01

    We have developed a novel microscope technique that achieves wide field-of-view (FOV) imaging while possessing resolution comparable to a conventional microscope. The wide-FOV design breaks the link between resolution and FOV magnitude that constrains traditional microscopes. Furthermore, by eliminating bulky optical elements from its design and utilizing holographic optical elements, the wide-FOV microscope system is more cost-effective. In our system, a hologram focuses an incoming collimated beam into a grid of foci. The sample is placed in the focal plane and the light transmitted through each focal spot is detected by an imaging sensor. By scanning the incident angle of the incoming beam, the focus grid scans across the sample and the time-varying transmission can be detected; we can then reconstruct the transmission image of the sample. The resolution of the microscopic image is limited by the size of the foci formed by the hologram. The scanning area of each focal spot is determined by the separation of the spots and can be made small for fast imaging. We have fabricated a prototype system with a 2.4-mm FOV and 1-μm resolution and used it to image onion skin cells as a demonstration. These preliminary experiments prove the feasibility of the wide-FOV microscope technique and the possibility of a wider-FOV system with better resolution.

  15. Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology

    NASA Astrophysics Data System (ADS)

    Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senju, Tomonobu

    2013-08-01

    From the perspective of suppressing global warming and the depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are receiving attention in distribution systems. Additionally, all-electric apartment houses and residences such as the DC smart house have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, supply-demand balancing of the power system becomes problematic. Therefore, the "smart grid" has become very popular worldwide. This article presents a methodology for the optimal operation of a smart grid that minimizes interconnection-point power flow fluctuations. To achieve the proposed optimal operation, we use distributed controllable loads such as a battery and a heat pump. By minimizing the interconnection-point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of a photovoltaic generator, a heat pump, a battery, a solar collector, and loads. In order to verify the effectiveness of the proposed system, MATLAB is used in simulations.

  16. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, and conservatively accurate solutions may be computed. Computable error estimates offer the possibility of minimizing computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functional values that are as accurate as values calculated on uniformly refined grids with ten times as many grid points.
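
    The adjoint correction at the heart of this procedure is easy to demonstrate on a linear toy problem A u = f with output J(u) = g·u: the adjoint solution weights the residual of an inexact solution, and for a linear output the corrected value is exact. This is illustrative only; the paper applies the same machinery to 3D Euler flows:

```python
import numpy as np

# Adjoint-based output correction on a linear toy problem.
rng = np.random.default_rng(1)
A = rng.random((4, 4)) + 4.0 * np.eye(4)   # well-conditioned "flow" operator
f = rng.random(4)                          # right-hand side
g = rng.random(4)                          # output weights: J(u) = g @ u

u_exact = np.linalg.solve(A, f)
u_h = u_exact + 0.05 * rng.random(4)       # inexact ("coarse-grid") solution
psi = np.linalg.solve(A.T, g)              # adjoint solve: A^T psi = g

J_h = g @ u_h                              # uncorrected output
J_corrected = J_h + psi @ (f - A @ u_h)    # adjoint-weighted residual correction
print(abs(J_corrected - g @ u_exact) < 1e-10)  # True: exact for linear outputs
```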

  17. Simulation of plasma based semiconductor processing using block structured locally refined grids

    SciTech Connect

    Wake, D.D.

    1998-01-01

    We have described a new numerical method for plasma simulation. Calculations have been presented which show that the method is accurate and suggest the regimes in which the method provides savings in CPU time and memory requirements. A steady state simulation of a four centimeter domain was modeled with sheath scale (150 microns) resolution using only 40 grid points. Simulations of semiconductor processing equipment have been performed which imply the usefulness of the method for engineering applications. It is the author's opinion that these accomplishments represent a significant contribution to plasma simulation and the efficient numerical solution of certain systems of non-linear partial differential equations. More work needs to be done, however, for the algorithm to be of practical use in an engineering environment. Despite our success at avoiding the dielectric relaxation timestep restrictions the algorithm is still conditionally stable and requires timesteps which are relatively small. This represents a prohibitive runtime for steady state solutions on high resolution grids. Current research suggests that these limitations may be overcome and the use of much larger timesteps will be possible.

  18. MUJER: Mothers United for Jobs, Education, and Results. 1997-8 Project FORWARD Project-based Learning Project Summary.

    ERIC Educational Resources Information Center

    Green, Anson M.

    Students in the Culebra Road GED/JOBS (General Educational Development/Job Opportunities and Basic Skills) class, an adult education class for Temporary Assistance for Needy Families (TANF) students, created their own website. First, students completed a computer literacy survey to gauge their computer skills. Next, students were encouraged to…

  19. Personal vulnerability and work-home interaction: the effect of job performance-based self-esteem on work/home conflict and facilitation.

    PubMed

    Innstrand, Siw Tone; Langballe, Ellen Melbye; Espnes, Geir Arild; Aasland, Olaf Gjerløw; Falkum, Erik

    2010-12-01

    The aim of the present study was to examine the longitudinal relationship between job performance-based self-esteem (JPB-SE) and work-home interaction (WHI) in terms of the direction of the interaction (work-to-home vs. home-to-work) and the effect (conflict vs. facilitation). A sample of 3,475 respondents from eight different occupational groups (lawyers, physicians, nurses, teachers, church ministers, bus drivers, and people working in advertising and information technology) supplied data at two time points two years apart. The two-wave, cross-lagged structural equation modeling (SEM) analysis demonstrated reciprocal relationships between these variables, i.e., job performance-based self-esteem may act as a precursor as well as an outcome of work-home interaction. The strongest association was between job performance-based self-esteem and work-to-home conflict. Previous research on work-home interaction has mainly focused on situational factors. This longitudinal study expands the work-home literature by demonstrating how individual vulnerability (job performance-based self-esteem) contributes to the explanation of work-home interactions.

  20. High-, Middle-, and Low-Wage Job Preparatory Programs--The Creation and Use of Policy Tool Based on UI Wages Data. Technical Report.

    ERIC Educational Resources Information Center

    Whittaker, Doug

    This is a report on the 2001 after-college earnings of students from Washington State's community and technical colleges. The state board created a wage-based category system for all 500 vocational/job-preparatory programs offered by the 34 state two-year colleges. The programs were divided into high- ($12 or more per hour), middle- ($10.50-$12…

  1. How Female Professionals Successfully Process and Negotiate Involuntary Job Loss at Faith-Based Colleges and Universities: A Grounded Theory Study

    ERIC Educational Resources Information Center

    Cunningham, Debra Jayne

    2015-01-01

    Using a constructivist grounded theory approach (Charmaz, 2006), this qualitative study examined how eight female senior-level professionals employed at faith-based colleges and universities processed and navigated the experience of involuntary job loss and successfully transitioned to another position. The theoretical framework of psychological…

  2. How Female Professionals Successfully Process and Negotiate Involuntary Job Loss at Faith-Based Colleges and Universities: A Grounded Theory Study

    ERIC Educational Resources Information Center

    Cunningham, Debra Jayne

    2013-01-01

    Using a constructivist grounded theory approach (Charmaz, 2006), this qualitative study examined how 8 female senior-level professionals employed at faith-based colleges and universities processed and navigated the experience of involuntary job loss and successfully transitioned to another position. The purpose of this research was to contribute…

  3. Identifying Success Factors in Community College Grants Awarded under the U.S. Department of Labor's Community-Based Job Training Grants Program, 2005-2008

    ERIC Educational Resources Information Center

    Garrison, Debra Linley

    2010-01-01

    This study provides an in-depth analysis of the Community-Based Job Training Grants awarded by the U.S. Department of Labor from 2005 to 2008. The primary research question is designed to identify the most important factors in meeting grant-training outcomes; however, numerous secondary questions were addressed to provide the reader with in-depth…

  4. Full-time and full-coverage global observation system for ecological monitoring based on MEO satellite grid constellation

    NASA Astrophysics Data System (ADS)

    You, Rui; Liu, Shuhao

    Human life increasingly relies on the Earth's environment and atmosphere, and environmental information acquired by space-based monitoring is of crucial importance. Although several countries operate GEO and polar-orbiting weather satellites, these cannot monitor every zone of the Earth in real time. This paper presents a concept proposal for stable, continuous, real-time observation of any zone of the Earth (including the Arctic and Antarctic) and its atmosphere. It is based on a Walker constellation of 24 satellites in medium Earth orbit at 20,000 km altitude, with a payload comprising an infrared spectrometer, a visible camera, an ultraviolet camera, a millimeter-wave radiometer, and a laser radar; the spatial resolutions are 1 km in the infrared and 0.5 km in the visible. This grid constellation can revisit any zone of the globe with a 1-3 hour observation cycle. Air pollution, atmospheric ozone, surface pollution, dust storms, water pollution, vegetation change, natural disasters, man-made emergency situations, agriculture, and climate change can all be monitored by this MEO satellite grid constellation. The system is an international space infrastructure that uses mature technologies and products and can be built in cooperation among multiple countries.

  5. Spatiotemporal analysis of urban growth in three African capital cities: A grid-cell-based analysis using remote sensing data

    NASA Astrophysics Data System (ADS)

    Hou, Hao; Estoque, Ronald C.; Murayama, Yuji

    2016-11-01

    Spatiotemporal analysis of urban growth patterns and dynamics is important not only in urban geography but also in landscape and urban planning and sustainability studies. Based on remote sensing-derived land-cover maps and LandScan population data of two time points (ca. 2000 and 2014), this study examines the spatiotemporal patterns and dynamics of the urban growth of three rapidly urbanizing African capital cities, namely, Bamako (Mali), Cairo (Egypt) and Nairobi (Kenya). A grid-cell-based analysis technique was employed to integrate the LandScan population and land-cover data, creating grid maps of population density and the density of each land-cover category. The results revealed that Bamako's urban (built-up) area has been expanding at a rate of 5.37% per year. Nairobi had a lower annual expansion rate (4.99%), but a higher rate than Cairo (2.79%). Bamako's urban expansion was at the expense of its bareland and green spaces (i.e., cropland, grassland and forest), whereas the urban expansions of Cairo and Nairobi were at the cost of their bareland. In all three cities, there was a weak but significant positive relationship between urban expansion (change in built-up density) and population growth (change in population density). Overall, this study provides an overview of the spatial patterns and dynamics of urban growth in these three African capitals, which might be useful in the context of urban studies and landscape and urban planning.
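
    The quoted annual expansion rates are presumably compound (geometric) rates, r = (A2/A1)^(1/n) - 1 over n years; the areas below are hypothetical, and only the 14-year span and the 5.37%/yr figure come from the abstract:

```python
# Compound annual expansion rate r = (A2/A1)**(1/n) - 1 over n years, the
# usual way such per-year figures are derived.  The starting area is
# hypothetical; only the 2000-2014 span and the 5.37 %/yr figure for Bamako
# come from the abstract.
def annual_expansion_rate(area_start, area_end, years):
    return (area_end / area_start) ** (1.0 / years) - 1.0

area_2000 = 100.0                       # hypothetical built-up area (km^2)
area_2014 = area_2000 * 1.0537 ** 14    # area implied by 5.37 %/yr growth
r = annual_expansion_rate(area_2000, area_2014, 14)
print(round(100.0 * r, 2))  # 5.37 (% per year)
```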

  6. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.

  7. Towards risk-based management of critical infrastructures : enabling insights and analysis methodologies from a focused study of the bulk power grid.

    SciTech Connect

    Richardson, Bryan T.; LaViolette, Randall A.; Cook, Benjamin Koger

    2008-02-01

    This report summarizes research on a holistic analysis framework to assess and manage risks in complex infrastructures, with a specific focus on the bulk electric power grid (grid). A comprehensive model of the grid is described that can approximate the coupled dynamics of its physical, control, and market components. New realism is achieved in a power simulator extended to include relevant control features such as relays. The simulator was applied to understand failure mechanisms in the grid. Results suggest that the implementation of simple controls might significantly alter the distribution of cascade failures in power systems. The absence of cascade failures in our results raises questions about the underlying failure mechanisms responsible for widespread outages, and specifically whether these outages are due to a system effect or large-scale component degradation. Finally, a new agent-based market model for bilateral trades in the short-term bulk power market is presented and compared against industry observations.

  8. Using remote sensing and grid-based meteorological datasets for regional soybean crop yield prediction and crop monitoring

    NASA Astrophysics Data System (ADS)

    Mali, Preeti

    Regional crop yield estimation using crop models is a national priority due to its contributions to crop security assessment and food pricing policies. Many of these crop yield assessments are performed using time-consuming, intensive field surveys. This research was initiated to test the applicability of remote sensing and grid-based meteorological model data for providing improved and efficient predictive capabilities for crop bio-productivity. The soybean prediction model (Sinclair model) used in this research requires daily data inputs to simulate yield (temperature, precipitation, solar radiation, and day length), as well as initialization of certain soil moisture parameters, for each model run. The traditional meteorological datasets were compared with simulated South American Land Data Assimilation System (SALDAS) meteorological datasets for Sinclair model runs and for initializing soil moisture inputs. Given that the grid-based meteorological data has a resolution of 1/8th of a degree, the estimations demonstrated a reasonable accuracy level and showed promise for increased efficiency in regional-level yield predictions. The research tested daily composited Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor (on both the AQUA and TERRA platforms) and a simulated Visible/Infrared Imager Radiometer Suite (VIIRS) sensor product (a sensor planned for launch in the near future) for monitoring crop growth and development based on phenological events. The fused AQUA and TERRA daily MODIS NDVI was utilized to develop a planting date estimation method. The results showed that daily composited MODIS NDVI values have the capability for enhanced monitoring of soybean crop growth and development. The method was able to predict planting date within +/-3.4 days. A geoprocessing framework for extracting data from the grid data sources was developed. 
Overall, this study was able to demonstrate the utility of
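The planting-date idea above can be illustrated with a toy threshold method on a smoothed NDVI series. This is only a sketch: the threshold, smoothing window and emergence lag below are hypothetical illustrative values, not the study's calibrated method.

```python
import numpy as np

def estimate_planting_date(doy, ndvi, threshold=0.35, emergence_lag=14):
    """Estimate planting day-of-year from a daily NDVI time series.

    Finds the first day the smoothed NDVI rises above `threshold`
    (taken here to mark crop emergence) and subtracts a fixed lag back
    to planting. Threshold and lag are illustrative assumptions.
    """
    # Light boxcar smoothing to suppress day-to-day compositing noise.
    kernel = np.ones(7) / 7.0
    smooth = np.convolve(ndvi, kernel, mode="same")
    above = np.nonzero(smooth >= threshold)[0]
    if above.size == 0:
        return None
    return int(doy[above[0]]) - emergence_lag

# Synthetic NDVI: bare soil (~0.2) greening up logistically after DOY ~150.
doy = np.arange(100, 200)
ndvi = 0.2 + 0.6 / (1.0 + np.exp(-(doy - 150) / 5.0))
print(estimate_planting_date(doy, ndvi))
```

On this synthetic green-up curve the estimate lands in early May (around DOY 131), consistent with the idea of backing a fixed lag off the detected emergence date.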

  9. Beyond grid security

    NASA Astrophysics Data System (ADS)

    Hoeft, B.; Epting, U.; Koenig, T.

    2008-07-01

    While many fields relevant to Grid security are already covered by existing working groups, their remit rarely goes beyond the scope of the Grid infrastructure itself. However, security issues pertaining to the internal set-up of compute centres have at least as much impact on Grid security. Thus, this talk will present briefly the EU ISSeG project (Integrated Site Security for Grids). In contrast to groups such as OSCT (Operational Security Coordination Team) and JSPG (Joint Security Policy Group), the purpose of ISSeG is to provide a holistic approach to security for Grid computer centres, from strategic considerations to an implementation plan and its deployment. The generalised methodology of Integrated Site Security (ISS) is based on the knowledge gained during its implementation at several sites as well as through security audits, and this will be briefly discussed. Several examples of ISS implementation tasks at the Forschungszentrum Karlsruhe will be presented, including segregation of the network for administration and maintenance and the implementation of Application Gateways. Furthermore, the web-based ISSeG training material will be introduced. This aims to offer ISS implementation guidance to other Grid installations in order to help avoid common pitfalls.

  10. The effect of job organizational factors on job satisfaction in two automotive industries in Malaysia.

    PubMed

    Dawal, Siti Zawiah Md; Taha, Zahari

    2007-12-01

    A methodology is developed for diagnosing the effect of job organizational factors on job satisfaction in two automotive industries in Malaysia. One hundred and seventy male subjects aged 18-40 years (mean age 26.8 years, SD 5.3; mean work experience 6.5 years, SD 4.9) took part in the study. Five job organizational factors were tested: job rotation, work method, training, problem solving and goal setting. A job organization questionnaire was designed based on respondents' perceptions in relation to job satisfaction. The results showed that job organizational factors were significantly related to job satisfaction. In the first automotive industry, job rotation, work method, training and goal setting showed strong correlations with job satisfaction, while problem solving had an intermediate correlation. In the second automotive industry, most job organizational factors showed intermediate correlations with job satisfaction, except training, which had a low correlation. These results highlight that job rotation, work methods, problem solving and goal setting are outstanding factors in the study of job satisfaction for automotive industries.

  11. Regional study on investment for transmission infrastructure in China based on the State Grid data

    NASA Astrophysics Data System (ADS)

    Wei, Wendong; Wu, Xudong; Wu, Xiaofang; Xi, Qiangmin; Ji, Xi; Li, Guoping

    2016-06-01

    Transmission infrastructure is an integral component of safeguarding the stability of electricity delivery. However, existing studies of transmission infrastructure mostly rely on a simple review of the network, while the analysis of investments remains rudimentary. This study conducted the first regionally focused analysis of investments in transmission infrastructure in China to help optimize its structure and reduce investment costs. Using State Grid data, the investment costs, under various voltages, for transmission lines and transformer substations are calculated. By analyzing the regional profile of cumulative investment in transmission infrastructure, we assess correlations between investment, population, and economic development across the regions. The recent development of ultra-high-voltage transmission networks will provide policy-makers new options for policy development.

  12. Comparisons of Ship-based Observations of Air-Sea Energy Budgets with Gridded Flux Products

    NASA Astrophysics Data System (ADS)

    Fairall, C. W.; Blomquist, B.

    2015-12-01

    Air-surface interactions are characterized directly by the fluxes of momentum, heat, moisture, trace gases, and particles near the interface. In the last 20 years advances in observation technologies have greatly expanded the database of high-quality direct (covariance) turbulent flux and irradiance observations from research vessels. In this paper, we will summarize observations from the NOAA sea-going flux system from participation in various field programs executed since 1999 and discuss comparisons with several gridded flux products. We will focus on comparisons of turbulent heat fluxes and solar and IR radiative fluxes. The comparisons are done for observing programs in the equatorial Pacific and Indian Oceans and SE subtropical Pacific.

  13. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid

    PubMed Central

    Byambasuren, Bat-erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-01-01

    Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results. PMID:26907274
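The parabolic and circle fits named in the abstract amount to standard least-squares problems. A minimal sketch follows; the Kasa algebraic circle fit is an assumed choice here, not necessarily the authors' algorithm.

```python
import numpy as np

def fit_parabola(x, y):
    """Least-squares fit y = a*x^2 + b*x + c; returns (a, b, c)."""
    return tuple(np.polyfit(x, y, 2))

def fit_circle(x, y):
    """Algebraic (Kasa) circle fit: returns centre (cx, cy) and radius r.

    Solves the linear system arising from x^2 + y^2 + D*x + E*y + F = 0.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Points sampled from a known sag curve, recovered by the parabola fit.
x = np.linspace(-5, 5, 11)
y = 0.1 * x**2 - 0.3 * x + 2.0
print(fit_parabola(x, y))  # ≈ (0.1, -0.3, 2.0)
```

Fitting the observed swing path this way lets the robot extrapolate where the line will be before committing to a tracking move.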

  14. 3D Myocardial Contraction Imaging Based on Dynamic Grid Interpolation: Theory and Simulation Analysis

    NASA Astrophysics Data System (ADS)

    Bu, Shuhui; Shiina, Tsuyoshi; Yamakawa, Makoto; Takizawa, Hotaka

    Accurate assessment of local myocardial contraction is important for diagnosis of ischemic heart disease, because decreases of myocardial motion often appear in the early stages of the disease. Three-dimensional (3-D) assessment of the stiffness distribution is required for accurate diagnosis of ischemic heart disease. Since myocardium motion occurs radially within the left ventricle wall and the ultrasound beam propagates axially, conventional approaches, such as tissue Doppler imaging and strain-rate imaging techniques, cannot provide us with enough quantitative information about local myocardial contraction. In order to resolve this problem, we propose a novel myocardial contraction imaging system which utilizes the weighted phase gradient method, the extended combined autocorrelation method, and the dynamic grid interpolation (DGI) method. From the simulation results, we conclude that the strain image's accuracy and contrast have been improved by the proposed method.

  15. Grid-based Parallel Data Streaming Implemented for the Gyrokinetic Toroidal Code

    SciTech Connect

    S. Klasky; S. Ethier; Z. Lin; K. Martins; D. McCune; R. Samtaney

    2003-09-15

    We have developed a threaded parallel data streaming approach using Globus to transfer multi-terabyte simulation data from a remote supercomputer to the scientist's home analysis/visualization cluster, as the simulation executes, with negligible overhead. Data transfer experiments show that this concurrent data transfer approach is more favorable compared with writing to local disk and then transferring this data to be post-processed. The present approach is conducive to using the grid to pipeline the simulation with post-processing and visualization. We have applied this method to the Gyrokinetic Toroidal Code (GTC), a 3-dimensional particle-in-cell code used to study microturbulence in magnetic confinement fusion from first principles plasma theory.

  16. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid.

    PubMed

    Byambasuren, Bat-Erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-02-19

    Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results.

  17. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid.

    PubMed

    Byambasuren, Bat-Erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-01-01

    Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results. PMID:26907274

  18. Magnetic field extraction of trap-based electron beams using a high-permeability grid

    SciTech Connect

    Hurst, N. C.; Danielson, J. R.; Surko, C. M.

    2015-07-15

    A method to form high quality electrostatically guided lepton beams is explored. Test electron beams are extracted from tailored plasmas confined in a Penning-Malmberg trap. The particles are then extracted from the confining axial magnetic field by passing them through a high magnetic permeability grid with radial tines (a so-called “magnetic spider”). An Einzel lens is used to focus and analyze the beam properties. Numerical simulations are used to model non-adiabatic effects due to the spider, and the predictions are compared with the experimental results. Improvements in beam quality are discussed relative to the use of a hole in a high permeability shield (i.e., in lieu of the spider), and areas for further improvement are described.

  19. GaInN-based light emitting diodes embedded with wire grid polarizers

    NASA Astrophysics Data System (ADS)

    Cho, Jaehee; Meyaard, David S.; Ma, Ming; Schubert, E. Fred

    2015-02-01

    The use of liquid crystal displays (LCDs) has become prevalent in our modern, technology-driven society. We demonstrate a linearly polarized GaInN light-emitting diode (LED) embedded with a wire-grid polarizer (WGP). A derivation of rigorous coupled-wave analysis is given, starting from Maxwell's equations and finishing by matching the boundary conditions in the grating and other regions of interest. Simulated results are shown for various grating parameters, including different metals used for the grating and the metal-line dimensions. An LED fabrication process is developed for demonstrating WGP-LEDs. A clear polarization preference for the light coming out of the WGP-LED is experimentally demonstrated, with a polarization ratio over 0.90, in good agreement with simulation results.

  20. Job Clusters as Perceived by High School Students.

    ERIC Educational Resources Information Center

    Vivekananthan, Pathe S.; Weber, Larry J.

    Career awareness is described as the manner by which students cluster jobs. The clustering of jobs was based on the students' perceptions of similarities among job titles. Interest inventories were used as the basis to select 36 job titles. Seventy-eight high school students sorted the stimuli into several categories. The multidimensional scaling…
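Classical multidimensional scaling of a dissimilarity matrix, the technique this study applies to job-title sortings, can be sketched as follows (Torgerson's method, shown here on synthetic points rather than the 36 job titles):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical multidimensional scaling: embed items in k dimensions
    from a matrix D of pairwise dissimilarities (Torgerson's method).
    A sketch of the named technique, not the study's own code.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D**2) @ J             # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]         # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Three "jobs" with known 2-D positions; the embedding recovers their
# pairwise distances up to rotation and reflection.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D)
print(np.round(np.linalg.norm(Y[:, None] - Y[None, :], axis=-1), 6))
```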

  1. Impact of Spatial Scale on Calibration and Model Output for a Grid-based SWAT Model

    NASA Astrophysics Data System (ADS)

    Pignotti, G.; Vema, V. K.; Rathjens, H.; Raj, C.; Her, Y.; Chaubey, I.; Crawford, M. M.

    2014-12-01

    The traditional implementation of the Soil and Water Assessment Tool (SWAT) model utilizes common landscape characteristics known as hydrologic response units (HRUs). Discretization into HRUs provides a simple, computationally efficient framework for simulation, but also represents a significant limitation of the model as spatial connectivity between HRUs is ignored. SWATgrid, a newly developed, distributed version of SWAT, provides modified landscape routing via a grid, overcoming these limitations. However, the current implementation of SWATgrid has significant computational overhead, which effectively precludes traditional calibration and limits the total number of grid cells in a given modeling scenario. Moreover, as SWATgrid is a relatively new modeling approach, it remains largely untested with little understanding of the impact of spatial resolution on model output. The objective of this study was to determine the effects of user-defined input resolution on SWATgrid predictions in the Upper Cedar Creek Watershed (near Auburn, IN, USA). Original input data, nominally at 30 m resolution, was rescaled for a range of resolutions between 30 and 4,000 m. A 30 m traditional SWAT model was developed as the baseline for model comparison. Monthly calibration was performed, and the calibrated parameter set was then transferred to all other SWAT and SWATgrid models to isolate the effects of resolution on prediction uncertainty relative to the baseline. Model output was evaluated with respect to stream flow at the outlet and water quality parameters. Additionally, the output of SWATgrid models was compared to that of traditional SWAT models at each resolution, utilizing the same scaled input data. A secondary objective considered the effect of scale on calibrated parameter values, where each standard SWAT model was calibrated independently, and parameters were transferred to SWATgrid models at equivalent scales. For each model, computational requirements were evaluated
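The rescaling of 30 m inputs to coarser grids can be sketched as simple block averaging (an assumed aggregation rule for illustration; the study's actual resampling scheme may differ):

```python
import numpy as np

def block_average(raster, factor):
    """Coarsen a raster by an integer factor using block means,
    e.g. 30 m cells -> 300 m cells with factor=10. Trailing rows and
    columns that do not fill a full block are cropped.
    """
    rows = (raster.shape[0] // factor) * factor
    cols = (raster.shape[1] // factor) * factor
    r = raster[:rows, :cols]
    return r.reshape(rows // factor, factor,
                     cols // factor, factor).mean(axis=(1, 3))

dem = np.arange(36, dtype=float).reshape(6, 6)
print(block_average(dem, 3))  # 2 x 2 grid of 3 x 3 block means
```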

  2. Ground-based magnetometer arrays and geomagnetically induced currents in power grids: science and operations

    NASA Astrophysics Data System (ADS)

    Thomson, A. W.; Beggan, C.; Kelly, G.

    2012-12-01

    Space weather impacts on worldwide technological infrastructures are likely to be at their greatest between 2012 and 2015, during the peak and early descending phase of the current solar cycle. Examples of infrastructures at risk include power grids, pipelines, railways, communications, satellite operations, high latitude air travel and global navigation satellite systems. For example, severe magnetic storms in March 1989 and October 2003, near the peaks of recent solar cycles, were particularly significant in causing problems for a wide variety of technologies. Further back in time, severe storms in September 1859 and May 1921 are known to have been a problem for the more rudimentary technologies of the time. In this talk we will review how magnetic observatory data can best contribute to ongoing efforts to develop new space weather data products, particularly in aiding the management of electrical power transmission networks. Examples of existing and perhaps some suggestions for new data products and services will be given. Throughout, the need for near to real time data will be emphasised. We will also emphasise the importance of developing regional magnetometer networks and promoting magnetic data sharing to help turn research into operations. Developing research consortia, for example as in the European EURISGIC GIC project (www.eurisgic.eu), where magnetic and other data, as well as expertise, is pooled and shared is also recommended and adds to our ability to monitor the dynamic state of magnetospheric and ionospheric currents. We will discuss how industry currently perceives the space weather hazard, using recent examples from the power industry, where the concerns are with the risk to high voltage transformers and the safe and uninterrupted distribution of electrical power. Industry measurements of geomagnetic induced currents (GIC) are also vital for the validation of scientific models of the flow of GIC in power systems. 
Examples of GIC data sources and

  3. GridTool: A surface modeling and grid generation tool

    NASA Technical Reports Server (NTRS)

    Samareh-Abolhassani, Jamshid

    1995-01-01

    GridTool is designed around the concept that the surface grids are generated on a set of bi-linear patches. This type of grid generation is quite easy to implement, and it avoids the problems associated with complex CAD surface representations and associated surface parameterizations. However, the resulting surface grids are close to but not on the original CAD surfaces. This problem can be alleviated by projecting the resulting surface grids onto the original CAD surfaces. GridTool is designed primarily for unstructured grid generation systems. Currently, GridTool supports the VGRID and FELISA systems, and it can be easily extended to support other unstructured grid generation systems. The data in GridTool is stored parametrically so that once the problem is set up, one can modify the surfaces and the entire set of points, curves and patches will be updated automatically. This is very useful in a multidisciplinary design and optimization process. GridTool is written entirely in ANSI 'C', the interface is based on the FORMS library, and the graphics are based on the GL library. The code has been tested successfully on IRIS workstations running IRIX4.0 and above. Memory is allocated dynamically; therefore, memory size will depend on the complexity of the geometry/grid. The GridTool data structure is based on a linked-list structure which allows the required memory to expand and contract dynamically according to the user's data size and action. The data structure contains several types of objects such as points, curves, patches, sources and surfaces. At any given time, there is always an active object which is drawn in magenta, or in their highlighted colors as defined by the resource file which will be discussed later.
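The linked-list object store described above can be sketched as follows (GridTool itself is ANSI C; this Python sketch only illustrates the expand/contract-on-demand idea and the active-object convention, with hypothetical object names):

```python
class GridObject:
    """One node in a doubly linked list of geometry objects
    (points, curves, patches, sources, surfaces)."""
    def __init__(self, kind, name, data=None):
        self.kind, self.name, self.data = kind, name, data
        self.prev = self.next = None

class ObjectList:
    def __init__(self):
        self.head = self.tail = None
        self.active = None            # the "active object", drawn highlighted

    def append(self, obj):
        if self.tail is None:
            self.head = self.tail = obj
        else:
            self.tail.next, obj.prev = obj, self.tail
            self.tail = obj
        self.active = obj             # a newly added object becomes active
        return obj

    def remove(self, obj):
        # Unlink the node; storage shrinks with the user's data.
        if obj.prev:
            obj.prev.next = obj.next
        else:
            self.head = obj.next
        if obj.next:
            obj.next.prev = obj.prev
        else:
            self.tail = obj.prev
        if self.active is obj:
            self.active = self.tail

objs = ObjectList()
objs.append(GridObject("point", "p1", (0.0, 0.0, 0.0)))
patch = objs.append(GridObject("patch", "s1"))
print(objs.active.name)  # s1
```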

  4. Impact of Heterogeneity-Based Dose Calculation Using a Deterministic Grid-Based Boltzmann Equation Solver for Intracavitary Brachytherapy

    SciTech Connect

    Mikell, Justin K.; Klopp, Ann H.; Gonzalez, Graciela M.N.; Kisling, Kelly D.; Price, Michael J.; Berner, Paula A.; Eifel, Patricia J.; Mourtada, Firas

    2012-07-01

    Purpose: To investigate the dosimetric impact of the heterogeneity dose calculation Acuros (Transpire Inc., Gig Harbor, WA), a grid-based Boltzmann equation solver (GBBS), for brachytherapy in a cohort of cervical cancer patients. Methods and Materials: The impact of heterogeneities was retrospectively assessed in treatment plans for 26 patients who had previously received {sup 192}Ir intracavitary brachytherapy for cervical cancer with computed tomography (CT)/magnetic resonance-compatible tandems and unshielded colpostats. The GBBS models sources, patient boundaries, applicators, and tissue heterogeneities. Multiple GBBS calculations were performed with and without solid model applicator, with and without overriding the patient contour to 1 g/cm{sup 3} muscle, and with and without overriding contrast materials to muscle or 2.25 g/cm{sup 3} bone. Impact of source and boundary modeling, applicator, tissue heterogeneities, and sensitivity of CT-to-material mapping of contrast were derived from the multiple calculations. American Association of Physicists in Medicine Task Group 43 (TG-43) guidelines and the GBBS were compared for the following clinical dosimetric parameters: Manchester points A and B, International Commission on Radiation Units and Measurements (ICRU) report 38 rectal and bladder points, three and nine o'clock, and D{sub 2cm3} to the bladder, rectum, and sigmoid. Results: Points A and B, D{sub 2cm3} bladder, ICRU bladder, and three and nine o'clock were within 5% of TG-43 for all GBBS calculations. The source and boundary and applicator account for most of the differences between the GBBS and TG-43 guidelines. The D{sub 2cm3} rectum (n = 3), D{sub 2cm3} sigmoid (n = 1), and ICRU rectum (n = 6) had differences of >5% from TG-43 for the worst case incorrect mapping of contrast to bone. Clinical dosimetric parameters were within 5% of TG-43 when rectal and balloon contrast were mapped to bone and radiopaque packing was not overridden. 
Conclusions

  5. Enhancing control of grid distribution in algebraic grid generation

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Shih, T. I.-P.; Roelke, R. J.

    1992-01-01

    Three techniques are presented to enhance the control of grid-point distribution for a class of algebraic grid generation methods known as the two-, four- and six-boundary methods. First, multidimensional stretching functions are presented, and a technique is devised to construct them based on the desired distribution of grid points along certain boundaries. Second, a normalization procedure is proposed which allows more effective control over orthogonality of grid lines at boundaries and curvature of grid lines near boundaries. And third, interpolating functions based on tension splines are introduced to control curvature of grid lines in the interior of the spatial domain. In addition to these three techniques, consistency conditions are derived which must be satisfied by all user-specified data employed in the grid generation process to control grid-point distribution. The usefulness of the techniques developed in this study was demonstrated by using them in conjunction with the two- and four-boundary methods to generate several grid systems, including a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.
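A one-dimensional stretching function of the kind discussed can be sketched with a hyperbolic tangent (one common choice for clustering grid points near a boundary; the paper's multidimensional stretching functions and tension splines are more general):

```python
import numpy as np

def tanh_stretch(n, beta=2.0):
    """One-sided hyperbolic-tangent stretching of n+1 points on [0, 1],
    clustering points near x = 0. Larger beta -> stronger clustering.
    """
    eta = np.linspace(0.0, 1.0, n + 1)
    return 1.0 + np.tanh(beta * (eta - 1.0)) / np.tanh(beta)

x = tanh_stretch(10)
print(np.round(x, 3))  # spacing grows monotonically away from x = 0
```

With `beta=2` the first cell is roughly an order of magnitude smaller than the last, the kind of control over grid-point distribution the two-, four- and six-boundary methods take as input.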

  6. Grid-based methods for diatomic quantum scattering problems: a finite-element, discrete variable representation in prolate spheroidal coordinates

    SciTech Connect

    Tao, Liang; McCurdy, C.W.; Rescigno, T.N.

    2008-11-25

    We show how to combine finite elements and the discrete variable representation in prolate spheroidal coordinates to develop a grid-based approach for quantum mechanical studies involving diatomic molecular targets. Prolate spheroidal coordinates are a natural choice for diatomic systems and have been used previously in a variety of bound-state applications. The use of exterior complex scaling in the present implementation allows for a transparently simple way of enforcing Coulomb boundary conditions and therefore straightforward application to electronic continuum problems. Illustrative examples involving the bound and continuum states of H2+, as well as the calculation of photoionization cross sections, show that the speed and accuracy of the present approach offer distinct advantages over methods based on single-center expansions.

  7. CRT--Cascade Routing Tool to define and visualize flow paths for grid-based watershed models

    USGS Publications Warehouse

    Henson, Wesley R.; Medina, Rose L.; Mayers, C. Justin; Niswonger, Richard G.; Regan, R.S.

    2013-01-01

    The U.S. Geological Survey Cascade Routing Tool (CRT) is a computer application for watershed models that include the coupled Groundwater and Surface-water FLOW model, GSFLOW, and the Precipitation-Runoff Modeling System (PRMS). CRT generates output to define cascading surface and shallow subsurface flow paths for grid-based model domains. CRT requires a land-surface elevation for each hydrologic response unit (HRU) of the model grid; these elevations can be derived from a Digital Elevation Model raster data set of the area containing the model domain. Additionally, a list is required of the HRUs containing streams, swales, lakes, and other cascade termination features along with indices that uniquely define these features. Cascade flow paths are determined from the altitudes of each HRU. Cascade paths can cross any of the four faces of an HRU to a stream or to a lake within or adjacent to an HRU. Cascades can terminate at a stream, lake, or HRU that has been designated as a watershed outflow location.
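The steepest-descent cascade idea can be sketched on a toy elevation grid (four face neighbours only, matching the abstract's "four faces of an HRU"; CRT's actual handling of streams, lakes and swales is richer than this sketch):

```python
import numpy as np

def cascade_directions(elev):
    """For each cell of an elevation grid, return the (dr, dc) step to
    the lowest of its four face neighbours (steepest descent), or
    (0, 0) where the cell is a local low (a cascade termination).
    """
    nr, nc = elev.shape
    dirs = np.zeros((nr, nc, 2), dtype=int)
    for r in range(nr):
        for c in range(nc):
            best = elev[r, c]
            step = (0, 0)
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nr and 0 <= cc < nc and elev[rr, cc] < best:
                    best = elev[rr, cc]
                    step = (dr, dc)
            dirs[r, c] = step
    return dirs

elev = np.array([[3.0, 2.0],
                 [2.0, 1.0]])
print(cascade_directions(elev)[0, 0])  # steps toward the lower neighbour
```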

  8. Interactions of Copepods with Fractal-Grid Generated Turbulence based on Tomo-PIV and 3D-PTV

    NASA Astrophysics Data System (ADS)

    Sun, Zhengzhong; Krizan, Daniel; Longmire, Ellen

    2014-11-01

    A copepod escapes from predation by sensing fluid motion caused by the predator. It is thought that the escape reaction is elicited by a threshold value of the maximum principal strain rate (MPSR) in the flow. The present experimental work attempts to investigate and quantify the MPSR threshold value. In the experiment, copepods interact with turbulence generated by a fractal grid in a recirculating channel. The turbulent flow is measured by time-resolved Tomo-PIV, while the copepod motion is tracked simultaneously through 3D-PTV. Escape reactions are detected based on copepod trajectories and velocity vectors, while the surrounding hydrodynamic information is retrieved from the corresponding location in the 3D instantaneous flow field. Measurements are performed at three locations downstream of the fractal grid, such that various turbulence levels can be achieved. Preliminary results show that the number of escape reactions decreases at locations with reduced turbulence levels, where shorter jump distances and smaller changes in swimming orientation are exhibited. Detailed quantitative results of MPSR threshold values and the dynamics of copepod escape will be presented. Supported by NSF-IDBR Grant #0852875.
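The thresholded quantity itself is well defined: the maximum principal strain rate is the largest eigenvalue of the symmetric rate-of-strain tensor. A minimal sketch of that computation (not the authors' Tomo-PIV processing chain):

```python
import numpy as np

def max_principal_strain_rate(grad_u):
    """Maximum principal strain rate from a 3x3 velocity-gradient
    tensor: the largest eigenvalue of the symmetric rate-of-strain
    tensor S = (grad_u + grad_u^T) / 2.
    """
    S = 0.5 * (grad_u + grad_u.T)
    return float(np.linalg.eigvalsh(S).max())

# Simple shear du/dy = 2 s^-1: principal strain rates are +/-1 s^-1.
G = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(max_principal_strain_rate(G))  # 1.0
```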

  9. Resource parallel provisioning scheme for collaborating service in optical grid network

    NASA Astrophysics Data System (ADS)

    Wu, Runze; Ji, Yuefeng

    2008-11-01

    Divisible loads can be partitioned into any number of independent sub-tasks, which can be mapped onto different platforms and processed as parallel jobs. Divisible load theory (DLT) was introduced into parallel and distributed computing systems to exploit resources distributed across different locations for processing efficiency, and it can be extended to grid-based distributed multimedia and application systems. A lightpath scheduling algorithm based on DLT is proposed to realize on-demand optical resource scheduling in an optical grid under the requirements of data-intensive applications, especially parallel and distributed systems. The proposed algorithm adopts divisible load theory as its load distribution method and extends the distributed divisible-load scheduling algorithm to match the multichannel application requirements of an optical grid network. The proposed method deploys multiple wavelengths at the originating node and builds parallel lightpaths to transmit independent divisible loads to collaborating nodes for a large task.
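Ignoring communication delays, the equal-finish-time partition at the heart of DLT reduces to shares proportional to node speed. A minimal sketch (full DLT also accounts for link rates and distribution order, which this sketch omits):

```python
def divisible_load_shares(total, rates):
    """Split a divisible load so all nodes finish simultaneously,
    ignoring communication delays: share_i proportional to rate_i
    (work units per second).
    """
    total_rate = sum(rates)
    return [total * r / total_rate for r in rates]

shares = divisible_load_shares(100.0, [1.0, 2.0, 2.0])
print(shares)  # [20.0, 40.0, 40.0]
# Equal finish times: share / rate is the same for every node.
print([s / r for s, r in zip(shares, [1.0, 2.0, 2.0])])  # [20.0, 20.0, 20.0]
```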

  10. It's My Job: Job Descriptions for Over 30 Camp Jobs.

    ERIC Educational Resources Information Center

    Klein, Edie

    This book was created to assist youth-camp directors define their camp jobs to improve employee performance assessment, training, and hiring. The book, aimed at clarifying issues in fair-hiring practices required by the 1990 Americans with Disabilities Act (ADA), includes the descriptions of 31 jobs. Each description includes the job's minimum…

  11. A bioinformatics knowledge discovery in text application for grid computing

    PubMed Central

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-01-01

    Background A fundamental activity in biomedical research is Knowledge Discovery, the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible means to tackle the intensive use of Information and Communication resources in the life sciences. The goal of this work was to develop a software middleware solution that allows the many knowledge discovery applications to exploit scalable and distributed computing systems and thereby achieve intensive use of ICT resources. Methods The development of a grid application for Knowledge Discovery in Text using a middleware-based methodology is presented. The system must be able to model the user application and process the work by creating many parallel jobs to distribute across the computational nodes. Finally, the system must be aware of the computational resources available and their status, and must be able to monitor the execution of the parallel jobs. These operational requirements led to the design of a middleware that is specialized using user application modules. It includes a graphical user interface giving access to a node search system, a load-balancing system and a transfer optimizer to reduce communication costs. Results A middleware solution prototype and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out and the results are shown for the named entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. Conclusion In this paper we discuss the development of a grid application based on a middleware solution. 
It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and
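
    The fan-out step described in the Methods, turning one knowledge-discovery task into many parallel jobs for the computational nodes, can be sketched as follows; the function names and the toy corpus are hypothetical illustrations, not from the paper's middleware.

```python
def split_into_jobs(doc_ids, n_nodes):
    """Partition a document collection into one chunk per computational
    node (round-robin); each chunk becomes an independent grid job."""
    chunks = [[] for _ in range(n_nodes)]
    for i, doc in enumerate(doc_ids):
        chunks[i % n_nodes].append(doc)
    return chunks

def ner_job(chunk, corpus):
    """Stand-in for the named-entity-recognition work of one job:
    count documents in the chunk that mention a symptom keyword."""
    return sum(1 for d in chunk if "fever" in corpus.get(d, ""))

corpus = {1: "patient reports fever", 2: "no findings", 3: "fever and cough"}
jobs = split_into_jobs([1, 2, 3], 2)
# In the real system each chunk would be submitted to a grid node; here
# the jobs run sequentially and the partial results are aggregated.
hits = sum(ner_job(j, corpus) for j in jobs)
```

    In the middleware itself, the load-balancing system would decide the chunk sizes and the transfer optimizer would place chunks near their data.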

  12. Evaluating the Information Power Grid using the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Frumkin, Michael A.

    2004-01-01

    The NAS Grid Benchmarks (NGB) are a collection of synthetic distributed applications designed to rate the performance and functionality of computational grids. We compare several implementations of the NGB to determine the programmability and efficiency of NASA's Information Power Grid (IPG), whose services are mostly based on the Globus Toolkit. We report on the overheads involved in porting existing NGB reference implementations to the IPG. No changes were made to the component tasks of the NGB themselves; the efficiency of running them on the IPG can still be improved.

  13. Grid in Geosciences

    NASA Astrophysics Data System (ADS)

    Petitdidier, Monique; Schwichtenberg, Horst

    2010-05-01

    The worldwide Earth science community covers a mosaic of disciplines and players such as academia, industry, national surveys, and international organizations. It provides a scientific basis for addressing societal issues, which requires that the Earth science community utilize massive amounts of data, accessed both in real time and from distributed archives. These data are usually spread among many different organizations and data centers. This reliance on massive, distributed data holdings explains the Earth science community's interest in Grid technology, an interest also evident in the variety of applications ported and tools developed. In parallel to the participation in EGEE (Enabling Grids for E-sciencE), other projects involving ES disciplines were or have been carried out as projects related to EGEE, such as CYCLOPS, SEEGrid, EELA2 and EUASIA, or outside it, e.g., in the framework of WGISS/CEOS. Numerous applications in atmospheric chemistry, meteorology, seismology, hydrology, pollution, climate and biodiversity have been deployed successfully on the Grid. In order to fulfil the requirements of risk management, several prototype applications have been deployed using OGC (Open Geospatial Consortium) components with Grid middleware; examples are in hydrology for flood or Black Sea Catchment monitoring, and in fire monitoring. Meteorological, pollution and climate applications are based on meteorological models ported to the Grid, such as MM5 (Mesoscale Model), WRF (Weather Research and Forecasting), RAMS (Regional Atmospheric Modeling System) and CAM (Community Atmosphere Model). Seismological applications on the Grid are numerous in regions where seismic activity is high and local computing resources are scarce; interfaces and gateways have therefore been developed to facilitate access to data and specific software and to avoid duplication of work. A portal has been deployed to give academic users access to the commercial seismological software Geocluster.
In this presentation examples of such applications will

  14. A Competency Based Assessment Approach to Training and Job Performance in the Field of Chemical Dependency.

    ERIC Educational Resources Information Center

    Armstrong, Dennis A.; Ottenheimer, Howard M.

    1979-01-01

    Competency based education is a model which can be utilized for training and evaluating alcohol and drug abuse personnel. An identification and assessment process is described relating to the development and implementation of a competency based bachelor's degree program for students majoring in alcohol and drug abuse related areas. (Author/BEF)

  15. Chicago Principals under School Based Management: New Roles and Realities of the Job.

    ERIC Educational Resources Information Center

    Ford, Darryl J.

    The changing role of the principal under school-based management is examined in this paper. The implementation of the Chicago School Reform Act in 1989 shifted responsibility for school governance from the Central Board of Education to school-based management councils at each of the city's schools. Interviews were conducted with 10 elementary and…

  16. Grid reliability

    NASA Astrophysics Data System (ADS)

    Saiz, P.; Andreeva, J.; Cirstoiu, C.; Gaidioz, B.; Herrala, J.; Maguire, E. J.; Maier, G.; Rocha, R.

    2008-07-01

    Thanks to the Grid, users have access to computing resources distributed all over the world. The Grid hides the complexity and the differences of its heterogeneous components. In such a distributed system, it is clearly very important that errors are detected as soon as possible, and that the procedure to solve them is well established. We focused on two of its main elements: the workload and the data management systems. We developed an application to investigate the efficiency of the different centres. Furthermore, our system can be used to categorize the most common error messages, and control their time evolution.
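
    The error-categorization idea, grouping messages that differ only in variable details such as job ids or paths, can be sketched with simple normalisation rules; the rules and log lines below are illustrative, not the application's actual patterns.

```python
import re
from collections import Counter

# Hypothetical normalisation rules: collapse variable details (numbers,
# paths) so that messages differing only in those details share a category.
RULES = [
    (re.compile(r"\b\d+\b"), "<N>"),
    (re.compile(r"/\S+"), "<PATH>"),
]

def categorize(messages):
    """Map each raw error message to a normalised category and count them."""
    counts = Counter()
    for msg in messages:
        for pattern, token in RULES:
            msg = pattern.sub(token, msg)
        counts[msg] += 1
    return counts

logs = [
    "Job 1234 failed: cannot stat /tmp/x",
    "Job 987 failed: cannot stat /scratch/y",
    "Connection timed out after 30 s",
]
top = categorize(logs).most_common(1)[0]  # most frequent error category
```

    Re-running the count over successive time windows gives the time evolution of each category mentioned in the abstract.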

  17. Statistical Computations with Astrogrid and the Grid

    NASA Astrophysics Data System (ADS)

    Nichol, R.; Smith, G.; Miller, C.; Genovese, C.; Wasserman, L.; Bryan, B.; Gray, A.; Schneider, J.; Moore, A.

    We outline our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g., AstroGrid) and the computational grid. We discuss the construction of the VOTechBroker, a modular software tool designed to abstract the tasks of submission and management of a large number of computational jobs on a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We present our planned usage of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS, as well as in fitting over a million CMBfast models to the WMAP data.
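
    The n-point correlation workload mentioned above divides naturally into independent jobs. A brute-force sketch of the 2-point case using the natural DD/RR estimator follows; this is a generic textbook estimator, not necessarily the one used in the project.

```python
import itertools
import math

def pair_counts(points, bins):
    """Count point pairs whose separation falls in each distance bin."""
    counts = [0] * (len(bins) - 1)
    for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
        r = math.hypot(x1 - x2, y1 - y2)
        for i in range(len(bins) - 1):
            if bins[i] <= r < bins[i + 1]:
                counts[i] += 1
                break
    return counts

def xi_estimate(data, randoms, bins):
    """Natural estimator: xi = (DD/RR) * Nr(Nr-1)/(Nd(Nd-1)) - 1.
    Each bin, or each chunk of the data, could be farmed out as an
    independent job by a broker such as the VOTechBroker."""
    dd, rr = pair_counts(data, bins), pair_counts(randoms, bins)
    nd, nr = len(data), len(randoms)
    norm = (nr * (nr - 1)) / (nd * (nd - 1))
    return [dd_i / rr_i * norm - 1 if rr_i else float("nan")
            for dd_i, rr_i in zip(dd, rr)]
```

    The O(N^2) pair loop is what makes SDSS-scale correlation functions a grid-sized computation.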

  18. Job satisfaction, job stress and psychosomatic health problems in software professionals in India

    PubMed Central

    Madhura, Sahukar; Subramanya, Pailoor; Balaram, Pradhan

    2014-01-01

    This questionnaire-based study investigates the correlations between job satisfaction, job stress and psychosomatic health in Indian software professionals, and examines how yoga-practicing Indian software professionals cope with stress and psychosomatic health problems. The sample consisted of yoga-practicing and non-yoga-practicing Indian software professionals working in India. The findings of this study show that there is significant correlation among job satisfaction, job stress and health. Among yoga practitioners, job satisfaction is not significantly related to psychosomatic health, whereas in the non-yoga group psychosomatic health symptoms showed a significant relationship with job satisfaction. PMID:25598623

  19. A paradigm for parallel unstructured grid generation

    SciTech Connect

    Gaither, A.; Marcum, D.; Reese, D.

    1996-12-31

    In this paper, a sequential 2D unstructured grid generator based on iterative point insertion and local reconnection is coupled with a Delaunay tessellation domain decomposition scheme to create a scalable parallel unstructured grid generator. The Message Passing Interface (MPI) is used for distributed communication in the parallel grid generator. This work attempts to provide a generic framework to enable the parallelization of fast sequential unstructured grid generators in order to compute grand-challenge scale grids for Computational Field Simulation (CFS). Motivation for moving from sequential to scalable parallel grid generation is presented. Delaunay tessellation and iterative point insertion and local reconnection (advancing front method only) unstructured grid generation techniques are discussed with emphasis on how these techniques can be utilized for parallel unstructured grid generation. Domain decomposition techniques are discussed for both Delaunay and advancing front unstructured grid generation, with emphasis placed on the differences needed for both grid quality and algorithmic efficiency.
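
    The domain-decomposition step, splitting the point set into subdomains that can be meshed independently in parallel, can be sketched with a recursive bisection; this is a generic scheme, not the paper's Delaunay-based decomposition.

```python
def decompose(points, depth=1):
    """Recursively bisect a 2D point set along the longer axis of its
    bounding box, yielding 2**depth spatial subdomains. Each subdomain
    can then be meshed independently on a separate MPI rank."""
    if depth == 0:
        return [points]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    axis = 0 if (max(xs) - min(xs)) >= (max(ys) - min(ys)) else 1
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2  # median split keeps subdomains balanced
    return decompose(pts[:mid], depth - 1) + decompose(pts[mid:], depth - 1)
```

    A real parallel mesher must additionally mesh (or stitch) the interfaces between subdomains, which is where the grid-quality differences discussed in the paper arise.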

  20. Striped ratio grids for scatter estimation

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Wang, Adam S.; Star-Lack, Josh

    2016-03-01

    Striped ratio grids are a new concept for scatter management in cone-beam CT. These grids are a modification of conventional anti-scatter grids and consist of stripes which alternate between high grid ratio and low grid ratio. Such a grid is related to existing hardware concepts for scatter estimation such as blocker-based methods or primary modulation, but rather than modulating the primary, the striped ratio grid modulates the scatter. The transitions between adjacent stripes can be used to estimate and subtract the remaining scatter. However, these transitions could be contaminated by variation in the primary radiation. We describe a simple nonlinear image processing algorithm to estimate scatter, and proceed to validate the striped ratio grid on experimental data of a pelvic phantom. The striped ratio grid is emulated by combining data from two scans with different grids. Preliminary results are encouraging and show a significant reduction of scatter artifacts.
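
    The transition arithmetic can be illustrated with a toy model: if the measured signal under a stripe is primary plus the stripe's scatter transmission times the scatter, and the primary is assumed smooth across a transition, the signal jump reveals the scatter. The transmission values below are illustrative, not taken from the paper.

```python
def estimate_scatter(i_low, i_high, t_low=0.5, t_high=0.2):
    """Estimate scatter and primary at a stripe transition.

    Toy model: measured signal I = P + T * S, with primary P assumed
    equal on both sides of the transition and T the stripe's scatter
    transmission (t_low for the low-ratio stripe, t_high for the
    high-ratio stripe; both values here are hypothetical). Then
        S = (I_low - I_high) / (t_low - t_high),  P = I_low - t_low * S.
    """
    s = (i_low - i_high) / (t_low - t_high)
    p = i_low - t_low * s
    return p, s
```

    The paper's nonlinear algorithm exists precisely because real primary radiation is not constant across a transition; this two-point solve is only the idealised core of the idea.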

  1. Faces of the Recovery Act: The Impact of Smart Grid

    ScienceCinema

    President Obama

    2016-07-12

    On October 27th, Baltimore Gas & Electric was selected to receive $200 million for Smart Grid innovation projects under the Recovery Act. Watch as members of their team, along with President Obama, explain how building a smarter grid will help consumers cut their utility bills, battle climate change and create jobs.

  2. Faces of the Recovery Act: The Impact of Smart Grid

    SciTech Connect

    President Obama

    2009-11-24

    On October 27th, Baltimore Gas & Electric was selected to receive $200 million for Smart Grid innovation projects under the Recovery Act. Watch as members of their team, along with President Obama, explain how building a smarter grid will help consumers cut their utility bills, battle climate change and create jobs.

  3. How I teach evidence-based epidural information in a hospital and keep my job.

    PubMed

    Tumblin, Ann

    2007-01-01

    A childbirth educator reveals her dilemma in teaching evidence-based practice in today's high-tech birth climate. She focuses on strategies to use when sharing epidural information with expectant parents. PMID:18769516

  4. Modelling shear flows with smoothed particle hydrodynamics and grid-based methods

    NASA Astrophysics Data System (ADS)

    Junk, Veronika; Walch, Stefanie; Heitsch, Fabian; Burkert, Andreas; Wetzstein, Markus; Schartmann, Marc; Price, Daniel

    2010-09-01

    Given the importance of shear flows for astrophysical gas dynamics, we study the evolution of the Kelvin-Helmholtz instability (KHI) analytically and numerically. We derive the dispersion relation for the two-dimensional KHI including viscous dissipation. The resulting expression for the growth rate is then used to estimate the intrinsic viscosity of four numerical schemes depending on code-specific as well as on physical parameters. Our set of numerical schemes includes the Tree-SPH code VINE, an alternative smoothed particle hydrodynamics (SPH) formulation developed by Price and the finite-volume grid codes FLASH and PLUTO. In the first part, we explicitly demonstrate the effect of dissipation-inhibiting mechanisms such as the Balsara viscosity on the evolution of the KHI. With VINE, increasing density contrasts lead to a continuously increasing suppression of the KHI (with complete suppression from a contrast of 6:1 or higher). The alternative SPH formulation including an artificial thermal conductivity reproduces the analytically expected growth rates up to a density contrast of 10:1. The second part addresses the shear flow evolution with FLASH and PLUTO. Both codes result in a consistent non-viscous evolution (in the equal as well as in the different density case) in agreement with the analytical prediction. The viscous evolution studied with FLASH shows minor deviations from the analytical prediction.
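
    The viscous dispersion relation itself is not reproduced in the abstract, but the classic inviscid growth rate for a sharp shear layer, against which such measurements are typically compared, can be sketched as follows; this is the textbook result without gravity or viscosity, not the paper's viscous expression.

```python
import math

def khi_growth_rate(k, rho1, rho2, du):
    """Inviscid KHI growth rate for a sharp velocity discontinuity:
        sigma = k * sqrt(rho1 * rho2) / (rho1 + rho2) * |dU|
    For equal densities this reduces to k*|dU|/2; a larger density
    contrast lowers sigma, consistent with the slower growth the
    paper reports for the different-density runs."""
    return k * math.sqrt(rho1 * rho2) / (rho1 + rho2) * abs(du)
```

    Comparing measured growth against this rate is one way to quantify the intrinsic numerical viscosity of a scheme, as the paper does with its viscous generalisation.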

  5. A fast and stable solver for acoustic scattering problems based on the nonuniform grid approach.

    PubMed

    Chernokozhin, Evgeny; Brick, Yaniv; Boag, Amir

    2016-01-01

    A fast and stable boundary element method (BEM) algorithm for solving external problems of acoustic scattering by impenetrable bodies is developed. The method employs the Burton-Miller integral equation, which provides stable convergence of iterative solvers, and a generalized multilevel nonuniform grid (MLNG) algorithm for fast evaluation of field integrals. The MLNG approach is used here for the removal of computational bottlenecks involved with repeated matrix-vector multiplications as well as for the low-order basis function regularization of the hyper-singular integral kernel. The method is used for calculating the fields scattered by large acoustic scatterers, including nonconvex bodies with piece-wise smooth surfaces. As a result, the algorithm is capable of accurately incorporating high-frequency effects such as creeping waves and multiple-edge diffractions. In all cases, stable convergence of the method is observed. High accuracy of the method is demonstrated by comparison with the traditional BEM solution. The computational complexity of the method in terms of both the computation time and storage is estimated in practical computations and shown to be close to the asymptotic O(N log N) dependence. PMID:26827041

  6. Grid-based e-Labs for Pre-College Research in Physics and Astronomy

    NASA Astrophysics Data System (ADS)

    Loughran, Thomas J.

    2006-12-01

    e-Labs are Grid-enabled collaborative research environments which make data and analysis tools from large scientific collaborations available for pre-college science research. Guided-inquiry pedagogy underlies a workflow consisting of a series of performance milestones and associated resources, providing assistance for students in climbing the learning curve so as to query data, publish studies and interact with readers in the e-Lab. Teacher pages are available to help them use e-Labs in their classrooms. Three e-Labs--using data from the QuarkNet/WALTA Cosmic Ray collaboration, the CMS test beam, and the environmental sensing monitors from LIGO-Hanford--are currently deployed in production or testing runs. Students at Saint Joseph’s High School in South Bend, IN are extending the e-Lab pedagogy to astronomy using data from the National Optical Astronomy Observatory’s TLRBSE research projects, as well as data from the Spitzer Space Telescope under the NOAO/NASA Spitzer Research Teacher program. The prospects of developing and employing e-Labs to facilitate high school student research using data from virtual observatories will be discussed. e-Labs are being developed under the NSF-supported Interactions in Understanding the Universe (I2U2) program. (This presentation is sponsored by AAPT member Beth Marchant.)

  7. Spatial distribution of polychlorinated naphthalenes in the atmosphere across North China based on gridded field observations.

    PubMed

    Lin, Yan; Zhao, Yifan; Qiu, Xinghua; Ma, Jin; Yang, Qiaoyun; Shao, Min; Zhu, Tong

    2013-09-01

    Polychlorinated naphthalenes (PCNs) belong to a group of dioxin-like pollutants; however, little information is available on PCNs in North China. In this study, gridded field observations by passive air sampling at 90 sites were undertaken to determine the levels, spatial distributions, and sources of PCNs in the atmosphere of North China. A median concentration of 48 pg m(-3) (range: 10-2460 pg m(-3)) for ∑29PCNs indicated heavy PCN pollution. The compositional profile indicated that nearly 90% of PCNs observed were from thermal processes rather than from commercial mixtures. Regarding the source type, a quantitative apportionment suggested that local non-point emissions contributed two-thirds of the total PCNs observed in the study, whereas a point source, an electronic-waste recycling site, contributed a quarter of the total. The estimated toxic equivalent quantity for dioxin-like PCNs ranged from 0.97 to 687 fg TEQ m(-3), with the electronic-waste recycling site presenting the highest risk.
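
    The toxic-equivalent figures quoted above follow the usual TEQ arithmetic: a sum of congener concentrations weighted by toxic equivalency factors. A minimal sketch, with hypothetical congener names and TEF values, not those used in the study:

```python
def toxic_equivalent(concentrations, tefs):
    """TEQ = sum over congeners of concentration_i * TEF_i."""
    return sum(concentrations[c] * tefs[c] for c in concentrations)

# Illustrative numbers only: two made-up dioxin-like congeners,
# concentrations in fg m-3 and hypothetical TEFs.
conc = {"PCN-66": 120.0, "PCN-73": 30.0}
tef = {"PCN-66": 0.002, "PCN-73": 0.003}
teq = toxic_equivalent(conc, tef)  # fg TEQ m-3
```
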

  8. The Open Science Grid

    SciTech Connect

    Pordes, Ruth; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Wurthwein, Frank; Gardner, Rob; Wilde, Mike; /Chicago U. /Indiana U.

    2007-06-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  9. The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally gridded forcing data

    NASA Astrophysics Data System (ADS)

    McCabe, M. F.; Ershadi, A.; Jimenez, C.; Miralles, D. G.; Michel, D.; Wood, E. F.

    2016-01-01

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman-Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance. 
Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m-2; 0.65), followed closely by GLEAM (0.68; 64 W m-2
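
    The performance triplets quoted for each model can be computed from tower data with standard statistics. A sketch follows, assuming the middle number is an RMSD in W m-2 and using Pearson correlation for the first; the paper's exact metric definitions are not given in the abstract.

```python
import math

def evaluate(model, obs):
    """Pearson correlation and RMSD (in the flux units, W m-2) between
    modelled and tower-observed fluxes at one eddy-covariance site."""
    n = len(obs)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    rmsd = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    return cov / (sm * so), rmsd
```

    Repeating this per model and per forcing source (tower-based versus gridded) is what separates model skill from forcing sensitivity in the comparison described above.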

  10. A New Grid based Ionosphere Algorithm for GAGAN using Data Fusion Technique (ISRO GIVE Model-Multi Layer Data Fusion)

    NASA Astrophysics Data System (ADS)

    Srinivasan, Nirmala; Ganeshan, A. S.; Mishra, Saumyaketu

    2012-07-01

    Development of a region-specific ionosphere model is the key element in providing precision approach services for civil aviation with GAGAN (GPS Aided GEO Augmented Navigation). GAGAN is an Indian SBAS (Space Based Augmentation System) comprising three segments: a space segment (GEO and GPS), a ground segment (15 Indian reference stations (INRES), 2 master control centers and 3 ground uplink stations) and a user segment. The GAGAN system is intended to provide air navigation services for APV 1/1.5 precision approach over the Indian land mass and RNP 0.1 navigation service over the Indian Flight Information Region (FIR), conforming to the standards of GNSS ICAO-SARPS. The ionosphere, being the largest source of error, is of prime concern for an SBAS. India is a low-latitude country, which poses challenges for grid-based ionosphere algorithm development: large spatial and temporal gradients, the equatorial anomaly, depletions (bubbles), scintillations, etc. To meet the required GAGAN performance, it is necessary to develop and implement the ionosphere model best suited to the Indian region, since thin-shell models such as the planar model do not meet the requirement. ISRO GIVE Model - Multi Layer Data Fusion (IGM-MLDF) employs an innovative approach for computing the ionosphere corrections and confidences at pre-defined grid points at 350 km shell height. Ionospheric variations over the geomagnetic equatorial regions show peak-electron-density shell heights varying from 200 km to 500 km, so a single thin-shell assumption at 350 km is not valid over the Indian region. Hence, IGM-MLDF employs an innovative scheme of modeling at two shell heights; through empirical analysis, shell heights of 250 km and 450 km were chosen.
The ionosphere measurement
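
    A shell-based scheme maps slant ionospheric delays to the vertical at each shell height via a thin-shell obliquity factor. A minimal sketch using the 250 km and 450 km shell heights mentioned above; the mapping function below is the standard single-shell one, not necessarily IGM-MLDF's exact formulation.

```python
import math

RE = 6371.0  # mean Earth radius, km

def obliquity(elev_deg, shell_km):
    """Slant-to-vertical mapping factor for a thin shell at height h:
        M = 1 / cos(asin(RE * cos(E) / (RE + h)))
    where E is the satellite elevation angle. Lower shells give larger
    factors at low elevation, which is why the chosen shell height
    matters for low-latitude SBAS grids."""
    e = math.radians(elev_deg)
    s = RE * math.cos(e) / (RE + shell_km)
    return 1.0 / math.cos(math.asin(s))
```

    In a two-shell data-fusion scheme each measurement would be apportioned between the 250 km and 450 km shells rather than assigned wholly to a single 350 km shell.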

  11. The GEWEX LandFlux project: Evaluation of model evaporation using tower-based and globally gridded forcing data

    DOE PAGESBeta

    McCabe, M. F.; Ershadi, A.; Jimenez, C.; Miralles, D. G.; Michel, D.; Wood, E. F.

    2016-01-26

    Determining the spatial distribution and temporal development of evaporation at regional and global scales is required to improve our understanding of the coupled water and energy cycles and to better monitor any changes in observed trends and variability of linked hydrological processes. With recent international efforts guiding the development of long-term and globally distributed flux estimates, continued product assessments are required to inform upon the selection of suitable model structures and also to establish the appropriateness of these multi-model simulations for global application. In support of the objectives of the Global Energy and Water Cycle Exchanges (GEWEX) LandFlux project, four commonly used evaporation models are evaluated against data from tower-based eddy-covariance observations, distributed across a range of biomes and climate zones. The selected schemes include the Surface Energy Balance System (SEBS) approach, the Priestley–Taylor Jet Propulsion Laboratory (PT-JPL) model, the Penman–Monteith-based Mu model (PM-Mu) and the Global Land Evaporation Amsterdam Model (GLEAM). Here we seek to examine the fidelity of global evaporation simulations by examining the multi-model response to varying sources of forcing data. To do this, we perform parallel and collocated model simulations using tower-based data together with a global-scale grid-based forcing product. Through quantifying the multi-model response to high-quality tower data, a better understanding of the subsequent model response to the coarse-scale globally gridded data that underlies the LandFlux product can be obtained, while also providing a relative evaluation and assessment of model performance. Using surface flux observations from 45 globally distributed eddy-covariance stations as independent metrics of performance, the tower-based analysis indicated that PT-JPL provided the highest overall statistical performance (0.72; 61 W m–2; 0.65), followed closely by GLEAM

  12. Adult Competency Education Kit. Basic Skills in Speaking, Math, and Reading for Employment. Part Q. ACE Competency Based Job Descriptions: #91--Meat Cutter; #92--Shipping Clerk; #93--Long Haul Truck Driver; #94--Truck Driver--Light.

    ERIC Educational Resources Information Center

    San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.

    This fourteenth of fifteen sets of Adult Competency Education (ACE) Based Job Descriptions in the ACE kit contains job descriptions for Meat Cutter, Shipping Clerk, Long Haul Truck Driver, and Truck Driver--Light. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T.…

  13. Adult Competency Education Kit. Basic Skills in Speaking, Math, and Reading for Employment. Part R. ACE Competency Based Job Descriptions: #95--Bus Driver; #98--General Loader; #99--Forklift Operator; #100--Material Handler.

    ERIC Educational Resources Information Center

    San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.

    This fifteenth of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Bus Driver, General Loader, Forklift Operator, and Material Handler. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T.…

  14. Adult Competency Education Kit. Basic Skills in Speaking, Math, and Reading for Employment. Part F. ACE Competency Based Job Descriptions: #20--Body Fender Mechanic; #21--New Car Get-Ready Person.

    ERIC Educational Resources Information Center

    San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.

    This third of sixteen sets of Adult Competency Education (ACE) Based Job Descriptions in the ACE kit contains job descriptions for Body Fender Mechanic and New Car Get-Ready Person. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T. general educational developmental…

  15. RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.

    PubMed

    Varghese, Blesson; Patel, Ishan; Barker, Adam

    2015-01-01

    Large-scale ad hoc analytics of genomic data is popular using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs are benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimal effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R scripts using Bioconductor packages, referred to as `RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com. PMID:26357328
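
    A command-line front end of the kind described might be structured as below; the subcommand names, options and script name are hypothetical illustrations, not RBioCloud's actual tools.

```python
import argparse

def build_parser():
    """Sketch of a CLI for acquiring cloud resources, running an R script
    on them, and releasing them. All names here are illustrative."""
    p = argparse.ArgumentParser(prog="rbc")
    sub = p.add_subparsers(dest="cmd", required=True)
    launch = sub.add_parser("launch", help="acquire cloud resources")
    launch.add_argument("--nodes", type=int, default=1)
    run = sub.add_parser("run", help="execute an R script on the resources")
    run.add_argument("script")
    sub.add_parser("teardown", help="release the resources")
    return p

# Example invocation: run a (hypothetical) Bioconductor analysis script.
args = build_parser().parse_args(["run", "deseq_analysis.R"])
```

    Splitting resource lifecycle, data movement and job execution into separate subcommands mirrors the separation of concerns the framework describes.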

  16. RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.

    PubMed

    Varghese, Blesson; Patel, Ishan; Barker, Adam

    2015-01-01

    Large-scale ad hoc analytics of genomic data is popular using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs are benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimal effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R scripts using Bioconductor packages, referred to as `RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.

  17. A daily high-resolution global gridded precipitation product (1979-2016) based on gauge, satellite, and reanalysis data

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; de Roo, Ad; van Dijk, Albert

    2016-04-01

    Existing global precipitation (P) products suffer from several limitations, most importantly that they fail to take advantage of the complementary performance of satellite and reanalysis products. In this work we introduce Multi-Source Weighted-Ensemble Precipitation (MSWEP), a daily high-resolution global P product for the period 1979-2016 based on merging the P estimates from different sources with disparate error characteristics. For each grid cell, a normalized (unitless) daily P time series was computed by weighted averaging of bias-corrected estimates from three sources: (i) interpolated maps based on thousands of gauges worldwide; (ii) three satellite products (CMORPH, GSMaP, and TMPA 3B42RT); and (iii) a reanalysis product (ERA-Interim). The weight assigned to the gauge-based estimate was based on the distance to surrounding gauges, while the weights assigned to the satellite- and reanalysis-based estimates were based on their performance at surrounding gauges. The normalized daily P time series were subsequently multiplied by climatic mean P estimates from high-quality global and regional climatic datasets explicitly corrected for orographic effects (WorldClim and PRISM, among others). MSWEP was successfully validated in two ways, (i) by comparing its performance to that of existing P products at several hundred independent gauges, and (ii) by forcing a hydrologic model with MSWEP and existing P products for several hundred small catchments around the globe and comparing the streamflow simulation performance. We expect MSWEP to be useful for numerous large-scale hydrological applications.
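
    The per-cell merging step reduces to a weighted average of bias-corrected estimates. A minimal sketch with illustrative source names and weights; the paper derives its weights from distance to gauges and from satellite/reanalysis skill at gauges, whereas the numbers here are made up.

```python
def merge_precip(estimates, weights):
    """Weighted average of bias-corrected precipitation estimates for one
    grid cell and day. In MSWEP-style merging the gauge weight reflects
    distance to surrounding gauges, while satellite and reanalysis
    weights reflect their performance at those gauges."""
    wsum = sum(weights.values())
    return sum(estimates[k] * weights[k] for k in estimates) / wsum

# Illustrative daily values (mm) and weights for a single cell.
day = {"gauge": 5.0, "cmorph": 6.0, "era_interim": 4.0}
w = {"gauge": 0.5, "cmorph": 0.3, "era_interim": 0.2}
p = merge_precip(day, w)
```

    The merged daily series would then be rescaled by the climatic mean estimate, as described above, to restore physical units and orographic structure.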

  18. Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Matyska, Ludek; Ruda, Miroslav; Toth, Simon

    For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, provides support for jobs spanning several sites, implements the fair-share policy, and gives better overall control of the grid environment. Despite steady progress in stability and resilience to intermittent very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed, which relies on higher autonomy of the clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies such as central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one of the clusters and is able to directly control and submit jobs to it even if the connection to the other scheduling peers is lost. In parallel to the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation of and support for the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented as well.
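
    A fair-share policy of the kind both PBSPro and Torque implement can be sketched as a priority that falls with a group's aged historical usage; the formula below is a common simplification, not MetaCentrum's actual configuration.

```python
def fairshare_priority(base, usage, half_life_hours=168.0, elapsed=0.0):
    """Decayed-usage fair-share: past CPU usage is exponentially aged
    with a half-life (one week here, illustrative), and a group's
    priority drops as its aged usage grows. In the peer-to-peer design
    each scheduler would apply this with globally accounted usage."""
    aged = usage * 0.5 ** (elapsed / half_life_hours)
    return base / (1.0 + aged)
```

    The aging term is what lets a group that was busy last month regain priority this month, which a naive total-usage divisor would not allow.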

  19. GridPP: the UK grid for particle physics.

    PubMed

    Britton, D; Cass, A J; Clarke, P E L; Coles, J; Colling, D J; Doyle, A T; Geddes, N I; Gordon, J C; Jones, R W L; Kelsey, D P; Lloyd, S L; Middleton, R P; Patrick, G N; Sansum, R A; Pearce, S E

    2009-06-28

    The start-up of the Large Hadron Collider (LHC) at CERN, Geneva, presents a huge challenge in processing and analysing the vast amounts of scientific data that will be produced. The architecture of the worldwide grid that will handle 15 PB of particle physics data annually from this machine is based on a hierarchical tiered structure. We describe the development of the UK component (GridPP) of this grid from a prototype system to a full exploitation grid for real data analysis. This includes the physical infrastructure, the deployment of middleware, operational experience and the initial exploitation by the major LHC experiments. PMID:19451101

  20. GridPP: the UK grid for particle physics.

    PubMed

    Britton, D; Cass, A J; Clarke, P E L; Coles, J; Colling, D J; Doyle, A T; Geddes, N I; Gordon, J C; Jones, R W L; Kelsey, D P; Lloyd, S L; Middleton, R P; Patrick, G N; Sansum, R A; Pearce, S E

    2009-06-28

    The start-up of the Large Hadron Collider (LHC) at CERN, Geneva, presents a huge challenge in processing and analysing the vast amounts of scientific data that will be produced. The architecture of the worldwide grid that will handle 15 PB of particle physics data annually from this machine is based on a hierarchical tiered structure. We describe the development of the UK component (GridPP) of this grid from a prototype system to a full exploitation grid for real data analysis. This includes the physical infrastructure, the deployment of middleware, operational experience and the initial exploitation by the major LHC experiments.