Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY
NASA Astrophysics Data System (ADS)
Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan
2014-06-01
The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite Middleware. In this framework, a monitoring system is designed for the H1 Experiment to identify within the GRID the resources best suited for the execution of CPU-time-consuming Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computing Elements (CEs), Storage Elements (SEs), WMS servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE, and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues, submitted through various WMSs as well as directly to the CREAM-CEs. Real H1 MC production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed, and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files used in the H1 MC production. The monitoring system allows problems in the GRID sites to be identified and reacted to promptly (for example by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl, with a few shell scripts. In addition to the test monitoring system, we use information from real production jobs to monitor the availability and quality of the GRID resources.
The monitoring tools register the number of job resubmissions, the percentage of failed and finished jobs relative to all jobs on the CEs and determine the average values of waiting and running time for the involved GRID queues. CEs which do not meet the set criteria can be removed from the production chain by including them in an exception table. All of these monitoring actions lead to a more reliable and faster execution of MC requests.
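The selection logic described above (derive waiting and run times from test jobs, keep only CEs that pass, rank the most suitable first) can be sketched as a small routine. This is a hypothetical illustration only; the class name, field names, and the one-hour waiting-time cutoff are assumptions, not details of the H1 system:

```python
from dataclasses import dataclass

@dataclass
class TestJobResult:
    ce: str            # Computing Element queue the test job ran on
    submitted: float   # epoch seconds at submission
    started: float     # epoch seconds when the job began running
    finished: float    # epoch seconds at completion
    succeeded: bool    # did the retrieved output pass the checks?

def rank_ces(results, max_wait=3600.0):
    """Derive average waiting/run times per CE and keep only CEs whose
    test jobs all succeeded with acceptable waiting time, as candidates
    for the auto-generated production configuration."""
    by_ce = {}
    for r in results:
        by_ce.setdefault(r.ce, []).append(r)
    good = []
    for ce, rs in by_ce.items():
        if all(r.succeeded for r in rs):
            avg_wait = sum(r.started - r.submitted for r in rs) / len(rs)
            avg_run = sum(r.finished - r.started for r in rs) / len(rs)
            if avg_wait <= max_wait:
                good.append((ce, avg_wait, avg_run))
    # most suitable first: shortest average waiting time
    good.sort(key=lambda t: t[1])
    return good
```

A CE with failed test jobs or an excessive waiting time simply never enters the ranked list, mirroring the exception-table behaviour the abstract describes.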
Active Job Monitoring in Pilots
NASA Astrophysics Data System (ADS)
Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas
2015-12-01
Recent developments in high energy physics (HEP), including multi-core jobs and multi-core pilots, require data centres to gain a deep understanding of the system in order to monitor, design, and upgrade computing clusters. Networking is a critical component. Especially the increased usage of data federations, for example in diskless computing centres or as a fallback solution, relies on WAN connectivity and availability. The specific demands of different experiments and communities, but also the need for identification of misbehaving batch jobs, require active monitoring. Existing monitoring tools are not capable of measuring fine-grained information at batch job level. This complicates network-aware scheduling and optimisations. In addition, pilots add another layer of abstraction. They behave like batch systems themselves by managing and executing payloads of jobs internally. The number of real jobs being executed is unknown, as the original batch system has no access to internal information about the scheduling process inside the pilots. Therefore, the comparability of jobs and pilots for predicting run-time behaviour or network performance cannot be ensured. Hence, identifying the actual payload is important. At the GridKa Tier 1 centre a specific tool is in use that allows the monitoring of network traffic information at batch job level. This contribution presents the current monitoring approach and discusses recent efforts to identify pilots and their substructures inside the batch system, and why this identification matters. It will also show how to determine monitoring data of specific jobs from identified pilots. Finally, the approach is evaluated.
NASA Astrophysics Data System (ADS)
Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir
2010-04-01
This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow their jobs in detail in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework, and (c) an Ajax-powered web interface with a look-and-feel and controls similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509-authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VOs) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax-powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.
A Job Monitoring and Accounting Tool for the LSF Batch System
NASA Astrophysics Data System (ADS)
Sarkar, Subir; Taneja, Sonia
2011-12-01
This paper presents a web-based job monitoring and group-and-user accounting tool for the LSF Batch System. The user-oriented job monitoring displays a simple and compact quasi-real-time overview of the batch farm for both local and Grid jobs. For Grid jobs, the Distinguished Name (DN) of the Grid user is shown. The overview monitor provides the most up-to-date status of the batch farm at any time. The accounting tool works with the LSF accounting log files. The accounting information is shown for a few pre-defined time periods by default; however, one can also compute the same information for any arbitrary time window. The tool has already proved to be an extremely useful means to validate more extensive accounting tools available in the Grid world. Several sites are already using the present tool, and more sites running the LSF batch system have shown interest. We discuss the various aspects that make the tool essential for site administrators and end-users alike, and outline the current status of development as well as future plans.
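The arbitrary-time-window accounting described above can be illustrated with a minimal sketch. The record keys and the per-user CPU-time aggregation are assumptions for illustration; the real tool parses LSF accounting log files:

```python
def window_accounting(records, start, end):
    """Aggregate per-user CPU time for finished jobs whose end time
    falls inside the half-open window [start, end)."""
    totals = {}
    for rec in records:
        if start <= rec["end_time"] < end:
            user = rec["user"]
            totals[user] = totals.get(user, 0.0) + rec["cpu_time"]
    return totals
```

Pre-defined periods (day, week, month) are then just fixed choices of `start` and `end` over the same aggregation.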
Analyzing data flows of WLCG jobs at batch job level
NASA Astrophysics Data System (ADS)
Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas
2015-05-01
With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses, firewall configurations, as well as the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at batch job level, a new tool has been developed and put into operation at the GridKa Tier 1 center for monitoring continuous data streams and characteristics of WLCG jobs and pilots. Long-term measurements and data collection are in progress. These measurements have already proven useful for analyzing misbehaviors and various issues. Therefore we aim for an automated, real-time approach for anomaly detection. As a requirement, prototypes for standard workflows have to be examined. Based on measurements of several months, different features of HEP jobs are evaluated regarding their effectiveness for data mining approaches to identify these common workflows. The paper will introduce the actual measurement approach and statistics as well as the general concept and first results classifying different HEP job workflows derived from the measurements at GridKa.
Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karbach, Carsten; Frings, Wolfgang
2013-02-22
This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) for applying it to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to tolerate high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer. For example, applications are built remotely, performance tools are attached to job submissions, and output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status.
A set of statistics, a list of running and queued jobs, as well as a node display mapping running jobs to their compute resources form the user display of LLview. These monitoring features have to be integrated into the development environment. Besides showing the current status, PTP's monitoring also needs to allow for submitting and canceling user jobs. Monitoring peta-scale systems especially means presenting the large amount of status data in a useful manner. Users need to select arbitrary levels of detail. The monitoring views have to provide a quick overview of the system state, but also need to allow for zooming into the specific parts of the system in which the user is interested. At present, the major batch systems running on supercomputers are PBS, TORQUE, ALPS and LoadLeveler, which have to be supported by both the monitoring and the job controlling component. Finally, PTP needs to be designed as generically as possible, so that it can be extended for future batch systems.
Automated CFD Parameter Studies on Distributed Parallel Computers
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Aftosmis, Michael; Pandya, Shishir; Tejnil, Edward; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2002-01-01
The objective of the current work is to build a prototype software system which automates the process of running CFD jobs on Information Power Grid (IPG) resources. This system should remove the need for users to monitor and intervene in every single CFD job. It should enable the use of many different computers to populate a massive run matrix in the shortest time possible. Such a software system has been developed, and is known as the AeroDB script system. The approach taken for the development of AeroDB was to build several discrete modules. These include a database, a job-launcher module, a run-manager module to monitor each individual job, and a web-based user portal for monitoring the progress of the parameter study. The details of the design of AeroDB are presented, followed by the results of a parameter study which was performed using AeroDB for the analysis of a reusable launch vehicle (RLV). The paper concludes with a section on the lessons learned in this effort and ideas for future work in this area.
Simplified Distributed Computing
NASA Astrophysics Data System (ADS)
Li, G. G.
2006-05-01
Distributed computing ranges from high performance parallel computing and GRID computing to environments where idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications, based on existing technology and hardware resources. This system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for this job. The job handler pre-processes the job, partitions the job into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes the tasks, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage the software download and report the status. The processing queue is the key to the success of this distributed system. We use BEA's WebLogic JMS queue in our implementation. It guarantees message delivery and has message priority and retry features, so that the tasks never get lost. The entire system is built on the J2EE technology and it can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adaptors are provided to connect existing applications to the system, so that applications and algorithms running on Unix, Linux and Windows can all work together. This system is easy and fast to develop, based on the industry's well-adopted technology. It is highly scalable and heterogeneous.
It is an open system, and any number and type of machines can join to provide computational power. This asynchronous, message-based system can achieve response times on the order of a second. For efficiency, communications between distributed tasks are usually done at the start and end of the tasks, but intermediate status of the tasks can also be provided.
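The job-handler/task-handler flow described above can be sketched with Python's standard `queue` module standing in for the WebLogic JMS queue. All names are hypothetical, and where the real system is message-based and distributed across machines, this sketch is single-process:

```python
import queue

def partition_job(job_data, n_tasks):
    """Job handler step: split a job's payload into roughly equal,
    relatively independent tasks."""
    chunk = (len(job_data) + n_tasks - 1) // n_tasks
    return [job_data[i:i + chunk] for i in range(0, len(job_data), chunk)]

def run_job(job_data, n_tasks, work):
    """Enqueue tasks, let a task handler process them, then assemble
    the task results into the overall solution for the job request."""
    q = queue.Queue()
    for task in partition_job(job_data, n_tasks):
        q.put(task)
    results = []
    while not q.empty():                 # task handler picks up tasks
        task = q.get()
        results.append([work(x) for x in task])
    # assembly step: flatten task results into the overall solution
    return [y for part in results for y in part]
```

In the real system the queue's delivery guarantees and retry features are what keep tasks from being lost; `queue.Queue` merely plays that role within one process here.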
DOE Office of Scientific and Technical Information (OSTI.GOV)
Logee, T.L.
1987-10-15
The active solar Domestic Hot Water (DHW) system at the Tucson Job Corps Center was designed and constructed as part of the Solar in Federal Buildings Program (SFBP). This retrofitted system is one of eight of the systems in the SFBP selected for quality monitoring. The purpose of this monitoring effort is to document the performance of quality state-of-the-art solar systems in large Federal buildings. The systems are unique prototypes. Design errors and system faults discovered during the monitoring period could not always be corrected. Therefore, the aggregated overall performance is often considerably below what might be expected had similar systems been constructed consecutively, with each repetition incorporating corrections and improvements. The solar collector system is installed on a two-story dormitory at the Job Corps Center. The solar system preheats hot water for about two hundred students. The solar system provided about 50% of the energy needed for water heating in the winter and nearly 100% of the water heating needs in the summer. There are about 70,000 gallons of water used per month. There are seventy-nine L.O.F. panels, or 1659 square feet of collectors (1764 square feet before freeze damage occurred), mounted in two rows on the south-facing roof. Collected solar energy is stored in the 2200-gallon storage tank. The control system is by Johnson Controls. City water is piped directly to the storage tank and is circulated in the collectors. Freeze protection is provided by recirculation of storage water. There is an auxiliary gas-fired boiler and a 750-gallon DHW storage tank to provide backup for the solar system. Highlights of the performance monitoring of the solar collection system at the Tucson Job Corps Center during the November 1984 through July 1985 monitoring period are presented in this report.
A new Self-Adaptive disPatching System for local clusters
NASA Astrophysics Data System (ADS)
Kan, Bowen; Shi, Jingyan; Lei, Xiaofeng
2015-12-01
The scheduler is one of the most important components of a high performance cluster. This paper introduces a self-adaptive dispatching system (SAPS) based on Torque[1] and Maui[2]. It improves cluster resource utilization and the overall turnaround of tasks, and provides extra functions for administrators and users. First of all, in order to allow the scheduling of GPUs, a GPU scheduling module based on Torque and Maui has been developed. Second, SAPS analyses the relationship between the number of queueing jobs and the idle job slots, and then tunes the priority of users’ jobs dynamically. This means more jobs run and fewer job slots are idle. Third, integrating with the monitoring function, SAPS excludes nodes in error states as detected by the monitor, and returns them to the cluster after the nodes have recovered. In addition, SAPS provides a series of function modules including a batch monitoring management module, a comprehensive scheduling accounting module and a real-time alarm module. The aim of SAPS is to enhance the reliability and stability of Torque and Maui. Currently, SAPS has been running stably on a local cluster at IHEP (Institute of High Energy Physics, Chinese Academy of Sciences), with more than 12,000 CPU cores and 50,000 jobs running each day. Monitoring has shown that resource utilization has been improved by more than 26%, and the management work for both administrators and users has been reduced greatly.
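The dynamic priority tuning in SAPS, where users with many queued jobs get a boost while slots sit idle, might look roughly like the sketch below. The base priority and the proportional boost are illustrative assumptions, not the actual Maui policy:

```python
def tune_priorities(queued_by_user, idle_slots, base=100):
    """Raise priorities in proportion to each user's share of the
    queue, scaled by how many job slots are currently idle, so that
    more jobs run and fewer slots stay empty."""
    total = sum(queued_by_user.values())
    prios = {}
    for user, n_queued in queued_by_user.items():
        share = n_queued / total if total else 0.0
        prios[user] = base + round(idle_slots * share)
    return prios
```

When the cluster is full (`idle_slots == 0`) or nothing is queued, every user simply keeps the base priority.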
A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.
The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
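The edge-synthesis step, where relationships such as shared compute nodes or a shared submitting user become weighted edges between jobs, can be sketched as follows. The particular weighting scheme here is an assumption for illustration, not the report's ontology:

```python
from itertools import combinations

def build_job_graph(jobs):
    """Synthesize weighted edges between jobs: one unit of weight per
    shared compute node, plus one if both jobs have the same user.
    `jobs` maps a job id to {"nodes": [...], "user": ...}."""
    edges = {}
    for (ja, a), (jb, b) in combinations(jobs.items(), 2):
        weight = len(set(a["nodes"]) & set(b["nodes"]))
        if a["user"] == b["user"]:
            weight += 1
        if weight:                 # omit unrelated job pairs entirely
            edges[(ja, jb)] = weight
    return edges
```

Clustering and failure-prediction algorithms can then operate on the resulting weighted graph rather than on the raw queuing records.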
Machine learning based job status prediction in scientific clusters
Yoo, Wucherl; Sim, Alex; Wu, Kesheng
2016-09-01
Large high-performance computing systems are built with an increasing number of components, with more CPU cores, more memory, and more storage space. At the same time, scientific applications have been growing in complexity. Together, these trends are leading to more frequent unsuccessful job statuses on HPC systems. From measured job statuses, 23.4% of CPU time was spent on unsuccessful jobs. Here, we set out to study whether these unsuccessful job statuses could be anticipated from known job characteristics. To explore this possibility, we have developed a job status prediction method for the execution of jobs on scientific clusters. The Random Forests algorithm was applied to extract and characterize the patterns of unsuccessful job statuses. Experimental results show that our method can predict unsuccessful job statuses from the monitored ongoing job executions in 99.8% of the cases, with 83.6% recall and 94.8% precision. This prediction accuracy is sufficiently high that it can be used to trigger mitigation procedures for predicted failures.
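The quoted recall and precision figures follow the standard definitions over predicted versus actual job statuses, which a short sketch makes explicit. The label value `"failed"` is a placeholder for whatever status encoding the study uses:

```python
def precision_recall(predicted, actual, positive="failed"):
    """Precision and recall for predicting unsuccessful job statuses:
    precision = TP / (TP + FP), recall = TP / (TP + FN), where the
    positive class is the unsuccessful status."""
    tp = sum(1 for p, a in zip(predicted, actual)
             if p == positive and a == positive)
    fp = sum(1 for p, a in zip(predicted, actual)
             if p == positive and a != positive)
    fn = sum(1 for p, a in zip(predicted, actual)
             if p != positive and a == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

High precision matters here because each positive prediction may trigger a mitigation procedure, so false alarms carry a real cost.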
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for these subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.
Monitoring of services with non-relational databases and map-reduce framework
NASA Astrophysics Data System (ADS)
Babik, M.; Souto, F.
2012-12-01
Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
The ATLAS PanDA Monitoring System and its Evolution
NASA Astrophysics Data System (ADS)
Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.
2011-12-01
The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
Storage element performance optimization for CMS analysis jobs
NASA Astrophysics Data System (ADS)
Behrmann, G.; Dahlblom, J.; Guldmyr, J.; Happonen, K.; Lindén, T.
2012-12-01
Tier-2 computing sites in the Worldwide LHC Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data that needs to be processed from the Large Hadron Collider (LHC) experiments requires good and efficient use of the available resources. Having a good CPU efficiency for the end users' analysis jobs requires that the performance of the storage system is able to scale with I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on work to improve the SE performance at the Helsinki Institute of Physics (HIP) Tier-2 used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows for easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS uses the JobRobot, which sends 100 analysis jobs to each site every four hours. CMS also uses the HammerCloud tool for site monitoring and stress testing; it has since replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the performance impact of site configuration changes, since the analysis workflow is kept the same for all sites and over months at a time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50% to more than 90%, by tuning the SE and through improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier sites, since on average the CPU efficiency for CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high.
The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I/O-performance is good.
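The CPU efficiency figure quoted above is conventionally the ratio of consumed CPU time to allocated wall-clock core time; when it is low for analysis jobs, the job is typically stalled waiting on storage I/O. A minimal sketch:

```python
def cpu_efficiency(cpu_seconds, wall_seconds, n_cores=1):
    """Fraction of the allocated wall-clock core time actually spent
    computing. Values well below 1.0 for I/O-heavy analysis jobs
    usually point at the Storage Element rather than the CPU."""
    if wall_seconds <= 0:
        raise ValueError("wall time must be positive")
    return cpu_seconds / (wall_seconds * n_cores)
```

Tracking this ratio for an unchanged reference workflow (as JobRobot and HammerCloud do) isolates the effect of site configuration changes from changes in the jobs themselves.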
JSD: Parallel Job Accounting on the IBM SP2
NASA Technical Reports Server (NTRS)
Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)
1995-01-01
The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.
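Aggregating the per-process accounting that AIX provides into a single record for the parallel job, as jsd does, can be sketched as follows. The record fields are assumptions for illustration, not jsd's actual on-disk format:

```python
def aggregate_job(process_records):
    """Combine per-process accounting entries into one parallel-job
    record: summed CPU time, job span from earliest start to latest
    end, and the set of hosts used."""
    if not process_records:
        raise ValueError("a parallel job has at least one process")
    return {
        "cpu_time": sum(r["cpu_time"] for r in process_records),
        "start": min(r["start"] for r in process_records),
        "end": max(r["end"] for r in process_records),
        "hosts": sorted({r["host"] for r in process_records}),
        "nprocs": len(process_records),
    }
```

With records rolled up per job rather than per process, questions like total system use and per-job bottlenecks become simple queries.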
Detecting Abnormal Machine Characteristics in Cloud Infrastructures
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Das, Kamalika; Matthews, Bryan L.
2011-01-01
In the cloud computing environment, resources are accessed as services rather than as a product. Monitoring this system for performance is crucial because of the typical pay-per-use packages bought by users for their jobs. With the huge number of machines currently in the cloud system, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia, which lacks system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine in the cloud in order to rank the machines by their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis, and at the end of the analysis our algorithm generates error reports, thereby allowing the system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.
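A locally computed anomaly score of the kind described, where no raw performance data leaves the machine and only scores are compared for ranking, can be sketched with a robust z-score. The scoring function is an illustrative assumption, not the paper's actual algorithm:

```python
import statistics

def anomaly_score(local_metric_history, current_value):
    """Robust z-score computed entirely on the local machine: distance
    of the current metric value from the historical median, in units
    of the median absolute deviation (MAD)."""
    med = statistics.median(local_metric_history)
    mad = statistics.median(abs(x - med) for x in local_metric_history)
    return abs(current_value - med) / (mad or 1e-9)

def rank_machines(scores):
    """scores: {machine: locally computed score}; only these scalars,
    not the raw data, need to be shared. Most anomalous first."""
    return sorted(scores, key=scores.get, reverse=True)
```

Because each machine ships a single scalar per job rather than its full metric history, the approach scales with the number of machines rather than with the volume of performance data.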
Design of an automatic production monitoring system on job shop manufacturing
NASA Astrophysics Data System (ADS)
Prasetyo, Hoedi; Sugiarto, Yohanes; Rosyidi, Cucuk Nur
2018-02-01
Every production process requires a monitoring system, so that the desired efficiency and productivity can be checked at any time. Such a system is also needed in the job shop type of manufacturing, which is mainly influenced by the manufacturing lead time. Processing time is one of the factors that affect the manufacturing lead time. In a conventional company, the recording of processing time is done manually by the operator on a sheet of paper. This method is prone to errors. This paper aims to overcome this problem by creating a system which is able to record and monitor the processing time automatically. The solution is realized by utilizing an electric current sensor, barcodes, RFID, a wireless network and a Windows-based application. An automatic monitoring device is attached to the production machine. It is equipped with a touch-screen LCD so that the operator can use it easily. Operator identity is recorded through RFID embedded in the operator's ID card. The workpiece data are collected from the database by scanning the barcode listed on its monitoring sheet. A sensor is mounted on the machine to measure the actual machining time. The system's outputs are the actual processing time and machine capacity information. This system is connected wirelessly to a workshop planning application belonging to the firm. Test results indicated that all functions of the system run properly. The system successfully enables supervisors, PPIC or higher-level management staff to monitor the processing time quickly and with better accuracy.
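The paper's device is hardware, but the core derivation it automates can be sketched in a few lines: the current sensor yields on/off events, and actual machining time is the sum of the on intervals. The event representation below is hypothetical, not the authors' data model.

```python
# Illustrative sketch: derive actual machining time from on/off events
# reported by a spindle-current sensor. Event layout is assumed.

def machining_seconds(events):
    """events: chronologically ordered list of (timestamp_s, state),
    state being "on" or "off". Returns total seconds spent cutting."""
    total, started = 0, None
    for ts, state in events:
        if state == "on" and started is None:
            started = ts
        elif state == "off" and started is not None:
            total += ts - started
            started = None
    return total

events = [(0, "on"), (120, "off"), (300, "on"), (480, "off")]
print(machining_seconds(events))  # 300
```

Summing sensor intervals rather than trusting a hand-written sheet is exactly what removes the manual-recording errors the paper targets.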
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumsdaine, Andrew
2013-03-08
The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a system-wide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.
Environment/Health/Safety (EHS): Databases
Hazard Documents Database, Biosafety Authorization System, CATS (Corrective Action Tracking System) (for findings 12/2005 to present), Chemical Management System, Electrical Safety, Ergonomics Database (for new ...), Learned / Best Practices, REMS (Radiation Exposure Monitoring System), SJHA Database (Subcontractor Job ...)
Job submission and management through web services: the experience with the CREAM service
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Fina, S. D.; Ronco, S. D.; Dorigo, A.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Sgaravatto, M.; Verlato, M.; Zangrando, L.; Corvo, M.; Miccio, V.; Sciaba, A.; Cesini, D.; Dongiovanni, D.; Grandi, C.
2008-07-01
Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs to a Local Resource Management System. We developed a special component, called ICE (Interface to CREAM Environment) to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for integration of CREAM and ICE in gLite. We also discuss recent work towards enhancing CREAM with a BES and JSDL compliant interface.
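CREAM itself exposes a WSDL-based Web service; end users and monitoring scripts commonly drive it through gLite command-line tools such as `glite-ce-job-submit` and `glite-ce-job-status`. As a sketch only, a script might scrape the job state from the status command's output; the exact output layout assumed below is illustrative, not guaranteed.

```python
import re

# Sketch of scraping a CREAM job state from glite-ce-job-status output.
# The "Status = [STATE]" line format is an assumption for illustration.

def parse_status(output):
    """Extract the job state, e.g. "DONE-OK", or None if not found."""
    match = re.search(r"Status\s*=\s*\[([A-Z-]+)\]", output)
    return match.group(1) if match else None

sample = """
******  JobID=[https://cream.example.org:8443/CREAM123456]
        Status        = [DONE-OK]
        ExitCode      = [0]
"""
print(parse_status(sample))  # DONE-OK
```

A client talking to the Web service interface directly (as ICE does) would receive structured status objects instead and need no text scraping; the CLI route is simply the lowest-effort option for a test harness.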
Automatic Integration Testbeds validation on Open Science Grid
NASA Astrophysics Data System (ADS)
Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.
2011-12-01
A recurring challenge in deploying high quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests: in particular, tests that resemble as closely as possible the actual job workflows used by the experiments, exercising job scheduling at the compute element (CE), the worker node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports for performance and reliability.
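The archived run statistics reduce, per service under test, to success rates and failure-mode counts. A minimal aggregation sketch (the record layout is hypothetical, not PanDA's actual schema) could look like:

```python
from collections import defaultdict

# Sketch: summarize synthetic validation jobs injected into candidate
# CE/SE releases by success rate and failure mode. Record layout assumed.

def summarize(results):
    """results: list of (service, outcome) pairs, outcome being "ok"
    or an error tag. Returns per-service success rate and failure counts."""
    stats = defaultdict(lambda: {"ok": 0, "failures": defaultdict(int)})
    for service, outcome in results:
        if outcome == "ok":
            stats[service]["ok"] += 1
        else:
            stats[service]["failures"][outcome] += 1
    report = {}
    for service, s in stats.items():
        total = s["ok"] + sum(s["failures"].values())
        report[service] = {"success_rate": s["ok"] / total,
                           "failures": dict(s["failures"])}
    return report

results = [("CE", "ok"), ("CE", "ok"), ("CE", "stage-out-error"),
           ("SE", "ok"), ("SE", "ok")]
print(summarize(results)["CE"])
```

Grouping failures by mode, rather than only counting them, is what lets a regular report distinguish a broken storage path from a scheduling problem.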
Kwf-Grid workflow management system for Earth science applications
NASA Astrophysics Data System (ADS)
Tran, V.; Hluchy, L.
2009-04-01
In this paper, we present a workflow management tool for Earth science applications in EGEE. The tool was originally developed within the K-wf Grid project for the GT4 middleware and has many advanced features such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to the gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The workflow management system is intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge that is contained in the information by means of intelligent agents; and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite allows EGEE users to use the system and benefit from its advanced features. The system has been initially tested and evaluated with applications from the ES clusters.
Man-Robot Symbiosis: A Framework For Cooperative Intelligence And Control
NASA Astrophysics Data System (ADS)
Parker, Lynne E.; Pin, Francois G.
1988-10-01
The man-robot symbiosis concept has the fundamental objective of bridging the gap between fully human-controlled and fully autonomous systems to achieve true man-robot cooperative control and intelligence. Such a system would allow improved speed, accuracy, and efficiency of task execution, while retaining the man in the loop for innovative reasoning and decision-making. The symbiont would have capabilities for supervised and unsupervised learning, allowing an increase of expertise in a wide task domain. This paper describes a robotic system architecture facilitating the symbiotic integration of teleoperative and automated modes of task execution. The architecture reflects a unique blend of many disciplines of artificial intelligence into a working system, including job or mission planning, dynamic task allocation, man-robot communication, automated monitoring, and machine learning. These disciplines are embodied in five major components of the symbiotic framework: the Job Planner, the Dynamic Task Allocator, the Presenter/Interpreter, the Automated Monitor, and the Learning System.
Monitoring of computing resource utilization of the ATLAS experiment
NASA Astrophysics Data System (ADS)
Rousseau, David; Dimitrov, Gancho; Vukotic, Ilija; Aidel, Osman; Schaffer, Rd; Albrand, Solveig
2012-12-01
Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.
A Security Monitoring Framework For Virtualization Based HEP Infrastructures
NASA Astrophysics Data System (ADS)
Gomez Ramirez, A.; Martinez Pedreira, M.; Grigoras, C.; Betev, L.; Lara, C.; Kebschull, U.
2017-10-01
High Energy Physics (HEP) distributed computing infrastructures require automatic tools to monitor, analyze and react to potential security incidents. These tools should collect and inspect data such as resource consumption, logs and sequence of system calls for detecting anomalies that indicate the presence of a malicious agent. They should also be able to perform automated reactions to attacks without administrator intervention. We describe a novel framework that accomplishes these requirements, with a proof of concept implementation for the ALICE experiment at CERN. We show how we achieve a fully virtualized environment that improves the security by isolating services and Jobs without a significant performance impact. We also describe a collected dataset for Machine Learning based Intrusion Prevention and Detection Systems on Grid computing. This dataset is composed of resource consumption measurements (such as CPU, RAM and network traffic), logfiles from operating system services, and system call data collected from production Jobs running in an ALICE Grid test site and a big set of malware samples. This malware set was collected from security research sites. Based on this dataset, we will proceed to develop Machine Learning algorithms able to detect malicious Jobs.
Ferguson, Sue A; Marras, William S; Lavender, Steven A; Splittstoesser, Riley E; Yang, Gang
2014-02-01
The objective is to quantify differences in physical exposures for those who stayed on a job (survivor) versus those who left the job (turnover). It has been suggested that high physical job demands lead to greater turnover and that turnover rates may supplement low-back disorder incidence rates in passive surveillance systems. A prospective study with 811 participants was conducted. The physical exposure of distribution center work was quantified using a moment monitor. A total of 68 quantitative physical exposure measures in three categories (load, position, and timing) were examined. Low-back health function was quantified using the lumbar motion monitor at baseline and 6-month follow-up. There were 365 turnover employees within the 6-month follow-up period and 446 "survivors" who remained on the same job, of which 126 survivors had a clinically meaningful decline in low-back functional performance (cases) and 320 survivors did not have a meaningful decline in low-back functional performance (noncases). Of the job exposure measures, 6% were significantly different between turnover and cases compared to 69% between turnover and noncases. Turnover employees had significantly greater exposure compared to noncases. Turnover employees had similar physical job exposures to workers who remained on the job and had a clinically meaningful decline in low-back functional performance. Thus, ergonomists and HR should be aware that high turnover jobs appear to have similar physical exposure as those jobs that put workers at risk for a decline in low-back functional performance.
MonALISA, an agent-based monitoring and control system for the LHC experiments
NASA Astrophysics Data System (ADS)
Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.
2017-10-01
MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing program of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on a Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs or services, system control and global optimization services for complex systems. A short overview and status of MonALISA is given in this paper.
User Centric Job Monitoring - a redesign and novel approach in the STAR experiment
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.; Zulkarneeva, Y.
2014-06-01
User Centric Monitoring (or UCM) has been a long-awaited feature in STAR, whereby programs, workflows and system "events" can be logged, broadcast and later analyzed. UCM collects and filters available job monitoring information from various resources and presents it to users from a user-centric rather than an administrative-centric point of view. The first attempt at and implementation of "a" UCM approach was made in STAR in 2004 using a log4cxx plug-in back-end; it further evolved with an attempt to push toward a scalable database back-end (2006) and finally a Web-Service approach (2010, CSW4DB SBIR). The latter proved to be incomplete and did not address the evolving needs of the experiment, where streamlined messages for online (data acquisition) purposes as well as continuous support for data mining and event analysis need to coexist and be unified in a seamless approach. The code also proved to be hardly maintainable. This paper presents the next evolutionary step of the UCM toolkit: a redesign and redirection of our latest attempt, acknowledging and integrating recent technologies in a simpler, maintainable and yet scalable manner. The extended version of the job logging package is built upon a three-tier approach based on Task, Job and Event, and features a Web-Service based logging API, a responsive AJAX-powered user interface, and a database back-end relying on MongoDB, which is uniquely suited to STAR's needs. In addition, we present details of the integration of this logging package with the STAR offline and online software frameworks. Leveraging the reported experience of the ATLAS and CMS experiments with the ESPER engine, we discuss and show how such an approach has been implemented in STAR for meta-data event-triggered stream processing and filtering. An ESPER-based solution fits well into the online data acquisition system, where many systems are monitored.
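The Task / Job / Event hierarchy maps naturally onto MongoDB documents. The sketch below models that three-tier structure with an in-memory list standing in for the collection (with pymongo one would call `insert_one` on a real collection instead); the field names are illustrative, not the actual STAR schema.

```python
import time
import uuid

# Sketch of a three-tier Task -> Job -> Event logging store. In production
# these documents would go to MongoDB (e.g. pymongo's insert_one); here an
# in-memory list stands in for the collection. Schema is hypothetical.

collection = []

def log_event(task_id, job_id, level, message):
    doc = {"_id": str(uuid.uuid4()), "task": task_id, "job": job_id,
           "level": level, "message": message, "ts": time.time()}
    collection.append(doc)  # collection.insert_one(doc) with pymongo
    return doc["_id"]

def events_for_job(task_id, job_id, level=None):
    """User-centric view: all events of one job, optionally by severity."""
    return [d for d in collection
            if d["task"] == task_id and d["job"] == job_id
            and (level is None or d["level"] == level)]

log_event("mc2014", "job-007", "INFO", "stage-in complete")
log_event("mc2014", "job-007", "ERROR", "output file missing")
print(len(events_for_job("mc2014", "job-007", level="ERROR")))  # 1
```

Keying every event by its task and job is what turns an administrator-centric log stream into the per-user, per-job view that UCM is after.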
AsyncStageOut: Distributed user data management for CMS Analysis
NASA Astrophysics Data System (ADS)
Riahi, H.; Wildish, T.; Ciangottini, D.; Hernández, J. M.; Andreeva, J.; Balcas, J.; Karavakis, E.; Mascheroni, M.; Tanasijczuk, A. J.; Vaandering, E. W.
2015-12-01
AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that mass storage of output data was part of the job execution resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. ASO foresees the management of up to 400k files per day of various sizes, spread worldwide across more than 60 sites. It must handle up to 1000 individual users per month, and work with minimal delay. This creates challenging requirements for system scalability, performance and monitoring. ASO uses FTS to schedule and execute the transfers between the storage elements of the source and destination sites. It has evolved from a limited prototype to a highly adaptable service, which manages and monitors the user file placement and bookkeeping. To ensure system scalability and data monitoring, it employs new technologies such as a NoSQL database and re-uses existing components of PhEDEx and the FTS Dashboard. We present the asynchronous stage-out strategy and the architecture of the solution we implemented to deal with those issues and challenges. The deployment model for the high availability and scalability of the service is discussed. The performance of the system during the commissioning and the first phase of production are also shown, along with results from simulations designed to explore the limits of scalability.
Performance optimisations for distributed analysis in ALICE
NASA Astrophysics Data System (ADS)
Betev, L.; Gheata, A.; Gheata, M.; Grigoras, C.; Hristov, P.
2014-06-01
Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to services and network latencies, remote data access and heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last 2 years, ALICE analysis shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved by a significant factor to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system we are aiming for low-level improvements related to data format, data management and merging of results to allow for better-performing ALICE analysis.
DOT National Transportation Integrated Search
2000-03-01
The Denver Regional Transportation District (RTD) acquired a CAD/AVL system that became fully operational in 1996. The CAD/AVL system added radio channels and covert alarms in buses, located vehicles in real time, and monitored schedule adherence. Th...
Using CREAM and CEMonitor for job submission and management in the gLite middleware
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Dalla Fina, S.; Dorigo, A.; Frizziero, E.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Mendez Lorenzo, P.; Miccio, V.; Sgaravatto, M.; Traldi, S.; Zangrando, L.
2010-04-01
In this paper we describe the use of CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, that is job submission and management. Specifically, CREAM is a job management service used for submitting, managing and monitoring computational jobs. CEMonitor is an event notification framework, which can be coupled with CREAM to provide the users with asynchronous job status change notifications. Both components have been integrated in the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, for what concerns the CEMonitor service, also in the OSG Grid. In this paper we report the current status of these services, the achieved results, and the issues that still have to be addressed.
Enabling IPv6 at FZU - WLCG Tier2 in Prague
NASA Astrophysics Data System (ADS)
Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek
2014-06-01
The usage of the new IPv6 protocol in production is becoming reality in the HEP community, and the Computing Centre of the Institute of Physics in Prague participates in many IPv6-related activities. Our contribution presents experience with monitoring in the HEPiX distributed IPv6 testbed, which includes 11 remote sites. We use Nagios to check availability of services and Smokeping for monitoring the network latency. Since it is not always trivial to set up DNS properly in a dual-stack environment, we developed a Nagios plugin for checking whether a domain name is resolvable when using only IP protocol version 6 and only version 4. We will also present local area network monitoring and tuning related to IPv6 performance. One of the most important pieces of software for a grid site is the batch system for job execution. We will present our experience with configuring and running the Torque batch system in a dual-stack environment. We also discuss the steps needed to run VO-specific jobs in our IPv6 testbed.
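The dual-stack DNS check described here boils down to: resolve the name restricted to IPv6, resolve it restricted to IPv4, and go CRITICAL if either fails. A minimal sketch of such a plugin's logic, with the decision rule separated from the lookup so it can be tested without a resolver (the plugin's actual thresholds and messages are assumptions):

```python
import socket

# Sketch of a dual-stack Nagios check: a name is healthy only if it
# resolves over IPv6 alone AND over IPv4 alone.
# Nagios plugin exit codes: 0 = OK, 2 = CRITICAL.

def resolves(name, family):
    try:
        return len(socket.getaddrinfo(name, None, family)) > 0
    except socket.gaierror:
        return False

def nagios_state(has_v6, has_v4):
    """Pure decision logic, separated so it can be tested without DNS."""
    if has_v6 and has_v4:
        return 0, "OK: resolvable over both IPv6-only and IPv4-only"
    missing = " and ".join(
        proto for proto, ok in (("IPv6", has_v6), ("IPv4", has_v4)) if not ok)
    return 2, f"CRITICAL: not resolvable over {missing}"

def check(name):
    return nagios_state(resolves(name, socket.AF_INET6),
                        resolves(name, socket.AF_INET))

print(nagios_state(False, True))  # typical misconfiguration: AAAA missing
```

A name with an A record but no AAAA record (or the reverse) is exactly the misconfiguration that a plain dual-stack resolver call would silently paper over, which is why the two lookups are made separately.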
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Becker, J. D.; Merriam, E. W.
1974-01-01
The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the THNEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.
PanDA Pilot Submission using Condor-G: Experience and Improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao X.; Hover John; Wlodek Tomasz
2011-01-01
PanDA (Production and Distributed Analysis) is the workload management system of the ATLAS experiment, used to run managed production and user analysis jobs on the grid. As a late-binding, pilot-based system, the maintenance of a smooth and steady stream of pilot jobs to all grid sites is critical for PanDA operation. The ATLAS Computing Facility (ACF) at BNL, as the ATLAS Tier1 center in the US, operates the pilot submission systems for the US. This is done using the PanDA 'AutoPilot' scheduler component which submits pilot jobs via Condor-G, a grid job scheduling system developed at the University of Wisconsin-Madison. In this paper, we discuss the operation and performance of the Condor-G pilot submission at BNL, with emphasis on the challenges and issues encountered in the real grid production environment. With the close collaboration of the Condor and PanDA teams, the scalability and stability of the overall system has been greatly improved over the last year. We review improvements made to Condor-G resulting from this collaboration, including isolation of site-based issues by running a separate Gridmanager for each remote site, introduction of the 'Nonessential' job attribute to allow Condor to optimize its behavior for the specific character of pilot jobs, better understanding and handling of the Gridmonitor process, as well as better scheduling in the PanDA pilot scheduler component. We will also cover the monitoring of the health of the system.
Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data
NASA Astrophysics Data System (ADS)
Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.
2018-03-01
One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which makes it possible to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, the results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
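The paper does not spell out its detection rule for the third approach; one standard, robust way to flag jobs that deviate from the overall job flow is a modified z-score based on the median absolute deviation (MAD), which a handful of extreme jobs cannot skew. The metric and threshold below are illustrative assumptions.

```python
import statistics

# Sketch of abnormal-job detection: flag jobs whose behavior differs
# sharply from the overall job flow, via MAD-based modified z-scores.
# The single metric (CPU efficiency) and the 3.5 cutoff are illustrative.

def abnormal_jobs(efficiency, threshold=3.5):
    """efficiency: {job_id: cpu_efficiency in [0, 1]}. Returns ids of jobs
    whose modified z-score against the job flow exceeds the threshold."""
    values = list(efficiency.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return [j for j, v in efficiency.items()
            if 0.6745 * abs(v - med) / mad > threshold]

jobs = {"j1": 0.92, "j2": 0.88, "j3": 0.90, "j4": 0.05, "j5": 0.91}
print(abnormal_jobs(jobs))  # ['j4']
```

Using the median rather than the mean matters here: a single pathological job (like j4) would drag a mean-based baseline toward itself and mask its own abnormality.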
Job monitoring on DIRAC for Belle II distributed computing
NASA Astrophysics Data System (ADS)
Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo
2015-12-01
We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, in which information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experience and then visualized. As a result, we are able to detect issues effectively. Finally, we discuss future development toward automating log analysis, notification of issues, and disabling of problematic sites.
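One concrete shape such a passive check can take (the record layout and step name below are hypothetical, not the actual DIRAC schema) is: for each workload-management step, compute per site the fraction of jobs that have sat in that step for too long, and flag sites where the fraction is high.

```python
# Sketch of a passive check over job records pulled from a workload
# database: per site, the fraction of jobs stuck too long in one step.
# Record fields and the "Waiting" step name are assumptions.

def stuck_fraction(jobs, step, now, max_age_s):
    """jobs: list of {"site", "step", "since"} dicts (timestamps in s).
    Returns {site: fraction of its jobs in `step` older than max_age_s}."""
    per_site = {}
    for j in jobs:
        if j["step"] != step:
            continue
        total, stuck = per_site.get(j["site"], (0, 0))
        per_site[j["site"]] = (total + 1, stuck + (now - j["since"] > max_age_s))
    return {site: stuck / total for site, (total, stuck) in per_site.items()}

jobs = [{"site": "A", "step": "Waiting", "since": 0},
        {"site": "A", "step": "Waiting", "since": 9500},
        {"site": "B", "step": "Waiting", "since": 9900}]
print(stuck_fraction(jobs, "Waiting", now=10000, max_age_s=3600))
```

A dashboard plotting this one number per site over time is often enough to spot a drained or misconfigured site before users complain, which is the point of the passive method.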
Integration of Grid and Local Batch Resources at DESY
NASA Astrophysics Data System (ADS)
Beyer, Christoph; Finnern, Thomas; Gellrich, Andreas; Hartmann, Thomas; Kemp, Yves; Lewendel, Birgit
2017-10-01
As one of the largest resource centres, DESY has to support the differing workflows of users from various scientific backgrounds. Users range from HEP experiments in WLCG or Belle II and local HEP users to physicists from other fields such as photon science or accelerator development. By abandoning specific worker node setups in favour of generic flat nodes with middleware resources provided via CVMFS, we gain the flexibility to subsume different use cases in a homogeneous environment. Grid jobs and the local batch system are managed in an HTCondor-based setup, accepting pilot, user and containerized jobs. The unified setup allows dynamic re-assignment of resources between the different use cases. Monitoring is implemented on global batch system metrics as well as on a per-job level utilizing corresponding cgroup information.
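Per-job cgroup monitoring works because the batch system places each job in its own cgroup, whose accounting files expose plain "key value" lines. The sketch below parses such content; in production one would read something like `/sys/fs/cgroup/memory/<job>/memory.stat` (cgroup v1), but the path layout is left as an assumption and the parser takes the text directly so it can be tested.

```python
# Sketch of per-job monitoring from cgroup accounting files, which
# expose "key value" lines (e.g. memory.stat under cgroup v1). The
# exact on-disk path per job depends on the batch system's layout.

def parse_cgroup_stat(text):
    """Parse "key value" lines into {key: int}."""
    stats = {}
    for line in text.strip().splitlines():
        key, value = line.split()
        stats[key] = int(value)
    return stats

def job_rss_mb(stat_text):
    """Resident set size of the job's cgroup, in MiB."""
    return parse_cgroup_stat(stat_text)["rss"] / (1024 * 1024)

sample = "cache 1048576\nrss 268435456\nswap 0"
print(job_rss_mb(sample))  # 256.0
```

Because the kernel does the per-cgroup accounting, the batch system gets accurate per-job memory and CPU figures even for jobs that fork many processes, which process-level polling tends to miss.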
1982-06-01
... start/stop chiller optimization and demand limiting were added. The system monitors a 7,000 ton chiller plant and controls 74 air handlers. The EMCS does ... modify analog limits; adjust setpoints of selected controllers; select manual or automatic control modes; enable and disable individual points; ... or event schedules and controller setpoints; make non-scheduled starts and stops of equipment or disable field panels when required for routine ...
Experience with ATLAS MySQL PanDA database service
NASA Astrophysics Data System (ADS)
Smirnov, Y.; Wlodek, T.; De, K.; Hover, J.; Ozturk, N.; Smith, J.; Wenaus, T.; Yu, D.
2010-04-01
The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.
Application of Intelligent Tutoring Technology to an Apparently Mechanical Task.
ERIC Educational Resources Information Center
Newman, Denis
The increasing automation of many occupations leads to jobs that involve understanding and monitoring the operation of complex computer systems. One case is PATRIOT, an air defense surface-to-air missile system deployed by the U.S. Army. Radar information is processed and presented to the operators in highly abstract form. The system identifies…
Dynamics and Control of Mechanical Energy Propagation in Granular Systems
2012-01-01
recover the linear approximation made by Job et al. [2] for this collision. [1] S. Plimpton, J. Comput. Phys. 117, 1 (1995). [2] S. Job, F. Santibanez, F. Tapia, F. Melo, Ultrasonics 48, 506 (2008).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Amjad Majid; Albert, Don; Andersson, Par
SLURM is an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
Occupational Survey Report. AFSC 4M0X1 Aerospace Physiology
2002-05-01
Coma Patient Monitoring System Using Image Processing
NASA Astrophysics Data System (ADS)
Sankalp, Meenu
2011-12-01
A coma patient monitoring system can provide high-quality healthcare services, offering more convenient and comprehensive medical monitoring in large hospitals, where it is a tough job for medical personnel to watch each patient around the clock. The latest developments in patient monitoring can be used in the Intensive Care Unit (ICU), Critical Care Unit (CCU), and emergency rooms of a hospital. During treatment, the patient monitor continuously observes the coma patient and transmits the important information; in emergency cases, doctors are able to monitor the patient's condition efficiently and with less time consumption, providing a more effective healthcare system. In this way, the continuous monitoring of a coma patient can be simplified. This paper investigates the effects observed in the patient using the "Coma Patient Monitoring System", an advanced product that detects physical changes in the patient's body movement and gives a warning, in the form of an alarm and an LCD display, in less than one second. It also sends an SMS to a person at a distant location if there is any movement in any body part of the patient. The model for the system uses the Keil software for the software implementation of the developed system.
Structural health monitoring of pipelines rehabilitated with lining technology
NASA Astrophysics Data System (ADS)
Farhidzadeh, Alireza; Dehghan-Niri, Ehsan; Salamone, Salvatore
2014-03-01
Damage detection in pipeline systems is a tedious and time-consuming job because of digging requirements, limited accessibility, interference with other facilities, and the extremely wide spread of pipelines across metropolitan areas. A real-time, automated monitoring system can therefore greatly reduce labor, time, and expenditure. This paper presents the results of an experimental study aimed at monitoring the performance of full-scale pipe lining systems, subjected to static and dynamic (seismic) loading, using the Acoustic Emission (AE) technique and Guided Ultrasonic Waves (GUWs). In particular, two damage mechanisms are investigated: 1) delamination between pipeline and liner as the early indicator of damage, and 2) onset of nonlinearity and incipient failure of the liner as the critical damage state.
Tribal Air Quality Monitoring.
ERIC Educational Resources Information Center
Wall, Dennis
2001-01-01
The Institute for Tribal Environmental Professionals (ITEP) (Flagstaff, Arizona) provides training and support for tribal professionals in the technical job skills needed for air quality monitoring and other environmental management tasks. ITEP also arranges internships, job placements, and hands-on training opportunities and supports an…
Association rule mining on grid monitoring data to detect error sources
NASA Astrophysics Data System (ADS)
Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin
2010-04-01
Error handling is a crucial task in an infrastructure as complex as a grid. Several monitoring tools are in place that report failing grid jobs, including their exit codes. However, the exit codes do not always denote the actual fault that caused the job failure; human time and knowledge are required to manually trace errors back to the real underlying fault. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the behavior of grid components by taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically, and this information - expressed by association rules - is visualized in a web interface. This work decreases the time needed for fault recovery and improves the grid's reliability.
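The rule-mining idea above can be illustrated with a minimal sketch (not the authors' implementation; the field names, site names, and confidence threshold are invented for the example): count, for each value of a job attribute, how often jobs with that value failed, and keep the rules "attribute=value -> FAILED" whose confidence clears a cutoff.

```python
# Toy association-rule mining over grid job records: a rule
# "ce=X -> FAILED" is kept when its confidence (failed jobs at X
# divided by all jobs at X) meets a minimum threshold.
from collections import Counter

jobs = [
    {"ce": "ce1.example.org", "exit": "FAILED"},
    {"ce": "ce1.example.org", "exit": "FAILED"},
    {"ce": "ce1.example.org", "exit": "DONE"},
    {"ce": "ce2.example.org", "exit": "DONE"},
    {"ce": "ce2.example.org", "exit": "DONE"},
]

def failure_rules(jobs, key, min_conf=0.5):
    totals, failures = Counter(), Counter()
    for job in jobs:
        totals[job[key]] += 1
        if job["exit"] == "FAILED":
            failures[job[key]] += 1
    # confidence of the rule "key=value -> FAILED"
    return {v: failures[v] / totals[v]
            for v in totals if failures[v] / totals[v] >= min_conf}

rules = failure_rules(jobs, "ce")
assert "ce2.example.org" not in rules          # healthy site produces no rule
```

Real implementations mine rules over many attributes jointly (site, queue, exit code, software version); the single-attribute version above only shows the support/confidence bookkeeping.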
New systems of work organization and workers' health.
Kompier, Michiel A J
2006-12-01
This paper aims at identifying major changes in and around work organizations, their effects upon job characteristics and the health and well-being of today's employees, and related research challenges. Increased internationalization and competition, increased utilization of information and communication technology, the changing workforce configuration, and flexibility and new organizational practices are considered. As work has changed from physical to mental in nature, job characteristics have changed significantly. Meanwhile work and family life have blended. New systems of work organization have become more prevalent, but they do not represent a radical change across the whole economy. New practices may have an adverse impact upon job characteristics, but their effects depend on their design, implementation, and management. Research recommendations include improved monitoring of changes in work organization and studies into their health and safety consequences, intervention studies, studies into the motivating potential of modern work practices, studies of marginalized workers and workers in less developed countries, and "mechanism studies".
Real Time Monitor of Grid job executions
NASA Astrophysics Data System (ADS)
Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.
2010-04-01
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. It is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer, and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data includes not only the job state (i.e. Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as the Virtual Organization and the Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format stored on the web server. This decouples the RTM server database from the clients, removing the bottleneck caused by many clients simultaneously accessing the database. The information can be visualized through either a 2D or a 3D Java-based client, with live job data either overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
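The decoupling step described above, where an enquirer turns database rows into a static XML snapshot that clients fetch instead of querying the database, can be sketched as follows (a hedged illustration; the element and attribute names are assumptions, not the RTM schema):

```python
# Enquirer-style conversion: job rows -> XML snapshot for web clients.
# Clients read the published XML, so the database sees only one reader.
import xml.etree.ElementTree as ET

def jobs_to_xml(rows):
    root = ET.Element("jobs")
    for row in rows:
        job = ET.SubElement(root, "job", id=row["id"], state=row["state"])
        job.set("ce", row.get("ce", "unknown"))   # CE queue, if known
    return ET.tostring(root, encoding="unicode")

rows = [
    {"id": "42", "state": "Running", "ce": "ce.example.org"},
    {"id": "43", "state": "Scheduled"},
]
xml_snapshot = jobs_to_xml(rows)
assert xml_snapshot.startswith("<jobs>")
assert 'state="Running"' in xml_snapshot
```

In the real system the snapshot would be rewritten every minute and served as a plain file by the web server, so the number of clients never affects database load.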
Try P.R.A.I.S.E. - Positive Reinforcement And Individualized Systematic Economics.
ERIC Educational Resources Information Center
Wollam, Scott
Described is a multi-faceted money system which utilizes positive and negative reinforcement while at the same time incorporating peer pressure and reinforcement for behavior modification. The system uses such items as money, checks, deposit slips, and bank books. Children have jobs such as pencil sellers, banker, or door monitor, and receive pay…
Bena, Antonella; Giraudo, Massimiliano
2013-01-01
To study the relationship between job tenure and injury risk, controlling for individual factors and company characteristics. Analysis of incidence and injury risk by job tenure, controlling for gender, age, nationality, economic activity, and firm size. Sample of 7% of Italian workers registered in the INPS (National Institute of Social Insurance) database. Private-sector employees who worked as blue-collar workers or apprentices. First-time occupational injuries, all occupational injuries, serious occupational injuries. Our findings show an increase in injury risk among those who start a new job and an inverse relationship between job tenure and injury risk. Multivariate analysis confirms these results. Recommendations for improving this situation include the adoption of organizational models that provide periods of mentoring by colleagues already in the company and the assignment of simple and less hazardous tasks. The economic crisis may exacerbate this problem: it is important for Italy to improve the systems for monitoring the relations between temporary employment and health.
High job control enhances vagal recovery in media work.
Lindholm, Harri; Sinisalo, Juha; Ahlberg, Jari; Jahkola, Antti; Partinen, Markku; Hublin, Christer; Savolainen, Aslak
2009-12-01
Job strain has been linked to increased risk of cardiovascular diseases. In modern media work, time pressures, rapidly changing situations, computer work and irregular working hours are common. Heart rate variability (HRV) has been widely used to monitor sympathovagal balance. Autonomic imbalance may play an additive role in the development of cardiovascular diseases. To study the effects of work demands and job control on the autonomic nervous system recovery among the media personnel. From the cross-sectional postal survey of the employees in Finnish Broadcasting Company (n = 874), three age cohorts (n = 132) were randomly selected for an analysis of HRV in 24 h electrocardiography recordings. In the middle-aged group, those who experienced high job control had significantly better vagal recovery than those with low or moderate control (P < 0.01). Among young and ageing employees, job control did not associate with autonomic recovery. High job control over work rather than low demands seemed to enhance autonomic recovery in middle-aged media workers. This was independent of poor health habits such as smoking, physical inactivity or alcohol consumption.
Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M.
2009-09-09
SLURM is an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
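The three functions named in the abstract can be mimicked in a toy resource manager (a conceptual sketch only, not the SLURM API): exclusive node allocation, a place to track running work, and FIFO arbitration of a pending queue.

```python
# Toy cluster resource manager illustrating SLURM's three functions:
# allocate nodes, track the running allocation, queue conflicting work.
from collections import deque

class ToyScheduler:
    def __init__(self, total_nodes):
        self.free_nodes = total_nodes
        self.pending = deque()          # queue of pending work
        self.running = {}               # job_id -> nodes held exclusively

    def submit(self, job_id, nodes_needed):
        self.pending.append((job_id, nodes_needed))
        self._dispatch()

    def _dispatch(self):
        # Arbitrate: start jobs in FIFO order while nodes are available.
        while self.pending and self.pending[0][1] <= self.free_nodes:
            job_id, n = self.pending.popleft()
            self.free_nodes -= n
            self.running[job_id] = n

    def finish(self, job_id):
        self.free_nodes += self.running.pop(job_id)
        self._dispatch()                # re-check the pending queue

sched = ToyScheduler(total_nodes=8)
sched.submit("mc_sim", 6)
sched.submit("analysis", 4)             # must wait: only 2 nodes free
assert list(sched.running) == ["mc_sim"]
sched.finish("mc_sim")
assert list(sched.running) == ["analysis"]
```

SLURM itself adds priorities, backfill, preemption and fault tolerance on top of this basic allocate/execute/queue cycle; the sketch shows only the cycle.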
Monitoring of computing resource use of active software releases at ATLAS
NASA Astrophysics Data System (ADS)
Limosani, Antonio; ATLAS Collaboration
2017-10-01
The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for the computing resources needed for event reconstruction. We report on the evolution of resource usage, in terms of CPU and RAM, in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows, from Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted, auto-generated web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is preferentially routed to domain leaders and developers through JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse at the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC, and anticipate how performance monitoring will evolve to understand and benchmark future workflows.
Application-level regression testing framework using Jenkins
Budiardja, Reuben; Bouvet, Timothy; Arnold, Galen
2017-09-26
Monitoring and testing for regression of large-scale systems such as NCSA's Blue Waters supercomputer are challenging tasks. In this paper, we describe our solution for performing those tasks. The goal was to find an automated way of running user-level regression tests to evaluate system usability and performance. Jenkins, an automation server, was chosen for its versatility, large user base, and multitude of plugins, including plugins for collecting data and plotting test results over time. We also describe our Jenkins deployment, which launches and monitors jobs on a remote HPC system, performs authentication with one-time passwords, and integrates with our LDAP server for authorization. We show some use cases and describe our best practices for successfully using Jenkins as a user-level, system-wide regression testing and monitoring framework for large supercomputer systems.
Experiences running NASTRAN on the Microvax 2 computer
NASA Technical Reports Server (NTRS)
Butler, Thomas G.; Mitchell, Reginald S.
1987-01-01
The MicroVAX operates NASTRAN so well that the only detectable difference in its operation compared to an 11/780 VAX is in the execution time. On the modest installation described here, the engineer has all of the tools he needs to do an excellent job of analysis. System configuration decisions, system sizing, preparation of the system disk, definition of user quotas, installation, monitoring of system errors, and operation policies are discussed.
The event notification and alarm system for the Open Science Grid operations center
NASA Astrophysics Data System (ADS)
Hayashi, S.; Teige, S.; Quick, R.
2012-12-01
The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems, or generally use the OSG. For this reason these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent to, for example, email, SMS, Twitter, an instant-message server, etc. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.
NASA Astrophysics Data System (ADS)
Witt, J.; Gumley, L.; Braun, J.; Dutcher, S.; Flynn, B.
2017-12-01
The Atmosphere SIPS (Science Investigator-led Processing Systems) team at the Space Science and Engineering Center (SSEC), which is funded through a NASA contract, creates Level 2 cloud and aerosol products from the VIIRS instrument aboard the S-NPP satellite. In order to monitor the ingest and processing of files, we have developed an extensive monitoring system to observe every step in the process. The status grid is used for real time monitoring, and shows the current state of the system, including what files we have and whether or not we are meeting our latency requirements. Our snapshot tool displays the state of the system in the past. It displays which files were available at a given hour and is used for historical and backtracking purposes. In addition to these grid like tools we have created histograms and other statistical graphs for tracking processing and ingest metrics, such as total processing time, job queue time, and latency statistics.
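The latency bookkeeping described above, deriving queue time and total latency per job and checking them against a requirement, can be sketched as follows (field names and the latency requirement are assumptions for illustration, not the SIPS schema):

```python
# Per-job latency metrics of the kind a status grid would display:
# queue time (submit -> start) and total latency (submit -> finish),
# flagged against a latency requirement in seconds.
def latency_report(jobs, requirement_s):
    report = []
    for j in jobs:
        queue_time = j["start"] - j["submitted"]
        total = j["finished"] - j["submitted"]
        report.append({"job": j["job"], "queue_s": queue_time,
                       "latency_s": total, "ok": total <= requirement_s})
    return report

jobs = [
    {"job": "granule-001", "submitted": 0, "start": 30, "finished": 500},
    {"job": "granule-002", "submitted": 0, "start": 400, "finished": 2000},
]
rep = latency_report(jobs, requirement_s=1800)
assert rep[0]["ok"] and not rep[1]["ok"]
```

Aggregating these per-job records over an hour gives exactly the kind of histogram and "are we meeting latency?" cell that the status grid and snapshot tools present.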
Development of noSQL data storage for the ATLAS PanDA Monitoring System
NASA Astrophysics Data System (ADS)
Potekhin, M.; ATLAS Collaboration
2012-06-01
For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking, PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as the data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as determined by testing with actual data and realistic loads.
NASA Astrophysics Data System (ADS)
Waddell, K.
2015-12-01
Middle-skilled workers are those whose jobs require considerable skill but not an advanced degree. Nationwide, one-third of the projected job growth for 2010-2020 will require middle-skilled workers. The educational paths to these jobs include career and technical education (CTE), certificates and associate's degrees from community colleges, apprenticeship programs, and training provided by employers. In the oil industry, the demand is expected to reach about 150,000 jobs. In environmental restoration and monitoring, there will be a need for at least 15,000 middle-skilled workers. Examples of the types of jobs include geological and petroleum technicians, derrick and drill operators, and pump system and refinery operators for the oil and gas sector. For the environmental restoration and monitoring sector, the types of jobs include environmental science technicians and forest (and coastal) conservation technicians and workers. However, all of these numbers will be influenced by the growth and contraction of the regional or national economy, which is not uncommon in the private sector. Over the past year, for example, the oil and gas industry has shed approximately 75,000 jobs (out of a workforce of 600,000) in the United States, due almost exclusively to the drop in oil prices globally. A disproportionate number of the lost jobs were among the middle-skilled workforce. Meanwhile, the recent settlements stemming from the Deepwater Horizon oil spill are expected to create a surge of environmental restoration activity in the Gulf of Mexico region that has the potential to create thousands of new jobs over the next decade and beyond. Consequently, there is a need to develop education, training and apprenticeship programs that will help develop flexibility and complementary skill sets among middle-skilled workers, which could help reduce the impacts of economic downturns and meet the needs of newly expanding sectors such as the environmental restoration field.
This presentation will discuss the programs, activities, and frameworks needed to build this capacity in the middle-skilled workforce over the coming years.
LHCb experience with running jobs in virtual machines
NASA Astrophysics Data System (ADS)
McNab, A.; Stagni, F.; Luzzi, C.
2015-12-01
The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.
IT Security Support for the Spaceport Command Control System Development
NASA Technical Reports Server (NTRS)
Varise, Brian
2014-01-01
My job title is IT Security Support for the Spaceport Command & Control System Development. As a cyber-security analyst, it is my job to ensure NASA's information stays safe from cyber threats, such as viruses, malware and denial-of-service attacks, by establishing and enforcing system access controls. Security is very important in the world of technology, and it is applied everywhere from personal computers to giant networks run by government agencies worldwide. Without constant monitoring and analysis, businesses, public organizations and government agencies are vulnerable to potentially harmful infiltration of their computer information systems. It is my responsibility to ensure authorized access by examining improper access, reporting violations, revoking access, monitoring information requests by new programming, and recommending improvements. My department oversees the Launch Control System and networks. An audit will be conducted for the LCS based on compliance with the Federal Information Security Management Act (FISMA) and the National Institute of Standards and Technology (NIST). I recently finished analyzing the SANS top 20 critical controls to give cost-effective recommendations on various software and hardware products for compliance. Upon completion of this internship, I will have successfully carried out my duties and gained knowledge that will be helpful to my future career as a Cyber Security Analyst.
Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid
NASA Astrophysics Data System (ADS)
Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration
2014-06-01
The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.
Karasek, Robert; Choi, BongKyoo; Ostergren, Per-Olof; Ferrario, Marco; De Smet, Patrick
2007-01-01
Little has been known about the comparative scale properties of "JCQ-like" questionnaires with respect to the JCQ. We assess the validity and reliability of two methods for generating comparable scale scores between the Job Content Questionnaire (JCQ) and JCQ-like questionnaires in sub-populations of the large Job Stress, Absenteeism and Coronary Heart Disease European Cooperative (JACE) study: the Swedish version of the Demand-Control Questionnaire (DCQ) and a transformed Multinational Monitoring of Trends and Determinants in Cardiovascular Disease Project (MONICA) questionnaire. A random population sample of all Malmo males and females aged 52-58 years (n = 682) was given a new test questionnaire containing both instruments (the JCQ and the DCQ). Comparability-facilitating algorithms were created (Method I). For the transformed Milan MONICA questionnaire, a simple weighting system was used (Method II). The converted scale scores from the JCQ-like questionnaires were found to be reliable and highly correlated with those of the original JCQ. However, agreement for the high job strain group between the JCQ and the DCQ, and between the JCQ and the DCQ with Method I applied, was only moderate (Kappa). Use of a multiple-level job strain scale generated higher levels of job strain agreement, as did a new job strain definition that excludes the intermediate levels of the job strain distribution. The two methods were valid and generally reliable.
Owen, D C; Boswell, C; Opton, L; Franco, L; Meriwether, C
2018-06-01
Baseline information was obtained from School of Nursing faculty and staff about perceptions of job satisfaction, empowerment, and engagement in the workplace before the introduction of an integrated faculty and staff shared governance system. Governance structure in schools of nursing has the potential to enhance, or to impose constraints on, the work environment for faculty, staff, and stakeholders. Faculty and staff perceptions of job satisfaction and engagement in the workplace before the introduction of the new model of shared governance are presented. Statistical differences were found between faculty and staff responses on the overall (total) scales and select subscales, and the groups' patterns of relationships differed. We provide a description of the first shared governance structure derived from the perspective of shared governance as defined and operationalized in Magnet Hospital health care systems, which includes administrators, faculty, and staff in decision-making councils. As academia embarks on this change in governance structure, from hierarchical to a more flattened approach, the findings support examining levels of work engagement, structural and psychological empowerment, and job satisfaction as key monitors of the work environment.
Work stress and innate immune response.
Boscolo, P; Di Gioacchino, M; Reale, M; Muraro, R; Di Giampaolo, L
2011-01-01
Several reports highlight the relationship between blood NK cytotoxic activity and lifestyle. An easy lifestyle, including physical activity, healthy dietary habits and good mental health, is characterized by an efficient immune response. Lifestyle is related to the type of occupational activity, since work has a central part in life, either as a source of income or as a contribution to social identity. Not only occupational stress, but also job loss or insecurity, are thus considered serious stressful situations, inducing emotional disorders which may affect both the neuroendocrine and immune systems; reduced reactivity to mitogens and/or decreased blood NK cytotoxic activity has been reported in unemployed workers and in those with a high perception of job insecurity and/or job stress. Although genetic factors have a key role in the pathogenesis of autoimmune disorders, occupational stress (as in night shifts) has been reported to be associated with an increased incidence of autoimmune disorders. Monitoring the blood NK response may thus be included in health programs as an indirect index of a stressful job and/or poor lifestyle.
Computer Managed Instruction at Arthur Andersen & Company: A Status Report.
ERIC Educational Resources Information Center
Dennis, Verl E.; Gruner, Dennis
1992-01-01
Computer managed instruction (CMI) based on the principle of mastery learning has been cost effective for job training in the tax division of Arthur Andersen & Company. The CMI software system, which uses computerized pretests and posttests to monitor training, has been upgraded from microcomputer use to local area networks. Success factors at…
NASA Astrophysics Data System (ADS)
Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined along national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-Terabyte tasks to run efficiently without human intervention. We have implemented a "train" model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available, and physics-group data processing and analysis to be chained with central production by the experiment. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run 2. We also report the performance of the system and how various workflows, such as data (re)processing, Monte Carlo and physics-group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
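The dynamic matching of tasks to heterogeneous resources described above can be sketched as a simple scoring problem; the field names, hard constraints and weights below are illustrative assumptions, not the actual ProdSys2 scheduling policy.

```python
# Hypothetical sketch: score each site against a task's requirements
# (memory, input/output size) and pick the best eligible match.

def score_site(site, task):
    """Return a score for running `task` on `site`, or None if unfit."""
    if site["free_memory_mb"] < task["memory_mb"]:
        return None                     # hard constraint: not enough memory
    if site["free_disk_gb"] < task["input_gb"] + task["output_gb"]:
        return None                     # hard constraint: not enough scratch disk
    # Soft criteria: prefer many free CPU slots and a short queue.
    return site["free_cpus"] - 0.5 * site["queue_length"]

def assign(task, sites):
    """Pick the highest-scoring eligible site for a task."""
    scored = [(score_site(s, task), s["name"]) for s in sites]
    eligible = [(sc, name) for sc, name in scored if sc is not None]
    return max(eligible)[1] if eligible else None

sites = [
    {"name": "GRID-A", "free_memory_mb": 4000, "free_disk_gb": 50,
     "free_cpus": 120, "queue_length": 200},
    {"name": "CLOUD-B", "free_memory_mb": 8000, "free_disk_gb": 500,
     "free_cpus": 40, "queue_length": 10},
]
task = {"memory_mb": 6000, "input_gb": 20, "output_gb": 5}
best = assign(task, sites)   # GRID-A is excluded by the memory constraint
```

The same pattern extends naturally to supercomputer or volunteer resources: each resource type just exposes the same descriptive fields to the scorer.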
NASA Astrophysics Data System (ADS)
Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan
2010-04-01
We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). An instance of the SL4 AMI is then started on EC2, and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs, and retrieving job output from S3 is controlled from a client machine using Python scripts that implement the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
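The client-side control loop described above (launch instance, run job, export output to S3, terminate) can be sketched as follows. A generic `client` abstraction stands in for the actual boto EC2/S3 calls, and all names here are illustrative, not the real ATLAS scripts.

```python
# Minimal sketch of the job lifecycle driver; a fake client lets the
# control flow be exercised without AWS credentials.

def run_job(client, ami_id, job):
    """Drive one job through its lifecycle; return the recorded steps."""
    steps = []
    instance = client.launch(ami_id)                  # cf. boto run_instances
    steps.append(("launched", instance))
    client.start_job(instance, job)                   # start the payload remotely
    steps.append(("running", job))
    client.upload_output(instance, "s3://results/" + job)  # export output to S3
    steps.append(("exported", job))
    client.terminate(instance)                        # cf. boto terminate_instances
    steps.append(("terminated", instance))
    return steps

class FakeClient:
    """Stand-in for the boto-based client, for demonstration only."""
    def launch(self, ami_id):
        return "i-0001"
    def start_job(self, instance, job):
        pass
    def upload_output(self, instance, dest):
        pass
    def terminate(self, instance):
        pass

steps = run_job(FakeClient(), "ami-sl4", "reco-job-42")
```

In the real system each of these steps would additionally push status updates to SimpleDB, so the client can monitor many instances at once.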
20 CFR 655.1308 - Offered wage rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... Recruitment for this purpose begins when the job order is accepted by the SWA for posting. (d) Wage offer. The... job offers for beginning level employees who have a basic understanding of the occupation. These... monitored and reviewed for accuracy. (2) Level II wage rates are assigned to job offers for employees who...
32 CFR 1656.4 - Alternative Service Office: jurisdiction and authority.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for job placement; (5) Monitor the ASW's job performance; (6) Issue a certificate of satisfactory... assigned to perform alternative service. (b) The ASO shall: (1) Evaluate and approve jobs and employers for Alternative Service; (2) Order the ASW to report for alternative service work; (3) Issue such orders as are...
gLExec and MyProxy integration in the ATLAS/OSG PanDA workload management system
NASA Astrophysics Data System (ADS)
Caballero, J.; Hover, J.; Litmaath, M.; Maeno, T.; Nilsson, P.; Potekhin, M.; Wenaus, T.; Zhao, X.
2010-04-01
Worker nodes on the grid exhibit great diversity, making it difficult to offer uniform processing resources. A pilot job architecture, which probes the environment on the remote worker node before pulling down a payload job, can help. Pilot jobs become smart wrappers, preparing an appropriate environment for job execution and providing logging and monitoring capabilities. PanDA (Production and Distributed Analysis), an ATLAS and OSG workload management system, follows this design. However, in the simplest (and most efficient) pilot submission approach of identical pilots carrying the same identifying grid proxy, end-user accounting by the site can only be done with application-level information (PanDA maintains its own end-user accounting), and end-user jobs run with the identity and privileges of the proxy carried by the pilots, which may be seen as a security risk. To address these issues, we have enabled PanDA to use gLExec, a tool provided by EGEE which runs payload jobs under an end-user's identity. End-user proxies are pre-staged in a credential caching service, MyProxy, and the information needed by the pilots to access them is stored in the PanDA DB. gLExec then extracts from the user's proxy the proper identity under which to run. We describe the deployment, installation, and configuration of gLExec, and how PanDA components have been augmented to use it. We describe how difficulties were overcome, and how security risks have been mitigated. Results are presented from OSG and EGEE Grid environments performing ATLAS analysis using PanDA and gLExec.
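The pilot's identity-switching decision described above can be sketched in a few lines; the command construction is purely illustrative (the real pilot stages the end-user proxy from MyProxy and invokes gLExec with site-specific configuration omitted here).

```python
# Toy sketch: a pilot builds the command for its payload job, switching to
# gLExec when it is available and an end-user proxy has been retrieved.

def build_command(payload, user_proxy=None, glexec_available=False):
    """Return the argv the pilot would execute for a payload job."""
    if glexec_available and user_proxy:
        # Run under the end user's identity: gLExec maps the user proxy
        # to a local account before executing the payload.
        return ["glexec", payload]      # proxy location passed via environment
    # Fallback: payload runs with the pilot's own (shared) identity.
    return [payload]

glexec_cmd = build_command("run_analysis.sh",
                           user_proxy="/tmp/x509_user",
                           glexec_available=True)
plain_cmd = build_command("run_analysis.sh")
```

The key point the sketch captures is that the pilot itself stays generic: only the final execution step changes identity, which is what removes the shared-proxy accounting and security concerns.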
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
2001-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
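The idea of restricting manager/application interactions to well-defined points can be illustrated with a small sketch; the class and phase names are invented for illustration and do not reflect the patented interface.

```python
# Sketch: the job adopts a new processor-partition size only at phase
# boundaries, where data redistribution is cheap.

class ResourceManager:
    """Stand-in for the runtime system that owns processor partitions."""
    def __init__(self, procs):
        self.requested_procs = procs
    def request(self, n):
        self.requested_procs = n

mgr = ResourceManager(4)
trace = []
for phase in ["init", "solve", "output"]:
    # Interaction point: only here does the job read the currently
    # requested partition size, never in the middle of a phase.
    procs = mgr.requested_procs
    trace.append((phase, procs))
    if phase == "init":
        mgr.request(8)   # e.g. the scheduler grows the partition after "init"
```

Because the resize takes effect only between phases, the "solve" and "output" phases see 8 processors while "init" ran on 4, with no mid-phase reconfiguration.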
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
1999-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
NASA Technical Reports Server (NTRS)
1990-01-01
Vadeko International, Inc., of Mississauga, Ontario, developed the Robotic Paint Application System for Canadian National Railways (CN). The robotic paint shop has two parallel paint booths, allowing simultaneous painting of two hopper cars. Each booth has three robots: two that move along wall-mounted rails to spray-paint the exterior, and a third that is lowered through a hatch in the railcar's top to paint the interior. A fully computerized system controls the movement of the robots and the painting process. The robots can do in four hours a job that formerly took 32 hours. The robotic system applies a more thorough coating, which CN expects will double the useful life of its hoppers and improve cost efficiency. Human painters no longer have to handle this difficult and hazardous job; CN paint shop employees have been retrained to operate the computer system that controls the robots. In addition to large-scale robotic systems, Vadeko International is engaged in such other areas of technology as flexible automation, nuclear maintenance, underwater vehicles, thin film deposition and wide-band monitoring.
The distributed production system of the SuperB project: description and results
NASA Astrophysics Data System (ADS)
Brown, D.; Corvo, M.; Di Simone, A.; Fella, A.; Luppi, E.; Paoloni, E.; Stroili, R.; Tomassetti, L.
2011-12-01
The SuperB experiment needs large samples of Monte Carlo simulated events in order to finalize the detector design and to estimate the data analysis performance. The requirements are beyond the capabilities of a single computing farm, so a distributed production model capable of exploiting the existing HEP worldwide distributed computing infrastructure is needed. In this paper we describe the set of tools that have been developed to manage the production of the required simulated events. The production of events follows three main phases: distribution of input data files to the remote-site Storage Elements (SEs); job submission, via the SuperB GANGA interface, to all available remote sites; and transfer of output files to the CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling and bookkeeping. A replication mechanism allows storing the job output on the local site SE. Results from the official 2010 productions are reported.
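The three production phases above can be sketched as a single bookkeeping-producing pipeline; the site names, file names and record layout are made up for illustration and are not the SuperB tools' actual interfaces.

```python
# Sketch of the distribute / submit / retrieve phases, each logged for
# bookkeeping and consistency checking.

def produce(inputs, sites):
    """Run the three phases for all inputs; return the bookkeeping log."""
    log = []
    # Phase 1: distribute input files to each remote site's Storage Element.
    for site in sites:
        for f in inputs:
            log.append(("distributed", f, site))
    # Phase 2: one simulation job per (site, input) pair.
    jobs = [(site, f) for site in sites for f in inputs]
    for site, f in jobs:
        log.append(("submitted", f, site))
    # Phase 3: output files transferred back to the central repository.
    for site, f in jobs:
        log.append(("retrieved", f + ".out", "CNAF"))
    return log

log = produce(["bkg.in"], ["SITE-1", "SITE-2"])
```

Keeping every phase transition in one log is what makes the later consistency checks cheap: a missing "retrieved" record for a "submitted" job immediately flags a lost output.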
Petersson, E-L; Wikberg, C; Westman, J; Ariai, N; Nejati, S; Björkelund, C
2018-05-01
Depression reduces individuals' function and work ability and is associated with both frequent and long-term sickness absence. We investigated whether monitoring the course of depression with a self-assessment instrument in recurrent general practitioner (GP) consultations leads to improved work ability, decreased job strain, and improved quality of life among primary care patients. The participants were 183 primary care patients who worked. In addition to regular treatment (control group), intervention patients received evaluation and monitoring and used the MADRS-S depression scale during GP visits at baseline and at 4, 8, and 12 weeks. Work ability, quality of life and job strain were the outcome measures. Depression symptoms decreased in all patients. The Work Ability Index (WAI) increased significantly more steeply at 3 months in the intervention group, and high social support was perceived significantly more frequently in the intervention group than in the control group. Monitoring the course of depression with a self-assessment instrument in recurrent GP consultations thus seems to lead to improved self-assessed work ability and increased high social support, but not to reduced job strain or increased quality of life, compared to treatment as usual (TAU). Future studies of rehabilitative efforts that seek to influence work ability should probably also include more active interventions at the workplace.
A scalable architecture for online anomaly detection of WLCG batch jobs
NASA Astrophysics Data System (ADS)
Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.
2016-10-01
For data centres it is increasingly important to monitor network usage and learn from network usage patterns. In particular, configuration issues or misbehaving batch jobs that prevent smooth operation need to be detected as early as possible. At the GridKa data and computing centre we therefore operate a tool, BPNetMon, for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information by itself is not sufficient to detect anomalies, for several reasons: for example, the underlying job distribution on a single worker node might change, or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale with respect to network communication or computational cost. We therefore propose a scalable architecture based on the concepts of a super-peer network.
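The local-plus-aggregated approach can be illustrated with a toy sketch: each worker node flags jobs whose traffic deviates strongly from its own recent baseline, and only the compact flag lists travel up to a super-peer. The threshold, detector and node names are illustrative assumptions, not BPNetMon's actual method.

```python
# Toy local anomaly detector (z-score against the node's own baseline)
# plus a super-peer aggregation step over the per-node flag lists.
from statistics import mean, stdev

def local_anomalies(samples, threshold=2.0):
    """Flag samples more than `threshold` standard deviations off the mean."""
    m, s = mean(samples), stdev(samples)
    if s == 0:
        return []
    return [x for x in samples if abs(x - m) / s > threshold]

# Worker node: mostly quiet traffic, one misbehaving job.
traffic = [10, 12, 11, 9, 10, 11, 10, 12, 500]
flags = local_anomalies(traffic)

def aggregate(reports):
    """Super-peer view: keep only nodes that reported anomalies."""
    return {node: f for node, f in reports.items() if f}

suspicious = aggregate({"wn01": flags, "wn02": []})
```

The scalability argument is visible even in the sketch: the super-peer receives a few flagged values per node instead of the full traffic time series.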
Commands to Monitor and Control Jobs on Peregrine
also be used with flags to return more or less information. For example showq -u
Recognizing Job Safety Hazards. Module SH-09. Safety and Health.
ERIC Educational Resources Information Center
Center for Occupational Research and Development, Inc., Waco, TX.
This student module on recognizing job safety hazards is one of 50 modules concerned with job safety and health. This module details employee and employer responsibilities in correcting and monitoring safety hazards. Following the introduction, 10 objectives (each keyed to a page in the text) the student is expected to accomplish are listed (e.g.,…
Peterson, Curtis W; Rose, Donny; Mink, Jonah; Levitz, David
2016-05-16
In many developing nations, cervical cancer screening is done by visual inspection with acetic acid (VIA). Monitoring and evaluation (M&E) of such screening programs is challenging. An enhanced visual assessment (EVA) system was developed to augment VIA procedures in low-resource settings. The EVA System consists of a mobile colposcope built around a smartphone, and an online image portal for storing and annotating images. A smartphone app is used to control the mobile colposcope, and upload pictures to the image portal. In this paper, a new app feature that documents clinical decisions using an integrated job aid was deployed in a cervical cancer screening camp in Kenya. Six organizations conducting VIA used the EVA System to screen 824 patients over the course of a week, and providers recorded their diagnoses and treatments in the application. Real-time aggregated statistics were broadcast on a public website. Screening organizations were able to assess the number of patients screened, alongside treatment rates, and the patients who tested positive and required treatment in real time, which allowed them to make adjustments as needed. The real-time M&E enabled by "smart" diagnostic medical devices holds promise for broader use in screening programs in low-resource settings.
Occupational stress in human computer interaction.
Smith, M J; Conway, F T; Karsh, B T
1999-04-01
There have been a variety of research approaches that have examined the stress issues related to human computer interaction, including laboratory studies, cross-sectional surveys, longitudinal case studies and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic and psychological indicators of stress that are related to work activities where human computer interaction occurs. Many of the stressors of human computer interaction at work are similar to those stressors that have historically been observed in other automated jobs. These include high workload, high work pressure, diminished job control, inadequate employee training to use new technology, monotonous tasks, poor supervisory relations, and fear for job security. New stressors have emerged that can be tied primarily to human computer interaction. These include technology breakdowns, technology slowdowns, and electronic performance monitoring. The effects of the stress of human computer interaction in the workplace are increased physiological arousal; somatic complaints, especially of the musculoskeletal system; mood disturbances, particularly anxiety, fear and anger; and diminished quality of working life, such as reduced job satisfaction. Interventions to reduce the stress of computer technology have included improved technology implementation approaches and increased employee participation in implementation. Recommendations for ways to reduce the stress of human computer interaction at work are presented. These include proper ergonomic conditions, increased organizational support, improved job content, proper workload to decrease work pressure, and enhanced opportunities for social support. A model approach to the design of human computer interaction at work that focuses on the system "balance" is proposed.
Automated Euler and Navier-Stokes Database Generation for a Glide-Back Booster
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Rogers, Stuart E.; Aftosmis, Mike J.; Pandya, Shishir A.; Ahmad, Jasim U.; Tejnil, Edward
2004-01-01
The past two decades have seen a sustained increase in the use of high fidelity Computational Fluid Dynamics (CFD) in basic research, aircraft design, and the analysis of post-design issues. As the fidelity of a CFD method increases, the number of cases that can be readily and affordably computed greatly diminishes. However, computer speeds now exceed 2 GHz, hundreds of processors are currently available and more affordable, and advances in parallel CFD algorithms scale more readily with large numbers of processors. All of these factors make it feasible to compute thousands of high fidelity cases. However, there still remains the overwhelming task of monitoring the solution process. This paper presents an approach to automate the CFD solution process. A new software tool, AeroDB, is used to compute thousands of Euler and Navier-Stokes solutions for a 2nd generation glide-back booster in one week. The solution process exploits a common job-submission grid environment, the NASA Information Power Grid (IPG), using 13 computers located at 4 different geographical sites. Process automation and web-based access to a MySQL database greatly reduce the user workload, removing much of the tedium and tendency for user input errors. The AeroDB framework is shown. The user submits/deletes jobs, monitors AeroDB's progress, and retrieves data and plots via a web portal. Once a job is in the database, a job launcher uses an IPG resource broker to decide which computers are best suited to run the job. Job/code requirements, the number of CPUs free on a remote system, and queue lengths are some of the parameters the broker takes into account. The Globus software provides secure services for user authentication, remote shell execution, and secure file transfers over an open network. AeroDB automatically decides when a job is completed.
Currently, the Cart3D unstructured flow solver is used for the Euler equations, and the Overflow structured overset flow solver is used for the Navier-Stokes equations. Other codes can be readily included into the AeroDB framework.
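A broker decision of the kind described above can be sketched as a ranking over candidate machines; the host names, fields and tie-breaking rule below are illustrative, not the actual IPG resource broker interface.

```python
# Sketch: of the machines with enough free CPUs for the job, prefer the
# shortest queue, breaking ties with the most free CPUs.

def pick_host(job_cpus, hosts):
    """Return the name of the best host for a job needing `job_cpus` CPUs."""
    eligible = [h for h in hosts if h["free_cpus"] >= job_cpus]
    if not eligible:
        return None
    best = min(eligible, key=lambda h: (h["queue_length"], -h["free_cpus"]))
    return best["name"]

hosts = [
    {"name": "site1", "free_cpus": 64, "queue_length": 12},
    {"name": "site2", "free_cpus": 32, "queue_length": 2},
    {"name": "site3", "free_cpus": 16, "queue_length": 0},
]
host = pick_host(32, hosts)   # site3 lacks CPUs; site2 has the shorter queue
```

Additional broker criteria (code requirements, data locality) slot naturally into either the eligibility filter or the ranking key.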
Widerszal-Bazyl, M; Cieślak, R
2000-01-01
Many studies on the impact of psychosocial working conditions on health show that psychosocial stress at work is an important risk factor endangering workers' health; it should therefore be monitored constantly, like other work hazards. The paper presents a newly developed instrument for stress monitoring called the Psychosocial Working Conditions Questionnaire (PWC). Its structure is based on Robert Karasek's model of job stress (Karasek, 1979; Karasek & Theorell, 1990). It consists of three main scales (Job Demands, Job Control, and Social Support) and two additional scales adapted from the Occupational Stress Questionnaire (Elo, Leppanen, Lindstrom, & Ropponen, 1992): Well-Being and Desired Changes. A study of eight occupational groups (bank and insurance specialists, middle medical personnel, construction workers, shop assistants, government and self-government administration officers, computer scientists, public transport drivers, and teachers; N = 3,669) indicates that the PWC has satisfactory psychometric parameters. Norms for the eight groups were developed.
Job strain, blood pressure and response to uncontrollable stress.
Steptoe, A; Cropley, M; Joekes, K
1999-02-01
The association between cardiovascular disease risk and job strain (high-demand, low-control work) may be mediated by heightened physiological stress responsivity. We hypothesized that high levels of job strain lead to increased cardiovascular responses to uncontrollable, but not controllable, stressors. Associations between job strain and blood pressure reductions after the working day (unwinding) were also assessed. We assessed cardiovascular responses to standardized behavioral tasks and carried out ambulatory monitoring of blood pressure and heart rate during a working day and evening. We studied 162 school teachers (60 men, 102 women) selected from a larger survey as experiencing high or low job strain. Blood pressure, heart rate and electrodermal responses to an externally paced (uncontrollable) task and a self-paced (controllable) task were assessed. Blood pressure was monitored using ambulatory apparatus from 0900 to 2230 h on a working day. The high and low job-strain groups did not differ in demographic factors, body mass or resting cardiovascular activity. Blood pressure reactions to the uncontrollable task were greater in the high than in the low job-strain group, but responses to the controllable task did not differ significantly between groups. Systolic and diastolic blood pressure did not differ between groups over the working day, but decreased to a greater extent in the evening in subjects with low job strain. Job strain is thus associated with a heightened blood pressure response to uncontrollable but not controllable tasks. The failure of subjects with high job strain to show reduced blood pressure in the evening may be a manifestation of chronic allostatic load.
1998-01-01
Performing organization: U.S. Department of Labor, Occupational Safety & Health Administration, 200 Constitution Avenue, Washington, DC 20210. Report number: OSHA 3071. The report covers identifying existing or potential job hazards (both safety and health) and determining the best way to perform the job or to reduce or eliminate these hazards.
Impact of an automated dispensing system in outpatient pharmacies.
Humphries, Tammy L; Delate, Thomas; Helling, Dennis K; Richardson, Bruce
2008-01-01
To evaluate the impact of an automated dispensing system (ADS) on pharmacy staff work activities and job satisfaction. Cross-sectional, retrospective study. Kaiser Permanente Colorado (KPCO) outpatient pharmacies in September 2005. Pharmacists and technicians from 18 outpatient pharmacies. All KPCO outpatient pharmacists (n = 136) and technicians (n = 160) were surveyed regarding demographics and work activities and pharmacist job satisfaction. Work activities and job satisfaction were compared between pharmacies with and without ADS. Historical prescription purchase records from ADS pharmacies were assessed for pre-ADS to post-ADS changes in productivity. Self-reported pharmacy staff work activities and pharmacist job satisfaction. Pharmacists who responded to the demographic questionnaire (n = 74) were primarily women (60%), had a bachelor's degree in pharmacy (68%), and had been in practice for 10 years or more (53%). Responding technicians (n = 72) were predominantly women (80%) with no postsecondary degree (90%) and fewer than 10 years (68%) in practice. Pharmacists in ADS pharmacies who responded to the work activities questionnaire (n = 50) reported equivalent mean hours spent in patient care activities and filling medication orders compared with non-ADS pharmacists (n = 33; P > 0.05). Similarly, technicians in ADS pharmacies who responded to the work activities questionnaire (n = 64) reported equivalent mean hours spent in filling medication orders compared with non-ADS technicians (n = 38; P > 0.05). An equivalent proportion of ADS pharmacists reported satisfaction with their current job compared with non-ADS pharmacists (P > 0.05). Mean productivity did not increase appreciably after automation (P > 0.05). By itself, installing an ADS does not appear to shift pharmacist work activities from dispensing to patient counseling or to increase job satisfaction.
Shifting pharmacist work activities from dispensing to counseling and monitoring drug therapy outcomes may be warranted in ADS pharmacies.
Processing of the WLCG monitoring data using NoSQL
NASA Astrophysics Data System (ADS)
Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.
2014-06-01
The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres, where more than 2 million jobs are executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments over such a huge heterogeneous infrastructure is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are becoming increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing into the Experiment Dashboard framework is described, along with first experiences of using this technology for monitoring the LHC computing activities.
A gLite FTS based solution for managing user output in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cinquilli, M.; Riahi, H.; Spiga, D.
2012-01-01
The CMS distributed data analysis workflow assumes that jobs run in a different location from where their results are finally stored. Typically the user output must be transferred across the network from one site to another, possibly on a different continent or over links not necessarily validated for high-bandwidth, high-reliability transfer. This step is named stage-out, and in CMS it was originally implemented as a synchronous step of the analysis job execution. However, our experience showed the weakness of this approach, both in low total job execution efficiency and in failure rates, wasting precious CPU resources. The nature of analysis data makes it inappropriate to use PhEDEx, the core data placement system for CMS. As part of the new generation of CMS Workload Management tools, the Asynchronous Stage-Out system (AsyncStageOut) has been developed to enable third-party copy of the user output. The AsyncStageOut component manages gLite FTS transfers of data from the temporary store at the site where the job ran to the final location of the data, on behalf of the data owner. The tool uses Python daemons, built using the WMCore framework, and CouchDB to manage the queue of work and the FTS transfers. CouchDB also provides the platform for a dedicated operations monitoring system. In this paper, we present the motivations for the asynchronous stage-out system. We give an insight into the design and implementation of key features, describing how the system is coupled with the CMS workload management system. Finally, we show the results and the commissioning experience.
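The decoupling at the heart of the asynchronous stage-out can be sketched with a plain in-memory queue; the paths and function names are invented for illustration, and a `deque` stands in for the CouchDB-backed work queue.

```python
# Sketch: the job only records where its output landed; a separate
# transfer loop moves files to their final destination later.
from collections import deque

queue = deque()   # stands in for the persistent (CouchDB) work queue

def job_finished(temp_path, destination):
    """Called at job completion: enqueue, never block the job on transfer."""
    queue.append((temp_path, destination))

def transfer_loop(transfer):
    """Daemon pass: drain the queue, handing each file to the transfer tool."""
    done = []
    while queue:
        src, dst = queue.popleft()
        transfer(src, dst)          # e.g. submission of an FTS transfer
        done.append((src, dst))
    return done

job_finished("/store/temp/user/out1.root", "/store/user/out1.root")
job_finished("/store/temp/user/out2.root", "/store/user/out2.root")
moved = transfer_loop(lambda src, dst: None)
```

The CPU-efficiency gain follows directly from this shape: the worker node is released as soon as `job_finished` returns, while retries and slow links are absorbed by the daemon.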
The customer-centered innovation map.
Bettencourt, Lance A; Ulwick, Anthony W
2008-05-01
We all know that people "hire" products and services to get a job done. Surgeons hire scalpels to dissect soft tissue. Janitors hire soap dispensers and paper towels to remove grime from their hands. To find ways to innovate, it's critical to deconstruct the job the customer is trying to get done from beginning to end, to gain a complete view of all the points at which a customer might desire more help from a product or service. A methodology called job mapping helps companies analyze the biggest drawbacks of the products and services customers currently use and discover opportunities for innovation. It involves breaking down the task the customer wants to accomplish into the eight universal steps of a job: (1) defining the objectives, (2) locating the necessary inputs, (3) preparing the physical environment, (4) confirming that everything is ready, (5) executing the task, (6) monitoring its progress, (7) making modifications as necessary, and (8) concluding the job. Job mapping differs substantively from process mapping in that the goal is to identify what customers are trying to get done at every step, not what they are doing currently. For example, when an anesthesiologist checks a monitor during a surgical procedure, the action taken is just a means to the end. Detecting a change in patient vital signs is the job the doctor is trying to get done. Within each of the discrete steps lie multiple opportunities for making the job simpler, easier, or faster. By mapping out every step of the job and locating those opportunities, companies can discover new ways to differentiate their offerings.
Lavender, Steven A; Marras, William S; Ferguson, Sue A; Splittstoesser, Riley E; Yang, Gang
2012-01-01
Using our ultrasound-based "Moment Monitor," exposures to biomechanical low back disorder risk factors were quantified in 195 volunteers who worked in 50 different distribution center jobs. Low back injury rates, determined from a retrospective examination of each company's Occupational Safety and Health Administration (OSHA) 300 records over the 3-year period immediately prior to data collection, were used to classify each job's back injury risk level. The analyses focused on the factors differentiating the high-risk jobs (those having had 12 or more back injuries/200,000 hr of exposure) from the low-risk jobs (those having had no back injuries in the preceding 3 years). Univariate analyses indicated that measures of load moment exposure and force application could distinguish between high-risk (n = 15) and low-risk (n = 15) back injury distribution center jobs. A three-factor multiple logistic regression model capable of predicting high-risk jobs with very good sensitivity (87%) and specificity (73%) indicated that risk could be assessed using the mean, across the sampled lifts, of the peak forward and/or lateral bending dynamic load moments during each lift; the mean of the peak push/pull forces across the sampled lifts; and the mean duration of the non-load exposure periods. A surrogate model, one that does not require the Moment Monitor equipment to assess a job's back injury risk, was identified, although with some compromise in sensitivity relative to the original model.
Rettke, Horst; Frei, Irena Anna; Horlacher, Kathrin; Kleinknecht-Dolf, Michael; Spichiger, Elisabeth; Spirig, Rebecca
2015-06-01
The literature reports critically on the consequences of the introduction of case-based hospital reimbursement systems, which hamper the delivery of professional nursing care. For this reason, we examined the characteristics of nursing service context factors (work environment factors) in acute care hospitals with regards to the introduction of the new reimbursement system in Switzerland. This qualitative study describes practice experiences of nurses in the context of the characteristics of the nursing service context factors interprofessional collaboration, leadership, workload and job satisfaction. Twenty focus group interviews were conducted with a total of 146 nurses in five acute care hospitals. The results indicated that for quite some time the participants had observed an increase in complexity of nursing care and a growing invasiveness of clinical diagnostics and treatment. At the same time they noticed a decrease in patient length of stay. They strived to offer high quality nursing care even in situations where demands outweighed resources. Good interprofessional collaboration and supportive leadership contributed substantially to nurses' ability to overcome daily challenges. Job satisfaction was bolstered by interactions with patients. Also, the role played by the nursing team itself is not to be underestimated. From the participants' point of view, context factors harbor great potential for attaining positive patient outcomes and higher job satisfaction and have to be monitored repeatedly.
Development of noSQL data storage for the ATLAS PanDA Monitoring System
NASA Astrophysics Data System (ADS)
Ito, H.; Potekhin, M.; Wenaus, T.
2012-12-01
For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as the data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and a realistic rate of queries. In conclusion, we present our experience with operating a Cassandra cluster over an extended period of time and with a data load adequate for the planned application.
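The abstract does not spell out the key design; a common Cassandra pattern for monitoring data of this kind is to bucket rows by a coarse time window, so that queries for a day's completed jobs hit one bounded partition. A minimal sketch of such key construction (names and format are illustrative, not PanDA's actual schema):

```python
from datetime import datetime, timezone

def monitoring_key(job_id, completed, bucket="%Y%m%d"):
    """Composite key: partition on a day bucket so one day's rows stay
    together; cluster on (completion time, job id) to order within a day."""
    partition = completed.strftime(bucket)        # e.g. "20110630"
    clustering = (completed.isoformat(), job_id)  # sort key inside partition
    return partition, clustering

p, c = monitoring_key(12345, datetime(2011, 6, 30, 12, 0, tzinfo=timezone.utc))
print(p)  # 20110630
```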
The MICRO-BOSS scheduling system: Current status and future efforts
NASA Technical Reports Server (NTRS)
Sadeh, Norman M.
1992-01-01
In this paper, a micro-opportunistic approach to factory scheduling was described that closely monitors the evolution of bottlenecks during the construction of the schedule and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory. Current research efforts include: adaptation of MICRO-BOSS to deal with sequence-dependent setups and development of micro-opportunistic reactive scheduling techniques that will enable the system to patch the schedule in the presence of contingencies such as machine breakdowns, raw materials arriving late, job cancellations, etc.
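The micro-opportunistic idea can be caricatured in a few lines: at each step, commit a single operation on whichever resource is currently most contended, then re-evaluate, rather than scheduling an entire resource or job subproblem at once. A toy sketch under simplifying assumptions (unit-length operations, no precedence constraints; not the actual MICRO-BOSS algorithm):

```python
def micro_opportunistic(ops, capacity):
    """ops: list of (job, resource) pairs; capacity: resource -> free slots.
    Repeatedly schedules one operation on the currently most contended
    resource, recomputing contention after every commitment."""
    schedule = []
    pending = list(ops)
    while pending:
        demand = {}
        for _, r in pending:
            demand[r] = demand.get(r, 0) + 1
        # Contention = pending demand per remaining slot (floor of 1 slot).
        bottleneck = max(demand, key=lambda r: demand[r] / max(capacity[r], 1))
        job, r = next(op for op in pending if op[1] == bottleneck)
        schedule.append((job, r))
        pending.remove((job, r))
        capacity[r] -= 1
    return schedule

s = micro_opportunistic([("j1", "m1"), ("j2", "m1"), ("j3", "m2")],
                        {"m1": 2, "m2": 5})
print(s)  # [('j1', 'm1'), ('j2', 'm1'), ('j3', 'm2')]
```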
Building Tomorrow's Business Today
ERIC Educational Resources Information Center
Ryan, Jim
2010-01-01
Modern automobile maintenance, like most skilled-trades jobs, is more than simple nuts and bolts. Today, skilled-trades jobs might mean hydraulics, computerized monitoring equipment, electronic blueprints, even lasers. As chief executive officer of Grainger, a business-to-business maintenance, repair, and operating supplies company that…
NASA Astrophysics Data System (ADS)
Licari, Daniele; Calzolari, Federico
2011-12-01
In this paper we introduce a new way to deal with Grid portals, referring to our implementation. L-GRID is a light portal to access the EGEE/EGI Grid infrastructure via the Web, allowing users to submit their jobs from a common Web browser in a few minutes, without any knowledge of the Grid infrastructure. It provides control over the complete lifecycle of a Grid job, from its submission and status monitoring to the output retrieval. The system, implemented as a client-server architecture, is based on the Globus Grid middleware. The client-side application is based on a Java applet; the server relies on a Globus User Interface. There is no need for user registration on the server side, and the user needs only his own X.509 personal certificate, which never leaves the local machine. The system is user-friendly, secure (it uses the SSL protocol and mechanisms for dynamic delegation and identity creation in public key infrastructures), highly customizable, open source, and easy to install. It reduces the time spent on job submission while granting higher efficiency and a better security level in proxy delegation and management.
Torres-Madriz, Gilberto; Lerner, Debra; Ruthazer, Robin; Rogers, William H.; Wilson, Ira B.
2013-01-01
Little is known about how the structure of work affects adherence to HIV antiretroviral therapy. We surveyed participants in an adherence intervention study to learn more about job characteristics, including measures of psychological demand and control, and job accommodations. Adherence was assessed using the Medication Event Monitoring System (MEMS). Of 156 trial subjects, 69 were employed, and these 69 made 229 study visits. Psychological demands and control were unrelated to adherence, but the presence of workplace accommodations was significantly associated with adherence (p <0.05). In multivariable models adjusting for clustering, those who reported having received an accommodation were 12% more adherent than those who did not receive an accommodation. Adherence was unrelated to experiencing side effects affecting work performance. Having the ability to institute job accommodations was more important to adherence than the psychosocial structure of the work. These potential benefits of requesting modifications need to be weighed against the possible risks of workplace disclosure. PMID:20091340
20 CFR 631.31 - Monitoring and oversight.
Code of Federal Regulations, 2010 CFR
2010-04-01
... TITLE III OF THE JOB TRAINING PARTNERSHIP ACT State Administration § 631.31 Monitoring and oversight. The Governor is responsible for monitoring and oversight of all State and substate grantee activities... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Monitoring and oversight. 631.31 Section 631...
20 CFR 631.31 - Monitoring and oversight.
Code of Federal Regulations, 2011 CFR
2011-04-01
... TITLE III OF THE JOB TRAINING PARTNERSHIP ACT State Administration § 631.31 Monitoring and oversight. The Governor is responsible for monitoring and oversight of all State and substate grantee activities... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Monitoring and oversight. 631.31 Section 631...
20 CFR 631.31 - Monitoring and oversight.
Code of Federal Regulations, 2012 CFR
2012-04-01
... TITLE III OF THE JOB TRAINING PARTNERSHIP ACT State Administration § 631.31 Monitoring and oversight. The Governor is responsible for monitoring and oversight of all State and substate grantee activities... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Monitoring and oversight. 631.31 Section 631...
Effective radiological contamination control and monitoring techniques in high alpha environments.
Funke, Kevin C
2003-02-01
In the decommissioning of a highly contaminated alpha environment, such as the one at Hanford's 233-S Plutonium Concentration Facility, one of the key elements of a successful radiological control program is an integrated safety approach. This approach begins with the job-planning phase where the scope of the work is described. This is followed by a brainstorming session involving engineering and craft to identify how to perform the work in a logical sequence of events. Once the brainstorming session is over, a Job Hazard Analysis is performed to identify any potential problems. Mockups are utilized to enable the craft to get hands-on experience and provide feedback and ideas to make the job run more smoothly. Ideas and experience gained during mockups are incorporated into the task instruction. To assure appropriate data are used in planning and executing the job, our principal evaluation tools included lapel and workplace air sampling, plus continuous air monitors and frequent surveys to effectively monitor job progress. In this highly contaminated alpha environment, with contamination levels ranging from 0.3 Bq cm-2 to approximately 100,000 Bq cm-2 (2,000 dpm per 100 cm2 to approximately 600 million dpm per 100 cm2), with average working levels of 1,600-3,200 Bq cm-2 (10-20 million dpm per 100 cm2) without concomitant ambient radiation levels, control of the spread of contamination is key to keeping airborne levels As Low As Reasonably Achievable.
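The paired unit figures quoted above follow directly from 1 Bq = 60 disintegrations per minute (dpm), scaled from 1 cm² to 100 cm²; a quick check:

```python
def bq_per_cm2_to_dpm_per_100cm2(activity):
    """Convert surface activity in Bq/cm^2 to dpm per 100 cm^2.
    1 Bq = 60 dpm; multiply by 100 to scale the reference area."""
    return activity * 60 * 100

print(bq_per_cm2_to_dpm_per_100cm2(0.3))      # 1800.0 (~2,000 as quoted)
print(bq_per_cm2_to_dpm_per_100cm2(100_000))  # 600000000 -> 600 million
```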
Code of Federal Regulations, 2013 CFR
2013-07-01
... change from job to job. The air balance in magnet wire ovens is critical to product quality. Magnet wire... Method D5291-02, “Standard Test Methods for Instrumental Determination of Carbon, Hydrogen, and Nitrogen...
Code of Federal Regulations, 2014 CFR
2014-07-01
... change from job to job. The air balance in magnet wire ovens is critical to product quality. Magnet wire... Method D5291-02, “Standard Test Methods for Instrumental Determination of Carbon, Hydrogen, and Nitrogen...
Code of Federal Regulations, 2012 CFR
2012-07-01
... change from job to job. The air balance in magnet wire ovens is critical to product quality. Magnet wire... Method D5291-02, “Standard Test Methods for Instrumental Determination of Carbon, Hydrogen, and Nitrogen...
Code of Federal Regulations, 2011 CFR
2011-07-01
... change from job to job. The air balance in magnet wire ovens is critical to product quality. Magnet wire... Method D5291-02, “Standard Test Methods for Instrumental Determination of Carbon, Hydrogen, and Nitrogen...
Code of Federal Regulations, 2010 CFR
2010-07-01
... change from job to job. The air balance in magnet wire ovens is critical to product quality. Magnet wire... Method D5291-02, “Standard Test Methods for Instrumental Determination of Carbon, Hydrogen, and Nitrogen...
Retrospective assessment of solvent exposure in paint manufacturing.
Glass, D C; Spurgeon, A; Calvert, I A; Clark, J L; Harrington, J M
1994-01-01
This paper describes how exposure to solvents at two large paint making sites was assessed in a study carried out to investigate the possibility of neuropsychological effects resulting from long term exposure to organic solvents. A job exposure matrix was constructed by building and year. A detailed plant history was taken and this was used to identify uniform exposure periods during which workers' exposure to solvents was not thought to have changed significantly. Exposure monitoring data, collected by the company before the study, were then used to characterise exposure within each uniform exposure period. Estimates were made for periods during which no air monitoring was available. Detailed individual job histories were collected for subjects and controls. The job histories were used with the job exposure matrix to estimate exposure on an individual basis. Exposure was expressed as duration, cumulative dose, and intensity of exposure. Classification of exposure by duration alone was found to result in misclassification of subjects. PMID:7951794
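A job-exposure matrix of this kind combines with an individual job history in the usual way: cumulative dose is the sum over history periods of intensity × duration, and mean intensity is dose divided by total duration. A minimal sketch with hypothetical matrix values:

```python
# Hypothetical JEM: (building, period) -> mean solvent intensity (e.g. ppm)
jem = {("paint_shop", "1970s"): 80.0, ("paint_shop", "1980s"): 40.0}

def exposure_summary(history, jem):
    """history: list of (building, period, years). Returns total duration,
    cumulative dose (intensity-years) and mean intensity."""
    duration = sum(y for _, _, y in history)
    dose = sum(jem[(b, p)] * y for b, p, y in history)
    return duration, dose, dose / duration

d, cum, mean = exposure_summary(
    [("paint_shop", "1970s", 5), ("paint_shop", "1980s", 10)], jem)
print(d, cum, round(mean, 1))  # 15 800.0 53.3
```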
JOVIAL/Ada Microprocessor Study.
1982-04-01
Study Final Technical Report interesting feature of the nodes is that they provide multiple virtual terminals, so it is possible to monitor several...Terminal Interface Tasking Exception Handling A more elaborate system could allow such features as spooling, background jobs or multiple users. To a large...Another editor feature is the buffer. Buffers may hold small amounts of text or entire text objects. They allow multiple files to be edited simultaneously
Florentin, Arnaud; Zmirou-Navier, Denis; Paris, Christophe
2017-08-01
To detect new hazards ("signals"), occupational health monitoring systems mostly rest on the description of exposures in the jobs held and on reports by medical doctors; these are subject to declarative bias. Our study aims to assess whether job-exposure matrices (JEMs) could be useful tools for signal detection by improving exposure reporting. Using the French national occupational disease surveillance and prevention network (RNV3P) data from 2001 to 2011, we explored the associations between disease and exposure prevalence for 3 well-known pathology/exposure couples and for one debatable couple. We compared the associations measured when using physicians' reports or applying the JEMs, respectively, for these selected diseases and across the non-selected RNV3P population or for cases with musculoskeletal disorders, used as two reference groups; the ratio of exposure prevalences according to the two sources of information was computed for each disease category. Our population contained 58,188 subjects referred with pathologies related to work. Mean age at diagnosis was 45.8 years (95% CI 45.7; 45.9), and 57.2% were men. For experts, exposure ratios increase with knowledge of exposure causality. As expected, JEMs retrieved more exposed cases than experts (exposure ratios between 12 and 194), except for the couple silica/silicosis, but not for the MSD control group (ratio between 0.2 and 0.8). JEMs enhanced the number of exposures possibly linked with some conditions, compared to experts' assessment, relative to the whole database or to a reference group; they are less likely to suffer from declarative bias than reports by occupational health professionals.
PPP effectiveness study. [automatic procedures recording and crew performance monitoring system
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.
1976-01-01
This design note presents a study of the Procedures and Performance Program (PPP) effectiveness. The intent of the study is to determine manpower time savings and the improvements in job performance gained through PPP automated techniques. The discussion presents a synopsis of PPP capabilities and identifies potential users and associated applications, PPP effectiveness, and PPP applications to other simulation/training facilities. Appendix A provides a detailed description of each PPP capability.
Digital Topographic Support System (DTSS).
1987-07-29
effects applications software, a word processing package and a Special Purpose Product Builder ( SPPB ) in terms common to his Job. Through the MI, the...communicating with the TA in terms he understands, the applications software, the SPPB and the GIS form the underlying tools which perform the computations and...displayed on the monitors or plotted on paper or Mylar. The SPPB will guide the TA enabling him to design products which are not included in the applications
HappyFace as a generic monitoring tool for HEP experiments
NASA Astrophysics Data System (ADS)
Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard
2015-12-01
The importance of monitoring on HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on and clarify the status of each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible. It aggregates, processes and stores the information and the status of different HEP monitoring resources into the common database of HappyFace. The system displays the information and the status through a single interface. However, this model of HappyFace relied on the monitoring resources which are always under development in the HEP experiments. Consequently, HappyFace needed to have direct access methods to the grid application and grid service layers in the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and an architecture of HappyFace, the so-called grid-enabled HappyFace. It allows its basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources in the HEP grid systems. This approach gives HappyFace several advantages: Portability, to provide an independent and generic monitoring system among the HEP grid systems. Functionality, to allow users to run various diagnostic tools in the individual HEP grid systems and grid sites. Flexibility, to make HappyFace beneficial and open for the various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented.
The new HappyFace system has been successfully integrated and now it displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.
Current issues relating to psychosocial job strain and cardiovascular disease research.
Theorell, T; Karasek, R A
1996-01-01
The authors comment on recent reviews of cardiovascular job strain research by P. L. Schnall and P. A. Landsbergis (1994), and by T. S. Kristensen (1995), which conclude that job strain as defined by the demand-control model (the combination of low job decision latitude and high psychological job demands) is confirmed as a risk factor for cardiovascular mortality in a large majority of studies. Lack of social support at work appears to further increase risk. Several still-unresolved research questions are examined in light of recent studies: (a) methodological issues related to the use of occupational aggregate estimations and occupational career aggregate assessments, the use of standard scales for job analysis, and recall bias issues in self-reporting; (b) confounding factors and differential strengths of association by subgroups in job strain-cardiovascular disease analyses with respect to social class, gender, and working hours; and (c) review of results of monitoring job strain-blood pressure associations and associated methodological issues.
Efficient monitoring of CRAB jobs at CMS
NASA Astrophysics Data System (ADS)
Silva, J. M. D.; Balcas, J.; Belforte, S.; Ciangottini, D.; Mascheroni, M.; Rupeika, E. A.; Ivanov, T. T.; Hernandez, J. M.; Vaandering, E.
2017-10-01
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.
Efficient Monitoring of CRAB Jobs at CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, J. M.D.; Balcas, J.; Belforte, S.
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.
BaHaMAS A Bash Handler to Monitor and Administrate Simulations
NASA Astrophysics Data System (ADS)
Sciarra, Alessandro
2018-03-01
Numerical QCD is often extremely resource demanding and it is not rare to run hundreds of simulations at the same time. Each of these can last for days or even months, and each typically requires a job-script file as well as an input file with the physical parameters for the application to be run. Moreover, some monitoring operations (i.e. copying, moving, deleting or modifying files, resuming crashed jobs, etc.) are often required to guarantee that the final statistics are correctly accumulated. Handling simulations manually is probably the most error-prone way to proceed, and it is deeply uncomfortable and inefficient! BaHaMAS was developed and has been successfully used in recent years as a tool to automatically monitor and administrate simulations.
Job evaluation for clinical nursing jobs by implementing the NHS JE system.
Kahya, Emin; Oral, Nurten
2007-10-01
The purpose of this paper was to evaluate locally all the clinical nursing jobs by implementing the NHS JE system in four hospitals. The NHS JE system was developed by the Department of Health in the UK in 2003-2004. A job analysis questionnaire was designed to gather current job descriptions. It was distributed to each of 158 clinical nurses and supervisor nurses in 31 different clinics at four hospitals in one city. The questionnaires were analysed to evaluate locally all of the 94 identified nursing jobs. Fourteen of 19 nursing jobs in the medical and surgical clinics could be matched to the national nurse job in the NHS JE system. The results indicated that two new nursing jobs, titled nurse B and nurse advanced B, should be added to the list of national nursing jobs in the NHS JE system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Karen S; Kasemir, Kay
2009-01-01
An effective alarm system consists of a mechanism to monitor control points and generate alarm notifications, tools for operators to view, hear, acknowledge and handle alarms and a good configuration. Despite the availability of numerous fully featured tools, accelerator alarm systems continue to be disappointing to operations, frequently to the point of alarms being permanently silenced or totally ignored. This is often due to configurations that produce an excessive number of alarms or fail to communicate the required operator response. Most accelerator controls systems do a good job of monitoring specified points and generating notifications when parameters exceed predefined limits. In some cases, improved tools can help, but more often, poor configuration is the root cause of ineffective alarm systems. At SNS, we have invested considerable effort in generating appropriate configurations using a rigorous set of rules based on best practices in the industrial process controls community. This paper will discuss our alarm configuration philosophy and operator response to our new system.
ERIC Educational Resources Information Center
Barrett, Gerald V.; And Others
The report describes field studies involving nonsupervisory Naval maintenance and monitoring electronics personnel. The studies' results indicated that Naval retention was related to a number of individual and job attributes. Extended Naval tenure was associated with lower verbal and clerical aptitudes (Naval Test Battery); higher levels of…
Trial by fire: a multivariate examination of the relation between job tenure and work injuries.
Breslin, F C; Smith, P
2006-01-01
This study examined the relation between months on the job and lost-time claim rates, with a particular focus on age-related differences. Workers' compensation records and labour force survey data were used to compute claim rates per 1000 full time equivalents. To adjust for potential confounding, multivariate analyses included age, sex, occupation, and industry, as well as job tenure, as predictors of claim rates. At any age, the claim rates decline as time on the job increases. For example, workers in the first month on the job were over four times more likely to have a lost-time claim than workers with over one year in their current job. The associations between job tenure and injury were stronger among males, in the goods industry, in manual occupations, and among older adult workers. The present results suggest that all worker subgroups examined show increased risk when new on the job. Recommendations for improving this situation include earlier training, starting workers in low hazard conditions, reducing job turnover rates in firms, and improved monitoring of hazard exposures that new workers encounter.
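The rate measure used here (lost-time claims per 1000 full-time equivalents) and the quoted four-fold relative risk are straightforward to compute; a sketch with hypothetical counts:

```python
def claim_rate_per_1000_fte(claims, fte):
    """Lost-time claims per 1000 full-time equivalents."""
    return 1000 * claims / fte

# Hypothetical counts illustrating a four-fold relative risk.
new = claim_rate_per_1000_fte(claims=80, fte=2000)      # first month on job
tenured = claim_rate_per_1000_fte(claims=10, fte=1000)  # >1 year on job
print(new, tenured, new / tenured)  # 40.0 10.0 4.0
```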
Conceptual design of intravenous fluids level monitoring system - a review
NASA Astrophysics Data System (ADS)
Verma, Prikshit; Padmani, Aniket; Boopathi, M.
2017-11-01
In today's world of automation, advancements are being made in every field, and more and more work is automated each day. In the current medical care system, however, some areas still require a manual caretaker and involve heavy, time-consuming tasks. Since this work concerns human health, it must be done properly and accurately. One example is the administration of saline or intravenous (IV) fluids to a patient. Monitoring of such fluids needs close attention: if the fluid bottle is not changed on time, the patient may suffer complications such as backflow of blood or blood loss. Various studies have addressed this critical situation, describing different monitoring and alerting techniques. In this study, we review the research done in this particular field and examine how the different ideas have been implemented.
Job Grading System for Trades and Labor Occupations. Part II.
ERIC Educational Resources Information Center
Civil Service Commission, Washington, DC. Bureau of Policies and Standards.
Three new standards (telephone mechanic, electroplater, and animal caretaker) for grading jobs under the Federal Wage System are cited. There is an alphabetical listing by job for published job grading standards, an occupational code-structure index for published grading standards, and a list of 61 jobs by published job grading standard with…
AliEn—ALICE environment on the GRID
NASA Astrophysics Data System (ADS)
Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration
2003-04-01
AliEn ( http://alien.cern.ch) (ALICE Environment) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets, and a number of collaborating Web services which implement authentication, job execution, file transport, performance monitoring and event logging. In the paper we will present the architecture and components of the system.
Job Design and Ethnic Differences in Working Women’s Physical Activity
Grzywacz, Joseph G.; Crain, A. Lauren; Martinson, Brian C.; Quandt, Sara A.
2014-01-01
Objective To document the role job control and schedule control play in shaping women’s physical activity, and how it delineates educational and racial variability in associations of job and social control with physical activity. Methods Prospective data were obtained from a community-based sample of working women (N = 302). Validated instruments measured job control and schedule control. Steps per day were assessed using New Lifestyles 800 activity monitors. Results Greater job control predicted more steps per day, whereas greater schedule control predicted fewer steps. Small indirect associations between ethnicity and physical activity were observed among women with a trade school degree or less but not for women with a college degree. Conclusions Low job control created barriers to physical activity among working women with a trade school degree or less. Greater schedule control predicted less physical activity, suggesting women do not use time “created” by schedule flexibility for personal health enhancement. PMID:24034681
Job design and ethnic differences in working women's physical activity.
Grzywacz, Joseph G; Crain, A Lauren; Martinson, Brian C; Quandt, Sara A
2014-01-01
To document the role job control and schedule control play in shaping women's physical activity, and how it delineates educational and racial variability in associations of job and social control with physical activity. Prospective data were obtained from a community-based sample of working women (N = 302). Validated instruments measured job control and schedule control. Steps per day were assessed using New Lifestyles 800 activity monitors. Greater job control predicted more steps per day, whereas greater schedule control predicted fewer steps. Small indirect associations between ethnicity and physical activity were observed among women with a trade school degree or less but not for women with a college degree. Low job control created barriers to physical activity among working women with a trade school degree or less. Greater schedule control predicted less physical activity, suggesting women do not use time "created" by schedule flexibility for personal health enhancement.
Job sharing in clinical nutrition management: a plan for successful implementation.
Visocan, B J; Herold, L S; Mulcahy, M J; Schlosser, M F
1993-10-01
While women continue to enter the American work force in record numbers, many experience difficulty in juggling career and family obligations. Flexible scheduling is one option used to ease work and family pressures. Women's changing work roles have potentially noteworthy implications for clinical nutrition management, a traditionally female-dominated profession where the recruitment and retention of valued, experienced registered dietitians can prove to be a human resources challenge. Job sharing, one type of flexible scheduling, is applicable to the nutrition management arena. This article describes and offers a plan for overcoming obstacles to job sharing, including determining feasibility, gaining support of top management, establishing program design, announcing the job share program, and using implementation, monitoring, and fine-tuning strategies. Benefits that can be derived from a successful job share are reduced absenteeism, decreased turnover, enhanced recruitment, improved morale, increased productivity, improved job coverage, and enhanced skills and knowledge base. A case study illustrates one method for achieving job sharing success in clinical nutrition management.
Moreira, Sandra; Vasconcelos, Lia; Silva Santos, Carlos
2017-09-28
This study aimed to develop a methodological tool to analyze and monitor green jobs in the context of Occupational Health and Safety. A literature review, combined with an investigation of Occupational Health Indicators, was performed. The resulting tool of Occupational Health Indicators was based on the existing information in the "Single Report" and was validated by national experts. The tool brings together 40 Occupational Health Indicators in four key fields established by the World Health Organization in its conceptual framework "Health indicators of sustainable jobs." The proposed tool allows one to assess whether green jobs follow the principles and requirements of Occupational Health Indicators and whether these jobs are as good for workers' health as for the environment, that is, whether they can be considered quality jobs. This shows that Occupational Health Indicators are indispensable for assessing the sustainability of green jobs and should be taken into account in the definition and evaluation of sustainable development policies and strategies.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-06
... warranted by the evaluation, is as follows: Facility: Hanford site. Location: Richland, Washington. Job Titles and/or Job Duties: All personnel who were internally monitored (urine or fecal), who worked at the... Analysis and Support, National Institute for Occupational Safety and Health (NIOSH), 4676 Columbia Parkway...
ERIC Educational Resources Information Center
Lambert, Misty D.; Torres, Robert M.; Tummons, John D.
2012-01-01
Monitoring the stress of teachers continues to be important--particularly stress levels of beginning agriculture teachers. The study sought to describe the relationship between beginning teachers' perceived ability to manage their time and their level of stress. The Time Management Practices Inventory and the Job Stress Survey were used to measure…
20 CFR 655.185 - Job service complaint system; enforcement of work contracts.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Job service complaint system; enforcement of... Job service complaint system; enforcement of work contracts. (a) Filing with DOL. Complaints arising under this subpart must be filed through the Job Service Complaint System, as described in 20 CFR part...
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source computing-resource and job management software HTCondor to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
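CondorPy's actual interface is not shown in the abstract; as a hypothetical sketch of item (2) above, a batch scheduling system that queues jobs and dispatches them to workers as capacity frees up can be illustrated in plain Python (all class and job names here are invented):

```python
from collections import deque

# All names are invented for illustration; this is not the CondorPy API.
class JobQueue:
    """Queue jobs and dispatch them to idle workers as capacity frees up."""
    def __init__(self, n_workers):
        self.pending = deque()            # jobs waiting for a worker
        self.idle = list(range(n_workers))
        self.running = {}                 # worker id -> job name
        self.done = []

    def submit(self, name):
        self.pending.append(name)
        self._dispatch()

    def _dispatch(self):
        while self.pending and self.idle:
            self.running[self.idle.pop()] = self.pending.popleft()

    def complete(self, worker):
        self.done.append(self.running.pop(worker))
        self.idle.append(worker)
        self._dispatch()                  # backfill the freed worker

q = JobQueue(n_workers=2)
for job in ["sim_a", "sim_b", "sim_c"]:   # third job must wait in the queue
    q.submit(job)
q.complete(0)                             # finishing one job pulls in sim_c
```

A real scheduler such as HTCondor adds matchmaking, data transfer, and fault recovery on top of this basic queue-and-dispatch loop.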
NASA Technical Reports Server (NTRS)
French, Jennifer R.
1995-01-01
As automated systems proliferate in aviation systems, human operators are taking on less and less of an active role in the jobs they once performed, often reducing what should be important jobs to tasks barely more complex than monitoring machines. When operators are forced into these roles, they risk slipping into hazardous states of awareness, which can lead to reduced skills, lack of vigilance, and the inability to react quickly and competently when there is a machine failure. Using Air Traffic Control (ATC) as a model, the present study developed tools for conducting tests focusing on levels of automation as they relate to situation awareness. Subjects participated in a two-and-a-half hour experiment that consisted of a training period followed by a simulation of air traffic control similar to the system presently used by the FAA, then an additional simulation employing automated assistance. Through an iterative design process utilizing numerous revisions and three experimental sessions, several measures for situational awareness in a simulated Air Traffic Control System were developed and are prepared for use in future experiments.
Fan, Lin-bo; Blumenthal, James A.; Hinderliter, Alan L.; Sherwood, Andrew
2013-01-01
Objectives Blunted nighttime blood pressure dipping is an established cardiovascular risk factor. This study examined the effect of job strain on nighttime blood pressure dipping among men and women with high blood pressure. Methods The sample consisted of 122 blue collar and white collar workers (men=72, women=50). Job psychological demands, job control and social support were measured by the Job Content Questionnaire. Job strain was assessed by the ratio of job demands/job control. Nighttime blood pressure dipping was evaluated from 24-hour ambulatory blood pressure monitoring performed on three workdays. Results Men with high job strain had a 5.4 mm Hg higher sleep systolic blood pressure (P=0.03) and 3.5 mm Hg higher sleep pulse pressure (P=0.02) compared to men with low job strain. Men with high job strain had a smaller fall in systolic blood pressure and pulse pressure from awake to sleep than those with low job strain (P<0.05). Hierarchical analyses showed that job strain was an independent determinant of systolic blood pressure dipping (P=0.03) among men after adjusting for ethnicity, body mass index, anxiety and depression symptoms, current smoking status, and alcohol consumption. Further exploratory analyses indicated that job control was the salient component of job strain associated with blood pressure dipping (P=0.03). Conclusions High job strain is associated with a blunting of the normal diurnal variation in blood pressure and pulse pressure, which may contribute to the relationship between job strain and cardiovascular disease. PMID:22460541
Administrative Job Level Study and Factoring System.
ERIC Educational Resources Information Center
Portland Community Coll., OR.
The administrative job classification system and generic job descriptions presented in this report were developed at Portland Community College (PCC) as management tools. After introductory material outlining the objectives of and criteria used in the administrative job-level study, and offering information on the administrative job factoring…
Intelligent computer-aided training and tutoring
NASA Technical Reports Server (NTRS)
Loftin, R. Bowen; Savely, Robert T.
1991-01-01
Specific autonomous training systems based on artificial intelligence technology for use by NASA astronauts, flight controllers, and ground-based support personnel that demonstrate an alternative to current training systems are described. In addition to these specific systems, the evolution of a general architecture for autonomous intelligent training systems that integrates many of the features of traditional training programs with artificial intelligence techniques is presented. These Intelligent Computer-Aided Training (ICAT) systems would provide, for the trainee, much of the same experience that could be gained from the best on-the-job training. By integrating domain expertise with a knowledge of appropriate training methods, an ICAT session should duplicate, as closely as possible, the trainee undergoing on-the-job training in the task environment, benefitting from the full attention of a task expert who is also an expert trainer. Thus, the philosophy of the ICAT system is to emulate the behavior of an experienced individual devoting his full time and attention to the training of a novice - proposing challenging training scenarios, monitoring and evaluating the actions of the trainee, providing meaningful comments in response to trainee errors, responding to trainee requests for information, giving hints (if appropriate), and remembering the strengths and weaknesses displayed by the trainee so that appropriate future exercises can be designed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Messer, II, Otis E
2017-01-02
The Bellerophon software system supports CHIMERA, a production-level HPC application that simulates the evolution of core-collapse supernovae. Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform job monitoring and real-time data analysis from multiple supercomputing resources, including platforms at OLCF, NERSC, and NICS. Its multi-tier architecture provides an encapsulated, end-to-end software solution that enables the CHIMERA team to quickly and easily access highly customizable animated and static views of results from anywhere in the world via a cross-platform desktop application.
ERIC Educational Resources Information Center
Nathanson, Stanley N.
This report presents the results of an evaluation of the Cleff Job Matching System (CJMS). The CJMS provides a means by which jobs and job applicants can be matched at the semi- and low-skilled levels in both white- and blue-collar jobs. The CJMS operates by obtaining numerical profiles of both job seekers and jobs, across 16 Dimensions of Work,…
ERIC Educational Resources Information Center
Al-Smadi, Marwan Saleh; Qblan, Yahya Mohammed
2015-01-01
It is vital that colleges and universities monitor the satisfaction levels of their employees to secure high levels of their performance. The current study aimed to identify the impact of some variables (gender, Teaching experience and college type) on assessing the level of job satisfaction among faculty of Najran University. A survey was…
NASA Astrophysics Data System (ADS)
Budi Harja, Herman; Prakosa, Tri; Raharno, Sri; Yuwana Martawirya, Yatna; Nurhadi, Indra; Setyo Nogroho, Alamsyah
2018-03-01
In job-shop production, products come in wide variety but small quantities, so every machine tool is shared among production processes with dynamic loads. This dynamic operating condition directly affects the reliability of machine tool components. Hence, the maintenance schedule for each component should be calculated from the actual usage of that component. This paper describes a study on the development of a monitoring system that obtains information about the usage of each CNC machine tool component in real time, using an approach that groups components by operation phase. A special device has been developed for monitoring machine tool component usage, utilizing usage-phase activity data taken from certain electronic components within the CNC machine: the adaptor, servo driver and spindle driver, together with additional components such as a microcontroller and relays. The obtained data are used to detect machine utilization phases such as the power-on state, machine-ready state, or spindle-running state. Experimental results have shown that the developed CNC machine tool monitoring system is capable of obtaining phase information of machine tool usage, as well as its duration, and displays the information in the user interface application.
AWAS: A dynamic work scheduling system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Y.; Hao, J.; Kocur, G.
1994-12-31
The Automated Work Administration System (AWAS) is an automated scheduling system developed at GTE. A typical work center has 1000 employees and processes 4000 jobs each day. Jobs are geographically distributed within the service area of the work center, require different skills, and have to be done within specified time windows. Each job can take anywhere from 12 minutes to several hours to complete. Each employee can have his/her individual schedule, skill, or working area. Jobs can enter and leave the system at any time. Employees dial in to the system to request their next job at the beginning of the day or after a job is done. The system is able to respond to changes dynamically and produce close-to-optimum solutions in real time. We formulate the real-world problem as a minimum cost network flow problem. Both employees and jobs are formulated as nodes. Relationships between jobs and employees are formulated as arcs, and working hours contributed by employees and consumed by jobs are formulated as flow. The goal is to minimize missed commitments. We solve the problem with the successive shortest path algorithm. Combined with pre-processing and post-processing, the system produces reasonable outputs and the response time is very good.
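The assignment idea behind AWAS can be sketched on a toy instance. Here brute-force enumeration stands in for the successive-shortest-path network-flow solver (both return the minimum total-cost assignment); all employees, jobs, and cost values are invented:

```python
from itertools import permutations

# Toy instance: three employees, three jobs, and an invented cost for each
# pairing (e.g. travel time plus a lateness penalty against the job's window).
cost = {
    ("alice", "repair_1"): 2, ("alice", "install_2"): 9, ("alice", "survey_3"): 4,
    ("bob",   "repair_1"): 7, ("bob",   "install_2"): 3, ("bob",   "survey_3"): 8,
    ("carol", "repair_1"): 5, ("carol", "install_2"): 6, ("carol", "survey_3"): 1,
}
employees = ["alice", "bob", "carol"]
jobs = ["repair_1", "install_2", "survey_3"]

# Exhaustive search stands in for the successive-shortest-path solver:
# both find the assignment of jobs to employees with minimum total cost.
best = min(permutations(jobs),
           key=lambda p: sum(cost[e, j] for e, j in zip(employees, p)))
assignment = dict(zip(employees, best))
total = sum(cost[e, j] for e, j in assignment.items())
print(assignment, total)
```

At the scale quoted in the abstract (4000 jobs per day), enumeration is infeasible, which is exactly why the network-flow formulation and a polynomial algorithm are needed.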
20 CFR 655.1316 - Job Service Complaint System; enforcement of work contracts.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Job Service Complaint System; enforcement of... for Temporary Agricultural Employment in the United States (H-2A Workers) § 655.1316 Job Service... through the Job Service Complaint System, as described in 20 CFR part 658, Subpart E. Complaints which...
Job Scheduling Under the Portable Batch System
NASA Technical Reports Server (NTRS)
Henderson, Robert L.; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
The typical batch queuing system schedules jobs for execution by a set of queue controls. The controls determine from which queues jobs may be selected. Within a queue, jobs are ordered first-in, first-run. This limits the set of scheduling policies available to a site. The Portable Batch System removes this limitation by providing an external scheduling module. This separate program has full knowledge of the available queued jobs, running jobs, and system resource usage. Sites are able to implement any policy expressible in one of several procedural languages. Policies may range from "best fit" to "fair share" to purely political. Scheduling decisions can be made over the full set of jobs regardless of queue or order. The scheduling policy can be changed to fit a wide variety of computing environments and scheduling goals. This is demonstrated by the use of PBS on an IBM SP-2 system at NASA Ames.
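As a hypothetical sketch (not actual PBS code), an external scheduling module reduces to a policy function that sees every queued job plus system state, so ordering is not restricted to first-in, first-run within a queue. All job data here is invented:

```python
# Two interchangeable policy functions; PBS's external scheduler is a separate
# program, but the idea is the same: swap the policy without touching the
# queuing system. All job records below are invented for illustration.
def fair_share(jobs, usage):
    # Prefer the job whose owner has consumed the least CPU so far.
    return min(jobs, key=lambda j: usage.get(j["user"], 0))

def best_fit(jobs, free_nodes):
    # Prefer the largest job that still fits in the currently free nodes.
    fitting = [j for j in jobs if j["nodes"] <= free_nodes]
    return max(fitting, key=lambda j: j["nodes"]) if fitting else None

queued = [
    {"id": 1, "user": "ann", "nodes": 8},
    {"id": 2, "user": "bob", "nodes": 4},
    {"id": 3, "user": "ann", "nodes": 5},
]
print(fair_share(queued, usage={"ann": 120, "bob": 30})["id"])
print(best_fit(queued, free_nodes=5)["id"])
```

The two policies pick different jobs from the same queue, which is the flexibility the external-module design buys.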
Faraji Khiavi, F; Amiri, E; Ghobadian, S; Roshankar, R
2015-01-01
Background: Increasing nurses' motivation is among the most important and complex nursing duties. A performance evaluation system can be used as a means to improve the quantity and quality of human resources. Therefore, the current research aimed to evaluate the effect of the final evaluation on job motivation from the perspective of nurses in Ahvaz hospitals, according to Herzberg's scheme. Methods: This investigation was conducted in 2012. The research population included nurses in Ahvaz educational hospitals. The sample size was calculated as 120, and sampling was performed based on classification and random sampling. The research instrument was a self-made questionnaire whose validity was confirmed through content analysis; Cronbach's alpha was calculated at 0.94. Data were examined using ANOVA, t-tests, and descriptive statistics. Results: The nurses considered the final evaluation relatively effective on management policy (3.2 ± 1.11) and monitoring (3.15 ± 1.15) among hygiene items, and on responsibility (3.15 ± 1.15) and progress (3.06 ± 1.24) among motivational factors. There was a significant association between the scores of nurses' views across age and sex groups (P = 0.01), but no significant association by educational level or marital status. Conclusion: Experienced nurses believed that evaluation has little effect on job motivation. If annual assessment of the various job aspects is considered, managers could use it as an efficient tool to motivate nurses. PMID:28316733
Automation and quality assurance of the production cycle
NASA Astrophysics Data System (ADS)
Hajdu, L.; Didenko, L.; Lauret, J.
2010-04-01
Processing datasets on the order of tens of terabytes is an onerous task, faced by production coordinators everywhere. Users solicit data productions and, especially for simulation data, the vast number of parameters (and sometimes incomplete requests) points to the need for tracking, controlling, and archiving all requests made, so that the production team can handle them in a coordinated way. With the advent of grid computing, parallel processing power has increased, but traceability has also become increasingly problematic due to the heterogeneous nature of Grids. Any one of a number of components may fail, invalidating the job or execution flow at various stages of completion, and re-submitting a few of the multitude of jobs (while keeping the entire dataset production consistent) is a difficult and tedious process. From the definition of the workflow to its execution, there is a strong need for validation, tracking, monitoring, and reporting of problems. To ease the process of requesting production workflows, STAR has implemented several components addressing full workflow consistency. A Web-based online submission request module, implemented using Drupal's Content Management System API, enforces that all parameters are described in advance in a uniform fashion. Upon submission, all jobs are independently tracked and (sometimes experiment-specific) discrepancies are detected and recorded, providing detailed information on where/how/when a job failed. Aggregate information on successes and failures is also provided in near real time.
XRootD popularity on hadoop clusters
NASA Astrophysics Data System (ADS)
Meoni, Marco; Boccali, Tommaso; Magini, Nicolò; Menichetti, Luca; Giordano, Domenico;
2017-10-01
Performance data and metadata of the computing operations at the CMS experiment are collected through a distributed monitoring infrastructure, currently relying on a traditional Oracle database system. This paper shows how to harness Big Data architectures in order to improve the throughput and the efficiency of such monitoring. A large set of operational data - user activities, job submissions, resources, file transfers, site efficiencies, software releases, network traffic, machine logs - is being injected into a readily available Hadoop cluster, via several data streamers. The collected metadata is further organized by running fast arbitrary queries; this offers the ability to test several Map&Reduce-based frameworks and measure the system speed-up when compared to the original database infrastructure. By leveraging a quality Hadoop data store and enabling an analytics framework on top, it is possible to design a mining platform to predict dataset popularity and discover patterns and correlations.
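As a toy illustration of the reduce step in such a popularity pipeline (the record fields below are assumed, not the actual CMS monitoring schema), per-dataset access counts and bytes read can be aggregated like so:

```python
from collections import Counter

# Toy stand-in for the popularity aggregation: the real system streams XRootD
# monitoring records into Hadoop and reduces them with Map&Reduce frameworks;
# here a Counter plays the reduce step. Record fields are invented.
access_log = [
    {"dataset": "/ZMM/AODSIM", "user": "u1", "read_bytes": 4_000},
    {"dataset": "/TTbar/AODSIM", "user": "u2", "read_bytes": 1_000},
    {"dataset": "/ZMM/AODSIM", "user": "u3", "read_bytes": 6_000},
]

accesses = Counter()           # map: emit (dataset, 1); reduce: sum
bytes_read = Counter()         # map: emit (dataset, read_bytes); reduce: sum
for rec in access_log:
    accesses[rec["dataset"]] += 1
    bytes_read[rec["dataset"]] += rec["read_bytes"]

print(accesses.most_common(1))   # most popular dataset by access count
```

The same two aggregations, run over billions of records, are what make a distributed framework like Hadoop worthwhile compared to a single relational database.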
Evaluation of impairment of the upper extremity.
Blair, S J; McCormick, E; Bear-Lehman, J; Fess, E E; Rader, E
1987-08-01
Evaluation of impairment of the upper extremity is the product of a team effort by the physician, occupational therapist, physical therapist, and rehabilitation counselor. A careful recording of the anatomic impairment should be made because this is critical in determining the subsequent functional activities of the extremity. The measurement criteria for clinical and functional evaluation include condition assessment instruments. Some assess the neurovascular system; others assess movement, including the monitoring of articular motion and musculotendinous function. Sensibility assessment instruments measure sympathetic response and detect single joint stimulus, discrimination, quantification, and recognition abilities. A detailed description of each assessment is recorded; physical capacity evaluation is only one component of the entire vocational evaluation. This evaluation answers questions regarding the injured worker's ability to return to his previous job. The work simulator is a useful instrument that allows rehabilitation and testing of the injured upper extremity. Job site evaluation includes assessment criteria for work performance, work behavior, and work environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doucet, Mathieu; Hobson, Tanner C.; Ferraz Leal, Ricardo Miguel
The Django Remote Submission (DRS) is a Django (Django, n.d.) application to manage long-running job submission, including starting the job, saving logs, and storing results. It is an independent project available as a standalone PyPI package (PyPi, n.d.) and can be easily integrated into any Django project. The source code is freely available as a GitHub repository (django-remote-submission, n.d.). To run jobs in the background, DRS takes advantage of Celery (Celery, n.d.), a powerful asynchronous job queue used for running tasks in the background, and the Redis server (Redis, n.d.), an in-memory data structure store. Celery uses brokers to pass messages between a Django project and the Celery workers; Redis is the message broker of DRS. In addition, DRS provides real-time monitoring of the progress of jobs and their associated logs. Through the Django Channels project (Channels, n.d.) and the use of Web Sockets, it is possible to asynchronously display the job status and the live job output (standard output and standard error) on a web page.
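The broker pattern DRS builds on can be sketched with in-memory stand-ins (plain Python, not the actual DRS or Celery API): a broker queue carries serialized task messages from the web app to a background worker, and the worker publishes status updates that a web socket would stream to the page. All names below are invented:

```python
import json
from collections import deque

# In-memory stand-ins: `broker` plays Redis, `worker_step` plays a Celery
# worker, and `status_feed` plays the web-socket updates pushed via Django
# Channels. All names here are illustrative.
broker = deque()
status_feed = []

def submit_job(name, command):
    # The web app serializes a task message and hands it to the broker.
    broker.append(json.dumps({"name": name, "command": command}))

def worker_step():
    # A background worker pops one message, runs it, and reports status.
    msg = json.loads(broker.popleft())
    status_feed.append((msg["name"], "started"))
    output = f"ran: {msg['command']}"   # a real worker would execute remotely
    status_feed.append((msg["name"], "finished"))
    return output

submit_job("reduce_run42", "python reduce.py --run 42")
out = worker_step()
print(status_feed)
```

Decoupling submission from execution through a broker is what lets the web request return immediately while the long-running job proceeds in the background.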
The Production Rate and Employment of Ph.D. Astronomers
NASA Astrophysics Data System (ADS)
Metcalfe, Travis S.
2008-02-01
In an effort to encourage self-regulation of the astronomy job market, I examine the supply of, and demand for, astronomers over time. On the supply side, I document the production rate of Ph.D. astronomers from 1970 to 2006 using the UMI Dissertation Abstracts database, along with data from other independent sources. I compare the long-term trends in Ph.D. production with federal astronomy research funding over the same time period, and I demonstrate that additional funding is correlated with higher subsequent Ph.D. production. On the demand side, I monitor the changing patterns of employment using statistics about the number and types of jobs advertised in the AAS Job Register from 1984 to 2006. Finally, I assess the sustainability of the job market by normalizing this demand by the annual Ph.D. production. The most recent data suggest that there are now annual advertisements for about one postdoctoral job, half a faculty job, and half a research/support position for every new domestic Ph.D. recipient in astronomy and astrophysics. The average new astronomer might expect to hold up to 3 jobs before finding a steady position.
Gunther, Eric J M; Sliker, Levin J; Bodine, Cathy
2017-11-01
Unemployment among the almost 5 million working-age adults with cognitive disabilities in the USA is a costly problem in both tax dollars and quality of life. Job coaching is an effective tool to overcome this, but the cost of job coaching services recurs with every new employee or change of employment role. There is a need for a cost-effective, automated alternative to job coaching that incurs a one-time cost and can be reused for multiple employees or roles. An effective automated job coach must be aware of its location and the location of destinations within the job site. This project presents a design and prototype of a cart-mounted indoor positioning and navigation system, with the necessary original software, using Ultra High Frequency Radio Frequency Identification (UHF RFID). The system presented in this project for use within a warehouse setting is one component of an automated job coach to assist in the job of order filler. The system demonstrated accuracy to within 0.3 m under the correct conditions, with strong potential to serve as the basis for an effective indoor navigation system to assist warehouse workers with disabilities. Implications for rehabilitation An automated job coach could improve employability of and job retention for people with cognitive disabilities. An indoor navigation system using ultra high frequency radio frequency identification was proposed with an average positioning accuracy of 0.3 m. The proposed system, in combination with a non-linear context-aware prompting system, could be used as an automated job coach for warehouse order fillers with cognitive disabilities.
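The abstract does not give the positioning algorithm in detail; one common UHF-RFID approach, a received-signal-strength weighted centroid over reference tags at known positions, can be sketched as follows (tag layout and RSSI values are invented, and this is not necessarily the paper's method):

```python
# Reference tags at known (x, y) positions in metres, and the RSSI (dBm)
# measured for each by the cart-mounted reader. Stronger RSSI ~ closer tag.
tags = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (0.0, 4.0)}
rssi = {"A": -40.0, "B": -60.0, "C": -60.0}

# Convert dBm to linear power and use it as a weight; the estimate is the
# power-weighted centroid of the tag positions.
weights = {t: 10 ** (r / 10) for t, r in rssi.items()}
total = sum(weights.values())
x = sum(w * tags[t][0] for t, w in weights.items()) / total
y = sum(w * tags[t][1] for t, w in weights.items()) / total
print(round(x, 3), round(y, 3))  # close to tag A, the strongest read
```

Real deployments must also cope with multipath fading and missed reads, which is why sub-metre accuracy such as the 0.3 m quoted above requires careful tag placement and filtering.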
Cantley, Linda F; Tessier-Sherman, Baylah; Slade, Martin D; Galusha, Deron; Cullen, Mark R
2016-01-01
Objective To examine associations between workplace injury and musculoskeletal disorder (MSD) risk and expert ratings of job-level psychosocial demand and job control, adjusting for job-level physical demand. Methods Among a cohort of 9260 aluminium manufacturing workers in jobs for which expert ratings of job-level physical and psychological demand and control were obtained during the 2 years following rating obtainment, multivariate mixed effects models were used to estimate relative risk (RR) of minor injury and minor MSD, serious injury and MSD, minor MSD only and serious MSD only by tertile of demand and control, adjusting for physical demand as well as other recognised risk factors. Results Compared with workers in jobs rated as having low psychological demand, workers in jobs with high psychological demand had 49% greater risk of serious injury and serious MSD requiring medical treatment, work restrictions or lost work time (RR=1.49; 95% CI 1.10 to 2.01). Workers in jobs rated as having low control displayed increased risk for minor injury and minor MSD (RR=1.45; 95% CI 1.12 to 1.87) compared with those in jobs rated as having high control. Conclusions Using expert ratings of job-level exposures, this study provides evidence that psychological job demand and job control contribute independently to injury and MSD risk in a blue-collar manufacturing cohort, and emphasises the importance of monitoring psychosocial workplace exposures in addition to physical workplace exposures to promote worker health and safety. PMID:26163544
Intelligent Computerized Training System
NASA Technical Reports Server (NTRS)
Wang, Lui; Baffes, Paul; Loftin, R. Bowen; Hua, Grace C.
1991-01-01
Intelligent computer-aided training system gives trainees same experience gained from best on-the-job training. Automated system designed to emulate behavior of experienced teacher devoting full time and attention to training novice. Proposes challenging training scenarios, monitors and evaluates trainee's actions, makes meaningful comments in response to errors, responds to requests for information, gives hints when appropriate, and remembers strengths and weaknesses so it designs suitable exercises. Used to train flight-dynamics officers in deploying satellites from Space Shuttle. Adapted to training for variety of tasks and situations, simply by modifying one or at most two of its five modules. Helps to ensure continuous supply of trained specialists despite scarcity of experienced and skilled human trainers.
Request queues for interactive clients in a shared file system of a parallel computing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin
Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks, and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
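The queue-merging idea can be sketched as follows (an illustrative simplification, not the actual implementation): per-class proxy queues feed one metadata queue, with a priority table standing in for the resource allocation decided by the virtual machine monitor. All paths and requests are invented:

```python
import heapq
from itertools import count

# Assumed policy: interactive metadata requests are served before batch ones,
# FIFO within a class. The priority table stands in for the resource
# allocation decided by the virtual machine monitor.
priority = {"interactive": 0, "batch": 1}
tick = count()                      # monotonic counter: FIFO tie-breaker
metadata_queue = []                 # the merged metadata queue (a heap)

def enqueue(client_class, request):
    # Each proxy forwards its queued requests here with a class priority.
    heapq.heappush(metadata_queue, (priority[client_class], next(tick), request))

enqueue("batch", "stat /scratch/job7/out.h5")
enqueue("interactive", "ls /home/user")
enqueue("batch", "open /scratch/job8/in.h5")
enqueue("interactive", "stat /home/user/notes.txt")

served = [heapq.heappop(metadata_queue)[2] for _ in range(len(metadata_queue))]
print(served)  # interactive requests drain first, each class in FIFO order
```

Serving interactive requests first keeps log-in sessions responsive even while large batch jobs flood the metadata server.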
Extending the Fermi-LAT Data Processing Pipeline to the Grid
NASA Astrophysics Data System (ADS)
Zimmer, S.; Arrabito, L.; Glanzman, T.; Johnson, T.; Lavalley, C.; Tsaregorodtsev, A.
2012-12-01
The Data Handling Pipeline (“Pipeline”) has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT), which launched in June 2008. Since then it has been used to completely automate the production of data quality monitoring quantities, the reconstruction and routine analysis of all data received from the satellite, and the delivery of science products to the collaboration and the Fermi Science Support Center. Aside from the reconstruction of raw data from the satellite (Level 1), data reprocessing and various event-level analyses also place reasonably heavy loads on the pipeline and computing resources. These other loads, unlike Level 1, can run continuously for weeks or months at a time. In addition, the pipeline receives heavy use in performing production Monte Carlo tasks. In daily use it receives a new data download every 3 hours and launches about 2000 jobs to process each download, typically completing the processing of the data before the next download arrives. The need for manual intervention has been reduced to less than 0.01% of submitted jobs. The Pipeline software is written almost entirely in Java and comprises several modules. It provides web services that allow online monitoring and charts summarizing workflow aspects and performance information. The server supports communication with several batch systems such as LSF and BQS, and recently also Sun Grid Engine and Condor. This is accomplished through dedicated job control services that, for Fermi, run at SLAC and at the other computing site involved in this large-scale framework, the Lyon computing center of IN2P3. Although the logic of a task differs there, we are evaluating a separate interface to the Dirac system in order to communicate with EGI sites and utilize Grid resources, using dedicated Grid-optimized systems rather than developing our own.
More recently the Pipeline and its associated data catalog have been generalized for use by other experiments, and are currently being used by the Enriched Xenon Observatory (EXO), Cryogenic Dark Matter Search (CDMS) experiments as well as for Monte Carlo simulations for the future Cherenkov Telescope Array (CTA).
Experienced job autonomy among maternity care professionals in The Netherlands.
Perdok, Hilde; Cronie, Doug; van der Speld, Cecile; van Dillen, Jeroen; de Jonge, Ank; Rijnders, Marlies; de Graaf, Irene; Schellevis, François G; Verhoeven, Corine J
2017-11-01
High levels of experienced job autonomy are found to be beneficial for healthcare professionals and for the relationship with their patients. The aim of this study was to assess how maternity care professionals in the Netherlands perceive their job autonomy in the Dutch maternity care system and whether they expect a new system of integrated maternity care to affect their experienced job autonomy. A cross-sectional survey. The Leiden Quality of Work Life Questionnaire was used to assess experienced job autonomy among maternity care professionals. Data were collected in the Netherlands in 2015. 799 professionals participated, of whom 362 were primary care midwives, 240 obstetricians, 93 clinical midwives and 104 obstetric nurses. The mean score for experienced job autonomy was highest for primary care midwives, followed by obstetricians, clinical midwives and obstetric nurses. Primary care midwives scored highest in expecting to lose their job autonomy in an integrated care system. There are significant differences in experienced job autonomy between maternity care professionals. When changing the maternity care system it will be a challenge to maintain a high level of experienced job autonomy for professionals. A decrease in job autonomy could lead to a reduction in job related wellbeing and in satisfaction with care among pregnant women.
Public storage for the Open Science Grid
NASA Astrophysics Data System (ADS)
Levshina, T.; Guru, A.
2014-06-01
The Open Science Grid infrastructure does not provide efficient means to manage the public storage offered by participating sites. A Virtual Organization that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators and VO support personnel is required to allocate or rescind storage space. One of the main requirements for a Public Storage implementation is that it should use the SRM or GridFTP protocols to access the Storage Elements provided by the OSG sites and not put any additional burden on the sites. By policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, then storing the output data produced by the job on a worker node for subsequent download to the local institution. When the amount of data is significant, the only means to temporarily store it is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System (iRODS), developed at RENCI, as a front-end service to the OSG SEs. The current architecture, state of deployment and performance test results will be discussed. We will also provide examples of current usage of the system by beta-users.
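The site-selection problem the abstract describes (a job needing to know storage location, availability, and free space before uploading) can be illustrated with a minimal sketch. The record layout and field names below are hypothetical, not part of any OSG tool:

```python
def pick_storage_element(ses, needed_gb):
    """Pick a reachable SE with enough free space, preferring the roomiest.

    ses: list of dicts like {"endpoint": ..., "available": bool, "free_gb": float}
    (an assumed layout; a real deployment would query SRM/GridFTP services).
    """
    usable = [s for s in ses if s["available"] and s["free_gb"] >= needed_gb]
    return max(usable, key=lambda s: s["free_gb"]) if usable else None

ses = [
    {"endpoint": "srm://se1.example.org", "available": True,  "free_gb": 120.0},
    {"endpoint": "srm://se2.example.org", "available": False, "free_gb": 900.0},
    {"endpoint": "srm://se3.example.org", "available": True,  "free_gb": 40.0},
]
print(pick_storage_element(ses, needed_gb=50.0)["endpoint"])  # srm://se1.example.org
```

Note that se2, despite having the most free space, is skipped because it is unavailable, mirroring the availability check the abstract calls for.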
A microeconomic scheduler for parallel computers
NASA Technical Reports Server (NTRS)
Stoica, Ion; Abdel-Wahab, Hussein; Pothen, Alex
1995-01-01
We describe a scheduler based on the microeconomic paradigm for scheduling a set of parallel jobs on-line in a multiprocessor system. In addition to the classical objectives of increasing system throughput and reducing response time, we consider fairness in allocating system resources among the users, and providing the user with control over the relative performance of his jobs. We associate with every user a savings account in which he receives money at a constant rate. When a user wants to run a job, he creates an expense account for that job, to which he transfers money from his savings account. The job uses the funds in its expense account to obtain the system resources it needs for execution. The share of the system resources allocated to the user is directly related to the rate at which the user receives money; the rate at which the user transfers money into a job's expense account controls the job's performance. We prove that starvation is not possible in our model. Simulation results show that our scheduler improves both system and user performance in comparison with two different variable partitioning policies. It is also shown to be effective in guaranteeing fairness and providing control over the performance of jobs.
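The funding mechanism described above (savings accounts, per-job expense accounts, shares proportional to spending) lends itself to a compact sketch. The class and field names here are illustrative, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    """A user accrues money into a savings account at a constant rate."""
    name: str
    income_rate: float            # money received per time unit
    savings: float = 0.0
    jobs: list = field(default_factory=list)

@dataclass
class Job:
    name: str
    expense: float = 0.0          # funds transferred from the owner's savings
    spend_rate: float = 0.0       # how fast the job spends to buy resources

def tick(users, dt=1.0):
    """Advance the economy one step; CPU shares are split by spending."""
    for u in users:
        u.savings += u.income_rate * dt
    bids = [(j, j.spend_rate * dt) for u in users for j in u.jobs
            if j.expense >= j.spend_rate * dt]
    total = sum(b for _, b in bids)
    shares = {}
    for job, bid in bids:
        job.expense -= bid        # the job pays for its resource share
        shares[job.name] = bid / total if total else 0.0
    return shares

# Two users fund one job each from their savings accounts.
alice = User("alice", income_rate=2.0, savings=10.0)
bob = User("bob", income_rate=1.0, savings=10.0)
j1, j2 = Job("j1", spend_rate=2.0), Job("j2", spend_rate=1.0)
alice.jobs.append(j1); bob.jobs.append(j2)
for user, job in ((alice, j1), (bob, j2)):
    user.savings -= 10.0; job.expense += 10.0
print(tick([alice, bob]))  # j1 gets a 2/3 share, j2 the remaining 1/3
```

A job that exhausts its expense account simply stops bidding until its owner transfers more money, which is the sense in which the transfer rate controls performance.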
De Croon, Einar M; Blonk, Roland W B; Sluiter, Judith K; Frings-Dresen, Monique H W
2005-02-01
Monitoring psychological job strain may help occupational physicians to take preventive action at the appropriate time. For this purpose, the 10-item trucker strain monitor (TSM), assessing work-related fatigue and sleeping problems in truck drivers, was developed. This study examined (1) test-retest reliability, (2) criterion validity of the TSM with respect to future sickness absence due to psychological health complaints and (3) usefulness of the TSM two-scale structure. The TSM and self-administered questionnaires, providing information about stressful working conditions (job control and job demands) and sickness absence, were sent to a random sample of 2000 drivers in 1998. Of the 1123 responders, 820 returned a completed questionnaire 2 years later (response: 72%). The TSM work-related fatigue scale, the TSM sleeping problems scale and the TSM composite scale showed satisfactory 2-year test-retest reliability (r=0.62, 0.66 and 0.67, respectively). The work-related fatigue scale, sleeping problems scale and composite scale had sensitivities of 61%, 65% and 61%, respectively, in identifying drivers with future sickness absence due to psychological health complaints. The specificity and positive predictive value of the TSM composite scale were 77% and 11%, respectively. The work-related fatigue scale and the sleeping problems scale were moderately strongly correlated (r=0.62). However, stressful working conditions were differentially associated with the two scales. The results support the test-retest reliability, criterion validity and two-factor structure of the TSM. In general, the results suggest that the use of occupation-specific psychological job strain questionnaires is fruitful.
System Enhancements for Mechanical Inspection Processes
NASA Technical Reports Server (NTRS)
Hawkins, Myers IV
2011-01-01
Quality inspection of parts is a major component of any project that requires hardware implementation. Keeping track of all of the inspection jobs is essential to a smooth-running process. Using HTML, the ColdFusion programming language, and the MySQL database, I created a web-based job management system for the 170 Mechanical Inspection Group that will replace the Microsoft Access-based management system. This will improve the way inspectors and the people awaiting inspection view and keep track of hardware as it moves through the inspection process. In the end, the management system should be able to insert jobs into a queue, place jobs in and out of a bonded state, pre-release bonded jobs, and close out inspection jobs.
Your Job Search Organiser. The Essential Guide for a Successful Job Search.
ERIC Educational Resources Information Center
Stevens, Paul
This publication organizes job searches in Australia by creating a paperwork system and recording essential information. It is organized into two parts: career planning and job search management. Part 1 contains the following sections: job evaluation, goal setting, job search obstacles--personal constraints and job search obstacles; and job search…
Estimating job runtime for CMS analysis jobs
NASA Astrophysics Data System (ADS)
Sfiligoi, I.
2014-06-01
The basic premise of pilot systems is to create an overlay scheduling system on top of leased resources. By definition, leases have a limited lifetime, so any job scheduled on such resources must finish before the lease is over, or it will be killed and all its computation wasted. In order to schedule jobs to resources effectively, the pilot system thus requires the expected runtime of the users' jobs. Past studies have shown that relying on user-provided estimates is not a valid strategy, so the system should try to make an estimate by itself. This paper provides a study of the historical data obtained from the Compact Muon Solenoid (CMS) experiment's Analysis Operations submission system. Clear patterns are observed, suggesting that predicting an expected job lifetime range is achievable with a high confidence level in this environment.
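A percentile-based estimate of the kind such a study motivates can be sketched in a few lines. The quantile choices and the grouping of "similar past jobs" are assumptions for illustration, not the paper's actual model:

```python
def runtime_range(history, lo=0.05, hi=0.95):
    """Predict a job-lifetime range from the empirical quantiles of the
    runtimes of similar past jobs (e.g. same user and task type)."""
    xs = sorted(history)
    def q(p):
        return xs[min(int(p * len(xs)), len(xs) - 1)]
    return q(lo), q(hi)

past = [55, 58, 60, 61, 62, 64, 70, 71, 75, 240]  # minutes; one outlier
low, high = runtime_range(past)
print(low, high)  # 55 240
```

A pilot system could then refuse to start a job whose `high` estimate exceeds the remaining lease lifetime, which is exactly the scheduling decision the abstract motivates.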
Self-Monitors Apply for a Job: Self-Presentation and Affective Consequences.
ERIC Educational Resources Information Center
Larkin, Judith E.; Pines, Harvey A.
High and low self-monitors were given the task of applying for a position that was or was not a good fit with their personality. Subjects were 97 introductory psychology students who had previously taken the 18-item Self-Monitoring Scale (SMS). They took the SMS again--as if it were being used to decide whether they would be offered a very…
48 CFR 22.1203-4 - Method of job offer.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Method of job offer. 22.1203-4 Section 22.1203-4 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... Under Service Contracts 22.1203-4 Method of job offer. A job offer made by a successor contractor must...
48 CFR 22.1203-4 - Method of job offer.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Method of job offer. 22.1203-4 Section 22.1203-4 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... Under Service Contracts 22.1203-4 Method of job offer. A job offer made by a successor contractor must...
Job Accommodation System: Project TIE (Technology in Employment).
ERIC Educational Resources Information Center
Roberts, Gary; Zimbrich, Karen; Butterworth, John; Hart, Debra
This manual presents a comprehensive evaluation tool that can be used by employees with disabilities, by rehabilitation practitioners, and by consultants to develop job accommodations in a variety of employment settings. The Job Accommodation System is designed to help in identifying, selecting, and implementing job accommodations and consists of…
NASA Astrophysics Data System (ADS)
Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho
2017-10-01
This study addresses a variant of job-shop scheduling in which jobs are grouped into job families but are processed individually. The problem can be found in various industrial systems, especially in the reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has component-matching requirements, it can be regarded as a job shop with job families, since the components of a product constitute a job family. In particular, sequence-dependent set-ups, in which the set-up time depends on the job just completed and the next job to be processed, are also considered. The objective is to minimize the total family flow time, where the flow time of a family is the maximum of the completion times of the jobs within that family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
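The objective function can be made concrete with a small evaluation routine. The dictionary-based interface below is illustrative only; it evaluates a given schedule rather than constructing one:

```python
from collections import defaultdict

def total_family_flow_time(completion, family_of):
    """Objective value: sum over families of the latest completion time
    among the jobs of that family.

    completion: {job: completion_time}; family_of: {job: family_id}.
    """
    latest = defaultdict(float)
    for job, c in completion.items():
        fam = family_of[job]
        latest[fam] = max(latest[fam], c)
    return sum(latest.values())

# Family 1 = {a, b} finishes at max(3, 5) = 5; family 2 = {c} at 4.
print(total_family_flow_time({"a": 3, "b": 5, "c": 4},
                             {"a": 1, "b": 1, "c": 2}))  # 9
```

An iterated greedy algorithm of the kind the paper proposes would repeatedly remove and reinsert jobs, keeping the schedule with the smallest value of this function.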
Optimizing Resource Utilization in Grid Batch Systems
NASA Astrophysics Data System (ADS)
Gellrich, Andreas
2012-12-01
On Grid sites, the requirements of computing tasks (jobs) for computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to a POSIX UID/GID according to the VO, VOMS group, and role information contained in the VOMS proxy. The UID/GID then makes it possible to distinguish jobs, provided users use VOMS proxies as planned by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the MAUI scheduler. In tests these limitations could be overcome with a home-made scheduler.
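The mapping from VOMS proxy attributes to a batch class can be sketched as a small rule table. The FQAN patterns, class names, and properties below are hypothetical examples, not any site's real configuration:

```python
import fnmatch

# Hypothetical FQAN -> batch-class rules, most specific first.
RULES = [
    ("/hone/Role=production", "mc",       {"cpu_bound": True}),
    ("/hone/*",               "analysis", {"cpu_bound": False}),
]

def classify(fqan):
    """Map a VOMS FQAN to a batch class so CPU-bound Monte Carlo jobs
    and I/O-heavy analysis jobs can be spread differently over nodes."""
    for pattern, klass, props in RULES:
        if fnmatch.fnmatch(fqan, pattern):
            return klass, props
    return "default", {}

print(classify("/hone/Role=production")[0])  # mc
print(classify("/hone/Role=lcgadmin")[0])    # analysis
```

In a real deployment the equivalent logic lives in the site's pool-account mapping and scheduler configuration; the ordering of rules matters, since the first match wins.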
Academics Job Satisfaction and Job Stress across Countries in the Changing Academic Environments
ERIC Educational Resources Information Center
Shin, Jung Cheol; Jung, Jisun
2014-01-01
This study examined job satisfaction and job stress across 19 higher education systems. We classified the 19 countries according to their job satisfaction and job stress and applied regression analysis to test whether new public management has impacts on either or both job satisfaction and job stress. According to this study, strong market driven…
20 CFR 670.535 - Are Job Corps centers required to establish behavior management systems?
Code of Federal Regulations, 2010 CFR
2010-04-01
... behavior management systems? 670.535 Section 670.535 Employees' Benefits EMPLOYMENT AND TRAINING... systems? (a) Yes, each Job Corps center must establish and maintain its own student incentives system to encourage and reward students' accomplishments. (b) The Job Corps center must establish and maintain a...
Summer Institute in Engineering and Computer Applications: Learning Through Experience
NASA Technical Reports Server (NTRS)
Langdon, Joan S.
1995-01-01
The document describing the Summer Institute project is made up of the following information: Administrative procedures; Seminars/Special Courses/Tours/College fair; Facilities/Transportation; Staff and Administration; Collaboration; Participant/Project monitoring and evaluation; Fiscal and developmental activities; Job readiness/Job internship development and placement; and Student Follow-up/Tracking. Appendices include presentations, self-evaluations, and abstracts and papers developed by the students during their participation in the program.
2005-08-01
[Garbled report-documentation-page fragment from U.S. Army Research Institute for the Behavioral and Social Sciences Technical Report 1168 (Arlington, VA); recoverable topics include a situational judgment test and indicators of person-environment fit such as job satisfaction.]
2007-01-01
[Garbled fragment describing a framework, released in an alpha version to the DoD user community, that allows users to prepare, submit, monitor, and manage large numbers of CFD jobs on systems such as the ERDC XT3 and Sapphire; the CaseMan input allows multiple values or a range of values, useful for parametric studies.]
Abstracts of ARI Research Publications, FY 1977
1980-04-01
significant part of the job and (b) the amount of preparation needed. Excluding technical skill specialty activities, the jobs of company commanders in...ground sensors (UGS). When personnel or vehicle movements activate a UGS in the vicinity, a monitor display elsewhere indicates the activation. This...Operators were able to detect more targets during periods of low target activity than during periods of high target activity. However, accuracy of
Cantley, Linda F; Tessier-Sherman, Baylah; Slade, Martin D; Galusha, Deron; Cullen, Mark R
2016-04-01
To examine associations between workplace injury and musculoskeletal disorder (MSD) risk and expert ratings of job-level psychosocial demand and job control, adjusting for job-level physical demand. Among a cohort of 9260 aluminium manufacturing workers in jobs for which expert ratings of job-level physical and psychological demand and control were obtained during the 2 years following rating obtainment, multivariate mixed effects models were used to estimate relative risk (RR) of minor injury and minor MSD, serious injury and MSD, minor MSD only and serious MSD only by tertile of demand and control, adjusting for physical demand as well as other recognised risk factors. Compared with workers in jobs rated as having low psychological demand, workers in jobs with high psychological demand had 49% greater risk of serious injury and serious MSD requiring medical treatment, work restrictions or lost work time (RR=1.49; 95% CI 1.10 to 2.01). Workers in jobs rated as having low control displayed increased risk for minor injury and minor MSD (RR=1.45; 95% CI 1.12 to 1.87) compared with those in jobs rated as having high control. Using expert ratings of job-level exposures, this study provides evidence that psychological job demand and job control contribute independently to injury and MSD risk in a blue-collar manufacturing cohort, and emphasises the importance of monitoring psychosocial workplace exposures in addition to physical workplace exposures to promote worker health and safety.
Decision Support System Based on Computational Collective Intelligence in Campus Information Systems
NASA Astrophysics Data System (ADS)
Saito, Yoshihito; Matsuo, Tokuro
Education institutions such as universities hold a great deal of information, including book information, equipment administration information, student information, and more, accumulated over time. By integrating and reusing this preserved information on careers and course-taking as collective intelligence on campus, a university can effectively support students' decision making about job seeking and subject choice. The aim of this support is to increase students' motivation. In this paper, we focus on the course records and job information contained in student data, and propose a method to analyze the correlation between patterns of course-taking and the jobs students obtain. We then propose a support system for job seeking and course selection based on this method. For a student who has a particular job in mind, the system supports the choice of lectures by recommending a set of appropriate lecture groups. Conversely, for a student who has no particular job in mind, the system supports the decision about which job to seek by presenting appropriate job families related to the lecture groups the student has already taken. The contribution of this paper is a concrete method for reusing campus collective information, an implemented system, and user perspectives.
Job design, employment practices and well-being: a systematic review of intervention studies.
Daniels, Kevin; Gedikli, Cigdem; Watson, David; Semkina, Antonina; Vaughn, Oluwafunmilayo
2017-09-01
There is inconsistent evidence that deliberate attempts to improve job design realise improvements in well-being. We investigated the role of other employment practices, either as instruments for job redesign or as instruments that augment job redesign. Our primary outcome was well-being. Where studies also assessed performance, we considered performance as an outcome. We reviewed 33 intervention studies. We found that well-being and performance may be improved by: training workers to improve their own jobs; training coupled with job redesign; and system wide approaches that simultaneously enhance job design and a range of other employment practices. We found insufficient evidence to make any firm conclusions concerning the effects of training managers in job redesign and that participatory approaches to improving job design have mixed effects. Successful implementation of interventions was associated with worker involvement and engagement with interventions, managerial commitment to interventions and integration of interventions with other organisational systems. Practitioner Summary: Improvements in well-being and performance may be associated with system-wide approaches that simultaneously enhance job design, introduce a range of other employment practices and focus on worker welfare. Training may have a role in initiating job redesign or augmenting the effects of job design on well-being.
Coordinated Fault Tolerance for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack; Bosilca, George; et al.
2013-04-08
Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features for multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as jobs schedulers, resource managers, and monitoring tools.
Development of Career Progression Systems for Employees in the Foodservice Industry. Final Report.
ERIC Educational Resources Information Center
National Restaurant Association, Chicago, IL.
Firms representing four segments of the foodservice industry (institutional foodservice (9 jobs), commercial restaurants (19 jobs), hotel foodservice (100 jobs), and airline foodservice (10 jobs)) participated in a career and training study to test the feasibility of designing and implementing career progression (c.p.) systems within these…
Public School Educator and Teacher Educator Job Analysis Ratings of Certification Test Objectives.
ERIC Educational Resources Information Center
Silvestro, John R.; And Others
The job analysis procedures used in the development of the Illinois Certification Testing System are described. The degree of congruence between job analysis ratings provided by public school educators (PSEs) and teacher educators (TEs) who completed the job analysis surveys is examined. National Evaluation Systems, Inc., and the Illinois State…
Web Based Information System for Job Training Activities Using Personal Extreme Programming (PXP)
NASA Astrophysics Data System (ADS)
Asri, S. A.; Sunaya, I. G. A. M.; Rudiastari, E.; Setiawan, W.
2018-01-01
Job training is one of the subjects in a university or polytechnic that involves many users and reporting activities. Time and distance became problems for users in reporting and carrying out obligatory tasks during job training, due to the location where the job training took place. This research developed a web-based job training information system to overcome these problems. The system was developed using Personal Extreme Programming (PXP). PXP is an agile method that combines Extreme Programming (XP) and the Personal Software Process (PSP). The information system was developed and tested; regarding system functionality, 24% of users strongly agree, 74% agree, 1% disagree and 0% strongly disagree.
20 CFR 670.535 - Are Job Corps centers required to establish behavior management systems?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Are Job Corps centers required to establish behavior management systems? 670.535 Section 670.535 Employees' Benefits EMPLOYMENT AND TRAINING... systems? (a) Yes, each Job Corps center must establish and maintain its own student incentives system to...
Steptoe, A; Cropley, M
2000-05-01
To test the hypothesis that work stress (persistent high job demands over 1 year) in combination with high reactivity to mental stress predict ambulatory blood pressure. Assessment of cardiovascular responses to standardized behavioural tasks, job demands, and ambulatory blood pressure over a working day and evening after 12 months. We studied 81 school teachers (26 men, 55 women), 36 of whom experienced persistent high job demands over 1 year, while 45 reported lower job demands. Participants were divided on the basis of high and low job demands, and high and low systolic pressure reactions to an uncontrollable stress task. Blood pressure and concurrent physical activity were monitored using ambulatory apparatus from 0900 to 2230 h on a working day. Cardiovascular stress reactivity was associated with waist/hip ratio. Systolic and diastolic pressure during the working day were greater in high job demand participants who were stress reactive than in other groups, after adjustment for age, baseline blood pressure, body mass index and negative affectivity. The difference was not accounted for by variations in physical activity. Cardiovascular stress reactivity and sustained psychosocial stress may act in concert to increase cardiovascular risk in susceptible individuals.
Examining job tenure and lost-time claim rates in Ontario, Canada, over a 10-year period, 1999-2008.
Morassaei, Sara; Breslin, F Curtis; Shen, Min; Smith, Peter M
2013-03-01
We sought to examine the association between job tenure and lost-time claim rates over a 10-year period in Ontario, Canada. Data were obtained from workers' compensation records and labour force survey data from 1999 to 2008. Claim rates were calculated for gender, age, industry, occupation, year and job tenure group. A multivariate analysis and examination of effect modification were performed. Differences in injury event and source of injury were also examined by job tenure. Lost-time claim rates were significantly higher for workers with shorter job tenure, regardless of other factors. Claim rates for new workers differed by gender, age and industry, but remained relatively constant at an elevated rate over the observed time period. This study is the first to examine lost-time claim rates by job tenure over a time period during which overall claim rates generally declined. Claim rates did not show a convergence by job tenure. Findings highlight that new workers are still at elevated risk, and suggest the need for improved training, reducing exposures among new workers, promoting permanent employment, and monitoring work injury trends and risk factors.
Requesting Different Node Types When Submitting Jobs on the Peregrine System
High-Performance Computing | NREL
Job Management Requirements for NAS Parallel Systems and Clusters
NASA Technical Reports Server (NTRS)
Saphir, William; Tanner, Leigh Ann; Traversat, Bernard
1995-01-01
A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.
Tobe, Sheldon W; Kiss, Alexander; Szalai, John Paul; Perkins, Nancy; Tsigoulis, Michelle; Baker, Brian
2005-08-01
Psychosocial stressors such as job strain and marital stress have been associated with a sustained increase in blood pressure (BP). We evaluated whether job strain and marital cohesion were associated with ambulatory BP in workers with normal or untreated elevated BP, using baseline data from the Double Exposure study. The study population included 248 male and female volunteers who were nonmedicated, employed, and living with a significant other, all for a minimum of 6 months. Blood pressure was measured with an ambulatory BP monitor and participants completed a diary that recorded time during work, spousal contact, and sleep. Job strain and marital cohesion were calculated from the Job Content Questionnaire and the Dyadic Adjustment Scale, respectively. Of the subjects, 54.4% were female, with a mean age of 50.8 years (SD 6.6). In all, 21.3% reported job strain. Significant associations were found between 24-h systolic BP (SBP) and alcohol consumption (P = .033), job strain (P = .007), male gender (P = .004), and age (P = .039); SBP was inversely associated with exercise (P = .037). An interaction between 24-h SBP, job strain, and marital cohesion was found, such that greater marital cohesion was associated with lower SBP in subjects with job strain. Psychosocial factors may influence the development of early hypertension. This should be clarified by the cohort phase of the Double Exposure study.
Stochastic scheduling on a repairable manufacturing system
NASA Astrophysics Data System (ADS)
Li, Wei; Cao, Jinhua
1995-08-01
In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job processing must restart some time later, once the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs having constant due dates; (3) the weighted number of late jobs having exponentially distributed random due dates, which generalizes some previous results.
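For objective (1) on a single machine without breakdowns, the classical weighted-shortest-expected-processing-time rule is the natural baseline; the sketch below illustrates that rule only, not the breakdown-and-repair analysis of the paper:

```python
def wsept_order(jobs):
    """Order jobs by weight / expected processing time, descending (WSEPT).

    jobs: list of (name, weight, expected_time) tuples. On a single
    reliable machine this rule minimizes the expected weighted sum of
    completion times, the first objective considered above.
    """
    return sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

jobs = [("a", 1, 4.0), ("b", 3, 3.0), ("c", 2, 1.0)]
print([name for name, _, _ in wsept_order(jobs)])  # ['c', 'b', 'a']
```

The point of results like those in the paper is that such index policies can remain optimal, in expectation, even when processing is interrupted by machine breakdowns and repairs.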
Organizational commitment and job satisfaction among nurses in Serbia: a factor analysis.
Veličković, Vladica M; Višnjić, Aleksandar; Jović, Slađana; Radulović, Olivera; Šargić, Čedomir; Mihajlović, Jovan; Mladenović, Jelena
2014-01-01
One of the basic prerequisites of efficient organizational management in health institutions is certainly monitoring and measuring satisfaction of employees and their commitment to the health institution in which they work. The aim of this article was to identify and test factors that may have a predictive effect on job satisfaction and organizational commitment. We conducted a cross-sectional study that included 1,337 nurses from Serbia. Data were analyzed by using exploratory factor analysis, multivariate regressions, and descriptive statistics. The study identified three major factors of organizational commitment: affective commitment, disloyalty, and continuance commitment. The most important predictors of these factors were positive professional identification, extrinsic job satisfaction, and intrinsic job satisfaction (p < .0001). Predictors significantly affecting both job satisfaction and organizational commitment were identified as well; the most important of which was positive professional identification (p < .0001). This study identified the main factors affecting job satisfaction and organizational commitment of nurses, which formed a good basis for the creation of organizational management policy and human resource management policy in health institutions in Serbia.
A user friendly database for use in ALARA job dose assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zodiates, A.M.; Willcock, A.
1995-03-01
The pressurized water reactor (PWR) design chosen for adoption by Nuclear Electric plc was based on the Westinghouse Standard Nuclear Unit Power Plant (SNUPPS). This design was developed to meet the United Kingdom requirements and these improvements are embodied in the Sizewell B plant which will start commercial operation in 1994. A user-friendly database was developed to assist the station in the dose and ALARP assessments of the work expected to be carried out during station operation and outage. The database stores the information in an easily accessible form and enables updating, editing, retrieval, and searches of the information. The database contains job-related information such as job locations, number of workers required, job times, and the expected plant doserates. It also contains the means to flag job requirements such as requirements for temporary shielding, flushing, scaffolding, etc. Typical uses of the database are envisaged to be in the prediction of occupational doses, the identification of high collective and individual dose jobs, use in ALARP assessments, setting of dose targets, monitoring of dose control performance, and others.
Forcella, Laura; Bonfiglioli, Roberta; Cutilli, Piero; Siciliano, Eugenio; Di Donato, Angela; Di Nicola, Marta; Antonucci, Andrea; Di Giampaolo, Luca; Boscolo, Paolo; Violante, Francesco Saverio
2012-07-01
To study job stress and upper limb biomechanical overload due to repetitive and forceful manual activities in a factory producing high fashion clothing. A total of 518 workers (433 women and 85 men) were investigated to determine anxiety, occupational stress (using the Italian version of the Karasek Job Content Questionnaire) and perception of symptoms (using the Italian version of the Somatization scale of Symptom Checklist SCL-90). Biomechanical overload was analyzed using the OCRA Check list. Biomechanical assessment did not reveal high-risk jobs, except for cutting. Although the perception of anxiety and job insecurity was within the normal range, all the workers showed a high level of job strain (correlated with the perception of symptoms) due to very low decision latitude. Occupational stress was only partly in line with biomechanical risk factors; the perception of low decision latitude seems to play the major role in determining job strain. Interactions between physical and psychological factors could not be demonstrated. Nevertheless, simultaneous long-term monitoring of occupational stress features and biomechanical overload could guide workplace interventions aimed at reducing the risk of adverse health effects.
Unified Monitoring Architecture for IT and Grid Services
NASA Astrophysics Data System (ADS)
Aimar, A.; Aguado Corman, A.; Andrade, P.; Belov, S.; Delgado Fernandez, J.; Garrido Bear, B.; Georgiou, M.; Karavakis, E.; Magnoni, L.; Rama Ballesteros, R.; Riahi, H.; Rodriguez Martinez, J.; Saiz, P.; Zolnai, D.
2017-10-01
This paper provides a detailed overview of the Unified Monitoring Architecture (UMA) that aims at merging the monitoring of the CERN IT data centres and the WLCG monitoring using common and widely-adopted open source technologies such as Flume, Elasticsearch, Hadoop, Spark, Kibana, Grafana and Zeppelin. It provides insights and details on the lessons learned, explaining the work performed in order to monitor the CERN IT data centres and the WLCG computing activities such as the job processing, data access and transfers, and the status of sites and services.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-13
... DEPARTMENT OF LABOR Office of the Secretary Agency Information Collection Activities; Submission for OMB Review; Comment Request; Middle Class Tax Relief and Job Creation Act of 2012 State Monitoring... Creation Act of 2012 State Monitoring,'' to the Office of Management and Budget (OMB) for review and...
Workforce Diversity: Monitoring Employment Trends in Public Organizations.
ERIC Educational Resources Information Center
Guajardo, Salomon A.
1999-01-01
Presents the use of research designs that can be used by human resource specialists to evaluate and monitor work force diversity and minority employment. Compares results of Repeated Measure Analyses of Variance with One Within-subjects Factor design with Repeated Measure Analyses of Variance with One Within-subjects Factor by job category. (JOW)
A study of dynamic data placement for ATLAS distributed data management
NASA Astrophysics Data System (ADS)
Beermann, T.; Stewart, G. A.; Maettig, P.
2015-12-01
This contribution presents a study on the applicability and usefulness of dynamic data placement methods for data-intensive systems, such as ATLAS distributed data management (DDM). In this system the jobs are sent to the data, so a good distribution of data is essential. Ways of forecasting workload patterns are examined, which are then used to redistribute data to achieve a better overall utilisation of computing resources and to reduce the waiting time of jobs before they can run on the grid. This method is based on a tracer infrastructure that is able to monitor and store historical data accesses and which is used to create popularity reports. These reports provide detailed summaries about past data accesses, including information about the accessed files, the involved users and the sites. From these past data it is then possible to make near-term forecasts of data popularity. This study evaluates simple prediction methods as well as more complex methods like neural networks. Based on the outcome of the predictions, a redistribution algorithm deletes unused replicas and adds new replicas for potentially popular datasets. Finally, a grid simulator is used to examine the effects of the redistribution. The simulator replays workload on different data distributions while measuring the job waiting time and site usage. The study examines how the average waiting time is affected by the amount of data that is moved, how it differs for the various forecasting methods, and how that compares to the optimal data distribution.
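The redistribution step described in this abstract (delete unused replicas, add replicas for popular datasets) can be sketched as follows. This is a minimal illustration, not the ATLAS DDM implementation; the thresholds, the one-replica floor, and the function name are all assumptions chosen for the example:

```python
from collections import Counter

def plan_redistribution(access_trace, replica_counts,
                        add_threshold=100, delete_threshold=5):
    """Suggest replica changes from a simple popularity report.

    access_trace   -- iterable of dataset names, one entry per recorded access
    replica_counts -- dict: dataset name -> current number of replicas
    Returns (to_add, to_delete): datasets that should gain a replica
    because they are popular, and datasets whose spare replicas look unused.
    """
    popularity = Counter(access_trace)
    # Popular datasets get an extra replica to spread job load.
    to_add = sorted(d for d, hits in popularity.items()
                    if hits >= add_threshold)
    # Rarely accessed datasets with spare replicas lose one; at least one
    # copy is always kept, so no dataset is ever deleted outright.
    to_delete = sorted(d for d, n in replica_counts.items()
                       if n > 1 and popularity.get(d, 0) <= delete_threshold)
    return to_add, to_delete
```

In the full system the forecast would come from a prediction model (averages or a neural network, as the study compares) rather than from raw counts.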
5 CFR 532.217 - Appropriated fund survey jobs.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Appropriated fund survey jobs. 532.217... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.217 Appropriated fund survey jobs. (a) A lead agency shall survey the following required jobs: Job title Job grade Janitor (Light) 1 Janitor (Heavy) 2...
5 CFR 532.225 - Nonappropriated fund survey jobs.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Nonappropriated fund survey jobs. 532.225... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.225 Nonappropriated fund survey jobs. (a) A lead agency shall survey the following required jobs: Job title Job grade Janitor (Light) 1 Food Service...
5 CFR 532.225 - Nonappropriated fund survey jobs.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Nonappropriated fund survey jobs. 532.225... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.225 Nonappropriated fund survey jobs. (a) A lead agency shall survey the following required jobs: Job title Job grade Janitor (Light) 1 Food Service...
5 CFR 532.217 - Appropriated fund survey jobs.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Appropriated fund survey jobs. 532.217... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.217 Appropriated fund survey jobs. (a) A lead agency shall survey the following required jobs: Job title Job grade Janitor (Light) 1 Janitor (Heavy) 2...
5 CFR 532.225 - Nonappropriated fund survey jobs.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Nonappropriated fund survey jobs. 532.225... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.225 Nonappropriated fund survey jobs. (a) A lead agency shall survey the following required jobs: Job title Job grade Janitor (Light) 1 Food Service...
5 CFR 532.217 - Appropriated fund survey jobs.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Appropriated fund survey jobs. 532.217... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.217 Appropriated fund survey jobs. (a) A lead agency shall survey the following required jobs: Job title Job grade Janitor (Light) 1 Janitor (Heavy) 2...
5 CFR 532.225 - Nonappropriated fund survey jobs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Nonappropriated fund survey jobs. 532.225... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.225 Nonappropriated fund survey jobs. (a) A lead agency shall survey the following required jobs: Job title Job grade Janitor (Light) 1 Food Service...
5 CFR 532.217 - Appropriated fund survey jobs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Appropriated fund survey jobs. 532.217... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.217 Appropriated fund survey jobs. (a) A lead agency shall survey the following required jobs: Job title Job grade Janitor (Light) 1 Janitor (Heavy) 2...
5 CFR 532.225 - Nonappropriated fund survey jobs.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Nonappropriated fund survey jobs. 532.225... PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.225 Nonappropriated fund survey jobs. (a) A lead agency shall survey the following required jobs: Job title Job grade Janitor (Light) 1 Food Service...
Running Jobs on the Peregrine System | High-Performance Computing | NREL
Guidance on running jobs on the Peregrine high-performance computing (HPC) system: running different types of jobs, batch job scheduling policies (queue names, limits, etc.), requesting different node types, and sample batch scripts.
20 CFR 638.519 - Incentives system.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR JOB CORPS PROGRAM UNDER TITLE IV-B OF THE JOB TRAINING PARTNERSHIP ACT Center Operations § 638.519 Incentives system. The center... established by the Job Corps Director. ...
Job-mix modeling and system analysis of an aerospace multiprocessor.
NASA Technical Reports Server (NTRS)
Mallach, E. G.
1972-01-01
An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
ERIC Educational Resources Information Center
Stevenson, Kimberly
This master's thesis describes the development of an expert system and interactive videodisc computer-based instructional job aid used for assisting in the integration of electron beam lithography devices. Comparable to all comprehensive training, expert system and job aid development require a criterion-referenced systems approach treatment to…
A quantitative model of application slow-down in multi-resource shared systems
Lim, Seung-Hwan; Kim, Youngjae
2016-12-26
Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor accurately captures the slow-down of concurrent applications in such environments.
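The quadratic structure of the D-factor model can be illustrated with a small sketch. The exact loading-matrix formulation is in the paper; the code below is a simplified stand-in that assumes each job's dilation is 1 (its isolated run time) plus a quadratic term coupling its own resource-loading vector with the combined load of its co-runners. The array shapes and the `interaction` matrix are assumptions for the example:

```python
import numpy as np

def dilation_factors(loadings, interaction):
    """Estimate per-job dilation factors for co-scheduled jobs.

    loadings    -- (k, r) array: one r-dimensional resource-loading
                   vector per job (e.g. CPU, disk, network shares)
    interaction -- (r, r) matrix weighting how contention on each pair
                   of resources slows jobs down
    Returns a length-k array of dilation factors (1.0 = no slow-down).
    """
    loadings = np.asarray(loadings, dtype=float)
    # Load each job sees from its co-runners (everyone else's vectors).
    others = loadings.sum(axis=0) - loadings
    # Quadratic coupling: l_i^T * M * (sum of other jobs' loadings).
    quad = np.einsum('ir,rs,is->i', loadings, interaction, others)
    return 1.0 + quad
```

A job running alone has no co-runner load, so its dilation is exactly 1; two identical jobs contending on the same resource slow each other down symmetrically.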
Simple, Scalable, Script-Based Science Processor (S4P)
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Vollmer, Bruce; Berrick, Stephen; Mack, Robert; Pham, Long; Zhou, Bryan; Wharton, Stephen W. (Technical Monitor)
2001-01-01
The development and deployment of data processing systems to process Earth Observing System (EOS) data has proven to be costly and prone to technical and schedule risk. Integration of science algorithms into a robust operational system has been difficult. The core processing system, based on commercial tools, has demonstrated limitations at the rates needed to produce the several terabytes per day for EOS, primarily due to job management overhead. This has motivated an evolution in the EOS Data Information System toward a more distributed one incorporating Science Investigator-led Processing Systems (SIPS). As part of this evolution, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a simplified processing system to accommodate the increased load expected with the advent of reprocessing and launch of a second satellite. This system, the Simple, Scalable, Script-based Science Processor (S4P), may also serve as a resource for future SIPS. The current EOSDIS Core System was designed to be general, resulting in a large, complex mix of commercial and custom software. In contrast, many simpler systems, such as the EROS Data Center AVHRR IKM system, rely on a simple directory structure to drive processing, with directories representing different stages of production. The system passes input data to a directory, and the output data is placed in a "downstream" directory. The GES DAAC's S4P is based on the latter concept, but with modifications to allow varied science algorithms and improve portability. It uses a factory assembly-line paradigm: when work orders arrive at a station, an executable is run, and output work orders are sent to downstream stations. The stations are implemented as UNIX directories, while work orders are simple ASCII files.
The core S4P infrastructure consists of a Perl program called stationmaster, which detects newly arrived work orders and forks a job to run the appropriate executable (registered in a configuration file for that station). Although S4P is written in Perl, the executables associated with a station can be any program that can be run from the command line, i.e., non-interactively. An S4P instance is typically monitored using a simple Graphical User Interface. However, the reliance of S4P on UNIX files and directories also allows visibility into the state of stations and jobs using standard operating system commands, permitting remote monitor/control over low-bandwidth connections. S4P is being used as the foundation for several small- to medium-size systems for data mining, on-demand subsetting, processing of direct broadcast Moderate Resolution Imaging Spectroradiometer (MODIS) data, and Quick-Response MODIS processing. It has also been used to implement a large-scale system to process MODIS Level 1 and Level 2 Standard Products, which will ultimately process close to 2 TB/day.
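The station mechanics described above can be sketched in a few lines. S4P itself is a Perl stationmaster that watches directories and forks registered executables; the Python function below is a hypothetical single-pass analogue (the function name, `.wo` suffix, and handler callable are my own illustrative choices):

```python
import os

def run_station(station_dir, downstream_dir, handler, suffix=".wo"):
    """Process every work-order file waiting in a station directory.

    Mimics one polling pass of a stationmaster: each work order (a plain
    text file) is fed to the station's handler, the handler's output is
    written as a new work order in the downstream station's directory,
    and the consumed order is removed. Returns the processed file names.
    """
    processed = []
    for name in sorted(os.listdir(station_dir)):
        if not name.endswith(suffix):
            continue                      # ignore logs, config files, etc.
        src = os.path.join(station_dir, name)
        with open(src) as fh:
            order = fh.read()
        result = handler(order)           # the station's "executable"
        with open(os.path.join(downstream_dir, name), "w") as fh:
            fh.write(result)
        os.remove(src)                    # the order leaves this station
    
        processed.append(name)
    return processed
```

Because the state of the pipeline is just files in directories, `ls` alone shows which jobs are pending at each station, which is exactly the low-bandwidth monitoring property the abstract highlights.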
Collectively loading an application in a parallel computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.
Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
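The leader-based pattern in this abstract (one node reads the application from storage, then distributes it to the rest) can be sketched without MPI machinery. The sketch below is a plain-Python analogue under simplifying assumptions: nodes are dicts standing in for node memory, leader election is just the lowest id, and the broadcast is a loop rather than a tree:

```python
def collective_load(nodes, subset, read_application):
    """Load one application image onto a subset of compute nodes.

    nodes            -- dict: node id -> dict used as the node's memory
    subset           -- node ids selected to execute the job
    read_application -- callable that fetches the application bytes
    Only the elected leader touches storage; it then broadcasts the
    image to every node in the subset, so the file system sees a single
    read regardless of how many nodes run the job.
    """
    leader = min(subset)                  # deterministic leader election
    image = read_application()            # single read from storage
    for node_id in subset:                # "broadcast" to the subset
        nodes[node_id]["application"] = image
    return leader
```

The point of the pattern is the single storage read: with thousands of compute nodes, having every node open the same executable would hammer the shared file system.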
Chowdhary, Mudit; Chhabra, Arpit M; Switchenko, Jeffrey M; Jhaveri, Jaymin; Sen, Neilayan; Patel, Pretesh R; Curran, Walter J; Abrams, Ross A; Patel, Kirtesh R; Marwaha, Gaurav
2017-09-01
To examine whether permanent radiation oncologist (RO) employment opportunities vary based on geography. A database of full-time RO jobs was created by use of American Society for Radiation Oncology (ASTRO) Career Center website posts between March 28, 2016, and March 31, 2017. Jobs were first classified by region based on US Census Bureau data. Jobs were further categorized as academic or nonacademic depending on the employer. The prevalence of job openings per 10 million population was calculated to account for regional population differences. The χ2 test was implemented to compare position type across regions. The number and locations of graduating ROs during our study period were calculated using National Resident Matching Program data. The χ2 goodness-of-fit test was then used to compare a set of observed proportions of jobs with a corresponding set of hypothesized proportions of jobs based on the proportions of graduates per region. A total of 211 unique jobs were recorded. The highest and lowest percentages of jobs were seen in the South (31.8%) and Northeast (18.5%), respectively. Of the total jobs, 82 (38.9%) were academic; the South had the highest percentage of overall academic jobs (35.4%), while the West had the lowest (14.6%). Regionally, the Northeast had the highest percentage of academic jobs (56.4%), while the West had the lowest (26.7%). A statistically significant difference was noted between regional academic and nonacademic job availability (P=.021). After we accounted for unit population, the Midwest had the highest number of total jobs per 10 million (9.0) while the South had the lowest (5.9). A significant difference was also observed in the proportion of RO graduates versus actual jobs per region (P=.003), with a surplus of trainees seen in the Northeast. This study presents a quantitative analysis of the RO job market.
We found a disproportionately small number of opportunities compared with graduates trained in the Northeast, as well as a significant regional imbalance of academic versus nonacademic jobs. Long-term monitoring is required to confirm these results. Copyright © 2017 Elsevier Inc. All rights reserved.
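The χ2 goodness-of-fit comparison used in this abstract (observed job counts per region versus proportions expected from graduate shares) reduces to a short calculation. The numbers below are illustrative, not the study's data:

```python
def chi2_goodness_of_fit(observed, expected_props):
    """Pearson chi-square statistic for observed counts against a set of
    hypothesized proportions, e.g. job openings per region vs the share
    of graduates trained in each region. Larger values indicate a worse
    fit between where jobs appear and where trainees come from.
    """
    total = sum(observed)
    expected = [p * total for p in expected_props]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

In practice one would compare the statistic against a χ2 distribution with (number of regions - 1) degrees of freedom (e.g. via `scipy.stats.chisquare`) to obtain the P value the abstract reports.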
The MICRO-BOSS scheduling system: Current status and future efforts
NASA Technical Reports Server (NTRS)
Sadeh, Norman M.
1993-01-01
In this paper, a micro-opportunistic approach to factory scheduling is described that closely monitors the evolution of bottlenecks during the construction of the schedule and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches in that it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory.
Developing Effective Linkages between Job Corps and One-Stop Systems: A Technical Assistance Guide.
ERIC Educational Resources Information Center
Dickinson, Katherine; Soukamneuth, Sengsouvanh
This document is intended to help Job Corps centers and Office of Acquisition Policy contractors establish linkages with one-stop systems. Chapter 1 summarizes the requirements for linkages between Job Corps and one-stop systems that are specified in the Workforce Investment Act (WIA) of 1998 and compares one-stop delivery systems before and under…
20 CFR 670.535 - Are Job Corps centers required to establish behavior management systems?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 4 2014-04-01 2014-04-01 false Are Job Corps centers required to establish behavior management systems? 670.535 Section 670.535 Employees' Benefits EMPLOYMENT AND TRAINING... system to encourage and reward students' accomplishments. (b) The Job Corps center must establish and...
20 CFR 670.535 - Are Job Corps centers required to establish behavior management systems?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 4 2012-04-01 2012-04-01 false Are Job Corps centers required to establish behavior management systems? 670.535 Section 670.535 Employees' Benefits EMPLOYMENT AND TRAINING... system to encourage and reward students' accomplishments. (b) The Job Corps center must establish and...
20 CFR 670.535 - Are Job Corps centers required to establish behavior management systems?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 4 2013-04-01 2013-04-01 false Are Job Corps centers required to establish behavior management systems? 670.535 Section 670.535 Employees' Benefits EMPLOYMENT AND TRAINING... system to encourage and reward students' accomplishments. (b) The Job Corps center must establish and...
How to Tell How Important Agriculture Is to Your State.
ERIC Educational Resources Information Center
Schluter, Gerald; Edmondson, William
1986-01-01
Emphasizes agriculture's economic importance and lists the top 10 states according to 4 possible criteria for determining economic dependence on agriculture: number of food and fiber system jobs, number of farmworkers, proportion of food and fiber system jobs, and proportion of farmworkers to total food and fiber system jobs. (JHZ)
ERIC Educational Resources Information Center
Brooks, Nita G.; Greer, Timothy H.; Morris, Steven A.
2018-01-01
The authors' focus was the assessment of skill requirements for information systems security positions to understand expectations for security jobs and to highlight issues relevant to curriculum management. The analysis of 798 job advertisements involved the exploration of domain-related and soft skills as well as degree and certification…
Citizen Science Seismic Stations for Monitoring Regional and Local Events
NASA Astrophysics Data System (ADS)
Zucca, J. J.; Myers, S.; Srikrishna, D.
2016-12-01
The earth has tens of thousands of seismometers installed on its surface or in boreholes that are operated by many organizations for many purposes including the study of earthquakes, volcanoes, and nuclear explosions. Although global networks such as the Global Seismic Network and the International Monitoring System do an excellent job of monitoring nuclear test explosions and other seismic events, their thresholds could be lowered with the addition of more stations. In recent years there has been interest in citizen-science approaches to augment government-sponsored monitoring networks (see, for example, Stubbs and Drell, 2013). A modestly priced seismic station that could be purchased by citizen scientists could enhance regional and local coverage of the GSN, IMS, and other networks if those stations are of high enough quality and distributed optimally. In this paper we present a minimum set of hardware and software specifications that a citizen seismograph station would need in order to add value to global networks. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Running and testing GRID services with Puppet at GRIF- IRFU
NASA Astrophysics Data System (ADS)
Ferry, S.; Schaer, F.; Meyer, JP
2015-12-01
GRIF is a distributed Tier-2 centre, made of 6 different centres in the Paris region and serving many VOs. The sub-sites are connected with a 10 Gbps private network and share tools for central management. One of the sub-sites, GRIF-IRFU, hosted and maintained at the CEA-Saclay centre, moved a year ago to configuration management using Puppet. Thanks to the versatility of Puppet/Foreman automation, the GRIF-IRFU site maintains the usual grid services, among them a CREAM-CE with TORQUE+Maui (running a batch system with more than 5000 job slots), a DPM storage of more than 2 PB, and Nagios monitoring essentially based on check_mk, as well as centralized services for the French NGI, such as the accounting and the Argus central suspension system. We report on the current functionality of Puppet and present the latest tests and evolutions, including monitoring with Graphite, an HTCondor multicore batch system accessed through an ARC-CE, and a CEPH storage file system.
Analyzing jobs for redesign decisions.
Conn, V S; Davis, N K; Occena, L G
1996-01-01
Job analysis, the collection and interpretation of information that describes job behaviors and activities performed by occupants of jobs, can provide nurse administrators with valuable information for redesigning effective and efficient systems of care.
cisTEM, user-friendly software for single-particle image processing.
Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus
2018-03-07
We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200k-300k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.
20 CFR 655.150 - Interstate clearance of job order.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Interstate clearance of job order. 655.150... job order. (a) SWA posts in interstate clearance system. The SWA must promptly place the job order in... transmit a copy of its active job order to all States listed in the job order as anticipated worksites...
20 CFR 655.150 - Interstate clearance of job order.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 3 2013-04-01 2013-04-01 false Interstate clearance of job order. 655.150... job order. (a) SWA posts in interstate clearance system. The SWA must promptly place the job order in... transmit a copy of its active job order to all States listed in the job order as anticipated worksites...
20 CFR 655.150 - Interstate clearance of job order.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Interstate clearance of job order. 655.150... job order. (a) SWA posts in interstate clearance system. The SWA must promptly place the job order in... transmit a copy of its active job order to all States listed in the job order as anticipated worksites...
20 CFR 655.150 - Interstate clearance of job order.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Interstate clearance of job order. 655.150... job order. (a) SWA posts in interstate clearance system. The SWA must promptly place the job order in... transmit a copy of its active job order to all States listed in the job order as anticipated worksites...
20 CFR 655.150 - Interstate clearance of job order.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 3 2014-04-01 2014-04-01 false Interstate clearance of job order. 655.150... job order. (a) SWA posts in interstate clearance system. The SWA must promptly place the job order in... transmit a copy of its active job order to all States listed in the job order as anticipated worksites...
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. Specifically, we estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression performed best of all the models considered because it was best able to recover parameters and converged consistently. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
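The simple linear regression that performed best in the comparison above can be illustrated on a simulated series. This toy sketch (trend, noise level, and series length are made up for illustration, not the authors' data) shows how an ordinary least-squares slope recovers a known linear trend:

```python
import random
import statistics

def ols_trend(values):
    """Ordinary least-squares slope and intercept for an evenly spaced series."""
    n = len(values)
    xs = list(range(n))
    x_bar = statistics.fmean(xs)
    y_bar = statistics.fmean(values)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar

random.seed(1)
# Simulated 20-year abundance series: true trend of +2.0 per year plus noise.
series = [100 + 2.0 * t + random.gauss(0, 5) for t in range(20)]
slope, intercept = ols_trend(series)
print(round(slope, 2))  # recovered slope should be close to the true 2.0
```

Note that this recovers the trend but, as the abstract observes, says nothing about the sampling-versus-process decomposition that state-space models attempt.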
Job satisfaction among the academic staff of a saudi university: an evaluative study.
Al-Rubaish, Abdullah M; Rahim, Sheikh Idris A; Abumadini, Mahdi S; Wosornu, Lade
2009-09-01
Job satisfaction is a major determinant of job performance, manpower retention and employee well-being. To explore the state of job satisfaction among the academic staff of King Faisal University - Dammam (KFU-D), and detect the areas and groups at a higher risk of being dissatisfied. A fully-structured 5-option Likert-type Job Satisfaction Questionnaire (JSQ) composed of an evaluative item and eleven domains, making a total of 46 items, was used. It was distributed by internal mail to all 340 academic staff, 248 of whom returned completed questionnaires (response rate = 72.9 %). The overall mean Job Satisfaction Rate (JSR) was 73.6 %. The highest JSRs were found in three domains ("Supervision", "Responsibility", and "Interpersonal Relationships"), and the lowest in four others ("Salary", "My Work Itself", "Working Conditions", and "Advancement"). The JSR was significantly lower among Saudi nationals, females, those below age 40, and those from clinical Medicine and Dentistry departments. Multiple regression identified six independent variables which conjointly explained 25 % of the variance in job satisfaction (p < 0.0001). These were: being an expatriate, being above the age of 50, serving the university for less than one or more than ten years, and not being from a clinical department of Medicine or Dentistry. Most staff were satisfied with many aspects of their jobs, but there was significant dissatisfaction with several job-related aspects and demographic features. Appropriate interventions are indicated. Further studies are needed to confirm the present findings and to monitor future trends.
A hybrid job-shop scheduling system
NASA Technical Reports Server (NTRS)
Hellingrath, Bernd; Robbach, Peter; Bayat-Sarmadi, Fahid; Marx, Andreas
1992-01-01
The intention of the scheduling system developed at the Fraunhofer-Institute for Material Flow and Logistics is the support of a scheduler working in a job-shop. Due to the existing requirements for a job-shop scheduling system the usage of flexible knowledge representation and processing techniques is necessary. Within this system the attempt was made to combine the advantages of symbolic AI-techniques with those of neural networks.
Layerwise Monitoring of the Selective Laser Melting Process by Thermography
NASA Astrophysics Data System (ADS)
Krauss, Harald; Zeugner, Thomas; Zaeh, Michael F.
Selective Laser Melting is utilized to build parts directly from CAD data. In this study layerwise monitoring of the temperature distribution is used to gather information about the process stability and the resulting part quality. The heat distribution varies with different kinds of parameters including scan vector length, laser power, layer thickness and inter-part distance in the job layout. By integration of an off-axis mounted uncooled thermal detector, the solidification as well as the layer deposition are monitored and evaluated. This enables the identification of hot spots in an early stage during the solidification process and helps to avoid process interrupts. Potential quality indicators are derived from spatially resolved measurement data and are correlated to the resulting part properties. A model of heat dissipation is presented based on the measurement of the material response for varying heat input. Current results show the feasibility of process surveillance by thermography for a limited section of the building platform in a commercial system.
78 FR 15006 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-08
... experience in DA&M, total years of work experience, job level, job competencies, and competency level), education (e.g., degrees, training, coursework), job title, pay plan, job series, job grade, feedback on... and PIN prevents unauthorized access. Retention and disposal: Destroy three (3) years after...
Blumenthal, J A; Thyrum, E T; Siegel, W C
1995-02-01
The effects of job strain, occupational status, and marital status on blood pressure were evaluated in 99 men and women with mild hypertension. Blood pressure was measured during daily life at home and at work over 15 h of ambulatory blood pressure monitoring. On a separate day, blood pressure was measured in the laboratory during mental stress testing. As expected, during daily life, blood pressure was higher at work than at home. High job strain was associated with elevated systolic blood pressure among women, but not men. However, both men and women with high status occupations had significantly higher blood pressures during daily life and during laboratory mental stress testing. This was especially true for men, in that men with high job status had higher systolic blood pressures than low job status men. Marital status also was an important moderating variable, particularly for women, with married women having higher ambulatory blood pressures than single women. During mental stress testing, married persons had higher systolic blood pressures than unmarried individuals. These data suggest that occupational status and marital status may contribute even more than job strain to variations in blood pressure during daily life and laboratory testing.
Evaluation of NoSQL databases for DIRAC monitoring and beyond
NASA Astrophysics Data System (ADS)
Mathe, Z.; Casajus Ramo, A.; Stagni, F.; Tomassetti, L.
2015-12-01
Nowadays, many database systems are available, but they may not be optimized for storing time-series data. Monitoring DIRAC jobs would be better done using a database optimized for time-series data. So far this was done using a MySQL database, which is not well suited to such an application, so alternatives have been investigated. Choosing an appropriate database for storing huge amounts of time-series data is not trivial, as one must take into account different aspects such as manageability, scalability and extensibility. We compared the performance of the Elasticsearch, OpenTSDB (based on HBase) and InfluxDB NoSQL databases, using the same set of machines and the same data, and also evaluated the effort required to maintain them. Using the LHCb Workload Management System (WMS), based on DIRAC, as a use case, we set up a new monitoring system in parallel with the current MySQL system and stored the same data in the databases under test. We evaluated the Grafana (for OpenTSDB) and Kibana (for Elasticsearch) metrics and graph editors for creating dashboards, in order to have a clear picture of the usability of each candidate. In this paper we present the results of this study and the performance of the selected technology. We also give an outlook of other potential applications of NoSQL databases within the DIRAC project.
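The core query a time-series store must serve for job monitoring is aggregation of status events into time buckets. A minimal, database-agnostic sketch of that aggregation (the status names, timestamps, and 60-second bucket size are illustrative, not DIRAC's actual schema):

```python
from collections import Counter

def bucket_counts(events, bucket_seconds=60):
    """Down-sample (timestamp, status) events into per-bucket status counts,
    the basic aggregation a time-series monitoring backend must serve."""
    buckets = {}
    for ts, status in events:
        key = ts - ts % bucket_seconds  # floor timestamp to its bucket start
        buckets.setdefault(key, Counter())[status] += 1
    return buckets

# Toy job-status events as (unix_seconds, status) pairs.
events = [(5, "Running"), (30, "Done"), (65, "Done"), (70, "Failed")]
agg = bucket_counts(events)
print(agg[0]["Done"], agg[60]["Failed"])  # 1 1
```

A dedicated time-series database performs exactly this kind of bucketed roll-up server-side, at scale, which is what the comparison in the abstract is probing.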
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Accounting and Information Management Div.
A study examined states' development of automated systems for the Job Opportunities and Basic Skills (JOBS) program administered by the states, with the Administration for Children and Families (ACF) responsible for program oversight and direction. Results indicated that ACF had not provided direction and focus in its systems development guidance…
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-04-24
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
Plat, M J; Frings-Dresen, M H W; Sluiter, J K
2011-12-01
Some occupations have tasks and activities that require monitoring safety and health aspects of the job; examples of such occupations are emergency services personnel and military personnel. The two objectives of this systematic review were to describe (1) the existing job-specific workers' health surveillance (WHS) activities and (2) the effectiveness of job-specific WHS interventions with respect to work functioning, for selected jobs. The search strategy systematically searched the PubMed, PsycINFO and OSH-update databases and consisted of several synonyms of the job titles of interest, combined with synonyms for workers' health surveillance. The methodological quality was checked. At least one study was found for each of the following occupations: fire fighters, ambulance personnel, police personnel and military personnel. For the first objective, 24 studies described several job-specific WHS activities aimed at aspects of psychological, 'physical' (energetic, biomechanical and balance), sense-related, environmental exposure or cardiovascular requirements. The seven studies found for the second objective measured different outcomes related to work functioning. The methodological quality of the interventions varied, but with the exception of one study, all scored over 55% of the maximum score. Six studies showed effectiveness on at least some of the defined outcomes. The studies described several job-specific interventions: a trauma resilience training, healthy lifestyle promotion, physical readiness training, respiratory muscle training, endurance and resistance training, a physical exercise programme and a comparison of vaccines. Several examples of job-specific WHS activities were found for the four occupations. Compared with studies focusing on physical tasks, few studies were found that focus on psychological tasks. Effectiveness studies of job-specific WHS interventions were scarce, although their results were promising. We recommend studying job-specific WHS in effectiveness studies.
20 CFR 628.420 - Job training plan.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Job training plan. 628.420 Section 628.420... THE JOB TRAINING PARTNERSHIP ACT Local Service Delivery System § 628.420 Job training plan. (a) The Governor shall issue instructions and schedules to assure that job training plans and plan modifications...
20 CFR 628.420 - Job training plan.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Job training plan. 628.420 Section 628.420... THE JOB TRAINING PARTNERSHIP ACT Local Service Delivery System § 628.420 Job training plan. (a) The Governor shall issue instructions and schedules to assure that job training plans and plan modifications...
20 CFR 628.420 - Job training plan.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Job training plan. 628.420 Section 628.420... THE JOB TRAINING PARTNERSHIP ACT Local Service Delivery System § 628.420 Job training plan. (a) The Governor shall issue instructions and schedules to assure that job training plans and plan modifications...
Bond, Gary R; Campbell, Kikuko; Becker, Deborah R
2013-06-01
This study compared job matching rates for clients with severe mental illness enrolled in two types of employment programs. Also examined was the occupational matching hypothesis that job matching is associated with better employment outcomes. The study involved a secondary analysis of a randomized controlled trial comparing evidence-based supported employment to a diversified placement approach. The study sample consisted of 187 participants, of whom 147 obtained a paid job during the 2-year follow-up. Jobs were coded using the Dictionary of Occupational Titles classification system. Match between initial job preferences and type of job obtained was the predictor variable. Outcomes included time to job start, job satisfaction, and job tenure on first job. Most occupational preferences were for clerical and service jobs, and most participants obtained employment in these two occupational domains. In most cases, the first job obtained matched a participant's occupational preference. The occupational matching hypothesis was not supported for any employment outcome. The occupational matching rate was similar in this study to previous studies. Most clients who obtain employment with the help of evidence-based supported employment or diversified placement services find jobs matching their occupational preference, and most often it is a rough match. Occupational matching is but one aspect of job matching; it may be time to discard actuarial classification systems such as the Dictionary of Occupational Titles as a basis for assessing job match.
Portela, Luciana Fernandes; Rotenberg, Lucia; Almeida, Ana Luiza Pereira; Landsbergis, Paul; Griep, Rosane Harter
2013-01-01
Evidence suggests that the workplace plays an important etiologic role in blood pressure (BP) alterations. Associations in female samples are controversial, and the domestic environment is hypothesized to be an important factor in this relationship. This study assessed the association between job strain and BP within a sample of female nursing workers, considering the potential role of domestic overload. A cross-sectional study was conducted in a group of 175 daytime workers who wore an ambulatory BP monitor for 24 h during a working day. Mean systolic and diastolic BP were calculated. Job strain was evaluated using the Demand-Control Model. Domestic overload was based on the level of responsibility in relation to four household tasks and on the number of beneficiaries. After adjustments no significant association between high job strain and BP was detected. Stratified analyses revealed that women exposed to both domestic overload and high job strain had higher systolic BP at home. These results indicate a possible interaction between domestic overload and job strain on BP levels and revealed the importance of domestic work, which is rarely considered in studies of female workers. PMID:24287860
Predictors of nursing faculty members' organizational commitment in governmental universities.
Al-Hussami, Mahmoud; Saleh, Mohammad Y N; Abdalkader, Raghed Hussein; Mahadeen, Alia I
2011-05-01
It is essential for all university leaders to develop and maintain an effective programme of total quality management in a climate that promotes work satisfaction and employee support. The purpose of the study was to investigate the relationship of faculty members' organizational commitment to their job satisfaction, perceived organizational support, job autonomy, workload, and pay. A quantitative study, implementing a correlational research design to determine whether relationships existed between organizational commitment and job satisfaction, perceived organizational support, job autonomy, workload and pay. Stepwise linear regression analysis was used to estimate the probability of recorded variables included significant sample characteristics namely, age, experience and other work related attributes. The outcome showed a predictive model of three predictors which were significantly related to faculty members' commitment: job satisfaction, perceived support and age. Although the findings were positive toward organizational commitment, continued consideration should be given to the fact that faculty members remain committed as the cost associated with leaving is high. A study of this nature increases the compartment in which faculty administrators monitor the work climate, observe and identify factors that may increase or decrease job satisfaction and the work commitment. © 2011 The Authors. Journal compilation © 2011 Blackwell Publishing Ltd.
75 FR 19622 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-15
... organization, office symbol/code, job title, job function, grade/rank, job series, military specialty, start... the locks, security personnel and administrative procedures.'' Retention and disposal: Delete entry... approves the retention and disposal schedule, records will be treated as permanent.'' System manager(s) and...
48 CFR 217.7103 - Master agreements and job orders.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Master agreements and job orders. 217.7103 Section 217.7103 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... Agreement for Repair and Alteration of Vessels 217.7103 Master agreements and job orders. ...
48 CFR 217.7103 - Master agreements and job orders.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Master agreements and job orders. 217.7103 Section 217.7103 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... Agreement for Repair and Alteration of Vessels 217.7103 Master agreements and job orders. ...
48 CFR 217.7103 - Master agreements and job orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Master agreements and job orders. 217.7103 Section 217.7103 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... Agreement for Repair and Alteration of Vessels 217.7103 Master agreements and job orders. ...
48 CFR 217.7103 - Master agreements and job orders.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Master agreements and job orders. 217.7103 Section 217.7103 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... Agreement for Repair and Alteration of Vessels 217.7103 Master agreements and job orders. ...
48 CFR 217.7103 - Master agreements and job orders.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Master agreements and job orders. 217.7103 Section 217.7103 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... Agreement for Repair and Alteration of Vessels 217.7103 Master agreements and job orders. ...
Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1994-01-01
The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which are also required for human space exploration missions. Individual sessions addressed nuclear industry, agile manufacturing, security/building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.
1983-11-16
The impact of … frequent shift changes. People reporting lower back pains indicated a reduction of symptoms when supervisors expressed support and emphasis of CCTV … survey. Approximately 40 people interviewed acknowledged requesting a job change away from monitoring tasks. The reasons stated for the job change …
Jaschob, Daniel; Riffle, Michael
2012-07-30
Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
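The client-driven pull model the abstract describes, where workers fetch work rather than having it pushed to them (so they can run behind firewalls and load-balance naturally), can be illustrated in-process with a shared queue. This is a toy stand-in, not JobCenter's actual Java client/server API:

```python
import queue
import threading

# Toy pull-based job distribution: workers poll a server-side queue, so they
# only need outbound connectivity and naturally balance load among themselves.
jobs = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        try:
            job = jobs.get_nowait()  # client-driven: the worker asks for work
        except queue.Empty:
            return                   # no work left; worker exits
        outcome = job * job          # stand-in for a real computational step
        with results_lock:
            results.append((job, outcome))

for n in range(5):
    jobs.put(n)
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
```

In a real deployment the queue lives on a server and workers poll it over the network, but the control flow, pull rather than push, is the same.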
A Multi-Factor Analysis of Job Satisfaction among School Nurses
ERIC Educational Resources Information Center
Foley, Marcia; Lee, Julie; Wilson, Lori; Cureton, Virginia Young; Canham, Daryl
2004-01-01
Although job satisfaction has been widely studied among registered nurses working in traditional health care settings, little is known about the job-related values and perceptions of nurses working in school systems. Job satisfaction is linked to lower levels of job-related stress, burnout, and career abandonment among nurses. This study evaluated…
The ATLAS PanDA Pilot in Operation
NASA Astrophysics Data System (ADS)
Nilsson, P.; Caballero, J.; De, K.; Maeno, T.; Stradling, A.; Wenaus, T.; ATLAS Collaboration
2011-12-01
The Production and Distributed Analysis system (PanDA) [1-2] was designed to meet ATLAS [3] requirements for a data-driven workload management system capable of operating at LHC data processing scale. Submitted jobs are executed on worker nodes by pilot jobs sent to the grid sites by pilot factories. This paper provides an overview of the PanDA pilot [4] system and presents major features added in light of recent operational experience, including multi-job processing, advanced job recovery for jobs with output storage failures, gLExec [5-6] based identity switching from the generic pilot to the actual user, and other security measures. The PanDA system serves all ATLAS distributed processing and is the primary system for distributed analysis; it is currently used at over 100 sites worldwide. We analyze the performance of the pilot system in processing real LHC data on the OSG [7], EGI [8] and Nordugrid [9-10] infrastructures used by ATLAS, and describe plans for its evolution.
Pay Equity Act (No. 34 of 1987), 29 June 1987.
1987-01-01
This document contains major provisions of Ontario, Canada's 1987 Pay Equity Act. The Act seeks to redress systemic gender discrimination in compensation for work performed by employees in "female job classes" and applies to all private sector employers in Ontario with 10 or more employees, all public sector employers, and the employees of applicable employers. The Act continues to apply even if an employer subsequently reduces the number of employees below 10. The Act calls for identification of systemic gender discrimination in compensation through comparisons between female job classes and male job classes in terms of compensation and value of work performed, which is a composite of skill, effort, and responsibility normally required. Pay equity is deemed achieved when the job rate for the female job class is at least equal to the rate for a male job class in the same establishment. If there is no male job class to use for comparison, pay equity is achieved when the female job rate is at least equal to the job rate of a male job class in the same establishment that, at the time of comparison, had a higher job rate while performing work of lower value than the female job class. Differences in compensation between a female and a male job class are allowed if they result from a formal seniority system that does not discriminate on basis of gender, a temporary training or development assignment equally available to males and females, a specified merit compensation plan, actions taken as the result of a gender-neutral reevaluation process, or a skills shortage leading to a temporary inflation in compensation. Pay equity will not be achieved by reducing any employee's compensation. The Act establishes a Pay Equity Commission to oversee implementation.
Backfilling with guarantees granted upon job submission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, Vitus Joseph; Bunde, David P.; Lindsay, Alexander M.
2011-01-01
In this paper, we present scheduling algorithms that simultaneously support guaranteed starting times and favor jobs with system-desired traits. To achieve the first of these goals, our algorithms keep a profile with potential starting times for every unfinished job and never move these starting times later, just as in Conservative Backfilling. To achieve the second, they exploit previously unrecognized flexibility in the handling of holes opened in this profile when jobs finish early. We find that, with one choice of job selection function, our algorithms can consistently yield a lower average waiting time than Conservative Backfilling while still providing a guaranteed start time to each job as it arrives. In fact, in most cases, the algorithms give a lower average waiting time than the more aggressive EASY backfilling algorithm, which does not provide guaranteed start times. Alternately, with a different choice of job selection function, our algorithms can focus the benefit on the widest submitted jobs, the reason for the existence of parallel systems. In this case, these jobs experience significantly lower waiting time than Conservative Backfilling with minimal impact on other jobs.
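The profile idea behind Conservative Backfilling, where each job is granted the earliest start time that fits and that reservation is never moved later, can be sketched in a few lines. This is a unit-time toy on a hypothetical 4-processor machine; it omits the early-finish hole-reuse and selection functions that are the paper's actual contribution:

```python
def reserve(profile, procs_total, need, runtime):
    """Grant a job a guaranteed start: the earliest time t at which `need`
    processors are free for `runtime` consecutive unit time steps.
    `profile` maps time step -> processors already reserved."""
    t = 0
    while True:
        if all(profile.get(t + dt, 0) + need <= procs_total
               for dt in range(runtime)):
            for dt in range(runtime):
                profile[t + dt] = profile.get(t + dt, 0) + need
            return t  # this start time is guaranteed and never moved later
        t += 1

profile = {}
# Jobs as (processors needed, runtime) on a 4-processor toy machine.
starts = [reserve(profile, 4, need, rt)
          for need, rt in [(4, 2), (2, 3), (2, 3), (1, 1)]]
print(starts)  # [0, 2, 2, 5] -- the third job backfills beside the second
```

The third job starts alongside the second without delaying anyone, which is the backfilling effect; the paper's algorithms additionally rework this profile when jobs finish early.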
77 FR 8213 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... credit and financial analysis decisions and monitor the program. Description of Respondents: Not-for... about jobs created or saved for the Intermediary Relending Program and Rural Development Loan Fund. The...
78 FR 60331 - Privacy Act of 1974: System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-01
... on Position Classification Appeals, Job Grading Appeals, Retained Grade or Pay Appeals, Fair Labor..., Job Grading Appeals, Retained Grade or Pay Appeals, Fair Labor Standard Act (FLSA) Claims and... appeal or a job grading appeal with the U.S. Office of Personnel Management, Merit System Accountability...
DOT National Transportation Integrated Search
1997-01-01
The purposes of this paper are to describe how the locational patterns of jobs, and the arrival time of home-to-work trips, vary according to the system used to classify jobs. SEMCOG has obtained a special cross-tabulation of 1990 census data on work...
20 CFR 658.416 - Action on JS-related complaints.
Code of Federal Regulations, 2011 CFR
2011-04-01
... ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System State Agency Js Complaint...-complainant to another job. (b)(1) If the JS-related complaint concerns violations of an employment-related... been achieved to the satisfaction of the complainant within 15 working days after receipt of the...
A procedure for linking psychosocial job characteristics data to health surveys.
Schwartz, J E; Pieper, C F; Karasek, R A
1988-01-01
A system is presented for linking information about psychosocial characteristics of job situations to national health surveys. Job information can be imputed to individuals on surveys that contain three-digit US Census occupation codes. Occupational mean scores on psychosocial job characteristics-control over task situation (decision latitude), psychological work load, physical exertion, and other measures-for the linkage system are derived from US national surveys of working conditions (Quality of Employment Surveys 1969, 1972, and 1977). This paper discusses a new method for reducing the biases in multivariate analyses that are likely to arise when utilizing linkage systems based on mean scores. Such biases are reduced by modifying the linkage system to adjust imputed individual scores for demographic factors such as age, education, race, marital status and, implicitly, sex (since men and women have separate linkage data bases). Statistics on the linkage system's efficiency and reliability are reported. All dimensions have high inter-survey reproducibility. Despite their psychosocial nature, decision latitude and physical exertion can be more efficiently imputed with the linkage system than earnings (a non-psychosocial job characteristic). The linkage system presented here is a useful tool for initial epidemiological studies of the consequences of psychosocial job characteristics and constitutes the methodological basis for the subsequent paper. PMID:3389426
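The adjustment idea can be illustrated with a toy linkage table. The occupation means, slope, and reference age below are invented for illustration and are not values from the Quality of Employment Surveys:

```python
# Hedged sketch of mean-score linkage with a demographic adjustment:
# impute the occupation mean, then shift by an individual covariate.
# All numbers here are hypothetical, not the authors' linkage data.

occ_means = {"nurse": 62.0, "machinist": 48.0}  # hypothetical decision-latitude means
age_slope = 0.5                                  # hypothetical within-occupation age effect
ref_age = 40                                     # hypothetical reference age

def impute(occ_code, age):
    """Impute a job score from the occupation mean, adjusted for the
    individual's age relative to the reference age."""
    return occ_means[occ_code] + age_slope * (age - ref_age)

print(impute("nurse", 50))      # 67.0
print(impute("machinist", 30))  # 43.0
```

Adjusting the imputed score toward the individual's demographic profile is what reduces the bias of plain mean-score imputation in multivariate analyses.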
Stewart, P A; Lee, J S; Marano, D E; Spirtas, R; Forbes, C D; Blair, A
1991-01-01
Methods are presented that were used for assessing exposures in a cohort mortality study of 15,000 employees who held 150,000 jobs at an Air Force base from 1939 to 1982. Standardisation of the word order and spelling of the job titles identified 43,000 unique job title organisation combinations. Walkthrough surveys were conducted, long term employees were interviewed, and available industrial hygiene data were collected to evaluate historic exposures. Because of difficulties linking air monitoring data and use of specific chemicals to the departments identified in the work histories, position descriptions were used to identify the tasks in each job. From knowledge of the tasks and the chemicals used in those tasks the presence or absence of 23 chemicals or groups of chemicals were designated for each job organisation combination. Also, estimates of levels of exposure were made for trichloroethylene and for mixed solvents, a category comprising several solvents including trichloroethylene, Stoddard solvent, carbon tetrachloride, JP4 gasoline, freon, alcohols, 1,1,1-trichloroethane, acetone, toluene, methyl ethyl ketone, methylene chloride, o-dichlorobenzene, perchloroethylene, chloroform, styrene, and xylene. PMID:1878309
NASA Technical Reports Server (NTRS)
Hu, Chaumin
2007-01-01
IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
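The starting-condition idea above can be sketched as predicates over a shared task-state table. The task names and the `run_ready` helper are our own illustration, not the actual IPG Execution Service interface:

```python
# Illustrative sketch of task starting conditions: each task starts when an
# expression over the states of other tasks becomes true. Our own minimal
# model, not the real IPG Task Manager.

def run_ready(tasks, states):
    """Start every unstarted task whose condition over a snapshot of the
    current task states evaluates to true."""
    snapshot = dict(states)  # evaluate all conditions against the same view
    started = []
    for name, (condition, action) in tasks.items():
        if name not in states and condition(snapshot):
            states[name] = action()
            started.append(name)
    return started

tasks = {
    "stage_in": (lambda s: True, lambda: "succeeded"),
    "simulate": (lambda s: s.get("stage_in") == "succeeded", lambda: "succeeded"),
    # cleanup runs whether simulate succeeds, fails, or is canceled
    "cleanup":  (lambda s: s.get("simulate") in ("succeeded", "failed", "canceled"),
                 lambda: "succeeded"),
}
states = {}
print(run_ready(tasks, states))  # ['stage_in']
print(run_ready(tasks, states))  # ['simulate']
print(run_ready(tasks, states))  # ['cleanup']
```

Because conditions are arbitrary expressions, independent tasks become ready in the same pass and run in parallel, while error-handling tasks trigger on failure states.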
ERIC Educational Resources Information Center
Hoffman, Nancy
2011-01-01
Which non-American education systems best prepare young people for fulfilling jobs and successful adult lives? And what can the United States--where far too many young people currently enter adulthood without adequate preparation for the twenty-first-century job market--learn, adopt, and adapt from these other systems? In "Schooling in the…
Elements of an Asbestos Operations and Maintenance (O&M) Program
Links to descriptions of Elements of an Operations and Maintenance (O&M) Program: Training, Occupant Notification, Monitoring ACM, Job-Site Controls for Work Involving ACM, Safe Work Practices, Recordkeeping, Worker Protection.
Physically and psychologically hazardous jobs and mental health in Thailand
Yiengprugsawan, Vasoontara; Strazdins, Lyndall; Lim, Lynette L.-Y.; Kelly, Matthew; Seubsman, Sam-ang; Sleigh, Adrian C.
2015-01-01
This paper investigates associations between hazardous jobs, mental health and wellbeing among Thai adults. In 2005, 87 134 distance-learning students from Sukhothai Thammathirat Open University completed a self-administered questionnaire; at the 2009 follow-up 60 569 again participated. Job characteristics were reported in 2005, psychological distress and life satisfaction were reported in both 2005 and 2009. We derived two composite variables grading psychologically and physically hazardous jobs and reported adjusted odds ratios (AOR) from multivariate logistic regressions. Analyses focused on cohort members in paid work: the total was 62 332 at 2005 baseline and 41 671 at 2009 follow-up. Cross-sectional AORs linking psychologically hazardous jobs to psychological distress ranged from 1.52 (one hazard) to 4.48 (four hazards) for males and a corresponding 1.34–3.76 for females. Similarly AORs for physically hazardous jobs were 1.75 (one hazard) to 2.76 (four or more hazards) for males and 1.70–3.19 for females. A similar magnitude of associations was found between psychologically adverse jobs and low life satisfaction (AORs of 1.34–4.34 among males and 1.18–3.63 among females). Longitudinal analyses confirm these cross-sectional relationships. Thus, significant dose–response associations were found linking hazardous job exposures in 2005 to mental health and wellbeing in 2009. The health impacts of psychologically and physically hazardous jobs in developed, Western countries are equally evident in transitioning Southeast Asian countries such as Thailand. Regulation and monitoring of work conditions will become increasingly important to the health and wellbeing of the Thai workforce. PMID:24218225
Job satisfaction of nurses and identifying factors of job satisfaction in Slovenian Hospitals.
Lorber, Mateja; Skela Savič, Brigita
2012-06-01
To determine the level of job satisfaction of nursing professionals in Slovenian hospitals and factors influencing job satisfaction in nursing. The study included 4 hospitals selected from the hospital list comprising 26 hospitals in Slovenia. The employees of these hospitals represent 29.8% and the 509 employees included in the study represent 6% of all employees in nursing in Slovenian hospitals. One structured survey questionnaire was administered to the leaders and the other to employees, both consisting of 154 items evaluated on a 5-point Likert-type scale. We examined the correlation between independent variables (age, number of years of employment, behavior of leaders, personal characteristics of leaders, and managerial competencies of leaders) and the dependent variable (job satisfaction - satisfaction with the work, coworkers, management, pay, etc) by applying correlation analysis and multivariate regression analysis. In addition, factor analysis was used to establish characteristic components of the variables measured. We found a medium level of job satisfaction in both leaders (3.49±0.5) and employees (3.19±0.6); however, there was a significant difference between their estimates (t=3.237; P<0.001). Job satisfaction was explained by age (P<0.05; β=0.091), years of employment (P<0.05; β=0.193), personal characteristics of leaders (P<0.001; β=0.158), and managerial competencies of leaders (P<0.001; β=0.634) in 46% of cases. The factor analysis yielded four factors explaining 64% of the total job satisfaction variance. Satisfied employees play a crucial role in an organization's success, so health care organizations must be aware of the importance of employees' job satisfaction. It is recommended to monitor employees' job satisfaction levels on an annual basis.
29 CFR 1620.13 - “Equal Work”-What it means.
Code of Federal Regulations, 2012 CFR
2012-07-01
...” and “female jobs.” (1) Wage classification systems which designate certain jobs as “male jobs” and other jobs as “female jobs” frequently specify markedly lower rates for the “females jobs.” Such... “female” unless sex is a bona fide occupational qualification for the job. (2) The EPA prohibits...
29 CFR 1620.13 - “Equal Work”-What it means.
Code of Federal Regulations, 2013 CFR
2013-07-01
...” and “female jobs.” (1) Wage classification systems which designate certain jobs as “male jobs” and other jobs as “female jobs” frequently specify markedly lower rates for the “females jobs.” Such... “female” unless sex is a bona fide occupational qualification for the job. (2) The EPA prohibits...
29 CFR 1620.13 - “Equal Work”-What it means.
Code of Federal Regulations, 2014 CFR
2014-07-01
...” and “female jobs.” (1) Wage classification systems which designate certain jobs as “male jobs” and other jobs as “female jobs” frequently specify markedly lower rates for the “females jobs.” Such... “female” unless sex is a bona fide occupational qualification for the job. (2) The EPA prohibits...
Jobs for JOBS: Toward a Work-Based Welfare System. Occasional Paper 1993-1.
ERIC Educational Resources Information Center
Levitan, Sar A.; Gallo, Frank
The Job Opportunities and Basic Skills (JOBS) program, a component of the 1988 Family Support Act, emphasizes education and occupational training for welfare recipients, but it has not provided sufficient corrective measures to promote work among recipients of Aid for Families with Dependent Children (AFDC). The most serious deficiency of JOBS is…
GO, an exec for running the programs: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoaee, H.
1982-05-01
An exec has been written and placed on the PEP group's public disk to facilitate the use of several PEP related computer programs available on VM. The exec's program list currently includes: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE. In addition, provisions have been made to allow addition of new programs to this list as they become available. The GO exec is directly callable from inside the Wylbur editor (in fact, currently this is the only way to use the GO exec). It provides the option of running any of the above programs in either interactive or batch mode. In the batch mode, the GO exec sends the data in the Wylbur active file along with the information required to run the job to the batch monitor (BMON, a virtual machine that schedules and controls execution of batch jobs). This enables the user to proceed with other VM activities at his/her terminal while the job executes, thus making it of particular interest to users with jobs requiring much CPU time to execute and/or those wishing to run multiple jobs independently. In the interactive mode, useful for small jobs requiring less CPU time, the job is executed by the user's own Virtual Machine using the data in the active file as input. At the termination of an interactive job, the GO exec facilitates examination of the output by placing it in the Wylbur active file.
20 CFR 658.412 - Complaint resolution.
Code of Federal Regulations, 2010 CFR
2010-04-01
... GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System State Agency Js Complaint System § 658.412 Complaint resolution. (a) A JS-related complaint is resolved when: (1) The complainant indicates...
A framework of space weather satellite data pipeline
NASA Astrophysics Data System (ADS)
Ma, Fuli; Zou, Ziming
Various applications indicate a need for permanent space weather information, and the diversity of available instruments enables a wide variety of products. As an indispensable part of a space weather satellite operation system, the space weather data processing system is more complicated than before, and the information it handles is used in more and more fields, such as space weather monitoring and space weather prediction models. In the past few years, many satellites have been launched by China. The data volume downlinked by these satellites has reached the so-called big data level and will continue to grow fast in the next few years as many new space weather programs are implemented. Because of this huge amount of data, the current infrastructure is no longer capable of processing the data in a timely manner, so we propose a new space weather data processing system (SWDPS) based on a cloud computing architecture. Similar to Hadoop, SWDPS decomposes tasks into smaller tasks that are executed by many different work nodes. The Control Center in SWDPS, like the NameNode and JobTracker within Hadoop, is the bond between the data and the cluster: it establishes a work plan for the cluster once a client submits data, allocates nodes for the tasks, and monitors the status of all tasks. Like Hadoop's TaskTracker, the Compute Nodes in SWDPS are the slaves of the Control Center and are responsible for calling the plugins (e.g., dividing and sorting plugins) that execute the concrete jobs. They also manage the status of all their tasks and report it to the Control Center. When a task fails, a Compute Node notifies the Control Center, which decides what to do next: it may resubmit the job elsewhere, mark that specific record as one to avoid, or even blacklist the Compute Node as unreliable.
In addition to these modules, SWDPS has a distinct module named Data Service, which provides file operations such as adding, deleting, modifying and querying for the clients. Beyond that, Data Service can also split and combine files based on the timestamp of each record. SWDPS has been in use for quite some time and has successfully handled data from many satellites, such as FY1C, FY1D, FY2A, FY2B, etc. Its good performance in actual operation shows that SWDPS is stable and reliable.
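The fail-over policy described in the abstract (resubmit elsewhere, blacklist a node after repeated failures) might look roughly like this. The class and node names are invented for illustration, not SWDPS internals:

```python
# Minimal sketch of a master-side failure policy: count failures per node,
# blacklist a node after too many, and pick a healthy node for resubmission.
# Names and the threshold are our assumptions, not the real SWDPS design.

from collections import Counter

class ControlCenter:
    def __init__(self, nodes, max_failures=2):
        self.nodes = list(nodes)
        self.failures = Counter()
        self.blacklist = set()
        self.max_failures = max_failures

    def pick_node(self):
        """Return the first node that has not been blacklisted."""
        for n in self.nodes:
            if n not in self.blacklist:
                return n
        raise RuntimeError("no healthy compute nodes")

    def report_failure(self, node):
        """A Compute Node reported a failed task: record it, blacklist the
        node if it keeps failing, and choose a node to resubmit to."""
        self.failures[node] += 1
        if self.failures[node] >= self.max_failures:
            self.blacklist.add(node)
        return self.pick_node()

cc = ControlCenter(["node-a", "node-b"])
print(cc.report_failure("node-a"))  # node-a: one failure, still trusted
print(cc.report_failure("node-a"))  # node-b: node-a is now blacklisted
```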
2012-01-01
Background: Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results: JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions: JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
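The client-driven pull model behind that inherent load balancing can be illustrated with a toy queue. The `worker_poll` helper and job names are hypothetical, not JobCenter's real Java API:

```python
# Sketch of the pull model: workers ask the server for work instead of the
# server pushing jobs, so faster workers naturally take on more jobs.
# Illustrative names only; not JobCenter's actual interface.

from queue import Queue, Empty

job_queue = Queue()
for j in ["align", "assemble", "annotate"]:
    job_queue.put(j)

def worker_poll(worker_id, results):
    """One polling cycle: request a job; if one is available, run it."""
    try:
        job = job_queue.get_nowait()
    except Empty:
        return False  # nothing to do; worker idles until the next poll
    results.append((worker_id, job))
    return True

results = []
# A fast worker polls twice, a slow one once: the fast worker ends up
# doing more work without any server-side scheduling.
worker_poll("fast", results)
worker_poll("fast", results)
worker_poll("slow", results)
print(results)  # [('fast', 'align'), ('fast', 'assemble'), ('slow', 'annotate')]
```

Because the connection is always initiated by the worker, the same pattern works from behind firewalls or NAT, as the abstract notes.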
Web Proxy Auto Discovery for the WLCG
NASA Astrophysics Data System (ADS)
Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.
2017-10-01
All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. 
The responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
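The geographic ordering in that last step can be sketched with a great-circle distance. The proxy hosts and coordinates below are invented examples, and the real WPAD service maps client IPs to coordinates via a database rather than taking them directly:

```python
# Hedged sketch of ordering candidate web proxies nearest-first for a
# client, given (latitude, longitude) pairs. Hosts and coordinates are
# invented; the real system resolves client IPs to coordinates first.

import math

def distance(a, b):
    """Great-circle (haversine) distance in km between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

proxies = {
    "proxy.cern.example": (46.2, 6.1),    # near Geneva (invented host)
    "proxy.fnal.example": (41.8, -88.3),  # near Chicago (invented host)
}

def ordered_proxies(client_latlon):
    """Return proxy hosts nearest-first, as a WPAD response might list them."""
    return sorted(proxies, key=lambda p: distance(client_latlon, proxies[p]))

print(ordered_proxies((48.9, 2.4)))  # a Paris client gets the CERN-side proxy first
```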
Differences between Employees' and Supervisors' Evaluations of Work Performance and Support Needs
ERIC Educational Resources Information Center
Bennett, Kyle; Frain, Michael; Brady, Michael P.; Rosenberg, Howard; Surinak, Tricia
2009-01-01
Assessment systems are needed that are sensitive to employees' work performance as well as their need for support, while incorporating the input from both employees and their supervisors. This study examined the correspondence of one such evaluation system, the Job Observation and Behavior Scale (JOBS) and the JOBS: Opportunity for…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-26
... ``job clubs'' have evolved into one of several important activities used by the public workforce system... are formally run through the public workforce system--including at Department of Labor funded American... communities; (2) documenting how they differ from and are similar to the job clubs operated by publicly...
Efficiency improvements of offline metrology job creation
NASA Astrophysics Data System (ADS)
Zuniga, Victor J.; Carlson, Alan; Podlesny, John C.; Knutrud, Paul C.
1999-06-01
Progress of the first lot of a new design through the production line is watched very closely. All performance metrics, cycle time, in-line measurement results and final electrical performance are critical. Rapid movement of this lot through the line has serious time-to-market implications. Having this material wait at a metrology operation for an engineer to create a measurement job plan wastes valuable turnaround time. Further, efficient use of a metrology system is compromised by the time required to create and maintain these measurement job plans. Thus, having a method to develop metrology job plans prior to the actual running of the material through the manufacturing area can significantly improve both cycle time and overall equipment efficiency. Motorola and Schlumberger have worked together to develop and test such a system. The Remote Job Generator (RJG) creates job plans for new devices in a manufacturing process from an NT host or workstation, offline. This increases the system time available for making production measurements, decreases turnaround time on job plan creation and editing, and improves consistency across job plans. Most importantly, this allows job plans for new devices to be available before the first wafers of the device arrive at the tool for measurement. The software also includes a database manager which allows updates of existing job plans to incorporate measurement changes required by process changes or measurement optimization. This paper will review the results of productivity enhancements through increased metrology utilization and decreased cycle time associated with the use of RJG. Finally, improvements in process control through better control of job plans across different devices and layers will be discussed.
20 CFR 658.426 - Complaints against USES.
Code of Federal Regulations, 2010 CFR
2010-04-01
... PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System Federal Js Complaint System § 658... USES has violated JS regulations should be mailed to the Assistant Secretary for Employment and...
Using Job Analysis Techniques to Understand Training Needs for Promotores de Salud.
Ospina, Javier H; Langford, Toshiko A; Henry, Kimberly L; Nelson, Tristan Q
2018-04-01
Despite the value of community health worker programs, such as Promotores de Salud, for addressing health disparities in the Latino community, little consensus has been reached to formally define the unique roles and duties associated with the job, thereby creating unique job training challenges. Understanding the job tasks and worker attributes central to this work is a critical first step for developing the training and evaluation systems of promotores programs. Here, we present the process and findings of a job analysis conducted for promotores working for Planned Parenthood. We employed a systematic approach, the combination job analysis method, to define the job in terms of its work and worker requirements, identifying key job tasks, as well as the worker attributes necessary to effectively perform them. Our results suggest that the promotores' job encompasses a broad range of activities and requires an equally broad range of personal characteristics to perform. These results played an important role in the development of our training and evaluation protocols. In this article, we introduce the technique of job analysis, provide an overview of the results from our own application of this technique, and discuss how these findings can be used to inform a training and performance evaluation system. This article provides a template for other organizations implementing similar community health worker programs and illustrates the value of conducting a job analysis for clarifying job roles, developing and evaluating job training materials, and selecting qualified job candidates.
ERIC Educational Resources Information Center
Hull, Daniel M.; Lovett, James E.
This task analysis report for the Robotics/Automated Systems Technician (RAST) curriculum project first provides a RAST job description. It then discusses the task analysis, including the identification of tasks, the grouping of tasks according to major areas of specialty, and the comparison of the competencies to existing or new courses to…
Roh, Hyolyun; Lee, Daehee; Kim, Yongjae
2014-05-01
[Purpose] The purpose of this study was to assess the work-related musculoskeletal system symptoms and the extent of job stress in female caregivers, as well as the interrelationship between these factors. [Subjects and Methods] Korea Occupational Safety and Health Agency (KOSHA) Code H-43 of the Guidelines for the Examination of Elements Harmful to the Musculoskeletal System was used as a tool to measure musculoskeletal symptoms. Caregiver job stress was assessed from the Korean Occupational Stress Scale short form. [Results] The level of symptoms in the hand/wrist/finger and leg/foot regions had some relation to job stress. Job stress scores were mainly shown to be high when pain was reported. On the other hand, it was shown that the degree of musculoskeletal symptoms by body part was unrelated to conflicts in relationships, job instability, or workplace culture. [Conclusion] As for the correlations between musculoskeletal symptoms and job stress, it was shown that as job requirements increased, most musculoskeletal symptoms also increased.
AFTER: Batch jobs on the Apollo ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofstadler, P.
1987-07-01
This document describes AFTER, a system that allows users of an Apollo ring to submit batch jobs to run without leaving themselves logged in to the ring. Jobs may be submitted to run at a later time or on a different node. Results from the batch job are mailed to the user through some designated mail system. AFTER features an understandable user interface, good online help, and site customization. This manual serves primarily as a user's guide to AFTER, although administration and installation are covered for completeness.
Steinsvåg, Kjersti; Bråtveit, Magne; Moen, Bente E
2007-01-01
Objectives: To identify and describe the exposure to selected known and suspected carcinogenic agents, mixtures and exposure circumstances for defined job categories in Norway's offshore petroleum industry from 1970 to 2005, in order to provide exposure information for a planned cohort study on cancer. Methods: Background information on possible exposure was obtained through company visits, including interviewing key personnel (n = 83) and collecting monitoring reports (n = 118) and other relevant documents (n = 329). On the basis of a previous questionnaire administered to present and former offshore employees in 1998, 27 job categories were defined. Results: This study indicated possible exposure to 18 known and suspected carcinogenic agents, mixtures or exposure circumstances. Monitoring reports were obtained on seven agents (benzene, mineral oil mist and vapour, respirable and total dust, asbestos fibres, refractory ceramic fibres, formaldehyde and tetrachloroethylene). The mean exposure level of 367 personal samples of benzene was 0.037 ppm (range: less than the limit of detection to 2.6 ppm). Asbestos fibres were detected (0.03 fibres/cm3) when asbestos‐containing brake bands were used in drilling draw work in 1988. Personal samples of formaldehyde in the process area ranged from 0.06 to 0.29 mg/m3. Descriptions of products containing known and suspected carcinogens, exposure sources and processes were extracted from the collected documentation and the interviews of key personnel. Conclusions: This study described exposure to 18 known and suspected carcinogenic agents, mixtures and exposure circumstances for 27 job categories in Norway's offshore petroleum industry. For a planned cohort study on cancer, quantitative estimates of exposure to benzene, and mineral oil mist and vapour might be developed. For the other agents, information in the present study can be used for further assessment of exposure, for instance, by expert judgement.
More systematic exposure surveillance is needed in this industry. For future studies, new monitoring programmes need to be implemented. PMID:17043075
Preventing Heat-Related Illness or Death of Outdoor Workers
... attention to workers who show signs of heat-related illness Evaluating work practices continually to reduce exertion and environmental heat stress Monitoring weather reports daily and rescheduling jobs ...
Career Success: The Effects of Personality.
ERIC Educational Resources Information Center
Lau, Victor P.; Shaffer, Margaret A.
1999-01-01
A model based on Bandura's Social Learning Theory proposes the following personality traits as determinants of career success: locus of control, self-monitoring, self-esteem, and optimism, along with job performance and person-to-environment fit. (SK)
NAS Requirements Checklist for Job Queuing/Scheduling Software
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.
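The checklist's core concern, matching queued parallel jobs to available nodes by priority, can be sketched in a few lines (the `Job` fields and the dispatch policy below are invented for illustration, not the NAS requirements themselves):

```python
import heapq
from dataclasses import dataclass

@dataclass
class Job:
    priority: int   # lower value = scheduled first (hypothetical policy)
    name: str
    nodes: int      # number of parallel nodes requested

class JobQueue:
    """Toy FIFO-within-priority scheduler for parallel jobs."""
    def __init__(self, total_nodes):
        self.free_nodes = total_nodes
        self.heap = []
        self.counter = 0   # tie-breaker preserves submission order

    def submit(self, job):
        heapq.heappush(self.heap, (job.priority, self.counter, job))
        self.counter += 1

    def dispatch(self):
        """Start every queued job that currently fits on free nodes."""
        started, skipped = [], []
        while self.heap:
            prio, seq, job = heapq.heappop(self.heap)
            if job.nodes <= self.free_nodes:
                self.free_nodes -= job.nodes
                started.append(job.name)
            else:
                skipped.append((prio, seq, job))
        for item in skipped:   # requeue jobs that did not fit this pass
            heapq.heappush(self.heap, item)
        return started
```

Real checklists add requirements this sketch omits, such as backfill, time limits, and fair-share accounting.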
Basic principles of a flexible astronomical data processing system in UNIX environment.
NASA Astrophysics Data System (ADS)
Verkhodanov, O. V.; Erukhimov, B. L.; Monosov, M. L.; Chernenkov, V. N.; Shergin, V. S.
Methods of construction of a flexible system for astronomical data processing (FADPS) are described. An example of such a FADPS, built for continuum radiometer data of the RATAN-600, is presented. The Job Control Language of this system is that of OS UNIX. It is shown that, using the basic commands of the data processing system (DPS), a user who knows the basic principles of job control in OS UNIX can create his own mini-DPS. Examples of such mini-DPSs are presented.
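The composition idea, basic DPS commands chained by the user into a mini-DPS, can be illustrated with a toy pipeline (the processing steps `calibrate` and `smooth` are invented placeholders, not RATAN-600 commands):

```python
# Toy basic DPS commands (invented for illustration).
def calibrate(data):
    """Subtract the minimum as a crude baseline removal."""
    return [x - min(data) for x in data]

def smooth(data):
    """Two-point running average."""
    return [(a + b) / 2 for a, b in zip(data, data[1:])]

def compose(*steps):
    """Build a mini-DPS: a pipeline applying each basic command in turn,
    analogous to chaining commands under OS UNIX job control."""
    def pipeline(data):
        for step in steps:
            data = step(data)
        return data
    return pipeline

mini_dps = compose(calibrate, smooth)
```

The user-level composition is the point: new mini-DPSs need no changes to the basic commands themselves.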
Yeh, Wan-Yu; Cheng, Yawen; Chen, Chiou-Jung
2009-04-01
Today, performance-based pay systems, also known as variable pay systems, are commonly implemented in workplaces as a business strategy to improve workers' performance and reduce labor costs. However, their impact on workers' job stress and stress-related health outcomes has rarely been investigated. Utilizing data from a nationally representative sample of paid employees in Taiwan, we examined the distribution of variable pay systems across socio-demographic categories and employment sectors. We also examined the associations of pay systems with psychosocial job characteristics (assessed by Karasek's Demand-Control model) and self-reported burnout status (measured by the Chinese version of the Copenhagen Burnout Inventory). A total of 8906 men and 6382 women aged 25-65 years were studied, and pay systems were classified into three categories: fixed salary, performance-based pay (with a basic salary), and piece-rated or time-based pay (without a basic salary). Results indicated that among men, 57% of employees were given a fixed salary, 24% were given performance-based pay, and 19% were remunerated through piece-rated or time-based pay. Among women, the distributions of the three pay systems were 64%, 20% and 15%, respectively. Across the three pay systems, employees paid through performance-based pay were found to have the longest working hours, the highest level of job control, and the highest percentage of workers who perceived high stress at work. Those remunerated through piece-rated/time-based pay were found to have the lowest job control, shortest working hours, highest job insecurity, lowest potential for career growth, and lowest job satisfaction.
The results of multivariate regression analyses showed that employees earning through performance-based and piece-rated pay systems had higher scores for personal burnout and work-related burnout than those given fixed salaries, after adjusting for age, education, marital status, employment grade, job characteristics, and family care workloads. As variable pay systems have gained in popularity, findings from this study call for more attention to the tradeoff between the widely discussed management advantages of such pay systems and the health burden they place on employees.
A self-organizing neural network for job scheduling in distributed systems
NASA Astrophysics Data System (ADS)
Newman, Harvey B.; Legrand, Iosif C.
2001-08-01
The aim of this work is to describe a possible approach to optimizing job scheduling in large distributed systems, based on a self-organizing neural network. This dynamic scheduling system should be seen as adaptive middle-layer software, aware of currently available resources and making scheduling decisions using "past experience." It aims to optimize job-specific parameters as well as resource utilization. The scheduling system is able to dynamically learn and cluster information in a large-dimensional parameter space while at the same time exploring new regions of that space. This self-organizing scheduling system may offer an effective way to use resources for the off-line data processing jobs of future HEP experiments.
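The core mechanism described, clustering "past experience" and assigning each job to the best-matching resource, can be sketched as a single competitive-learning step (a drastically simplified stand-in for the paper's self-organizing network; the feature encoding and update rule are illustrative assumptions):

```python
def schedule_and_learn(jobs, prototypes, lr=0.3):
    """Assign each job (a feature vector, e.g. CPU and I/O demand) to the
    closest resource prototype, then nudge that prototype toward the job:
    a competitive-learning step, so future similar jobs favour the same
    resource. Returns the list of chosen resource indices."""
    assignments = []
    for job in jobs:
        # squared Euclidean distance to each resource prototype
        dists = [sum((j - p) ** 2 for j, p in zip(job, proto))
                 for proto in prototypes]
        best = dists.index(min(dists))
        assignments.append(best)
        # move the winning prototype toward the job (the "learning")
        prototypes[best] = [p + lr * (j - p)
                            for j, p in zip(job, prototypes[best])]
    return assignments
```

A full self-organizing map would also update neighbouring prototypes and decay the learning rate; this sketch keeps only the winner-take-all core.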
Design and Development of Mopping Robot-'HotBot'
NASA Astrophysics Data System (ADS)
Khan, M. R.; Huq, N. M. L.; Billah, M. M.; Ahmmad, S. M.
2013-12-01
To have a healthy, comfortable, and fresh civilized life we need to do some unhealthy household chores. Cleaning a dirty floor with a mop is one of the most disgusting household jobs, and mopping robots are a solution to this problem. However, existing robots are not yet smart enough: several factors limit their efficiency, e.g. cleaning sticky dirt, leaving the floor dry after cleaning, monitoring, and cost. 'HotBot' is a mopping robot that cleans a dirty floor efficiently, leaving no sticky dirt. Hot water can be used for heavy stains, or normal water in the usual case for economy. It needs neither to be monitored during mopping nor to have the floor wiped afterwards. 'HotBot' has sensors to detect obstacles and a control mechanism to avoid them. It cleans sequentially, is equipped with several accident-protection systems, and is also cost-effective compared to the robots available so far.
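A sequential-coverage mopping pattern with obstacle avoidance, of the general kind described, might be sketched as one control step (purely hypothetical logic, not HotBot's actual controller):

```python
def mop_step(state, obstacle_ahead):
    """One control step of a toy sequential-coverage (boustrophedon)
    mopper: travel along the current lane until an obstacle or wall is
    sensed, then shift to the next lane and reverse travel direction.
    state = (x, y, direction), direction +1 = rightward, -1 = leftward."""
    x, y, direction = state
    if obstacle_ahead:
        return (x, y + 1, -direction)   # next lane, reverse direction
    return (x + direction, y, direction)  # keep mopping along the lane
```

Repeatedly applying `mop_step` with sensor input traces the back-and-forth lanes that "cleans sequentially" suggests.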
The Production Rate and Employment of Ph.D. Astronomers
NASA Astrophysics Data System (ADS)
Metcalfe, Travis S.
2007-05-01
As in many sciences, the production rate of new Ph.D. astronomers is decoupled from the global demand for trained scientists. As noted by Thronson (1991, PASP, 103, 90), overproduction appears to be built into the system, making the mathematical formulation of surplus astronomer production similar to that for industrial pollution models -- an unintended side effect of the process. Following Harris (1994, ASP Conf., 57, 12), I document the production of Ph.D. astronomers from 1990 to 2005 using the online Dissertation Abstracts database. To monitor the changing patterns of employment, I examine the number of postdoctoral, tenure-track, and other jobs advertised in the AAS Job Register during this same period. Although the current situation is clearly unsustainable, it was much worse a decade ago, with nearly 7 new Ph.D. astronomers in 1995 for every new tenure-track job. While the number of new permanent positions steadily increased throughout the late 1990s, the number of new Ph.D. recipients gradually declined. After the turn of the century, the production of new astronomers leveled off, but new postdoctoral positions grew dramatically. There has also been recent growth in the number of non-tenure-track lecturer, research, and support positions. This is just one example of a larger cultural shift to temporary employment that is happening throughout society -- it is not unique to astronomy.
5 CFR 532.703 - Agency review.
Code of Federal Regulations, 2010 CFR
2010-01-01
... attachment to the decision of the reasons for the decision, including an analysis of the employee's job, i.e... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Job...'s job. Note: Application for review will be hereafter referred to as an “application”. (b) In...
The value of job analysis, job description and performance.
Wolfe, M N; Coggins, S
1997-01-01
All companies, regardless of size, are faced with the same employment concerns. Efficient personnel management requires the use of three human resource techniques--job analysis, job description and performance appraisal. These techniques and tools are not for large practices only. Small groups can obtain the same benefits by employing these performance control measures. Job analysis allows for the development of a compensation system. Job descriptions summarize the most important duties. Performance appraisals help reward outstanding work.
Wong, Imelda S; Ostry, Aleck S; Demers, Paul A; Davies, Hugh W
2012-01-01
This pilot study is one of the first to examine the impact of job strain and shift work on both the autonomic nervous system (ANS) and the hypothalamic-pituitary-adrenal (HPA) axis using two salivary stress biomarkers and two subclinical heart disease indicators. This study also tested the feasibility of a rigorous biological sampling protocol in a busy workplace setting. Paramedics (n = 21) self-collected five salivary samples over one rest day and two workdays. Samples were analyzed for α-amylase and cortisol diurnal slopes and daily production. Heart rate variability (HRV) was logged over two workdays with Polar RS800 heart rate monitors. Endothelial functioning was measured using fingertip peripheral arterial tonometry. Job strain was ascertained using a paramedic-specific survey. The effects of job strain and shift work were examined by comparing paramedic types (dispatchers vs. ambulance attendants) and shift types (daytime vs. rotating day/night). Over 90% of all expected samples were collected and fell within expected normal ranges. Workday samples were significantly different from rest day samples. Dispatchers reported higher job strain than ambulance paramedics and exhibited reduced daily α-amylase production, elevated daily cortisol production, and reduced endothelial function. In comparison with daytime-only workers, rotating shift workers reported higher job strain and exhibited flatter α-amylase and cortisol diurnal slopes, reduced daily α-amylase production, elevated daily cortisol production, and reduced HRV and endothelial functioning. Although the differences between groups were not statistically significant, the consistency of the overall trend in subjective and objective measures suggests that exposure to work stressors may lead to dysregulation in neuroendocrine activity and, over the long term, to early signs of heart disease. Results suggest that further study is warranted in this population.
Power calculations based on effect sizes in the shift type comparison suggest a study size of n = 250 may result in significant differences at p = 0.05. High compliance among paramedics to complete the intensive protocol suggests this study will be feasible in a larger population.
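The diurnal slopes compared between shift groups are ordinary least-squares slopes of biomarker level against sampling time; a minimal sketch with made-up numbers (not the study's data):

```python
def diurnal_slope(times_h, levels):
    """Ordinary least-squares slope of a salivary biomarker over the day
    (units per hour). Flatter (less negative) cortisol slopes are the
    pattern contrasted between shift groups. Plain closed-form OLS."""
    n = len(times_h)
    mt = sum(times_h) / n
    ml = sum(levels) / n
    num = sum((t - mt) * (l - ml) for t, l in zip(times_h, levels))
    den = sum((t - mt) ** 2 for t in times_h)
    return num / den
```

For example, samples at 0, 6 and 12 hours after waking with levels 10, 7 and 4 give a slope of -0.5 units/hour.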
2010-06-01
1. KPP 1: High Performing Workplace and Environment; a. Attribute 1. System... source for employee values and actions. The stereotypical value of the federal government employee, especially under the GS system, was job security... most directly met by this model is job security. This job security is often stereotyped by the saying: you cannot fire a government employee
20 CFR 658.412 - Complaint resolution.
Code of Federal Regulations, 2011 CFR
2011-04-01
... GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System State Agency Js Complaint System § 658.412... satisfaction with the outcome, or (2) The complainant chooses not to elevate the complaint to the next level of...
20 CFR 653.102 - Job information.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 3 2013-04-01 2013-04-01 false Job information. 653.102 Section 653.102... SERVICE SYSTEM Services for Migrant and Seasonal Farmworkers (MSFWs) § 653.102 Job information. All State agencies shall make job order information conspicuous and available to MSFWs in all local offices. This...
32 CFR 1656.13 - Review of alternative service job assignments.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 6 2014-07-01 2014-07-01 false Review of alternative service job assignments... SERVICE SYSTEM ALTERNATIVE SERVICE § 1656.13 Review of alternative service job assignments. (a) Review of ASW job assignments will be accomplished in accordance with the provisions of this subsection. (b...
32 CFR 1656.13 - Review of alternative service job assignments.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 6 2011-07-01 2011-07-01 false Review of alternative service job assignments... SERVICE SYSTEM ALTERNATIVE SERVICE § 1656.13 Review of alternative service job assignments. (a) Review of ASW job assignments will be accomplished in accordance with the provisions of this subsection. (b...
20 CFR 653.103 - MSFW job applications.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 3 2014-04-01 2014-04-01 false MSFW job applications. 653.103 Section 653... EMPLOYMENT SERVICE SYSTEM Services for Migrant and Seasonal Farmworkers (MSFWs) § 653.103 MSFW job... offer to refer the applicant to any available jobs for which the MSFW may be qualified, and any JS...
20 CFR 653.102 - Job information.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Job information. 653.102 Section 653.102... SERVICE SYSTEM Services for Migrant and Seasonal Farmworkers (MSFWs) § 653.102 Job information. All State agencies shall make job order information conspicuous and available to MSFWs in all local offices. This...
48 CFR 252.217-7004 - Job orders and compensation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Job orders and... of Provisions And Clauses 252.217-7004 Job orders and compensation. As prescribed in 217.7104(a), use the following clause: Job Orders and Compensation (MAY 2006) (a) The Contracting Officer shall solicit...
32 CFR 1656.13 - Review of alternative service job assignments.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 6 2013-07-01 2013-07-01 false Review of alternative service job assignments... SERVICE SYSTEM ALTERNATIVE SERVICE § 1656.13 Review of alternative service job assignments. (a) Review of ASW job assignments will be accomplished in accordance with the provisions of this subsection. (b...
20 CFR 653.103 - MSFW job applications.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false MSFW job applications. 653.103 Section 653... EMPLOYMENT SERVICE SYSTEM Services for Migrant and Seasonal Farmworkers (MSFWs) § 653.103 MSFW job... offer to refer the applicant to any available jobs for which the MSFW may be qualified, and any JS...
20 CFR 653.103 - MSFW job applications.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false MSFW job applications. 653.103 Section 653... EMPLOYMENT SERVICE SYSTEM Services for Migrant and Seasonal Farmworkers (MSFWs) § 653.103 MSFW job... offer to refer the applicant to any available jobs for which the MSFW may be qualified, and any JS...
48 CFR 252.217-7004 - Job orders and compensation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Job orders and... of Provisions And Clauses 252.217-7004 Job orders and compensation. As prescribed in 217.7104(a), use the following clause: Job Orders and Compensation (MAY 2006) (a) The Contracting Officer shall solicit...
48 CFR 252.217-7004 - Job orders and compensation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Job orders and... of Provisions And Clauses 252.217-7004 Job orders and compensation. As prescribed in 217.7104(a), use the following clause: JOB ORDERS AND COMPENSATION (MAY 2006) (a) The Contracting Officer shall solicit...
20 CFR 653.102 - Job information.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Job information. 653.102 Section 653.102... SERVICE SYSTEM Services for Migrant and Seasonal Farmworkers (MSFWs) § 653.102 Job information. All State agencies shall make job order information conspicuous and available to MSFWs in all local offices. This...
32 CFR 1656.13 - Review of alternative service job assignments.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 32 National Defense 6 2012-07-01 2012-07-01 false Review of alternative service job assignments... SERVICE SYSTEM ALTERNATIVE SERVICE § 1656.13 Review of alternative service job assignments. (a) Review of ASW job assignments will be accomplished in accordance with the provisions of this subsection. (b...
20 CFR 653.103 - MSFW job applications.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 3 2013-04-01 2013-04-01 false MSFW job applications. 653.103 Section 653... EMPLOYMENT SERVICE SYSTEM Services for Migrant and Seasonal Farmworkers (MSFWs) § 653.103 MSFW job... offer to refer the applicant to any available jobs for which the MSFW may be qualified, and any JS...
48 CFR 252.217-7004 - Job orders and compensation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Job orders and... of Provisions And Clauses 252.217-7004 Job orders and compensation. As prescribed in 217.7104(a), use the following clause: Job Orders and Compensation (MAY 2006) (a) The Contracting Officer shall solicit...
48 CFR 252.217-7004 - Job orders and compensation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Job orders and... of Provisions And Clauses 252.217-7004 Job orders and compensation. As prescribed in 217.7104(a), use the following clause: JOB ORDERS AND COMPENSATION (MAY 2006) (a) The Contracting Officer shall solicit...
20 CFR 653.102 - Job information.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 3 2014-04-01 2014-04-01 false Job information. 653.102 Section 653.102... SERVICE SYSTEM Services for Migrant and Seasonal Farmworkers (MSFWs) § 653.102 Job information. All State agencies shall make job order information conspicuous and available to MSFWs in all local offices. This...
20 CFR 653.102 - Job information.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Job information. 653.102 Section 653.102... SERVICE SYSTEM Services for Migrant and Seasonal Farmworkers (MSFWs) § 653.102 Job information. All State agencies shall make job order information conspicuous and available to MSFWs in all local offices. This...
Job Analysis, Job Descriptions, and Performance Appraisal Systems.
ERIC Educational Resources Information Center
Sims, Johnnie M.; Foxley, Cecelia H.
1980-01-01
Job analysis, job descriptions, and performance appraisal can benefit student services administration in many ways. Involving staff members in the development and implementation of these techniques can increase commitment to and understanding of the overall objectives of the office, as well as communication and cooperation among colleagues.…
Practices implemented by a Texas charter school system to overcome science teacher shortage
NASA Astrophysics Data System (ADS)
Yasar, Bilgehan M.
The purpose of this study was to examine practices used by a charter school system to hire and retain science teachers. The research design was a qualitative, single instrumental case study exploring the issue within a bounded system. A purposeful sampling strategy was used to identify the participants, who were interviewed individually. Findings of the case study supported that using online resources, advertising in newspapers, attending job fairs, using alternative certification programs, attracting alumni, contacting colleges of education, and hiring internationally helped the charter school system hire science teachers. Improving the teacher salary scale, implementing teacher mentorship programs, reimbursing teachers for certification and master's programs, providing professional development, and supporting teachers helped to retain science teachers. This study therefore contributes to determining strategies and techniques, selecting methods and programs, training administrators, and monitoring for the successful implementation of science teacher hiring and retention.
Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces
NASA Technical Reports Server (NTRS)
Ellman, Alvin; Carlton, Magdi
1993-01-01
The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of DSN, and monitoring all multi-mission spacecraft tracking activities in real-time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements for the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.
Workplace stress in nursing workers from an emergency hospital: Job Stress Scale analysis.
Urbanetto, Janete de Souza; da Silva, Priscila Costa; Hoffmeister, Eveline; de Negri, Bianca Souza; da Costa, Bartira Ercília Pinheiro; Poli de Figueiredo, Carlos Eduardo
2011-01-01
This study identifies workplace stress according to the Job Stress Scale and associates it with socio-demographic and occupational variables of nursing workers from an emergency hospital. This is a cross-sectional study and data were collected through a questionnaire applied to 388 nursing professionals. Descriptive statistics were applied; univariate and multivariate analyses were performed. The results indicate there is a significant association with being a nursing technician or auxiliary, working in the position for more than 15 years, and having low social support, with 3.84, 2.25 and 4.79 times more chances of being placed in the 'high strain job' quadrant. The study reveals that aspects related to the workplace should be monitored by competent agencies in order to improve the quality of life of nursing workers.
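The reported figures (3.84, 2.25 and 4.79 times the chances) are adjusted odds ratios from multivariate analysis; the unadjusted quantity they generalize can be computed directly from a 2x2 table, as in this sketch with made-up counts:

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio from a 2x2 exposure-by-outcome table.
    The study's adjusted ORs come from multivariate logistic regression,
    but the estimand is this same exposure-outcome odds ratio."""
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))
```

For instance, 30 high-strain cases vs. 10 non-cases among the exposed, against 20 vs. 40 among the unexposed, gives an odds ratio of 6.0.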
Renny, Joseph S.; Tomasevich, Laura L.; Tallmadge, Evan H.; Collum, David B.
2014-01-01
Applications of the method of continuous variations—MCV or the Method of Job—to problems of interest to organometallic chemists are described. MCV provides qualitative and quantitative insights into the stoichiometries underlying association of m molecules of A and n molecules of B to form AmBn. Applications to complex ensembles probe associations that form metal clusters and aggregates. Job plots in which reaction rates are monitored provide relative stoichiometries in rate-limiting transition structures. In a specialized variant, ligand- or solvent-dependent reaction rates are dissected into contributions in both the ground states and transition states, which affords insights into the full reaction coordinate from a single Job plot. Gaps in the literature are identified and critiqued. PMID:24166797
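A minimal numerical sketch of a Job plot: with the total concentration held fixed, an idealized signal proportional to [AmBn] peaks at mole fraction m/(m+n). The signal model here is an idealized proxy for strong association, not a real binding isotherm:

```python
def job_plot(x_fracs, k=1.0, m=1, n=1):
    """Method of continuous variations sketch: hold [A]+[B] fixed, vary
    the mole fraction x of A, and record a signal proportional to the
    complex AmBn. The idealized proxy k * x**m * (1-x)**n has its
    maximum at x = m/(m+n), which reveals the stoichiometry."""
    return [k * x ** m * (1 - x) ** n for x in x_fracs]

def peak_fraction(x_fracs, signal):
    """Mole fraction at the signal maximum -> stoichiometry ratio m:n."""
    return x_fracs[signal.index(max(signal))]
```

A 1:1 complex peaks at x = 0.5; an A2B complex (m = 2, n = 1) peaks near x = 2/3, as the test grid below shows.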
Physically and psychologically hazardous jobs and mental health in Thailand.
Yiengprugsawan, Vasoontara; Strazdins, Lyndall; Lim, Lynette L-Y; Kelly, Matthew; Seubsman, Sam-ang; Sleigh, Adrian C
2015-09-01
This paper investigates associations between hazardous jobs, mental health and wellbeing among Thai adults. In 2005, 87 134 distance-learning students from Sukhothai Thammathirat Open University completed a self-administered questionnaire; at the 2009 follow-up 60 569 again participated. Job characteristics were reported in 2005, psychological distress and life satisfaction were reported in both 2005 and 2009. We derived two composite variables grading psychologically and physically hazardous jobs and reported adjusted odds ratios (AOR) from multivariate logistic regressions. Analyses focused on cohort members in paid work: the total was 62 332 at 2005 baseline and 41 671 at 2009 follow-up. Cross-sectional AORs linking psychologically hazardous jobs to psychological distress ranged from 1.52 (one hazard) to 4.48 (four hazards) for males and a corresponding 1.34-3.76 for females. Similarly AORs for physically hazardous jobs were 1.75 (one hazard) to 2.76 (four or more hazards) for males and 1.70-3.19 for females. A similar magnitude of associations was found between psychologically adverse jobs and low life satisfaction (AORs of 1.34-4.34 among males and 1.18-3.63 among females). Longitudinal analyses confirm these cross-sectional relationships. Thus, significant dose-response associations were found linking hazardous job exposures in 2005 to mental health and wellbeing in 2009. The health impacts of psychologically and physically hazardous jobs in developed, Western countries are equally evident in transitioning Southeast Asian countries such as Thailand. Regulation and monitoring of work conditions will become increasingly important to the health and wellbeing of the Thai workforce. © The Author 2013. Published by Oxford University Press.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-18
... the Regional Entities set priorities of what to audit, and are they doing a good job setting priorities? Do audits focus too much on documentation? Would alternative auditing methods also demonstrate...
A Comparison of Different Database Technologies for the CMS AsyncStageOut Transfer Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciangottini, D.; Balcas, J.; Mascheroni, M.
AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users' transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) as internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers constantly increased up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for the system's scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.
A comparison of different database technologies for the CMS AsyncStageOut transfer database
NASA Astrophysics Data System (ADS)
Ciangottini, D.; Balcas, J.; Mascheroni, M.; Rupeika, E. A.; Vaandering, E.; Riahi, H.; Silva, J. M. D.; Hernandez, J. M.; Belforte, S.; Ivanov, T. T.
2017-10-01
AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users' transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) as internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers constantly increased up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for the system's scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.
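The move from document-store bookkeeping toward SQL can be illustrated with a toy state-transition query of the kind such a transfer service runs constantly (table and column names are invented for illustration, not ASO's actual schema):

```python
import sqlite3

def make_transfer_db():
    """In-memory stand-in for a SQL transfer-bookkeeping table."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE transfers (
        file_id TEXT PRIMARY KEY,
        user    TEXT,
        state   TEXT)""")
    return db

def acquire_new(db, limit):
    """Claim a batch of NEW transfers for submission to the transfer
    system: the indexed state query plus bulk update that a relational
    store handles cheaply under high load."""
    rows = db.execute(
        "SELECT file_id FROM transfers WHERE state='NEW' LIMIT ?",
        (limit,)).fetchall()
    ids = [r[0] for r in rows]
    db.executemany(
        "UPDATE transfers SET state='SUBMITTED' WHERE file_id=?",
        [(i,) for i in ids])
    return ids
```

In a document store the equivalent operation needs a view query followed by per-document updates, which is part of why load on the central instance became critical.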
20 CFR 658.418 - Decision of the State hearing official.
Code of Federal Regulations, 2010 CFR
2010-04-01
... ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System State Agency Js Complaint... consider the validity or constitutionality of JS regulations or of the Federal statutes under which they... JS Complaint System ...
The 'last mile' of data handling: Fermilab's IFDH tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyon, Adam L.; Mengel, Marc W.
2014-01-01
IFDH (Intensity Frontier Data Handling) is a suite of tools for data movement tasks for Fermilab experiments and is an important part of the FIFE[2] (Fabric for Intensity Frontier [1] Experiments) initiative described at this conference. IFDH encompasses moving input data from caches or storage elements to compute nodes (the 'last mile' of data movement) and moving output data, potentially to those caches, as part of the journey back to the user. IFDH also involves throttling and locking to ensure that large numbers of jobs do not cause data movement bottlenecks. IFDH is realized as an easy-to-use layer that users call in their job scripts (e.g. 'ifdh cp'), hiding the low-level data movement tools. One advantage of this layer is that the underlying low-level tools can be selected or changed without the need for the user to alter their scripts. Logging and performance monitoring can also be added easily. This system will be presented in detail, as well as its impact on the ease of data handling at Fermilab experiments.
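The "thin layer" idea behind a call like 'ifdh cp' can be sketched as follows; the dispatch registry and logging hook here are invented for illustration and are not IFDH's actual implementation, which dispatches to grid copy clients rather than local file copies.

```python
# Hypothetical sketch of a user-facing copy call that hides the low-level
# mover: the user script always calls ifdh_cp, while the underlying tool
# can be swapped or instrumented without touching any job script.
import os
import shutil
import tempfile
import time

def _local_cp(src, dst):
    shutil.copyfile(src, dst)

# Registry of low-level movers keyed by URL scheme; a real implementation
# would register gridftp/xrootd/dcap clients here instead.
MOVERS = {"file": _local_cp}

def ifdh_cp(src, dst, log=None):
    scheme = src.split("://", 1)[0] if "://" in src else "file"
    mover = MOVERS[scheme]
    t0 = time.time()
    mover(src.split("://", 1)[-1], dst)
    if log is not None:  # performance monitoring added in the layer, not the script
        log.append((src, dst, round(time.time() - t0, 3)))

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "in.dat")
    dst = os.path.join(d, "out.dat")
    with open(src, "w") as f:
        f.write("event data")
    stats = []
    ifdh_cp(src, dst, log=stats)      # the user never names the mover
    copied = open(dst).read()
print(copied)      # event data
print(len(stats))  # 1
```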
ERIC Educational Resources Information Center
Schulz, Russel E.; Farrell, Jean R.
This resource guide for the use of job aids ("how-to-do-it" guidance) for activities identified in the second phase of the Instructional Systems Development Model (ISD) contains an introduction to the use of job aids, as well as descriptive authoring flowcharts for Blocks II.1 through II.4. The introduction includes definitions;…
ERIC Educational Resources Information Center
Schulz, Russel E.; Farrell, Jean R.
This resource guide for the use of job aids ("how-to-do-it" guidance) for activities identified in the first phase of the Instructional Systems Development Model (ISD) contains an introduction to the use of job aids, as well as descriptive authoring flowcharts for Blocks I.2 through I.5. The introduction includes definitions;…
Job Sharing: An Alternative to Traditional Employment Patterns. ERS Information Aid.
ERIC Educational Resources Information Center
Block, Alan W.
In the face of declining enrollments and widespread reductions-in-force in school systems, job sharing can provide part-time positions for persons unable to work full-time and can allow some individuals to maintain their positions on a part-time basis as an alternative to being laid off. Job sharing can also benefit school systems by increasing…
System Reform and Job Satisfaction of Juvenile Justice Teachers
ERIC Educational Resources Information Center
Houchins, David E.; Shippen, Margaret E.; Jolivette, Kristine
2006-01-01
The aim of this study was to examine the effect of five years of system-wide reform on factors associated with the job satisfaction of juvenile justice teachers. Prior to this research, no data were available on the effect of reform on the job satisfaction of this population. A comprehensive survey was administered to teachers who had been in the…
ERIC Educational Resources Information Center
Schulz, Russel E.; Farrell, Jean R.
This resource guide for the use of job aids ("how-to-do-it" guidance) for activities identified in the third phase of the Instructional Systems Development Model (ISD) contains an introduction to the use of job aids, as well as descriptive authoring flowcharts for Blocks III.1 through III.5. The introduction includes definitions;…
Chen, Chin-Huang; Wang, Jane; Yang, Cheng-San; Fan, Jun-Yu
2016-07-01
We explored the impact of job content and stress on anxiety, depressive symptoms and self-perceived health status among nurse practitioners (NPs). Taiwan's NP roles vary between hospitals as a result of the diverse demands and complex tasks that cause job-related stress, potentially affecting the health of the NP. This study utilised a cross-sectional descriptive design with 161 NPs from regional hospitals participating. Data collection involved demographics, the Taiwan Nurse Stress Checklist, the Job Content Questionnaire, the Beck Anxiety Inventory, the Beck Depression Inventory, a General Health Status Checklist and salivary cortisol tests. NPs reported moderate job stress, similar job control to nurses, mild anxiety and depression, and below-average self-perceived health. Being a licensed NP, personal response, competence, and incompleteness of the personal arrangements subscales of job stress, and anxiety predicted self-perceived health after adjusting for other covariates. Job stress and anxiety affect NP health. NPs are a valuable resource, and the healthcare system demand is growing. Reasonable NP staffing, working hours, proper promotion systems, the causes of job stress, job content clarification and practical work shift scheduling need to be considered. The occupational safety and physical and psychological health of NPs are strongly associated with the quality of patient care. © 2016 John Wiley & Sons Ltd.
20 CFR 670.530 - Are Job Corps centers required to maintain a student accountability system?
Code of Federal Regulations, 2010 CFR
2010-04-01
... student accountability system? 670.530 Section 670.530 Employees' Benefits EMPLOYMENT AND TRAINING... accountability system? Yes, each Job Corps center must establish and operate an effective system to account for... student absence. Each center must operate its student accountability system according to requirements and...
A Job Analysis for K-8 Principals in a Nationwide Charter School System
ERIC Educational Resources Information Center
Cumings, Laura; Coryn, Chris L. S.
2009-01-01
Background: Although no single technique on its own can predict job performance, a job analysis is a customary approach for identifying the relevant knowledge, skills, abilities, and other characteristics (KSAO) necessary to successfully complete the job tasks of a position. Once the position requirements are identified, the hiring process is…
Job-Oriented Basic Skills (JOBS) Program for the Acoustic Sensor Operations Strand.
ERIC Educational Resources Information Center
U'Ren, Paula Kabance; Baker, Meryl S.
An effort was undertaken to develop a job-oriented basic skills curriculum appropriate for the acoustic sensor operations area, which includes members of four ratings: ocean systems technician, aviation antisubmarine warfare operator, sonar technician (surface), and sonar technician (submarine). Analysis of the job duties of the four ratings…
ERIC Educational Resources Information Center
Cole, Paul F.
U.S. industry and the U.S. workplace are changing. More highly skilled jobs are replacing unskilled and semiskilled jobs, and more jobs require higher-order thinking skills. At the same time, the education system is failing to educate young people to fill those jobs in the future. Although a higher percentage of students graduate than ever before,…
48 CFR 217.7103-3 - Solicitations for job orders.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Solicitations for job... Master Agreement for Repair and Alteration of Vessels 217.7103-3 Solicitations for job orders. (a) When a... perform the work and agree to execute a master agreement before award of a job order. (b) Follow the...
48 CFR 217.7103-3 - Solicitations for job orders.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Solicitations for job... Master Agreement for Repair and Alteration of Vessels 217.7103-3 Solicitations for job orders. (a) When a... perform the work and agree to execute a master agreement before award of a job order. (b) Follow the...
48 CFR 217.7103-3 - Solicitations for job orders.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Solicitations for job... Master Agreement for Repair and Alteration of Vessels 217.7103-3 Solicitations for job orders. (a) When a... perform the work and agree to execute a master agreement before award of a job order. (b) Follow the...
48 CFR 217.7103-3 - Solicitations for job orders.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Solicitations for job... Master Agreement for Repair and Alteration of Vessels 217.7103-3 Solicitations for job orders. (a) When a... perform the work and agree to execute a master agreement before award of a job order. (b) Follow the...
48 CFR 217.7103-3 - Solicitations for job orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Solicitations for job... Master Agreement for Repair and Alteration of Vessels 217.7103-3 Solicitations for job orders. (a) When a... perform the work and agree to execute a master agreement before award of a job order. (b) Follow the...
Job Sharing: Is It in Your Future?
ERIC Educational Resources Information Center
Russell, Thyra K.
This paper reports the results of a survey of 1,277 libraries in Illinois which investigated the status of job sharing in armed forces, college and university, community college, government, law, medical, public, religious, and special libraries and library systems. Job sharing is described as the division of one full-time job between two or more…
Factors Influencing the Job Satisfaction of Health System Employees in Tabriz, Iran
Bagheri, Shokoufe; Kousha, Ahmad; Janati, Ali; Asghari-Jafarabadi, Mohammad
2012-01-01
Background: Employees can be counseled on how they feel about their job. If any particular dimension of their job is causing them dissatisfaction, they can be assisted to change it appropriately. In this study, we investigated the factors affecting job satisfaction from the perspective of employees working in the health system, and thereby obtained a quantitative measure of job satisfaction. Methods: Using eight focus group discussions (n=70), factors affecting job satisfaction of the employees were discussed. The factors identified from the literature review were categorized in four groups: structural and managerial, social, the work itself, and environmental and welfare factors. Results: The findings confirmed the significance of structural and managerial, social, the work itself, and environmental and welfare factors in the level of job satisfaction. In addition, a new factor related to individual characteristics, such as employee personal characteristics and development, was identified. Conclusion: In order to improve the quality and productivity of work, besides structural and managerial, social, the work itself, and environmental and welfare factors, policy makers should take into account individual characteristics of the employee as a factor affecting job satisfaction. PMID:24688933
The ALICE analysis train system
NASA Astrophysics Data System (ADS)
Zimmermann, Markus; ALICE Collaboration
2015-05-01
In the ALICE experiment, hundreds of users analyze big datasets on a Grid system. High throughput and short turn-around times are achieved by a centralized system called the LEGO trains. This system combines analyses from different users into so-called analysis trains, which are then executed within the same Grid jobs, thereby reducing the number of times the data needs to be read from the storage systems. The centralized trains improve the performance, the usability for users and the bookkeeping in comparison to single-user analysis. The train system builds upon the already existing ALICE tools, i.e. the analysis framework as well as the Grid submission and monitoring infrastructure. The entry point to the train system is a web interface which is used to configure the analysis and the desired datasets as well as to test and submit the train. Several measures have been implemented to reduce the time a train needs to finish and to increase the CPU efficiency.
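The I/O saving from a train is easy to see in a toy model; the task names and dataset below are invented for illustration, not ALICE code. Each event is read from storage once and handed to every "wagon" (user task) in the train.

```python
# Toy model of an analysis train: N user tasks share one pass over the data,
# so the dataset is read once instead of N times.
reads = 0

def read_event(i):
    """Stand-in for an expensive read from a storage system."""
    global reads
    reads += 1
    return {"pt": i * 0.5}

def task_mean_pt(events):      # one user's analysis
    return sum(e["pt"] for e in events) / len(events)

def task_count_high(events):   # another user's analysis
    return sum(1 for e in events if e["pt"] > 1.0)

dataset = range(1, 6)

# Train mode: read each event once, pass it to every wagon.
events = [read_event(i) for i in dataset]
results = {"mean_pt": task_mean_pt(events), "n_high": task_count_high(events)}
print(reads)              # 5 reads for 2 analyses, not 10
print(results["n_high"])  # pt values 1.5, 2.0, 2.5 exceed 1.0 -> 3
```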
What Strategies Do the Nurses Apply to Cope With Job Stress?: A Qualitative Study
Akbar, Rasool Eslami; Elahi, Nasrin; Mohammadi, Eesa; Khoshknab, Masoud Fallahi
2016-01-01
Background: Nursing staff encounter a lot of physical, psychological and social stressors at work. Because the adverse effects of job stress on the health of this group of staff and subsequently on the quality of care services provided by nurses; study and identify how nurses cope with the job stress is very important and can help prevent the occurrence of unfavorable outcomes. Objectives: The aim of this study was to explore the experiences of nurses to identify the strategies they used to cope with the job stress. Methods: In this qualitative study content analysis approach was used. Purposive sampling approach was applied. The sample population included 18 nurses working in three hospitals. Data collection was conducted through face to face unstructured interview and was analyzed using conventional content analysis approach. Findings: The analysis of the data emerged six main themes about the strategies used by nurses to cope with job stress, which, include: situational control of conditions, seeking help, preventive monitoring of situation, self-controlling, avoidance and escape and spiritual coping. Conclusions: Exploring experiences of nurses on how to cope with job stress emerged context-dependent and original strategies and this knowledge can pave the ground for nurses to increase self-awareness of how to cope with job stress. And could also be the basis for planning and the adoption of necessary measures by the authorities to adapt nurses with their profession better and improves their health which are essential elements to fulfill high-quality nursing care. PMID:26755462
Evaluation of Job Queuing/Scheduling Software: Phase I Report
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The recent proliferation of high-performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, the Numerical Aerodynamic Simulation (NAS) supercomputer facility compiled a requirements checklist for job queuing/scheduling software. Next, NAS began an evaluation of the leading job management system (JMS) software packages against the checklist. This report describes the three-phase evaluation process and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still insufficient, even in the leading JMSs. However, by ranking each JMS evaluated against the requirements, we provide data that will be useful to other sites in selecting a JMS.
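Ranking packages against a weighted requirements checklist can be sketched as below; the JMS names, requirements, weights, and scores are invented for illustration, not the report's actual evaluation data.

```python
# Hypothetical checklist-based ranking in the spirit of the Phase 1 evaluation.
requirements = {"parallel_jobs": 3, "checkpointing": 2, "fair_share": 1}  # weights

# 1 = requirement met, 0 = not met (illustrative values)
support = {
    "JMS-A": {"parallel_jobs": 1, "checkpointing": 0, "fair_share": 1},
    "JMS-B": {"parallel_jobs": 1, "checkpointing": 1, "fair_share": 0},
}

def score(jms):
    """Weighted sum of requirements the package satisfies."""
    return sum(w * support[jms][req] for req, w in requirements.items())

ranking = sorted(support, key=score, reverse=True)
print([(j, score(j)) for j in ranking])  # [('JMS-B', 5), ('JMS-A', 4)]
```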
Geleto, Ayele; Baraki, Negga; Atomsa, Gudina Egata; Dessie, Yadeta
2015-09-01
The human factor is the primary resource of the health care system. For optimal performance of the health care system, the workforce needs to be satisfied with the job he/she is doing. This research aimed to assess the level of job satisfaction and associated factors among health care providers at public health institutions in the Harari region, Eastern Ethiopia. A health-facility-based cross-sectional study was conducted among 405 randomly selected health care providers in Harari regional state, Eastern Ethiopia. Data were collected by self-administered structured questionnaires. Epidata Version 3.1 was used for data entry, and analysis was made with SPSS version 17. The level of job satisfaction was measured with a multi-item scale derived from the Wellness Council of America and Best Companies Group. The average/mean value was used as the cutoff point to determine whether the respondents were satisfied with their job or not. Multivariable logistic regression was used to analyze the data, and the odds ratio with 95% CI at P ≤ 0.05 was used to identify factors associated with the level of job satisfaction. Less than half, 179 (44.2%), of the respondents were satisfied with their job. Being a midwife by profession [AOR = 1.20; 95% CI (1.11-2.23)], age less than 35 years [AOR = 2.0; 95% CI (1.67-2.88)], having a good attitude toward staying in the same ward for a longer period [AOR = 3.21; 95% CI (1.33, 5.41)], and a safe working environment [AOR = 4.61; 95% CI (3.33, 6.92)] were found to be associated with job satisfaction. Less than half (44.2%) of the respondents were satisfied with their current job. The organizational management system, salary and payment, and the working environment were among the factors that affect the level of job satisfaction. Thus, the regional health bureau and health facility administrators need to pay special attention to improving the management system through the application of a health sector reform strategy.
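The adjusted odds ratios quoted above come from multivariable logistic regression, but the underlying quantity can be illustrated from a 2x2 table with stdlib math; the counts below are invented for illustration, not the study's data.

```python
# Odds ratio with a Wald 95% confidence interval from a 2x2 table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for the table [[a, b], [c, d]]:
    rows = exposed/unexposed, columns = satisfied/not satisfied."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # std. error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# e.g. 40/60 satisfied in a safe environment vs 20/80 otherwise (made-up counts)
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
print(round(or_, 2))  # 2.67
print(lo < or_ < hi)  # True; the association is significant if the CI excludes 1
```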
77 FR 58301 - Final Requirements-Race to the Top-Early Learning Challenge; Phase 2
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-20
... career advancement. Core Area B addresses the importance of a high-quality plan for rating and monitoring... adversely affect a sector of the economy, productivity, competition, jobs, the environment, public health or...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruggeman, David Alan
This report gives general information about how to become a meteorologist and what kinds of jobs exist in that field. Then it goes into detail about why weather is monitored at LANL, how it is done, and where the data can be accessed online.
ERIC Educational Resources Information Center
Blake, Anthony; Francis, David
1973-01-01
Approaches to developing management ability include systematic techniques, mental enlargement, self-analysis, and job-related counseling. A method is proposed to integrate them into a responsive program involving depth understanding, vision of the future, specialization commitment to change, and self-monitoring control. (MS)
Rebuilding Job Training from the Ground Up: Workforce System Reform After 9/11.
ERIC Educational Resources Information Center
Fischer, David Jason; Kleiman, Neil Scott
Since September 11, 2001, New York City (NYC) has lost over 130,000 jobs, unemployment in the boroughs is around 9% and unemployment benefits have run out for many. NYC has long neglected workforce development, viewing it as a social service to distribute federal funds and train entry workers for dead-end jobs. To create a workforce system from…
ERIC Educational Resources Information Center
Scott-Bracey, Pamela
2011-01-01
The purpose of this study was to explore the alignment of soft skills sought by current business IS entry-level employers in electronic job postings, with the integration of soft skills in undergraduate business information systems (IS) syllabi of public four-year universities in Texas. One hundred fifty job postings were extracted from two major…
20 CFR 658.421 - Handling of JS-related complaints.
Code of Federal Regulations, 2011 CFR
2011-04-01
... ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System Federal Js Complaint... paragraph (e) of this section, to the appellant's satisfaction, the Regional Administrator may, in the...
NASA Astrophysics Data System (ADS)
Park, Sangwook; Lee, Young-Ran; Hwang, Yoola; Javier Santiago Noguero Galilea
2009-12-01
This paper describes the Flight Dynamics Automation (FDA) system for the COMS Flight Dynamics System (FDS) and its test results in terms of the performance of the automation jobs. FDA controls flight dynamics functions such as orbit determination, orbit prediction, event prediction, and fuel accounting. The designed FDA is independent of the specific characteristics defined by the spacecraft manufacturer or by particular satellite missions. Therefore, FDA can easily link its autonomous job control functions to any satellite mission control system with some interface modification. By adding an autonomous system alongside the flight dynamics system, it relieves the operator of tedious and repetitive jobs while increasing the usability and reliability of the system. FDA is thus used to improve the overall quality of the mission control system. The FDA was applied to the real flight dynamics system of a geostationary satellite, COMS, and an experimental test was performed. The experimental results show the stability and reliability of mission control operations through automatic job control.
Job Involvement and Organizational Commitment of Employees of Prehospital Emergency Medical System.
Rahati, Alireza; Sotudeh-Arani, Hossein; Adib-Hajbaghery, Mohsen; Rostami, Majid
2015-12-01
Several studies are available on organizational commitment of employees in different organizations. However, the organizational commitment and job involvement of the employees in the prehospital emergency medical system (PEMS) of Iran have largely been ignored. This study aimed to investigate the organizational commitment and job involvement of the employees of PEMS and the relationship between these two issues. This cross-sectional study was conducted on 160 employees of Kashan PEMS who were selected through a census method in 2014. A 3-part instrument was used in this study, including a demographic questionnaire, the Allen and Miller's organizational commitment inventory, and the Lodahl and Kejner's job involvement inventory. We used descriptive statistics, Spearman correlation coefficient, Kruskal-Wallis, Friedman, analysis of variance, and Tukey post hoc tests to analyze the data. The mean job involvement and organizational commitment scores were 61.78 ± 10.69 and 73.89 ± 13.58, respectively. The mean scores of job involvement and organizational commitment were significantly different in subjects with different work experiences (P = 0.043 and P = 0.012, respectively). However, no significant differences were observed between the mean scores of organizational commitment and job involvement in subjects with different fields of study, different levels of interest in the profession, and various educational levels. A direct significant correlation was found between the total scores of organizational commitment and job involvement of workers in Kashan PEMS (r = 0.910, P < 0.001). This study showed that the employees in the Kashan PEMS obtained half of the score of organizational commitment and about two-thirds of the job involvement score. Therefore, the higher level managers of the emergency medical system are advised to implement some strategies to increase the employees' job involvement and organizational commitment.
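The r = 0.910 reported above is a Spearman rank correlation; it can be computed with stdlib Python as below. The commitment and involvement scores are invented for illustration, not the study's data.

```python
# Spearman rank correlation: Pearson correlation computed on the ranks.
def ranks(xs):
    """1-based ranks, with average ranks assigned to ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

commitment = [60, 72, 75, 80, 90]   # made-up scores
involvement = [50, 58, 66, 64, 70]
print(round(spearman(commitment, involvement), 2))  # 0.9
```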
School-Based Job Placement Service Model: Phase I, Planning. Final Report.
ERIC Educational Resources Information Center
Gingerich, Garland E.
To assist school administrators and guidance personnel in providing job placement services, a study was conducted to: (1) develop a model design for a school-based job placement system, (2) identify students to be served by the model, (3) list specific services provided to students, and (4) develop job descriptions for each individual responsible…
glideinWMS—a generic pilot-based workload management system
NASA Astrophysics Data System (ADS)
Sfiligoi, I.
2008-07-01
The Grid resources are distributed among hundreds of independent Grid sites, requiring a higher level Workload Management System (WMS) to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share and just in time resource matching. glideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool, with the Condor daemons (startds) being started by pilot jobs, and real jobs being vanilla, standard or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains the structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.
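The late-binding idea behind pilots can be sketched as a toy model; this is a simplified illustration, not Condor or glideinWMS code. Pilots reach the sites first, and a user job is matched to a pilot only once that pilot is actually running, which is what buys the reliability and just-in-time matching described above.

```python
# Toy model of pilot-based late binding: user jobs stay in the central queue
# until a pilot that has successfully started pulls one.
import collections

site_queue = collections.deque()                     # pilots waiting at sites
user_jobs = collections.deque(["job1", "job2", "job3"])
matched = []

def submit_pilots(n):
    for i in range(n):
        site_queue.append("pilot%d" % i)

def pilot_wakes_up():
    """A pilot that starts running pulls the next user job (late binding)."""
    pilot = site_queue.popleft()
    if user_jobs:
        matched.append((pilot, user_jobs.popleft()))
    # a pilot that never starts simply pulls nothing; no user job is lost

submit_pilots(5)
pilot_wakes_up()
pilot_wakes_up()
print(matched)          # [('pilot0', 'job1'), ('pilot1', 'job2')]
print(len(user_jobs))   # 1 job still safely queued, not stuck at a bad site
```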
Production Management System for AMS Computing Centres
NASA Astrophysics Data System (ADS)
Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.
2017-10-01
The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte Carlo (MC) simulation [2] (data and MC production), as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquisition, submission, monitoring, transfer, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model and is implemented in the scripting languages Python and Perl, with the built-in sqlite3 database, on Linux operating systems. Different batch management systems, file system storages, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
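The DFA-plus-sqlite3 design can be illustrated with a minimal sketch; the state names, transition table, and schema below are invented for illustration, not the actual AMS implementation.

```python
# Minimal DFA-style job state tracking with Python's built-in sqlite3:
# the transition table defines the automaton, the database records each job's
# current state, and illegal transitions are rejected.
import sqlite3

TRANSITIONS = {                      # allowed DFA edges (illustrative)
    "acquired":  {"submitted"},
    "submitted": {"running", "failed"},
    "running":   {"done", "failed"},
    "failed":    {"submitted"},      # retry path
    "done":      set(),              # terminal state
}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id TEXT PRIMARY KEY, state TEXT NOT NULL)")

def advance(job_id, new_state):
    (old,) = db.execute(
        "SELECT state FROM jobs WHERE id=?", (job_id,)).fetchone()
    if new_state not in TRANSITIONS[old]:
        raise ValueError("illegal transition %s -> %s" % (old, new_state))
    db.execute("UPDATE jobs SET state=? WHERE id=?", (new_state, job_id))

db.execute("INSERT INTO jobs VALUES ('mc-001', 'acquired')")
advance("mc-001", "submitted")
advance("mc-001", "running")
advance("mc-001", "done")
final = db.execute(
    "SELECT state FROM jobs WHERE id='mc-001'").fetchone()[0]
print(final)  # done
```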
Cloud flexibility using DIRAC interware
NASA Astrophysics Data System (ADS)
Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo
2014-06-01
Communities in different locations run their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to certain types or versions of an Operating System, because either their software needs a definite version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software to incompatible communities, it has to split its physical resources among those communities. This splitting will inevitably lead to an underuse of resources, because the data centers are bound to have periods where one or more of their subclusters are idle. It is in this situation that Cloud Computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users are allowed to send their jobs transparently to the Data Center.
The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user.
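The elasticity policy implied above, requesting VMs per platform according to the queued jobs, can be sketched as follows; the function name, the jobs-per-VM ratio, and the quota are invented for illustration and are not DIRAC's or CloudStack's actual API.

```python
# Hypothetical scaling rule: one VM request per batch of queued jobs,
# capped by the data center's quota, computed separately per OS platform.
def vms_needed(queued_jobs, jobs_per_vm=4, max_vms=10):
    """Ceiling of queued_jobs / jobs_per_vm, capped at max_vms."""
    return min(max_vms, -(-queued_jobs // jobs_per_vm))  # ceiling division

# Made-up queue depths for the three Linux flavors from the tests above.
queues = {"SL5": 9, "Ubuntu10.04": 0, "Fedora13": 50}
plan = {platform: vms_needed(n) for platform, n in queues.items()}
print(plan)  # {'SL5': 3, 'Ubuntu10.04': 0, 'Fedora13': 10}
```

Because idle platforms request zero VMs, the physical hosts freed up can be reused by the busy platforms, which is exactly the underuse problem the cloud layer solves.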
A bioinformatics knowledge discovery in text application for grid computing
Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco
2009-01-01
Background: A fundamental activity in biomedical research is Knowledge Discovery, which has the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of Information and Communication resources in the life sciences. The goal of this work was to develop a software middleware solution in order to exploit the many knowledge discovery applications on scalable and distributed computing systems and thereby achieve intensive use of ICT resources. Methods: The development of a grid application for Knowledge Discovery in Text using a middleware-based methodology is presented. The system must be able to model the user application and process the jobs with the aim of creating many parallel jobs to distribute to the computational nodes. Finally, the system must be aware of the computational resources available and their status, and must be able to monitor the execution of the parallel jobs. These operative requirements led to the design of a middleware to be specialized by means of user application modules. It includes a graphical user interface for access to a node search system, a load balancing system, and a transfer optimizer to reduce communication costs. Results: A middleware solution prototype and an evaluation of its performance in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out, and the results are shown for the named entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. Conclusion: In this paper we discuss the development of a grid application based on a middleware solution.
It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Databases computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities. PMID:19534749
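To make the job-splitting step concrete, here is a minimal Python sketch (not the paper's Java/Globus middleware) of how a document collection might be partitioned into parallel jobs sized by node capacity, a simple form of the load balancing the abstract describes; the node names and capacities are hypothetical:

```python
# Illustrative sketch only: split a corpus into one job per grid node,
# with job size proportional to that node's (hypothetical) capacity.

def partition_jobs(documents, node_capacities):
    """Return {node: document slice}, sized by relative node capacity."""
    total = sum(node_capacities.values())
    jobs, start = {}, 0
    nodes = list(node_capacities)
    for i, node in enumerate(nodes):
        if i == len(nodes) - 1:
            end = len(documents)  # last node takes the remainder
        else:
            end = start + round(node_capacities[node] / total * len(documents))
        jobs[node] = documents[start:end]
        start = end
    return jobs

docs = [f"pubmed_doc_{i}" for i in range(5000)]  # e.g. the 5,000 PubMed documents
jobs = partition_jobs(docs, {"node-a": 4, "node-b": 2, "node-c": 2})
```

Each per-node slice would then be submitted as an independent parallel NER job, with results merged afterward.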
Nilsen, Charlotta; Andel, Ross; Fors, Stefan; Meinow, Bettina; Darin Mattsson, Alexander; Kåreholt, Ingemar
2014-08-27
People spend a considerable amount of time at work over the course of their lives, which makes the workplace important to health and aging. However, little is known about the potential long-term effects of work-related stress on late-life health. This study aims to examine work-related stress in late midlife and educational attainment in relation to serious health problems in old age. Data from nationally representative Swedish surveys were used in the analyses (n = 1,502). Follow-up time was 20-24 years. Logistic regressions were used to examine work-related stress (self-reported job demands, job control, and job strain) in relation to serious health problems measured as none, serious problems in one health domain, and serious problems in two or three health domains (complex health problems). While not all results were statistically significant, high job demands were associated with higher odds of serious health problems among women but lower odds among men. Job control was negatively associated with serious health problems. The strongest association in this study was between high job strain and complex health problems. After adjustment for educational attainment, some of the associations became statistically nonsignificant. However, high job demands remained related to lower odds of serious problems in one health domain among men, and low job control remained associated with higher odds of complex health problems among men. High job demands were associated with lower odds of complex health problems among men with low education, but not among men with high education, or among women regardless of level of education. The results underscore the importance of work-related stress for long-term health. 
Modification to work environment to reduce work stress (e.g., providing opportunities for self-direction/monitoring levels of psychological job demands) may serve as a springboard for the development of preventive strategies to improve public health both before and after retirement.
Job satisfaction of nurses and identifying factors of job satisfaction in Slovenian Hospitals
Lorber, Mateja; Skela Savič, Brigita
2012-01-01
Aim To determine the level of job satisfaction of nursing professionals in Slovenian hospitals and the factors influencing job satisfaction in nursing. Methods The study included 4 hospitals selected from the hospital list comprising 26 hospitals in Slovenia. The employees of these hospitals represent 29.8% of all nursing employees in Slovenian hospitals, and the 509 employees included in the study represent 6%. Two structured survey questionnaires were administered, one to leaders and the other to employees, both consisting of 154 items evaluated on a 5-point Likert-type scale. We examined the correlation between independent variables (age, number of years of employment, behavior of leaders, personal characteristics of leaders, and managerial competencies of leaders) and the dependent variable (job satisfaction: satisfaction with the work, coworkers, management, pay, etc) by applying correlation analysis and multivariate regression analysis. In addition, factor analysis was used to establish characteristic components of the variables measured. Results We found a medium level of job satisfaction in both leaders (3.49 ± 0.5) and employees (3.19 ± 0.6); however, there was a significant difference between their estimates (t = 3.237; P < 0.001). Age (P < 0.05; β = 0.091), years of employment (P < 0.05; β = 0.193), personal characteristics of leaders (P < 0.001; β = 0.158), and managerial competencies of leaders (P < 0.001; β = 0.634) together explained 46% of the variance in job satisfaction. The factor analysis yielded four factors explaining 64% of the total job satisfaction variance. Conclusion Satisfied employees play a crucial role in an organization’s success, so health care organizations must be aware of the importance of employees’ job satisfaction. It is recommended to monitor employees’ job satisfaction levels on an annual basis. PMID:22661140
Horn, Gavin P; Kesler, Richard M; Kerber, Steve; Fent, Kenneth W; Schroeder, Tad J; Scott, William S; Fehling, Patricia C; Fernhall, Bo; Smith, Denise L
2018-03-01
Firefighters' thermal burden is generally attributed to high heat loads from the fire and metabolic heat generation, which may vary between job assignments and the suppression tactic employed. In a full-sized residential structure, firefighters were deployed in six job assignments using two attack tactics: (1) water applied from the interior, or (2) exterior water application before transitioning to the interior. Environmental temperatures decreased after water application, but more rapidly with the transitional attack. Local ambient temperatures for inside-operation firefighters were higher than for other positions (on average ~10-30 °C higher). Rapid elevations in skin temperature were found for all job assignments other than outside command. Neck skin temperatures for inside-attack firefighters were ~0.5 °C lower when the transitional tactic was employed. Significantly higher core temperatures were measured for the outside ventilation and overhaul positions than for the inside positions (~0.6-0.9 °C). Firefighters working at all fireground positions must be monitored and relieved based on work intensity and duration. Practitioner Summary: Testing was done to characterise the thermal burden experienced by firefighters in different job assignments responding to controlled residential fires (with typical furnishings) using two tactics. Ambient, skin, and core temperatures varied with job assignment and tactic employed, with rapid elevations in core temperature in many roles.
[Predictors of intention to leave the nursing profession in two Italian hospitals].
Cortese, Claudio Giovanni
2013-01-01
Nursing shortage is acknowledged as a worldwide issue: understanding the factors that foster nurses' intention to leave the profession (ITL) is therefore essential to lessening its impact. The present study aims at providing insight into the factors influencing nurses' ITL, taking into account personal characteristics, context characteristics, and job satisfaction factors. The study was conducted in two hospitals of Northern Italy, via a questionnaire administered to all employed nurses; 746 questionnaires were distributed, of which 525 (70.4%) were returned completed. The questionnaire consisted of four sections: personal characteristics, context characteristics, job satisfaction (44 items of the Italian adaptation of Stamps' Index of Work Satisfaction), and ITL (single item). Descriptive statistics, reliability analysis, univariate analysis, and a multiple logistic regression model were carried out using PASW 18. Higher job satisfaction was registered for Interaction with nurses, Professional status, and Autonomy; on the other hand, dissatisfaction was registered for Pay and Job requirements; 14.6% of respondents reported ITL. Finally, low job satisfaction for Professional status, Pay, and Work organization policies, age < 30 years, and a part-time schedule were associated with higher ITL. The study identified various predictors of ITL, underscoring the importance of regular monitoring of ITL. To limit ITL, organizations should invest in key job satisfaction factors, promote the organizational integration of newcomers, and prevent the escalation of work-family and work-life conflict.
Narisada, Akihiko; Hasegawa, Tomomi; Nakahigashi, Maki; Hirobe, Takaaki; Ikemoto, Tatsunori; Ushida, Takahiro; Kobayashi, Fumio
2015-05-01
Job strain, defined as a combination of high job demands and low job control, has been reported to elevate blood pressure (BP) during work. Meanwhile, a recent experimental study showed that ghrelin blunted the BP response to such mental stress. In the present study, we examined the hypothesis that des-acyl ghrelin may have a beneficial effect on worksite BP by modulating the BP response to work-related mental stress, i.e., job strain. Subjects were 34 overweight/obese male day-shift workers (mean age 41.7 ± 6.7 years). No subjects had received any anti-hypertensive medication. Twenty-four-hour ambulatory BP monitoring was performed, with readings every 30 min, on a regular working day. The average BP was calculated for Work BP, Morning BP, and Home BP. Job strain was assessed using the short version of the Japanese Job Content Questionnaire. Des-acyl ghrelin showed significant inverse correlations with almost all BPs except Morning SBP, Morning DBP, and Home DBP. In multiple regression analysis, des-acyl ghrelin inversely correlated with Work SBP after adjusting for confounding factors. Des-acyl ghrelin was also negatively associated with BP changes from Sleep to Morning, Sleep to Work, and Sleep to Home. Des-acyl ghrelin was inversely associated with worksite BP, suggesting a unique beneficial effect of des-acyl ghrelin on worksite BP in overweight/obese male day-shift workers.
The Effect of Military Service and Skill Transferability on the Civilian Earnings of Veterans.
1998-03-01
Petroff, Steven J. (Naval Postgraduate School, Monterey, CA 93943-5000). ...as well as job performance. The impact of children on the post-service earnings of veterans has been examined in several studies. Hirsch and Mehay...
Coble, Joseph B; Stewart, Patricia A; Vermeulen, Roel; Yereb, Daniel; Stanevich, Rebecca; Blair, Aaron; Silverman, Debra T; Attfield, Michael
2010-10-01
Air monitoring surveys were conducted between 1998 and 2001 at seven non-metal mining facilities to assess exposure to respirable elemental carbon (REC), a component of diesel exhaust (DE), for an epidemiologic study of miners exposed to DE. Personal exposure measurements were taken on workers in a cross-section of jobs located underground and on the surface. Air samples taken to measure REC were also analyzed for respirable organic carbon (ROC). Concurrent measurements to assess exposure to nitric oxide (NO) and nitrogen dioxide (NO₂), two gaseous components of DE, were also taken. The REC measurements were used to develop quantitative estimates of average exposure levels by facility, department, and job title for the epidemiologic analysis. Each underground job was assigned to one of three sets of exposure groups, from specific to general: (i) standardized job titles, (ii) groups of standardized job titles combined based on the percentage of time in the major underground areas, and (iii) larger groups based on similar area carbon monoxide (CO) air concentrations. Surface jobs were categorized based on their use of diesel equipment and proximity to DE. A total of 779 full-shift personal measurements were taken underground. The average REC exposure levels for underground jobs with five or more measurements ranged from 31 to 58 μg m⁻³ at the facility with the lowest average exposure levels and from 313 to 488 μg m⁻³ at the facility with the highest average exposure levels. The average REC exposure levels for surface workers ranged from 2 to 6 μg m⁻³ across the seven facilities. There was much less contrast between surface and underground workers in ROC exposure levels than in REC levels, both within each facility and across the facilities. The average ROC levels underground ranged from 64 to 195 μg m⁻³, while on the surface the average ROC levels ranged from 38 to 71 μg m⁻³ by facility, an ∼2- to 3-fold difference. 
The average NO and NO₂ levels underground ranged from 0.20 to 1.49 parts per million (ppm) and from 0.10 to 0.60 ppm, respectively, and were ∼10 times higher than levels on the surface, which ranged from 0.02 to 0.11 ppm and from 0.01 to 0.06 ppm, respectively. The ROC, NO, and NO₂ concentrations underground were correlated with the REC levels (r = 0.62, 0.71, and 0.62, respectively). A total of 80% of the underground jobs were assigned an exposure estimate based on measurements taken for the specific job title or for other jobs with a similar percentage of time spent in the major underground work areas. The average REC exposure levels by facility were from 15 to 64 times higher underground than on the surface. The large contrast in exposure levels measured underground versus on the surface, along with the differences between the mining facilities and between underground jobs within the facilities, resulted in a wide distribution in the exposure estimates for evaluation of exposure-response relationships in the epidemiologic analyses.
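The three-tier exposure-group assignment described above can be sketched as a simple fallback lookup: use the most specific group that has measurements, otherwise fall back to the next tier. This Python fragment is purely illustrative (not the study's actual code); the job titles, group names, and REC values are invented:

```python
# Illustrative fallback: (i) job title -> (ii) time-in-area group -> (iii) CO group.

def assign_estimate(job, title_means, area_means, co_means):
    """Return the average REC estimate from the most specific tier available."""
    for tier in (title_means, area_means, co_means):
        key = job.get(tier["key"])
        if key in tier["means"]:
            return tier["means"][key]
    return None  # no tier has measurements for this job

# Hypothetical group means (micrograms REC per cubic meter)
title_means = {"key": "title", "means": {"driller": 310.0}}
area_means = {"key": "area_group", "means": {"mostly_face": 290.0}}
co_means = {"key": "co_group", "means": {"high_co": 270.0}}

jobs = [
    {"title": "driller", "area_group": "mostly_face", "co_group": "high_co"},
    {"title": "mechanic", "area_group": "mostly_face", "co_group": "high_co"},
    {"title": "clerk", "area_group": "haulage", "co_group": "high_co"},
]
estimates = [assign_estimate(j, title_means, area_means, co_means) for j in jobs]
```

The first job matches on its own title, the second falls back to its area group, and the third to its CO group, mirroring the 80%/20% split between tier-specific and general assignments reported above.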
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Hearings. 658.417 Section 658.417 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System State Agency JS Complaint System § 658.417...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-16
... monitoring equipment. Information on Decision: Information on the final decision for this transaction will be... information which would jeopardize jobs in the United States by supplying information that competitors could...
20 CFR 658.425 - Decision of DOL Administrative Law Judge.
Code of Federal Regulations, 2010 CFR
2010-04-01
... ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System Federal JS Complaint... jurisdiction to consider the validity or constitutionality of JS regulations or of the Federal statutes under...
Matsuzuki, Hiroe; Haruyama, Yasuo; Muto, Takashi; Aikawa, Kaoru; Ito, Akiyoshi; Katamoto, Shizuo
2013-03-01
Many kitchen work environments are considered to be severe; however, when kitchens are reformed or work systems are changed, the question of how this influences kitchen workers and environments arises. The purpose of this study is to examine whether there was a change in workload and job-related stress for workers after a workplace environment and work system change in a hospital kitchen. The study design is a pre-post comparison of a case, performed in 2006 and 2008. The air temperature and humidity in the workplace were measured. Regarding workload, work hours, fluid loss, heart rate, and amount of activity [metabolic equivalents of task (METs)] of 7 and 8 male subjects pre- and post-reform, respectively, were measured. Job-related stress was assessed using a self-reporting anonymous questionnaire for 53 and 45 workers pre- and post-system change, respectively. After the reform and work system change, the kitchen space had increased and air-conditioners had been installed. The workplace environment changes included the introduction of temperature-controlled wagons whose operators were limited to male workers. The kitchen air temperature decreased, so fluid loss in the subjects decreased significantly. However, heart rate and METs in the subjects increased significantly. As for job-related stress, although workplace environment scores improved, male workers' total job stress score increased. These results suggest that not only the workplace environment but also the work system influenced the workload and job stress on workers.
Chien, Tsair-Wei; Lai, Wen-Pin; Wang, Hsien-Yi; Hsu, Sen-Yen; Castillo, Roberto Vasquez; Guo, How-Ran; Chen, Shih-Chung; Su, Shih-Bin
2011-06-18
For hospital accreditation and health promotion reasons, we examined whether the 22-item Job Content Questionnaire (JCQ) could be applied to evaluate the job strain of individual hospital employees, and we determined the number of factors extracted from the JCQ. Additionally, we developed an Excel module of a self-evaluation diagnostic system for consultation with experts. To make job strain assessment easier and quicker, the Rasch rating scale model was used to analyze data from 1,644 hospital employees enrolled in a 2008 job strain survey. We determined whether the 22-item JCQ could evaluate the job strain of individual employees at work sites. Items responding to specific groups' occupational hazards causing job stress were investigated using the skewness coefficient with its 95% CI in item-by-item analyses. The 22 questionnaire items were found to load on five factors. The prevalence rate of Chinese hospital workers with high job strain was 16.5%. Graphical representations of four quadrants, item-by-item bar chart plots, and skewness 95% CI comparisons generated in Excel can help employers and consultants of an organization focus on a small number of key areas of concern for each worker in job strain.
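The item-by-item skewness comparison described above can be sketched in a few lines. This is an illustrative Python version (not the authors' Excel module), using the adjusted Fisher-Pearson skewness and its common large-sample standard error; the Likert responses below are hypothetical:

```python
import math

def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness coefficient."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    g1 = m3 / m2 ** 1.5
    return math.sqrt(n * (n - 1)) / (n - 2) * g1

def skew_ci95(xs):
    """Approximate 95% CI: skewness +/- 1.96 * SE(skewness)."""
    n = len(xs)
    se = math.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    g = skewness(xs)
    return g - 1.96 * se, g + 1.96 * se

# 100 hypothetical 1-4 Likert responses to one JCQ-style item
item_scores = [1, 1, 1, 2, 2, 2, 2, 3, 3, 4] * 10
lo, hi = skew_ci95(item_scores)
```

Comparing such per-item intervals across worker groups is the kind of item-by-item screening the abstract refers to; a CI entirely above (or below) zero flags an item with clearly asymmetric responses.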
Cahalin, Lawrence P
2009-01-01
Job strain is the psychological and physiological response to a lack of control or support in the work environment. It appears to be an important risk factor for continued employment throughout the lifespan. Reducing job strain earlier in a worker's life has the potential to yield substantial health benefits throughout that worker's life. Early screening for job strain should be implemented in known high-risk or high-strain jobs. This is particularly important since there are fewer younger workers entering the labor force and there will be a growing need for older workers to remain in the workforce. Furthermore, healthier workers will require less medical care and are likely to work longer if they are willing and able. Healthier older workers who are willing and able to work longer will defer receipt of retirement benefits while continuing to pay into the Social Security System. Further investigation is needed of older individuals' (1) willingness and motivation to work past the normal retirement age, (2) career and employment security, skills development, and reconciliation of working and non-working life, and (3) job strain and the effects of reducing it. The current job strain literature has been extended to the Social Security arena and suggests that reducing job strain has the potential to help eliminate the Social Security drain by increasing older-worker labor force retention.
Enlisted Personnel Individualized Career System (EPICS) Test and Evaluation
1984-01-01
The EPICS program, which was developed using an integrated personnel systems approach (IPSA), delays formal school training until after personnel have...received shipboard on-job training complemented by job performance aids (JPAs). Early phases of the program, which involved developing the IPSA EPICS...detailed description of the conception and development of the EPICS IPSA model, the execution of the front-end job design analyses, JPA and instructional
Production Experiences with the Cray-Enabled TORQUE Resource Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, Matthew A; Maxwell, Don E; Beer, David
High performance computing resources utilize batch systems to manage the user workload. Cray systems are uniquely different from typical clusters due to Cray's Application Level Placement Scheduler (ALPS). ALPS manages binary transfer, job launch and monitoring, and error handling. Batch systems require special support to integrate with ALPS using an XML protocol called BASIL. Previous versions of Adaptive Computing's TORQUE and Moab batch suite integrated with ALPS from within Moab, using Perl scripts to interface with BASIL. This would occasionally lead to problems when the components became unsynchronized. Version 4.1 of the TORQUE Resource Manager introduced new features that allow it to integrate directly with ALPS using BASIL. This paper describes production experiences at Oak Ridge National Laboratory using the new TORQUE software versions, as well as ongoing and future work to improve TORQUE.
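As a rough illustration of the kind of XML exchange involved, the sketch below parses a simplified, BASIL-like inventory response to count available compute nodes. The element and attribute names are invented for the example and do not reproduce the real BASIL schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified inventory response a batch system might receive
# over an ALPS/BASIL-style XML protocol (schema invented for illustration).
response = """\
<BasilResponse>
  <Inventory>
    <Node node_id="40" state="UP"/>
    <Node node_id="41" state="UP"/>
    <Node node_id="42" state="DOWN"/>
  </Inventory>
</BasilResponse>"""

root = ET.fromstring(response)
# A scheduler would only consider nodes reported as up when placing jobs.
up_nodes = [n.get("node_id") for n in root.iter("Node") if n.get("state") == "UP"]
```

Keeping this parsing inside the resource manager itself (rather than in external Perl glue) is the design change the paper attributes to TORQUE 4.1, since one component then holds a single, consistent view of the inventory.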
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, T.
2014-08-29
Large-scale systems like Sequoia allow running small numbers of very large (1M+ process) jobs, but their resource managers and schedulers do not allow large numbers of small (4, 8, 16, etc.) process jobs to run efficiently. Cram is a tool that allows users to launch many small MPI jobs within one large partition, and to overcome the limitations of current resource management software for large ensembles of jobs.
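Conceptually, a tool like Cram maps the ranks of one large MPI allocation onto many small virtual jobs, each with its own job id and job-local rank. The Python sketch below shows only that rank-mapping idea; it is not Cram's implementation and omits MPI entirely:

```python
# Conceptual sketch: carve one big allocation into many small virtual jobs.

def virtual_job(global_rank, ranks_per_job):
    """Return (job_id, local_rank) for a rank in the large partition."""
    return divmod(global_rank, ranks_per_job)

# e.g. the first 32 ranks of a large partition carved into 8-process jobs
mapping = [virtual_job(r, 8) for r in range(32)]
```

Under this mapping, ranks 0-7 form virtual job 0, ranks 8-15 form virtual job 1, and so on; in a real ensemble tool each virtual job would additionally get its own communicator, working directory, and input deck.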
Aptitude and Trait Predictors of Manned and Unmanned Aircraft Pilot Job Performance
2016-04-22
actually fly RPAs. To address this gap, the present study evaluated pre-accession trait (Big Five personality domains) and aptitude (spatial...knowledge, and personality traits that predict successful job performance for manned aircraft pilots also predict successful job performance for RPA... Keywords: aptitude and personality traits, job performance, remotely-piloted aircraft, unmanned aircraft systems
Integrated Job Skills and Reading Skills Training System. Final Report.
ERIC Educational Resources Information Center
Sticht, Thomas G.; And Others
An exploratory study was conducted to evaluate the feasibility of determining the reading demands of navy jobs, using a methodology that identifies both the type of reading tasks performed on the job and the level of general reading skill required to perform that set of reading tasks. Next, a survey was made of the navy's job skills training…
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false General. 532.601 Section 532.601 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Job Grading System § 532.601 General. The Office of Personnel Management shall establish a job grading system...
20 CFR 658.424 - Federal hearings.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Federal hearings. 658.424 Section 658.424 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System Federal Js Complaint System § 658.424 Federal...
Wang, Yean; Guo, Yingqi; Zeng, Shouchui
2018-04-01
Social work education in China is undergoing far-reaching development. However, an important issue, low professional commitment, has been identified. Why do social work graduates, especially master's-level graduates, take jobs unrelated to social work? To answer this question, it is important to take into account that the professionalization of social work is happening unevenly across China as a result of uneven social and economic development. Models used in past research do not consider the possibility that low intention for social work jobs and its potential predictors may vary across regions. To address this problem, Geographic Information Systems software was adopted to explore the varying degrees of social work graduates' job intention and its predictors across China, and the association between job intention and predictors at both the national and regional levels. The authors found substantial geographic variation in the predictors of social work graduates' job intention across regions. Their findings also suggest some heterogeneity in the association between job intention and specific correlates that would be masked in a traditional nationwide model. Policymakers aiming to improve the job intention of social work graduates should consider regional variation as part of their approach.
Park, Soo Kyung; Rhee, Min-Kyoung; Barak, Michàlle Mor
2016-01-01
Although nonregular workers experience higher job stress, poorer mental health, and different job stress dimensions relative to regular workers, little is known about which job stress dimensions are associated with poor mental health among nonregular workers. This study investigated the association between job stress dimensions and mental health among Korean nonregular workers. Data were collected from 333 nonregular workers in Seoul and Gyeonggi Province, and logistic regression analysis was conducted. Results of the study indicated that high job insecurity and lack of rewards had stronger associations with poor mental health than other dimensions of job stress when controlling for sociodemographic and psychosocial variables. It is important for the government and organizations to improve job security and reward systems to reduce job stress among nonregular workers and ultimately alleviate their mental health issues.
Chu, Li-Chuan
2017-07-01
To examine the relationships of providing compassion at work with job performance and mental health, as well as to identify the role of interpersonal relationship quality in moderating these relationships. This study adopted a two-stage survey completed by 235 registered nurses employed by hospitals in Taiwan. All hypotheses were tested using hierarchical regression analyses. The results show that providing compassion is an effective predictor of job performance and mental health, whereas interpersonal relationship quality can moderate the relationships of providing compassion with job performance and mental health. When nurses are frequently willing to listen, understand, and help their suffering colleagues, the enhancement engendered by providing compassion can improve the provider's job performance and mental health. Creating high-quality relationships in the workplace can strengthen the positive benefits of providing compassion. Motivating employees to spontaneously exhibit compassion is crucial to an organization. Hospitals can establish value systems, belief systems, and cultural systems that support a compassionate response to suffering. In addition, nurses can internalize altruistic belief systems into their own personal value systems through a long process of socialization in the workplace. © 2017 Sigma Theta Tau International.
Optimizing Optics For Remotely Controlled Underwater Vehicles
NASA Astrophysics Data System (ADS)
Billet, A. B.
1984-09-01
The past decade has shown a dramatic increase in the use of unmanned tethered vehicles in worldwide marine fields. These vehicles are used for inspection, debris removal and object retrieval. With advanced robotic technology, remotely operated vehicles (ROVs) are now able to perform a variety of jobs previously accomplished only by divers. The ROVs can be used at greater depths and for riskier jobs, and safety to the diver is increased, freeing him for safer, more cost-effective tasks requiring human capabilities. In addition, ROV operation becomes more cost effective as work depth increases. At 1000 feet a diver's 10 minutes of work can cost over $100,000 including support personnel, while an ROV's operational cost might be 1/20 of the diver cost per day, because the cost of ROV operation does not change with depth as it does for divers. In ROV operation the television lens must be as good as the human eye, with better light gathering capability than the human eye. The RCV-150 system is an example of these advanced technology vehicles. To meet requirements for maneuverability and unusual inspection tasks, a responsive, high performance, compact vehicle was developed. The RCV-150 viewing subsystem consists of a television camera, lights, and topside monitors. The vehicle uses a low light level Newvicon television camera. The camera is equipped with a power-down iris that closes for burn protection when the power is off. The camera can pan ±50 degrees and tilt ±85 degrees on command from the surface. Four independently controlled 250 watt quartz halogen flood lamps illuminate the viewing area as required; in addition, two 250 watt spotlights are fitted. A controlled nine inch CRT monitor provides real time camera pictures for the operator. The RCV-150 vehicle component system consists of the vehicle structure, the vehicle electronics, and the hydraulic system which powers the thruster assemblies and the manipulator.
For this vehicle, a light weight, high response hydraulic system was developed in a very small package.
Expert Systems in Education and Training: Automated Job Aids or Sophisticated Instructional Media?
ERIC Educational Resources Information Center
Romiszowski, Alexander J.
1987-01-01
Describes the current status and limitations of expert systems, and explores the possible applications of such systems in education and training. The use of expert systems as tutors, as job aids, and as a vehicle for students to develop their own expert systems on specific topics is discussed. (40 references) (CLB)
Majoring in Information Systems: Reasons Why Students Select (or Not) Information Systems as a Major
ERIC Educational Resources Information Center
Snyder, Johnny; Slauson, Gayla Jo
2014-01-01
Filling the pipeline for information systems workers is critical in the information era. Projected growth rates for jobs requiring information systems expertise are significantly higher than the projected growth rates for other jobs. Why then do relatively few students choose to major in information systems? This paper reviews survey results from…
20 CFR 658.423 - Handling of other complaints by the Regional Administrator.
Code of Federal Regulations, 2010 CFR
2010-04-01
... LABOR ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System Federal Js... office receives a JS-related complaint and the appropriate official determines that the nature and scope...
Environmental and biological monitoring for lead exposure in California workplaces.
Rudolph, L; Sharp, D S; Samuels, S; Perkins, C; Rosenberg, J
1990-01-01
Patterns of environmental and biological monitoring for lead exposure were surveyed in lead-using industries in California. Employer self-reporting indicates a large proportion of potentially lead-exposed workers have never participated in a monitoring program. Only 2.6 percent of facilities have done environmental monitoring for lead, and only 1.4 percent have routine biological monitoring programs. Monitoring practices vary by size of facility, with higher proportions in industries in which larger facilities predominate. Almost 80 percent of battery manufacturing employees work in job classifications which have been monitored, versus only 1 percent of radiator-repair workers. These findings suggest that laboratory-based surveillance for occupational lead poisoning may seriously underestimate the true number of lead poisoned workers and raise serious questions regarding compliance with key elements of the OSHA Lead Standard. PMID:2368850
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may still yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling, and fixture (or, more generally, resource) requirements.
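The GA loop described above (a population of candidate job orders, fitness-based selection, probabilistic recombination) can be sketched in Python. The durations, precedence pairs, and penalty weight below are illustrative, not Boeing's actual PPS/POIC data; fitness here is total flowtime plus a penalty per precedence violation:

```python
import random

# Hypothetical example data: job durations and precedence pairs
# ((a, b) means job a must be scheduled before job b).
DURATIONS = {0: 3, 1: 2, 2: 4, 3: 1, 4: 2}
PRECEDENCE = [(0, 2), (1, 3)]

def fitness(seq):
    """Lower is better: total flowtime plus a heavy penalty per
    precedence violation."""
    position = {job: i for i, job in enumerate(seq)}
    violations = sum(1 for a, b in PRECEDENCE if position[a] > position[b])
    t, flowtime = 0, 0
    for job in seq:
        t += DURATIONS[job]
        flowtime += t
    return flowtime + 1000 * violations

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(seq, rate=0.2):
    """Occasionally swap two jobs to keep diversity in the population."""
    if random.random() < rate:
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

def evolve(generations=100, pop_size=30):
    jobs = list(DURATIONS)
    pop = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

Because the best individual always survives, the search can be stopped at any generation and still return the best sequence found so far, which is the anytime property the abstract mentions.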
Monitoring performance of a highly distributed and complex computing infrastructure in LHCb
NASA Astrophysics Data System (ADS)
Mathe, Z.; Haen, C.; Stagni, F.
2017-10-01
In order to ensure an optimal performance of the LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the usage of SQL relational databases is far from optimal. Therefore, within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB); as a result of this study we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, where it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the general DIRAC framework, as well as an overview of the advantages of the pipeline aggregation used for creating dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
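The dynamic bucketing the abstract attributes to ElasticSearch's pipeline aggregations can be illustrated with a pure-Python sketch (no Elasticsearch cluster required). The function below mimics a date-histogram aggregation whose interval is computed from the requested span, so any time range collapses into at most a fixed number of averaged buckets; the function name and bucket-count default are assumptions for illustration:

```python
import math
from collections import defaultdict

def dynamic_buckets(points, target_buckets=8):
    """Average (epoch_seconds, value) samples into at most `target_buckets`
    equal-width time buckets, the way a date-histogram aggregation with a
    computed interval would."""
    if not points:
        return []
    times = [t for t, _ in points]
    start, span = min(times), max(times) - min(times)
    # Width chosen so that span // width never exceeds target_buckets - 1.
    width = max(1, math.ceil((span + 1) / target_buckets))
    sums = defaultdict(lambda: [0.0, 0])   # bucket key -> [sum, count]
    for t, v in points:
        key = (t - start) // width
        sums[key][0] += v
        sums[key][1] += 1
    return [(start + k * width, s / n) for k, (s, n) in sorted(sums.items())]
```

In the real system the same reduction happens server-side in ElasticSearch, so only the bucketed averages, not the raw observables, travel to Kibana or the DIRAC web interface.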
Matching People and Jobs: Value Systems and Employee Selection.
ERIC Educational Resources Information Center
Heflich, Debra L.
1981-01-01
Offers strategies, based on six value systems, to reduce employee turnover. Maintains that understanding the value systems of people as they relate to jobs is the key to improving the selection process, and that employees should be chosen in accordance with how well their value systems match their work and work environments.
FermiGrid—experience and future plans
NASA Astrophysics Data System (ADS)
Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.
2008-07-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
FermiGrid - experience and future plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chadwick, K.; Berman, E.; Canal, P.
2007-09-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
Jobs masonry in LHCb with elastic Grid Jobs
NASA Astrophysics Data System (ADS)
Stagni, F.; Charpentier, Ph
2015-12-01
In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit implemented by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the jobs' execution. In many modern interwares, jobs are actually executed by pilot jobs, which can use the whole available time to run multiple consecutive jobs. If at some point the time remaining in a pilot is too short for the execution of any job, the pilot has to be released, even though the remaining time could have been used efficiently by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit the computing capabilities available to a pilot, even on resources with limited time capabilities, by adding elasticity to production Monte Carlo (MC) simulation jobs. With our approach, independently of the time available, LHCbDIRAC always has the possibility to execute an MC job, whose length is adapted to the available amount of time: the same job, running on different computing resources with different time limits, will therefore produce different numbers of events. The decision on the number of events to be produced is made just in time at the start of the job, when the capabilities of the resource are known. In order to know how many events an MC job will be instructed to produce, LHCbDIRAC requires just three values: the CPU-work per event for that type of job, the power of the machine it is running on, and the time left for the job before it is killed. Knowing these values, we can estimate the number of events the job will be able to simulate within the available CPU time.
This paper will demonstrate that, using this simple but effective solution, LHCb manages to make more efficient use of the available resources and can easily exploit new types of resources. One example is resources provided by batch queues, where low-priority MC jobs can be used as "masonry" jobs in multi-job pilots. A second example is opportunistic resources with limited available time.
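The just-in-time decision described above reduces to one estimate from the three named values. A minimal sketch follows; the parameter names and the safety margin for startup/upload overhead are illustrative assumptions, and LHCbDIRAC's actual logic and benchmark units may differ:

```python
def events_to_produce(cpu_work_per_event, machine_power, seconds_left,
                      margin=0.8):
    """Estimate how many MC events fit in the remaining batch slot.

    cpu_work_per_event: normalized CPU-work per event (benchmark units x s)
    machine_power:      normalized power of the worker node (benchmark units)
    seconds_left:       wall-clock seconds before the job would be killed
    margin:             fraction of the slot kept usable, reserving the rest
                        for startup and output upload (hypothetical value)
    """
    usable_work = seconds_left * machine_power * margin
    return max(0, int(usable_work // cpu_work_per_event))
```

The same job script then runs the simulation for exactly the returned number of events, so a short slot yields a small but still useful production job instead of a killed one.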
Software to Control and Monitor Gas Streams
NASA Technical Reports Server (NTRS)
Arkin, C.; Curley, Charles; Gore, Eric; Floyd, David; Lucas, Damion
2012-01-01
This software package interfaces with various gas stream devices such as pressure transducers, flow meters, flow controllers, valves, and analyzers such as a mass spectrometer. The software provides excellent user interfacing with various windows that provide time-domain graphs, valve state buttons, priority- colored messages, and warning icons. The user can configure the software to save as much or as little data as needed to a comma-delimited file. The software also includes an intuitive scripting language for automated processing. The configuration allows for the assignment of measured values or calibration so that raw signals can be viewed as usable pressures, flows, or concentrations in real time. The software is based on software used in two safety systems for shuttle processing and one volcanic gas analysis system. Mass analyzers typically have very unique applications and vary from job to job. As such, software available on the market is usually inadequate or targeted at a specific application (such as EPA methods). The goal was to develop powerful software that could be used with prototype systems. The key problem was to generalize the software to be easily and quickly reconfigurable. At Kennedy Space Center (KSC), the prior art consists of two primary methods. The first method was to utilize LabVIEW and a commercial data acquisition system. This method required rewriting code for each different application and only provided raw data. To obtain data in engineering units, manual calculations were required. The second method was to utilize one of the embedded computer systems developed for another system. This second method had the benefit of providing data in engineering units, but was limited in the number of control parameters.
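The calibration step that turns raw signals into engineering units can be sketched as a per-channel linear map driven by configuration data. The channel names, slopes, offsets, and units below are hypothetical, not values from the KSC system:

```python
# Hypothetical calibration table: channel -> (slope, offset, unit).
# Engineering value = slope * raw_signal + offset, as a linear
# calibration assigned in the configuration would compute it.
CALIBRATION = {
    "pt01": (0.25, -12.5, "psia"),   # pressure transducer
    "fm01": (2.0, 0.0, "slpm"),      # flow meter
}

def to_engineering(channel, raw):
    """Convert a raw signal to engineering units for the given channel."""
    slope, offset, unit = CALIBRATION[channel]
    return slope * raw + offset, unit
```

Keeping the map in configuration rather than code is what makes such a package quickly reconfigurable for a new prototype gas stream without rewriting the acquisition logic.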
Hoboubi, Naser; Choobineh, Alireza; Kamari Ghanavati, Fatemeh; Keshavarzi, Sareh; Akbar Hosseini, Ali
2017-03-01
Job stress and job satisfaction are important factors affecting workforce productivity. This study was carried out to investigate the job stress, job satisfaction, and workforce productivity levels, to examine the effects of job stress and job satisfaction on workforce productivity, and to identify factors associated with productivity decrement among employees of an Iranian petrochemical industry. In this study, 125 randomly selected employees of an Iranian petrochemical company participated. The data were collected using the demographic questionnaire, Osipow occupational stress questionnaire to investigate the level of job stress, Job Descriptive Index to examine job satisfaction, and Hersey and Goldsmith questionnaire to investigate productivity in the study population. The levels of employees' perceived job stress and job satisfaction were moderate-high and moderate, respectively. Also, their productivity was evaluated as moderate. Although the relationship between job stress and productivity indices was not statistically significant, the positive correlation between job satisfaction and productivity indices was statistically significant. The regression modeling demonstrated that productivity was significantly associated with shift schedule, the second and the third dimensions of job stress (role insufficiency and role ambiguity), and the second dimension of job satisfaction (supervision). Corrective measures are necessary to improve the shift work system. "Role insufficiency" and "role ambiguity" should be improved and supervisor support must be increased to reduce job stress and increase job satisfaction and productivity.
10 CFR 851.21 - Hazard identification and assessment.
Code of Federal Regulations, 2013 CFR
2013-01-01
.... Procedures must include methods to: (1) Assess worker exposure to chemical, physical, biological, or safety workplace hazards through appropriate workplace monitoring; (2) Document assessment for chemical, physical... hazards; (6) Perform routine job activity-level hazard analyses; (7) Review site safety and health...
10 CFR 851.21 - Hazard identification and assessment.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... Procedures must include methods to: (1) Assess worker exposure to chemical, physical, biological, or safety workplace hazards through appropriate workplace monitoring; (2) Document assessment for chemical, physical... hazards; (6) Perform routine job activity-level hazard analyses; (7) Review site safety and health...
Software systems for operation, control, and monitoring of the EBEX instrument
NASA Astrophysics Data System (ADS)
Milligan, Michael; Ade, Peter; Aubin, François; Baccigalupi, Carlo; Bao, Chaoyun; Borrill, Julian; Cantalupo, Christopher; Chapman, Daniel; Didier, Joy; Dobbs, Matt; Grainger, Will; Hanany, Shaul; Hillbrand, Seth; Hubmayr, Johannes; Hyland, Peter; Jaffe, Andrew; Johnson, Bradley; Kisner, Theodore; Klein, Jeff; Korotkov, Andrei; Leach, Sam; Lee, Adrian; Levinson, Lorne; Limon, Michele; MacDermid, Kevin; Matsumura, Tomotake; Miller, Amber; Pascale, Enzo; Polsgrove, Daniel; Ponthieu, Nicolas; Raach, Kate; Reichborn-Kjennerud, Britt; Sagiv, Ilan; Tran, Huan; Tucker, Gregory S.; Vinokurov, Yury; Yadav, Amit; Zaldarriaga, Matias; Zilic, Kyle
2010-07-01
We present the hardware and software systems implementing autonomous operation, distributed real-time monitoring, and control for the EBEX instrument. EBEX is a NASA-funded balloon-borne microwave polarimeter designed for a 14 day Antarctic flight that circumnavigates the pole. To meet its science goals the EBEX instrument autonomously executes several tasks in parallel: it collects attitude data and maintains pointing control in order to adhere to an observing schedule; tunes and operates up to 1920 TES bolometers and 120 SQUID amplifiers controlled by as many as 30 embedded computers; coordinates and dispatches jobs across an onboard computer network to manage this detector readout system; logs over 3 GiB/hour of science and housekeeping data to an onboard disk storage array; responds to a variety of commands and exogenous events; and downlinks multiple heterogeneous data streams representing a selected subset of the total logged data. Most of the systems implementing these functions have been tested during a recent engineering flight of the payload, and have proven to meet the target requirements. The EBEX ground segment couples uplink and downlink hardware to a client-server software stack, enabling real-time monitoring and command responsibility to be distributed across the public internet or other standard computer networks. Using the emerging dirfile standard as a uniform intermediate data format, a variety of front end programs provide access to different components and views of the downlinked data products. This distributed architecture was demonstrated operating across multiple widely dispersed sites prior to and during the EBEX engineering flight.
Correlates of professional burnout in a sample of employees of cell and tissue banks in Poland.
Kamiński, Artur; Rozenek, Hanna; Banasiewicz, Jolanta; Wójtowicz, Stanisław; Błoński, Artur; Owczarek, Krzysztof
2018-02-03
The Job Demands-Resources model proposes that the development of burnout follows excessive job demands and a lack of job resources: job demands are predictive of feelings of exhaustion, and lack of job resources of disengagement from work. This pilot study investigated professional burnout and its correlates in employees of Polish cell and tissue banks, many of whom were involved in procurement and processing of tissues from deceased donors, as it was hypothesized that job burnout in this population might influence the effectiveness of the cell and tissue transplantation network in our country. This study utilized the Polish version of the Oldenburg Burnout Inventory (OLBI), which measures the two dimensions of burnout (exhaustion and disengagement), and the Psychosocial Working Conditions Questionnaire (PWC), a Polish instrument used for monitoring psychosocial stress at work. The study sample consisted of 31 participants. Their average time of working in a cell and tissue bank was 13.20 years. The majority of the PWC scale and subscale scores fell in the Average range, and the OLBI results for the Disengagement and Exhaustion scales were in the Average range. A number of correlations between Exhaustion or Disengagement and the PWC scales and subscales were detected, the majority of which fell in the Moderate range. In spite of the limited number of participants, the results of this pilot study are consistent with reports in the burnout literature. Among the detected correlates of professional burnout, it is job-related support that seems to be the most important factor that may influence the efficacy of the transplantation network in Poland.
Constraint monitoring in TOSCA
NASA Technical Reports Server (NTRS)
Beck, Howard
1992-01-01
The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.
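The two roles of constraint monitoring named above, identifying violations and tracking threats, can be sketched for precedence constraints. The schedule representation (operation to start/end times) and the slack-based notion of a threat are assumptions for illustration, not TOSCA's internal mechanism:

```python
def violated_precedences(schedule, precedences):
    """Return the precedence constraints a schedule already breaks.

    schedule:    mapping operation -> (start, end) times
    precedences: iterable of (before, after) operation pairs
    """
    return [(a, b) for a, b in precedences
            if schedule[a][1] > schedule[b][0]]

def threatened_precedences(schedule, precedences, slack=0):
    """Constraints not yet violated but with no more than `slack` time
    between the end of `a` and the start of `b` (a 'threat' that a small
    scheduling decision could turn into a violation)."""
    return [(a, b) for a, b in precedences
            if 0 <= schedule[b][0] - schedule[a][1] <= slack]
```

A scheduler can call the first function to direct repair (which decisions must be undone) and the second to inform new decisions (which constraints have no room left).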
Analysis and Processing the 3D-Range-Image-Data for Robot Monitoring
NASA Astrophysics Data System (ADS)
Kohoutek, Tobias
2008-09-01
Industrial robots are commonly used for physically stressful jobs in complex environments. Collisions with heavy, highly dynamic machines must be prevented, so the operational range has to be monitored precisely, reliably and meticulously. The advantage of the SwissRanger® SR-3000 is that it delivers intensity images and 3D information of the same scene simultaneously, which conveniently allows 3D monitoring. Automatic real-time collision prevention within the robot's working space is therefore possible by working with 3D coordinates.
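With 3D coordinates for both the robot and the monitored scene, the collision check reduces to a clearance test. A brute-force sketch follows (the function names and safety radius are illustrative; a real-time system would use spatial indexing rather than an all-pairs scan over range-camera points):

```python
import math

def min_clearance(robot_points, scene_points):
    """Smallest Euclidean distance between any robot point and any
    scene point, both given as (x, y, z) tuples."""
    return min(math.dist(p, q)
               for p in robot_points for q in scene_points)

def collision_alarm(robot_points, scene_points, safety_radius=0.5):
    """True if anything in the monitored scene is inside the safety zone."""
    return min_clearance(robot_points, scene_points) < safety_radius
```

Run every frame against the camera's 3D point cloud, such a test lets the controller stop or reroute the robot before the safety zone is breached.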
Models for interrupted monitoring of a stochastic process
NASA Technical Reports Server (NTRS)
Palmer, E.
1977-01-01
As computers are added to the cockpit, the pilot's job is changing from one of manually flying the aircraft to one of supervising computers which are doing navigation, guidance and energy management calculations as well as automatically flying the aircraft. In this supervisory role the pilot must divide his attention between monitoring the aircraft's performance and giving commands to the computer. Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies.
1990-09-01
This report describes logistics systems and GOCESS operation, including work order (WO) processing and job order (JO) processing. The routing of work orders and job orders to the Material Control Section is discussed separately. Figure 2 illustrates typical WO processing in a GOCESS operation; JO processing is similar, and Figure 3 illustrates typical JO processing.
Chang, Ching-Sheng; Chen, Su-Yueh; Lan, Yi-Ting
2012-11-21
No previous studies have addressed the integrated relationships among system quality, service quality, job satisfaction, and system performance; this study attempts to bridge that gap with an evidence-based study. The convenience sampling method was applied to the information system users of three hospitals in southern Taiwan. A total of 500 copies of the questionnaire were distributed, and 283 returned copies were valid, for a valid response rate of 56.6%. SPSS 17.0 and AMOS 17.0 (structural equation modeling) statistical software packages were used for data analysis and processing. The findings are as follows: system quality has a positive influence on service quality (γ11 = 0.55), job satisfaction (γ21 = 0.32), and system performance (γ31 = 0.47); service quality (β31 = 0.38) and job satisfaction (β32 = 0.46) positively influence system performance. It is thus recommended that hospital information offices and developers take enhancement of service quality and user satisfaction into consideration, in addition to placing emphasis on system quality and information quality, when designing, developing, or purchasing an information system, in order to improve the benefits and achievements generated by hospital information systems.
Ruttenber, A J; McCrea, J S; Wade, T D; Schonbeck, M F; LaMontagne, A D; Van Dyke, M V; Martyny, J W
2001-02-01
We outline methods for integrating epidemiologic and industrial hygiene data systems for the purpose of exposure estimation, exposure surveillance, worker notification, and occupational medicine practice. We present examples of these methods from our work at the Rocky Flats Plant--a former nuclear weapons facility that fabricated plutonium triggers for nuclear weapons and is now being decontaminated and decommissioned. The weapons production processes exposed workers to plutonium, gamma photons, neutrons, beryllium, asbestos, and several hazardous chemical agents, including chlorinated hydrocarbons and heavy metals. We developed a job exposure matrix (JEM) for estimating exposures to 10 chemical agents in 20 buildings for 120 different job categories over a production history spanning 34 years. With the JEM, we estimated lifetime chemical exposures for about 12,000 of the 16,000 former production workers. We show how the JEM database is used to estimate cumulative exposures over different time periods for epidemiological studies and to provide notification and determine eligibility for a medical screening program developed for former workers. We designed an industrial hygiene data system for maintaining exposure data for current cleanup workers. We describe how this system can be used for exposure surveillance and linked with the JEM and databases on radiation doses to develop lifetime exposure histories and to determine appropriate medical monitoring tests for current cleanup workers. We also present time-line-based graphical methods for reviewing and correcting exposure estimates and reporting them to individual workers.
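The JEM-based estimate of lifetime exposure is essentially intensity times duration, summed over a worker's job history. A minimal sketch follows; the matrix entries, agent names, and building codes are hypothetical illustrations, not Rocky Flats values, and a real JEM also indexes by calendar period:

```python
# Hypothetical JEM fragment: (building, job_category) -> agent -> intensity
# score per year worked.
JEM = {
    ("B771", "machinist"): {"TCE": 3.0, "beryllium": 1.0},
    ("B771", "clerk"):     {"TCE": 0.5},
}

def cumulative_exposure(work_history, jem):
    """Sum intensity x duration over a worker's job history.

    work_history: list of (building, job_category, years) entries
    Returns a dict mapping agent -> cumulative exposure score.
    """
    totals = {}
    for building, job, years in work_history:
        for agent, intensity in jem.get((building, job), {}).items():
            totals[agent] = totals.get(agent, 0.0) + intensity * years
    return totals
```

The same accumulation, restricted to a chosen time window, supports both epidemiological dose reconstruction and eligibility screening for medical monitoring.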
Wong, Carol A; Spence Laschinger, Heather K
2015-12-01
The frontline clinical manager role in healthcare is pivotal to the development of safe and healthy working conditions and optimal staff and patient care outcomes. However, in today's dynamic healthcare organizations managers face constant job demands from wider spans of control and complex role responsibilities but may not have adequate decisional authority to support effective work performance, resulting in unnecessary job strain. Prolonged job strain can lead to burnout, health complaints, and increased turnover intention. Yet, there is limited research that examines frontline manager job strain and its impact on their well-being and work outcomes. The substantial cost associated with replacing experienced managers calls attention to the need to address job strain in order to retain this valuable organizational asset. Using Karasek's Job Demands-Control theory of job strain, a model was tested examining the effects of frontline manager job strain on their burnout (emotional exhaustion and cynicism), organizational commitment and, ultimately, turnover intentions. Secondary analysis of data collected in an online cross-sectional survey of frontline managers was conducted using structural equation modeling. All 500 eligible frontline managers from 14 teaching hospitals in Ontario, Canada, were invited to participate and 159 responded, for a 32% response rate. Participants received an email invitation with a secure link for the online survey. Ethics approval was obtained from the university ethics board and the respective ethics review boards of the 14 organizations involved in the study. The model was tested using path analysis techniques within structural equation modeling with maximum likelihood estimation. The final model fit the data acceptably (χ² = 6.62, df = 4, p = .16, IFI = .99, CFI = .99, SRMR = .03, RMSEA = .06).
Manager job strain was significantly positively associated with burnout which contributed to both lower organizational commitment and higher turnover intention. Organizational commitment was also negatively associated with turnover intention and there was an additional direct positive relationship between job strain and turnover intention. Preliminary support was found for a model showing that manager job strain contributes to burnout, reduced organizational commitment and higher turnover intentions. Findings suggest that organizations need to monitor and address manager job strain by ensuring managers' role demands are reasonable and that they have the requisite decision latitude to balance role demands. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOT National Transportation Integrated Search
1996-12-09
The purpose of this Human Factors Job Aid is to serve as a desk reference for human factors integration during system acquisition. The first chapter contains an overview of the FAA human factors process in system acquisitions. The remaining eig...
Second Evaluation of Job Queuing/Scheduling Software. Phase 1
NASA Technical Reports Server (NTRS)
Jones, James Patton; Brickell, Cristy; Chancellor, Marisa (Technical Monitor)
1997-01-01
The recent proliferation of high performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, NAS compiled a requirements checklist for job queuing/scheduling software. Next, NAS evaluated the leading job management system (JMS) software packages against the checklist. A year has now elapsed since the first comparison was published, and NAS has repeated the evaluation. This report describes this second evaluation and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still lacking; however, definite progress has been made by the vendors to correct the deficiencies. This report is supplemented by a WWW interface to the data collected, to aid other sites in extracting the evaluation information on specific requirements of interest.
Simulation as a planning tool for job-shop production environment
NASA Astrophysics Data System (ADS)
Maram, Venkataramana; Nawawi, Mohd Kamal Bin Mohd; Rahman, Syariza Abdul; Sultan, Sultan Juma
2015-12-01
In this paper, we use the discrete event simulation software ARENA® as a planning tool for a job-shop production environment. We consider a job shop that produces three types of Jigs with different sequences of operations, in order to study and improve shop floor performance. The purpose of the study is to identify options for improving machine utilization and reducing job waiting times at bottleneck machines. First, the performance of the existing system was evaluated using ARENA®; improvement opportunities were then identified by analyzing the base system results, and the model was updated with the most economical options. The proposed new system outperforms the current base system: delay times at the paint shop improve by 816% with an increase from 2 to 3, and Jig cycle times are reduced by 92% (Jig1), 65% (Jig2), and 41% (Jig3); hence the new proposal was recommended.
State-Building: Job Creation, Investment Promotion, and the Provision of Basic Services
2010-09-01
gap between what practitioners need to know and what research can currently show with reasonable confidence. There is a further wide gap between what...IRAs), which selectively rehired staff and paid higher wages in return for monitored performance. There may be scope for research on whether variations in the costs of tax
Abusive User Policy | High-Performance Computing | NREL
First Incident: The user's ability to run new jobs or store new data will be suspended temporarily; once the user has acknowledged and participated in a remedy, the ability to run new jobs or store new data will be restored. Second Incident: Suspend running new jobs or storing new data; terminate jobs if necessary.
ERIC Educational Resources Information Center
Harlan, Sharon L., Ed.; Steinberg, Ronnie J., Ed.
This comprehensive review of the public system of occupational education and job training for women in the United States focuses on education and training for occupations that require less than a four-year college degree. Chapter 1, "Job Training for Women: The Problem in a Policy Context" (Harlan, Steinberg), sketches an outline of job training…
Evaluation of a Job Aid System for Combat Leaders: Rifle Platoon and Squad
1988-02-01
the Behavioral and Social Sciences, February 1988. Approved for public release: distribution...of difficulty, there are no effective, standardized job performance aids available to help the leader accomplish his job. A need therefore exists...and effective as a job aid for combat leaders. The evaluations suggest that most personnel who have seen and used the CLG are very much in favor
A History-based Estimation for LHCb job requirements
NASA Astrophysics Data System (ADS)
Rauschmayr, Nathalie
2015-12-01
The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it will be to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, such as expected runtime, is in the best case defined beforehand by the Production Manager, and otherwise fixed to arbitrary default values. LHCb's Workload Management System provides no mechanism to automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. This presents a major problem particularly in the context of multicore jobs, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for moving to multicore jobs is the reduction of the overall memory footprint; therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm is developed that makes history-based predictions. The aim is to learn over time how jobs' runtime and memory consumption evolve under changes in experiment conditions and software versions. It will be shown that the estimation can be notably improved if experiment conditions are taken into account.
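The history-based approach described above can be illustrated with a minimal sketch (not the actual LHCb code): past jobs are grouped by a feature tuple, here assumed to be application version and event type, and a new job's runtime is predicted as the mean over the matching history, with a global-mean fallback for unseen feature combinations.

```python
# Minimal history-based runtime estimator. The feature tuple and values are
# hypothetical illustrations of the kind of job features the paper studies.
from collections import defaultdict

class HistoryEstimator:
    def __init__(self):
        self.history = defaultdict(list)   # feature tuple -> observed runtimes
        self.all_runtimes = []             # global history for the fallback

    def record(self, features, runtime):
        """Feed back the observed runtime of a completed job."""
        self.history[features].append(runtime)
        self.all_runtimes.append(runtime)

    def predict(self, features):
        """Mean runtime over matching past jobs; global mean if unseen."""
        past = self.history.get(features)
        if past:
            return sum(past) / len(past)
        return sum(self.all_runtimes) / len(self.all_runtimes)

est = HistoryEstimator()
est.record(("v42", "MC_sim"), 3600.0)
est.record(("v42", "MC_sim"), 4000.0)
est.record(("v43", "MC_sim"), 5000.0)
print(est.predict(("v42", "MC_sim")))   # 3800.0 (mean of matching history)
print(est.predict(("v99", "other")))    # 4200.0 (global-mean fallback)
```

Because completed jobs are continuously fed back via `record`, the estimate drifts with changes in software versions and experiment conditions, which is the "learning over time" the abstract refers to.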
Magnavita, Nicola
2014-09-01
Violence at work (WV) is an important occupational hazard for health care workers (HCWs). A number of surveys addressing the causes and effects of WV have shown that it is associated with work-related stress. However, it is not clear what direction this relationship takes, that is, whether job strain facilitates aggression against HCWs or WV is the cause of job strain. From 2003 to 2009, HCWs from a public health care unit were asked to self-assess their level of work-related stress and to report aggression that occurred in the 12-month period preceding their routine medical examination. In 2009, physical and mental health and job satisfaction were also assessed. A total of 698 out of 723 HCWs (96.5%) completed the study. Job strain and lack of social support were predictors of the occurrence of nonphysical aggression during the ensuing year. HCWs who experienced WV reported high strain and low support at work in the following year. The experience of nonphysical violence and a prolonged state of strain and social isolation were significant predictors of psychological problems and bad health at follow-up. The relationship between work-related distress and WV is bidirectional. The monitoring of workers through questionnaires distributed before their periodic examination is a simple and effective way of studying WV and monitoring distress. The findings of the present study may facilitate the subsequent design of participatory intervention for the prevention of violence in healthcare facilities. This should always be accompanied by measures designed to reduce strain and improve social support. © 2014 Sigma Theta Tau International.
London, L.; Myers, J. E.
1998-01-01
RATIONALE: Job exposure matrices (JEMs) are widely used in occupational epidemiology, particularly when biological or environmental monitoring data are scanty. However, as with most exposure estimates, JEMs may be vulnerable to misclassification. OBJECTIVES: To estimate the long term exposure of farm workers based on a JEM developed for use in a study of the neurotoxic effects of organophosphates, and to evaluate the repeatability and validity of the JEM. METHODS: A JEM was constructed with secondary data from industry and expert opinion of the estimated agrichemical exposure within every possible job activity in the JEM, to weight job days for exposure to organophosphates. Cumulative lifetime and average intensity of organophosphate exposure were calculated for 163 pesticide applicators and 84 controls. Repeat questionnaires were given to 29 participants three months later to test repeatability of measurements. The ability of JEM based exposure to predict a known marker of organophosphate exposure was used to validate the JEM. RESULTS: Cumulative lifetime exposure, measured in kg of organophosphate, was significantly associated with erythrocyte cholinesterase concentrations (partial r2 = 5%; p < 0.01), controlled for a range of confounders. Repeatability in a subsample of 29 workers of the estimates of cumulative (Pearson's r = 0.67; 95% confidence interval (95% CI) 0.41 to 0.83) and average lifetime intensity of exposure (Pearson's r = 0.60; 95% CI 0.31 to 0.79) was adequate. CONCLUSION: The JEM seems promising for farming settings, particularly in developing countries where data on chemical application and biological monitoring are unavailable. PMID:9624271
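The repeatability intervals quoted above (Pearson's r with a 95% CI) can be reproduced in shape with the standard Fisher z-transformation; the sketch below assumes this is how the intervals were computed, which the abstract does not state explicitly.

```python
# 95% confidence interval for a Pearson correlation via Fisher's z-transform.
import math

def pearson_ci(r, n, z_crit=1.96):
    """Return (lo, hi) of the 95% CI for correlation r from a sample of n."""
    z = math.atanh(r)                   # Fisher transform to approx. normality
    se = 1.0 / math.sqrt(n - 3)         # standard error in z-space
    lo_z, hi_z = z - z_crit * se, z + z_crit * se
    return math.tanh(lo_z), math.tanh(hi_z)   # back-transform to r-space

lo, hi = pearson_ci(0.67, 29)           # the cumulative-exposure subsample
print(round(lo, 2), round(hi, 2))       # ≈ (0.40, 0.83), close to the reported 0.41 to 0.83
```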
Applicability Evaluation of Job Standards for Diabetes Nutritional Management by Clinical Dietitian.
Baek, Young Jin; Oh, Na Gyeong; Sohn, Cheong-Min; Woo, Mi-Hye; Lee, Seung Min; Ju, Dal Lae; Seo, Jung-Sook
2017-04-01
This study was conducted to evaluate the applicability of job standards for diabetes nutrition management by hospital clinical dietitians. In order to promote clinical nutrition services, it is necessary to present job standards for clinical dietitians and to actively apply these standardized tasks at medical institution sites. The job standard of clinical dietitians for diabetic nutrition management was distributed to hospitals with over 300 beds. Questionnaires were collected from 96 clinical dietitians at 40 tertiary hospitals, 47 general hospitals, and 9 hospitals. On a 5-point scale, the importance of overall duty was 4.4 ± 0.5, performance was 3.6 ± 0.8, and difficulty was 3.1 ± 0.7. 'Nutrition intervention' scored 4.5 ± 0.5 for task importance, 'nutrition assessment' 4.0 ± 0.7 for performance, and 'nutrition diagnosis' 3.4 ± 0.9 for difficulty; these 3 items were the highest in their respective categories. Based on the grid diagram, the tasks of both high importance and high performance were 'checking basic information,' 'checking medical history and therapy plan,' 'decision of nutritional needs,' 'supply of foods and nutrients,' and 'education of nutrition and self-management.' The tasks with high importance but low performance were 'derivation of nutrition diagnosis,' 'planning of nutrition intervention,' and 'monitoring of nutrition intervention process.' The tasks of both high importance and high difficulty were 'derivation of nutrition diagnosis,' 'planning of nutrition intervention,' 'supply of foods and nutrients,' 'education of nutrition and self-management,' and 'monitoring of nutrition intervention process.' The tasks of both high performance and high difficulty were 'documentation of nutrition assessment,' 'supply of foods and nutrients,' and 'education of nutrition and self-management.'
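The grid-diagram classification used above amounts to placing each task in a quadrant by comparing its mean scores on two dimensions against cutoffs (for example the overall means). The scores and cutoffs below are hypothetical placeholders, not the study's data; only the task names are taken from the abstract.

```python
# Quadrant classification for an importance-performance grid diagram.
def quadrant(importance, performance, imp_cut, perf_cut):
    hi_imp = importance >= imp_cut
    hi_perf = performance >= perf_cut
    if hi_imp and hi_perf:
        return "high importance / high performance"
    if hi_imp:
        return "high importance / low performance"
    if hi_perf:
        return "low importance / high performance"
    return "low importance / low performance"

# Hypothetical (importance, performance) means per task:
tasks = {
    "checking basic information": (4.6, 4.2),
    "derivation of nutrition diagnosis": (4.5, 3.1),
}
imp_cut, perf_cut = 4.4, 3.6   # e.g. the overall mean importance/performance
for name, (imp, perf) in tasks.items():
    print(name, "->", quadrant(imp, perf, imp_cut, perf_cut))
```

The same function applied to (importance, difficulty) or (performance, difficulty) pairs yields the other two grids reported in the abstract.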
77 FR 15143 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-14
...); and Education Jobs Fund, Public Law 111-226, Sec. 101, 124 Stat. 2389 (2010). Accordingly, the Board... classification, system location, storage, retrievability, safeguards, retention and disposal, and system manager.... Sec. 1521, 1523(a)(1), 123 Stat. 115, 289-90 (2009) (Recovery Act), and Education Jobs Fund, Public...
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Neil, Lori Ross; Conway, T. J.; Tobey, D. H.
The Secure Power Systems Professional Phase III final report, released last year, included an appendix of Job Profiles. This new report breaks that appendix out as a standalone document to assist utilities in recruiting and developing Secure Power Systems Professionals at their sites.
20 CFR 658.416 - Action on JS-related complaints.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Action on JS-related complaints. 658.416... ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System State Agency Js Complaint System § 658.416 Action on JS-related complaints. (a) The appropriate State agency official handling an...
Memory handling in the ATLAS submission system from job definition to sites limits
NASA Astrophysics Data System (ADS)
Forti, A. C.; Walker, R.; Maeno, T.; Love, P.; Rauschmayr, N.; Filipcic, A.; Di Girolamo, A.
2017-10-01
In the past few years, the increased luminosity of the LHC, changes in the Linux kernel, and a move to a 64-bit architecture have affected ATLAS jobs' memory usage, and the ATLAS workload management system had to be adapted to be more flexible and to pass memory parameters to the batch systems, which in the past was not a necessity. This paper describes the steps required to add the capability to better handle memory requirements, including a review of how each component's definition and parametrization of memory is mapped to the other components, and what changes had to be applied to make the submission chain work. These changes range from the definition of tasks and the way task memory requirements are set using scout jobs, through the new memory tool developed for that purpose, to how these values are used by the submission component of the system and how the jobs are treated by the sites through the CEs, batch systems and, ultimately, the kernel.
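One step in the chain described above, translating a task's measured memory requirement into a batch-system request, often reduces to converting a whole-job peak RSS (as measured by scout jobs) into a per-core limit with some headroom. The function and parameter names below are a hedged sketch for illustration, not the actual ATLAS, CE, or batch-system interfaces.

```python
# Hypothetical mapping from scout-job memory measurements to a per-core
# batch-system memory request for a multicore job.
def batch_memory_request(maxrss_mb, cores, safety_factor=1.2):
    """Per-core memory request (MB) for a job using `cores` cores.

    maxrss_mb: peak resident set size observed by scout jobs for the whole job.
    safety_factor: headroom so transient peaks don't get the job killed
                   by the batch system or the kernel's limits.
    """
    per_core = maxrss_mb * safety_factor / cores
    return int(round(per_core))

# An 8-core job whose scout jobs peaked at 16000 MB RSS:
print(batch_memory_request(16000, 8))  # 2400 MB per core
```

Whether the limit is expressed per core or per job is exactly the kind of component-to-component convention mismatch the paper's review addresses.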
NASA Astrophysics Data System (ADS)
Gleason, J. L.; Hillyer, T. N.; Wilkins, J.
2012-12-01
The CERES Science Team integrates data from 5 CERES instruments onboard the Terra, Aqua and NPP missions. The processing chain fuses CERES observations with data from 19 other unique sources. The addition of CERES Flight Model 5 (FM5) onboard NPP, coupled with ground processing system upgrades further emphasizes the need for an automated job-submission utility to manage multiple processing streams concurrently. The operator-driven, legacy-processing approach relied on manually staging data from magnetic tape to limited spinning disk attached to a shared memory architecture system. The migration of CERES production code to a distributed, cluster computing environment with approximately one petabyte of spinning disk containing all precursor input data products facilitates the development of a CERES-specific, automated workflow manager. In the cluster environment, I/O is the primary system resource in contention across jobs. Therefore, system load can be maximized with a throttling workload manager. This poster discusses a Java and Perl implementation of an automated job management tool tailored for CERES processing.
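The throttling idea described above can be sketched with a counting semaphore: cap the number of concurrently running jobs so the disks are not oversubscribed while the cluster stays loaded. The job body and the limit of 4 below are stand-ins for real CERES processing steps, not the actual Java/Perl implementation.

```python
# Throttled job launcher: at most MAX_CONCURRENT_JOBS run at once.
import threading

MAX_CONCURRENT_JOBS = 4          # illustrative I/O throttle
throttle = threading.Semaphore(MAX_CONCURRENT_JOBS)
results = []
lock = threading.Lock()

def run_job(job_id):
    with throttle:               # blocks while the cap is already reached
        # ... real work would stage inputs and run a processing step here ...
        with lock:
            results.append(job_id)

threads = [threading.Thread(target=run_job, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))           # all 10 jobs completed, at most 4 at a time
```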
Directory of Development Activities.
ERIC Educational Resources Information Center
Control Data Corp., Minneapolis, Minn.
Assembled in a loose leaf notebook, this collection of independent on-the-job activities is designed to facilitate employee development and intended to help improve an organization's performance appraisal system. The on-the-job development activities described derive from job descriptions, performance appraisal forms, and discussions with job…
20 CFR 638.802 - Student records management.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 638.802 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR JOB CORPS PROGRAM UNDER TITLE IV-B OF THE JOB TRAINING PARTNERSHIP ACT Administrative Provisions § 638.802 Student records management. The Job Corps Director shall develop guidelines for a system of maintaining records...
ERIC Educational Resources Information Center
Simon, DeShea; Jackson, Kanata
2015-01-01
This study examined the perspectives on academic preparation and job skill needs of Information Systems program graduates from an Eastern state in the US. A historical review of the literature surrounding information systems skill requirements was conducted for this study, to provide an understanding of the changes in information systems over the…
Tethered to work: A family systems approach linking mobile device use to turnover intentions.
Ferguson, Merideth; Carlson, Dawn; Boswell, Wendy; Whitten, Dwayne; Butts, Marcus M; Kacmar, K Michele Micki
2016-04-01
We examined the use of a mobile device for work during family time (mWork) to determine the role that it plays in employee turnover intentions. Using a sample of 344 job incumbents and their spouses, we propose a family systems model of turnover and examine 2 paths through which we expect mWork to relate to turnover intentions: the job incumbent and the spouse. From the job incumbent, we found that the job incumbent's mWork was associated with greater work-to-family conflict and burnout, and lower organizational commitment. From the spouse, we found that incumbent mWork and greater work-to-family conflict were associated with increased resentment by the spouse and lower spousal commitment to the job incumbent's organization. Both of these paths played a role in predicting job incumbent turnover intentions. We discuss implications and opportunities for future research on mWork for integrating work and family into employee turnover intentions. (c) 2016 APA, all rights reserved.
Reaction Buildup of PBX Explosives JOB-9003 under Different Initiation Pressures
NASA Astrophysics Data System (ADS)
Zhang, Xu; Wang, Yan-fei; Hung, Wen-bin; Gu, Yan; Zhao, Feng; Wu, Qiang; Yu, Xin; Yu, Heng
2017-04-01
An aluminum-based embedded multiple electromagnetic particle velocity gauge technique has been developed to measure the shock initiation behavior of JOB-9003 explosives. In addition, another gauge element, called a shock tracker, has been used to monitor the progress of the shock front as a function of time, providing a position-time trajectory of the wave front as it moves through the explosive sample. The data are used to determine the position and time of the shock-to-detonation transition. The experimental results show that the rise time of the Al-based electromagnetic particle velocity gauge is very fast, less than 20 ns, and that the reaction buildup velocity profiles and the shock-to-detonation transition position-time of the HMX-based PBX explosive JOB-9003, at 1-8 mm depth from the impact plane and under different initiation pressures, are obtained with high accuracy.
JOB BUILDER remote batch processing subsystem
NASA Technical Reports Server (NTRS)
Orlov, I. G.; Orlova, T. L.
1980-01-01
The functions of the JOB BUILDER remote batch processing subsystem are described. Instructions are given for using it as a component of a display system developed by personnel of the System Programming Laboratory, Institute of Space Research, USSR Academy of Sciences.
78 FR 2284 - Methodology for Selecting Job Corps Centers for Closure; Comments Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... (GED), and career technical training credentials, including industry-recognized credentials, state... align existing career technical training programs to technical standards established by industries or... technical training. Both PIPs and CAPs are used for continued monitoring and implemented for USDA and...
Improvements to the User Interface for LHCb's Software continuous integration system.
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.; Kyriazi, S.
2015-12-01
The purpose of this paper is to identify a set of steps leading to an improved interface for LHCb's Nightly Builds Dashboard. The goal is to have an efficient application that meets the needs of both the project developers, by providing them with a user friendly interface, and those of the computing team supporting the system, by providing them with a dashboard allowing for better monitoring of the build jobs themselves. In line with what is already used by LHCb, the web interface has been implemented with the Flask Python framework for future maintainability and code clarity. The database chosen to host the data is the schema-less CouchDB[7], chosen for its flexibility with respect to changes in document form. To improve the user experience, we use JavaScript libraries such as jQuery[11].
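A minimal sketch of what such a dashboard backend might do: store schema-less build-result documents (as CouchDB would) and aggregate them per nightly slot for display. The field names and values below are hypothetical; the abstract does not give the real dashboard's document schema.

```python
# Hypothetical nightly-build result documents, CouchDB-style (plain dicts).
build_docs = [
    {"slot": "lhcb-head", "project": "Gaudi", "platform": "x86_64-slc6", "result": "ok"},
    {"slot": "lhcb-head", "project": "LHCb", "platform": "x86_64-slc6", "result": "warning"},
    {"slot": "lhcb-head", "project": "LHCb", "platform": "x86_64-centos7", "result": "error"},
]

def summarize(docs, slot):
    """Count build results per outcome for one nightly slot."""
    summary = {}
    for doc in docs:
        if doc["slot"] == slot:
            summary[doc["result"]] = summary.get(doc["result"], 0) + 1
    return summary

print(summarize(build_docs, "lhcb-head"))  # {'ok': 1, 'warning': 1, 'error': 1}
```

In a real CouchDB deployment this aggregation would typically live in a map/reduce view rather than application code; the schema-less store is what lets new fields be added to the documents without migrations.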
Characterization of inhalation exposure to jet fuel among U.S. Air Force personnel.
Merchant-Borna, Kian; Rodrigues, Ema G; Smith, Kristen W; Proctor, Susan P; McClean, Michael D
2012-07-01
Jet propulsion fuel-8 (JP-8) is the primary jet fuel used by the US military, collectively consuming ~2.5 billion gallons annually. Previous reports suggest that JP-8 is potentially toxic to the immune, respiratory, and nervous systems. The objectives of this study were to evaluate inhalation exposure to JP-8 constituents among active duty United States Air Force (USAF) personnel while performing job-related tasks, identify significant predictors of inhalation exposure to JP-8, and evaluate the extent to which surrogate exposure classifications were predictive of measured JP-8 exposures. Seventy-three full-time USAF personnel from three different air force bases were monitored during four consecutive workdays where personal air samples were collected and analyzed for benzene, ethylbenzene, toluene, xylenes, total hydrocarbons (THC), and naphthalene. The participants were categorized a priori into high- and low-exposure groups, based on their exposure to JP-8 during their typical workday. Additional JP-8 exposure categories included job title groups and self-reported exposure to JP-8. Linear mixed-effects models were used to evaluate predictors of personal air concentrations. The concentrations of THC in air were significantly different between a priori exposure groups (2.6 mg m(-3) in high group versus 0.5 mg m(-3) in low, P < 0.0001), with similar differences observed for other analytes in air. Naphthalene was strongly correlated with THC (r = 0.82, P < 0.0001) and both were positively correlated with the relative humidity of the work environment. Exposures to THC and naphthalene varied significantly by job categories based on USAF specialty codes and were highest among personnel working in fuel distribution/maintenance, though self-reported exposure to JP-8 was an even stronger predictor of measured exposure in models that explained 72% (THC) and 67% (naphthalene) of between-worker variability. 
In fact, both self-reported JP-8 exposure and the a priori exposure groups explained more between-worker variability than job categories. Personal exposure to JP-8 varied by job and was positively associated with relative humidity. However, self-reported exposure to JP-8 was an even stronger predictor of measured exposure than job title categories, suggesting that self-reported JP-8 exposure is a valid surrogate metric of exposure when personal air measurements are not available.
Implementation and Evaluation of Self-Scheduling in a Hospital System.
Wright, Christina; McCartt, Peggy; Raines, Diane; Oermann, Marilyn H
Inflexible work schedules affect job satisfaction and influence nurse turnover. Job satisfaction is a significant predictor of nurse retention. Acute care hospitals report that job satisfaction is influenced by autonomy and educational opportunity. This project discusses implementation of computer-based self-scheduling in a hospital system and its impact. It is important for staff development educators to be aware that self-scheduling may play a key role in autonomy, professional development, turnover, and hospital costs.
Environment Canada cuts threaten the future of science and international agreements
NASA Astrophysics Data System (ADS)
Thompson, Anne M.; Salawitch, Ross J.; Hoff, Raymond M.; Logan, Jennifer A.; Einaudi, Franco
2012-02-01
In August 2011, 300 Environment Canada scientists and staff working on environmental monitoring and protection learned that their jobs would be terminated, and an additional 400-plus Environment Canada employees received notice that their positions were targeted for elimination. These notices received widespread coverage in the Canadian media and international attention in Nature News. Environment Canada is a government agency responsible for meteorological services as well as environmental research. We are concerned that research and observations related to ozone depletion, tropospheric pollution, and atmospheric transport of toxic chemicals in the northern latitudes may be seriously imperiled by the budget cuts that led to these job terminations. Further, we raise the questions being asked by the international community, scientists, and policy makers alike: First, will Canada be able to meet its obligations to the monitoring and assessment studies that support the various international agreements in Table 1? Second, will Canada continue to be a leader in Arctic research?
Opportunistic Computing with Lobster: Lessons Learned from Scaling up to 25k Non-Dedicated Cores
NASA Astrophysics Data System (ADS)
Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Yannakopoulos, Anna; Tovar, Benjamin; Donnelly, Patrick; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2017-10-01
We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges. Task categorization: Workflows can now be divided into categories based on their required system resources. This allows the batch queueing system to optimize assignment of tasks to nodes with the appropriate capabilities. Within each category, limits can be specified for the number of running jobs to regulate the utilization of communication bandwidth. System resource specifications for a task category can now be modified while a project is running, avoiding the need to restart the project if resource requirements differ from the initial estimates. Lobster now implements time limits on each task category to voluntarily terminate tasks. This allows partially completed work to be recovered. Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated. Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task requires in order to identify bottlenecks and optimally assign tasks. The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25 000 non-dedicated cores and achieved a data input rate of 30 Gb/s and an output rate of 500 GB/h. This has required new capabilities in task categorization, workflow dependency specification, and resource monitoring.
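The category mechanism described above can be sketched as follows: each category carries resource specifications and a running-job cap, and the scheduler launches a task only while its category is under the cap. The class, names, and numbers below are illustrative, not Lobster's actual configuration format.

```python
# Category-limited task launching, in the spirit of Lobster's task categories.
class Category:
    def __init__(self, name, cores, memory_mb, max_running):
        self.name, self.cores, self.memory_mb = name, cores, memory_mb
        self.max_running = max_running   # cap on simultaneously running tasks
        self.running = 0

    def try_start(self):
        """Start a task if the category is under its cap; else keep it queued."""
        if self.running < self.max_running:
            self.running += 1
            return True
        return False

    def finish(self):
        self.running -= 1

merge = Category("merge", cores=1, memory_mb=2000, max_running=2)
started = [merge.try_start() for _ in range(4)]
print(started)   # [True, True, False, False]: third and fourth tasks wait
```

Because `max_running` (and, in Lobster, the resource specifications too) can be changed on a live object, limits can be adjusted while a project is running without a restart.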
Job Involvement and Organizational Commitment of Employees of Prehospital Emergency Medical System
Rahati, Alireza; Sotudeh-Arani, Hossein; Adib-Hajbaghery, Mohsen; Rostami, Majid
2015-01-01
Background: Several studies are available on organizational commitment of employees in different organizations. However, the organizational commitment and job involvement of the employees in the prehospital emergency medical system (PEMS) of Iran have largely been ignored. Objectives: This study aimed to investigate the organizational commitment and job involvement of the employees of PEMS and the relationship between these two issues. Materials and Methods: This cross-sectional study was conducted on 160 employees of Kashan PEMS who were selected through a census method in 2014. A 3-part instrument was used in this study, including a demographic questionnaire, Allen and Meyer's organizational commitment inventory, and Lodahl and Kejner's job involvement inventory. We used descriptive statistics, Spearman correlation coefficient, Kruskal-Wallis, Friedman, analysis of variance, and Tukey post hoc tests to analyze the data. Results: The mean job involvement and organizational commitment scores were 61.78 ± 10.69 and 73.89 ± 13.58, respectively. The mean scores of job involvement and organizational commitment were significantly different in subjects with different work experiences (P = 0.043 and P = 0.012, respectively). However, no significant differences were observed between the mean scores of organizational commitment and job involvement in subjects with different fields of study, different levels of interest in the profession, and various educational levels. A direct significant correlation was found between the total scores of organizational commitment and job involvement of workers in Kashan PEMS (r = 0.910, P < 0.001). Conclusions: This study showed that the employees in the Kashan PEMS obtained half of the score of organizational commitment and about two-thirds of the job involvement score.
Therefore, the higher level managers of the emergency medical system are advised to implement some strategies to increase the employees’ job involvement and organizational commitment. PMID:26835470
ERIC Educational Resources Information Center
Taylor, James C.
For more than 80 years, jobs in the United States have been designed by people for others. For most of these years, the experts in job design have placed the production technology above the job holder in importance. Since the 1950s, many jobs have been redesigned around new, computer-based technology. Often, the net effect has been to make those…
Diabetes management and hypoglycemia in safety sensitive jobs.
Lee, See-Muah; Koh, David; Chui, Winnie Kl; Sum, Chee-Fang
2011-03-01
The majority of people diagnosed with diabetes mellitus are in the working age group in developing countries. The interrelationship of diabetes and work, that is, diabetes affecting work and work affecting diabetes, becomes an important issue for these people. Therapeutic options for the diabetic worker have been developed, and currently include various insulins, insulin sensitizers and secretagogues, incretin mimetics and enhancers, and alpha glucosidase inhibitors. Hypoglycemia and hypoglycaemic unawareness are important and unwanted treatment side effects. The risk they pose with respect to cognitive impairment can have safety implications. The understanding of the therapeutic options in the management of diabetic workers, blood glucose awareness training, and self-monitoring blood glucose will help to mitigate this risk. Employment decisions must also take into account the extent to which the jobs performed by the worker are safety sensitive. A risk assessment matrix, based on the extent to which a job is considered safety sensitive and based on the severity of the hypoglycaemia, may assist in determining one's fitness to work. Support at the workplace, such as a provision of healthy food options and arrangements for affected workers will be helpful for such workers. Arrangements include permission to carry and consume emergency sugar, flexible meal times, self-monitoring blood glucose when required, storage/disposal facilities for medicine such as insulin and needles, time off for medical appointments, and structured self-help programs.
Diabetes Management and Hypoglycemia in Safety Sensitive Jobs
Koh, David; Chui, Winnie KL; Sum, Chee-Fang
2011-01-01
The majority of people diagnosed with diabetes mellitus are in the working age group in developing countries. The interrelationship of diabetes and work, that is, diabetes affecting work and work affecting diabetes, becomes an important issue for these people. Therapeutic options for the diabetic worker have been developed, and currently include various insulins, insulin sensitizers and secretagogues, incretin mimetics and enhancers, and alpha glucosidase inhibitors. Hypoglycemia and hypoglycaemic unawareness are important and unwanted treatment side effects. The risk they pose with respect to cognitive impairment can have safety implications. The understanding of the therapeutic options in the management of diabetic workers, blood glucose awareness training, and self-monitoring blood glucose will help to mitigate this risk. Employment decisions must also take into account the extent to which the jobs performed by the worker are safety sensitive. A risk assessment matrix, based on the extent to which a job is considered safety sensitive and based on the severity of the hypoglycaemia, may assist in determining one's fitness to work. Support at the workplace, such as a provision of healthy food options and arrangements for affected workers will be helpful for such workers. Arrangements include permission to carry and consume emergency sugar, flexible meal times, self-monitoring blood glucose when required, storage/disposal facilities for medicine such as insulin and needles, time off for medical appointments, and structured self-help programs. PMID:22953182
Multi-core processing and scheduling performance in CMS
NASA Astrophysics Data System (ADS)
Hernández, J. M.; Evans, D.; Foulkes, S.
2012-12-01
Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to utilize the multi-core architecture effectively. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry, and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation per job. The experiment job management system needs control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g., I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.
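The memory saving from whole-node, multi-core jobs described above can be illustrated with simple arithmetic: N independent single-core jobs each carry their own copy of the shared data, while one multi-core job loads it once. A minimal sketch; the node size and memory figures are hypothetical, not CMS measurements.

```python
# Illustrative memory-footprint comparison for a worker node.
# shared_mb: data every job needs (code libraries, geometry, conditions);
# private_mb: per-core working state. Numbers are assumptions.

def single_core_footprint(n_cores, shared_mb, private_mb):
    """N independent jobs: each core holds its own copy of everything."""
    return n_cores * (shared_mb + private_mb)

def multi_core_footprint(n_cores, shared_mb, private_mb):
    """One whole-node multi-core job: shared data is loaded only once."""
    return shared_mb + n_cores * private_mb

node = dict(n_cores=16, shared_mb=1500, private_mb=500)
print(single_core_footprint(**node))  # 32000 (MB)
print(multi_core_footprint(**node))   # 9500 (MB)
```

With these assumed numbers the whole-node job needs roughly a third of the memory, which is the motivation for scheduling all cores of a node as one unit.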
Laser Transcutaneous Bilirubin Meter: A New Device For Bilirubin Monitoring In Neonatal Jaundice
NASA Astrophysics Data System (ADS)
Hamza, Mostafa; Hamza, Mohammad
1988-06-01
Neonates with jaundice require monitoring of serum bilirubin, which should be repeated at frequent intervals. However, taking blood samples from neonates is not always an easy job; it is also an invasive and traumatising procedure with the additional risk of blood loss. In this paper the authors present the theory and design of a new noninvasive device for transcutaneous bilirubinometry, using a differential absorption laser system. The new technique depends upon illuminating the skin of the neonate with radiation from a two-wavelength oscillation laser. The choice of the wavelengths follows the principles of optical bilirubinometry. To obtain more accurate measurements, different pairs of wavelengths are incorporated in the design. The presence of hemoglobin is corrected for by appropriate selection of the laser wavelengths. The new design was tested for accuracy and precision using an argon ion laser. A correlation study between serum bilirubin determination by laser transcutaneous bilirubinometry and by American Optical bilirubinometer was highly significant.
Importance and effects of altered workplace ergonomics in modern radiology suites.
Harisinghani, Mukesh G; Blake, Michael A; Saksena, Mansi; Hahn, Peter F; Gervais, Debra; Zalis, Michael; da Silva Dias Fernandes, Leonor; Mueller, Peter R
2004-01-01
The transition from a film-based to a filmless soft-copy picture archiving and communication system (PACS)-based environment has resulted in improved work flow as well as increased productivity, diagnostic accuracy, and job satisfaction. Adapting to this filmless environment in an efficient manner requires seamless integration of various components such as PACS workstations, the Internet and hospital intranet, speech recognition software, paperless electronic hospital medical records, e-mail, office software, and telecommunications. However, the importance of optimizing workplace ergonomics has received little attention. Factors such as the position of the work chair, workstation table, keyboard, mouse, and monitors, along with monitor refresh rates and ambient room lighting, have become secondary considerations. Paying close attention to the basics of workplace ergonomics can go a long way in increasing productivity and reducing fatigue, thus allowing full realization of the potential benefits of a PACS. Optimization of workplace ergonomics should be considered in the basic design of any modern radiology suite. Copyright RSNA, 2004
Civil Service Systems and Job Discrimination
ERIC Educational Resources Information Center
Coutourier, Jean
1975-01-01
This testimony, before a public hearing of the New York City Commission on Human Rights in May 1974, focuses on the National Civil Service League: essential elements of the League's program for achieving equal employment opportunity include outreach recruitment, accurate job descriptions, valid job-related examinations, and aggressive…
Job-Sharing the Principalship.
ERIC Educational Resources Information Center
Brown, Shelley; Feltham, Wendy
1997-01-01
The coprincipals of a California elementary school share their ideas for building a successful job-sharing partnership. They suggest it is important to find the right partner, develop and present a job-sharing proposal, establish systems of communication with each other, evaluate one's progress, focus on the principalship, and provide leadership…
DOT National Transportation Integrated Search
2001-08-01
This study assesses how to manage the effects or outcomes of organizational change on job security and employee commitment in transit systems using trust-building, empowerment, employee reassurance, and job redesign strategies. The major findings are...
I/O-aware bandwidth allocation for petascale computing systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Zhou; Yang, Xu; Zhao, Dongfang
In the Big Data era, the gap between storage performance and an application's I/O requirement is increasing. I/O congestion caused by concurrent storage accesses from multiple applications is inevitable and severely harms performance. Conventional approaches either focus on optimizing an application's access pattern individually or handle I/O requests on a low-level storage layer without any knowledge from the upper-level applications. In this paper, we present a novel I/O-aware bandwidth allocation framework to coordinate ongoing I/O requests on petascale computing systems. The motivation behind this innovation is that the resource management system has a holistic view of both the system state and jobs' activities and can dynamically control the jobs' status or allocate resources on the fly during their execution. We treat a job's I/O requests as periodic subjobs within its lifecycle and transform the I/O congestion issue into a classical scheduling problem. Based on this model, we propose a bandwidth management mechanism as an extension to the existing scheduling system. We design several bandwidth allocation policies with different optimization objectives, either on user-oriented metrics or system performance. We conduct extensive trace-based simulations using real job traces and I/O traces from a production IBM Blue Gene/Q system at Argonne National Laboratory. Experimental results demonstrate that our new design can improve job performance by more than 30%, as well as increasing system performance.
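The core idea above — treating jobs' I/O phases as subjobs competing for a shared bandwidth budget — can be sketched as a tiny allocator. The "smallest remaining I/O first" policy shown here is one hypothetical objective standing in for the several policies the paper evaluates.

```python
# Minimal sketch of bandwidth allocation across concurrent I/O subjobs.
# requests: {job_id: remaining_io_gb}; total_bw: shared bandwidth budget.
# Policy (an assumption): serve jobs with the least remaining I/O first,
# analogous to shortest-job-first in classical scheduling.

def allocate_bandwidth(requests, total_bw):
    alloc = {}
    remaining = total_bw
    for job, io in sorted(requests.items(), key=lambda kv: kv[1]):
        share = min(remaining, io)  # cap by both demand and leftover budget
        alloc[job] = share
        remaining -= share
        if remaining <= 0:
            break
    return alloc

print(allocate_bandwidth({"A": 5, "B": 2, "C": 10}, total_bw=8))
# {'B': 2, 'A': 5, 'C': 1}
```

Swapping the sort key changes the optimization objective (e.g., user-oriented fairness versus system throughput), which mirrors the paper's comparison of allocation policies.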
Computers Launch Faster, Better Job Matching
ERIC Educational Resources Information Center
Stevenson, Gloria
1976-01-01
Employment Security Automation Project (ESAP), a five-year program sponsored by the Employment and Training Administration, features an innovative computer-assisted job matching system and instantaneous computer-assisted service for unemployment insurance claimants. ESAP will also consolidate existing automated employment security systems to…
Recruitment recommendation system based on fuzzy measure and indeterminate integral
NASA Astrophysics Data System (ADS)
Yin, Xin; Song, Jinjie
2017-08-01
In this study, we propose a comprehensive evaluation approach based on the indeterminate integral. By introducing the related concepts of the indeterminate integral and their formulas into the recruitment recommendation system, we can calculate the suitability of each job for different applicants, taking as prerequisites the defined importance of each criterion listed in the job advertisements, the association between different criteria, and subjective assessment. Thus we can make recommendations to the applicants, ranked by job-suitability score from high to low. Finally, we demonstrate the usefulness and practicality of this system with sample cases.
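The ranking step described above can be sketched with a plain weighted average standing in for the paper's indeterminate-integral aggregation (which captures criterion interactions that a weighted sum cannot). All criterion names, weights, and scores below are hypothetical.

```python
# Rank jobs for an applicant by aggregating per-criterion fit scores (0-1)
# with criterion weights. A weighted average is used here as a simplified
# stand-in for the paper's fuzzy-measure / indeterminate-integral method.

def job_suitability(scores, weights):
    total_w = sum(weights[c] for c in scores)
    return sum(scores[c] * weights[c] for c in scores) / total_w

weights = {"experience": 0.5, "education": 0.3, "language": 0.2}
fit = {  # applicant's fit per criterion, per advertised job (assumed values)
    "data analyst": {"experience": 0.8, "education": 0.9, "language": 0.6},
    "sales rep":    {"experience": 0.4, "education": 0.5, "language": 0.9},
}
ranked = sorted(fit, key=lambda j: job_suitability(fit[j], weights), reverse=True)
print(ranked)  # recommendations, best fit first
```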
20 CFR 658.421 - Handling of JS-related complaints.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Handling of JS-related complaints. 658.421... ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System Federal Js Complaint System § 658.421 Handling of JS-related complaints. (a) No JS-related complaint shall be handled at the...
20 CFR 658.414 - Referral of non-JS-related complaints.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Referral of non-JS-related complaints. 658... ADMINISTRATIVE PROVISIONS GOVERNING THE JOB SERVICE SYSTEM Job Service Complaint System State Agency Js Complaint System § 658.414 Referral of non-JS-related complaints. (a) To facilitate the operation of the...
Job Stress and Job Satisfaction among Health-Care Workers of Endoscopy Units in Korea
Nam, Seung-Joo; Chun, Hoon Jai; Moon, Jeong Seop; Park, Sung Chul; Hwang, Young-Jae; Yoo, In Kyung; Lee, Jae Min; Kim, Seung Han; Choi, Hyuk Soon; Kim, Eun Sun; Keum, Bora; Jeen, Yoon Tae; Lee, Hong Sik; Kim, Chang Duck
2016-01-01
Background/Aims: The management of job-related stress among health-care workers is critical for the improvement of healthcare services; however, there is no existing research on endoscopy unit workers as a team. Korea has a unique health-care system for endoscopy unit workers. In this study, we aimed to estimate job stress and job satisfaction among health-care providers in endoscopy units in Korea. Methods: We performed a cross-sectional survey of health-care providers in the endoscopy units of three university-affiliated hospitals in Korea. We analyzed the job stress levels by using the Korean occupational stress scale, contributing factors, and job satisfaction. Results: Fifty-nine workers completed the self-administered questionnaires. The job stress scores for the endoscopy unit workers (46.39±7.81) were relatively lower compared to those of the national sample of Korean workers (51.23±8.83). Job stress differed across job positions, with nurses showing significantly higher levels of stress (48.92±7.97) compared to doctors (42.59±6.37). Job stress and job satisfaction were negatively correlated with each other (R2=0.340, p<0.001). Conclusions: An endoscopy unit is composed of a heterogeneous group of health-care professionals (i.e., nurses, fellows, and professors), and job stress and job satisfaction significantly differ according to job positions. Job demand, insufficient job control, and job insecurity are the most important stressors in the endoscopy unit. PMID:26898513
Job Stress and Job Satisfaction among Health-Care Workers of Endoscopy Units in Korea.
Nam, Seung-Joo; Chun, Hoon Jai; Moon, Jeong Seop; Park, Sung Chul; Hwang, Young-Jae; Yoo, In Kyung; Lee, Jae Min; Kim, Seung Han; Choi, Hyuk Soon; Kim, Eun Sun; Keum, Bora; Jeen, Yoon Tae; Lee, Hong Sik; Kim, Chang Duck
2016-05-01
The management of job-related stress among health-care workers is critical for the improvement of healthcare services; however, there is no existing research on endoscopy unit workers as a team. Korea has a unique health-care system for endoscopy unit workers. In this study, we aimed to estimate job stress and job satisfaction among health-care providers in endoscopy units in Korea. We performed a cross-sectional survey of health-care providers in the endoscopy units of three university-affiliated hospitals in Korea. We analyzed the job stress levels by using the Korean occupational stress scale, contributing factors, and job satisfaction. Fifty-nine workers completed the self-administered questionnaires. The job stress scores for the endoscopy unit workers (46.39±7.81) were relatively lower compared to those of the national sample of Korean workers (51.23±8.83). Job stress differed across job positions, with nurses showing significantly higher levels of stress (48.92±7.97) compared to doctors (42.59±6.37). Job stress and job satisfaction were negatively correlated with each other (R2=0.340, p<0.001). An endoscopy unit is composed of a heterogeneous group of health-care professionals (i.e., nurses, fellows, and professors), and job stress and job satisfaction significantly differ according to job positions. Job demand, insufficient job control, and job insecurity are the most important stressors in the endoscopy unit.
Evolution of grid-wide access to database resident information in ATLAS using Frontier
NASA Astrophysics Data System (ADS)
Barberis, D.; Bujor, F.; de Stefano, J.; Dewhurst, A. L.; Dykstra, D.; Front, D.; Gallas, E.; Gamboa, C. F.; Luehring, F.; Walker, R.
2012-12-01
The ATLAS experiment deployed Frontier technology worldwide during the initial year of LHC collision data taking to enable user analysis jobs running on the Worldwide LHC Computing Grid to access database resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and improvements undertaken, such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has allowed us a deeper understanding of problematic queries and understanding of use cases. Use of the system has grown beyond user analysis and subsystem specific tasks such as calibration and alignment, extending into production processing areas, such as initial reconstruction and trigger reprocessing. With a more robust and tuned system, we are better equipped to satisfy the still growing number of diverse clients and the demands of increasingly sophisticated processing and analysis.
Monitoring and Testing the Parts Cleaning Stations, Abrasive Blasting Cabinets, and Paint Booths
NASA Technical Reports Server (NTRS)
Jordan, Tracee M.
2004-01-01
I have the opportunity to work in the Environmental Management Office (EMO) this summer. One of the EMO's tasks is to make sure the Environmental Management System is implemented to the entire Glenn Research Center (GRC). The Environmental Management System (EMS) is a policy or plan that is oriented toward minimizing an organization's impact to the environment. Our EMS includes the reduction of solid waste regeneration and the reduction of hazardous material use, waste, and pollution. With the Waste Management Team's (WMT) help, the EMS can be implemented throughout the NASA Glenn Research Center. The WMT is responsible for the disposal and managing of waste throughout the GRC. They are also responsible for the management of all chemical waste in the facility. My responsibility is to support the waste management team by performing an inventory on parts cleaning stations, abrasive cabinets, and paint booths through out the entire facility. These booths/stations are used throughout the center and they need to be monitored and tested for hazardous waste and material. My job is to visit each of these booths/stations, take samples of the waste, and analyze the samples.
ERIC Educational Resources Information Center
Matherly, Donna
1983-01-01
The author presents survey findings on problems involved in the implementation of new technology. The results of questionnaires returned by 286 administrative systems operators are presented, concerning interpersonal relations, career advancement, job security, personal comfort, job design, and job satisfaction. (CT)
5 CFR 532.315 - Additional survey jobs.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Additional survey jobs. 532.315 Section... RATE SYSTEMS Determining Rates for Principal Types of Positions § 532.315 Additional survey jobs. (a) For appropriated fund surveys, when the lead agency adds to the industries to be surveyed, it shall...
5 CFR 532.315 - Additional survey jobs.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Additional survey jobs. 532.315 Section... RATE SYSTEMS Determining Rates for Principal Types of Positions § 532.315 Additional survey jobs. (a) For appropriated fund surveys, when the lead agency adds to the industries to be surveyed, it shall...
5 CFR 532.315 - Additional survey jobs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Additional survey jobs. 532.315 Section... RATE SYSTEMS Determining Rates for Principal Types of Positions § 532.315 Additional survey jobs. (a) For appropriated fund surveys, when the lead agency adds to the industries to be surveyed, it shall...
5 CFR 532.315 - Additional survey jobs.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Additional survey jobs. 532.315 Section... RATE SYSTEMS Determining Rates for Principal Types of Positions § 532.315 Additional survey jobs. (a) For appropriated fund surveys, when the lead agency adds to the industries to be surveyed, it shall...
From Franchise to Programming: Jobs in Cable Television.
ERIC Educational Resources Information Center
Stanton, Michael
1985-01-01
This article takes a look at some of the key jobs at every level of the cable industry. It discusses winning a franchise, building and running the system, and programming and production. Job descriptions include engineer, market analyst, programmers, financial analysts, strand mappers, customer service representatives, access coordinator, and studio…
Case Studies in Job Analysis and Training Evaluation.
ERIC Educational Resources Information Center
McKillip, Jack
2001-01-01
An information technology certification program was evaluated by 1,671 systems engineers using job analysis that rated task importance. Professional librarians (n=527) rated importance of their tasks in similar fashion. Results of scatter diagrams provided evidence to enhance training effectiveness by focusing on job tasks significantly related to…
Linking Job-Embedded Professional Development and Mandated Teacher Evaluation: Teacher as Learner
ERIC Educational Resources Information Center
Derrington, Mary Lynne; Kirk, Julia
2017-01-01
This study explores the link between individualized, job-embedded professional development and teacher evaluation. Moreover, the study explores and describes job-embedded strategies that principals used to facilitate teacher development while working within a state-mandated evaluation system. The theoretical frame utilized four elements of…
Wientjens, Wim; Cairns, Douglas
2012-10-01
In the fight against discrimination, the IDF launched the first ever International Charter of Rights and Responsibilities of People with Diabetes in 2011: a balance between rights and duties to optimize health and quality of life, to enable as normal a life as possible, and to reduce or eliminate the barriers which deny realization of full potential as members of society. It is extremely frustrating to suffer blanket bans, and many examples exist, including insurance, driving licenses, getting a job, keeping a job, and family affairs. In this article, an example is given of how pilots with insulin-treated diabetes are allowed to fly by taking on the responsibility of using special blood glucose monitoring protocols. At this time, the systems in countries that allow pilots with insulin-treated diabetes to fly are applauded, particularly the USA for private flying and Canada for commercial flying. Encouraging developments may be underway in the UK for commercial flying and, if this materializes, could serve as an example to help other aviation authorities adopt similar protocols. However, new restrictions implemented by the new European Aviation Authority take existing privileges away from National Private Pilot Licence holders with insulin-treated diabetes in the UK. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
40 CFR 240.211-3 - Recommended procedures: Operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... measurements and laboratory analyses required by the responsible agency. (12) Complete records of monitoring... waste received and processed, summarized on a monthly basis. (2) A summary of the laboratory analyses including at least monthly averages. (3) Number and qualifications of personnel in each job category; total...
Utility Considerations in Emotional Stability Monitoring for Nuclear Plant Personnel,
Training, and (4) Employee assistance program. Some data exist which show more acts of sabotage, theft, or vandalism occur in environments in which...definition of criteria and the judgment of a professional psychologist or psychiatrist who knows the job environment. Finally, it must provide
Principals' Perceptions of Instructional Leadership Development
ERIC Educational Resources Information Center
Brabham, Carla E.
2017-01-01
Instructional leadership is an important aspect of student achievement and the overall success of schools. Principals, as instructional leaders, need continual reflection on their competency. Job-embedded professional development (JEPD) for teachers is implemented and monitored by instructional leaders. The purpose of this case study was to…
Some Knowledge Areas in Blindness Rehabilitation.
ERIC Educational Resources Information Center
Giesen, J. Martin; Cavenaugh, Brenda S.; Johnson, Cherie A.
1998-01-01
Provides an outline of knowledge areas in rehabilitation counseling and rehabilitation teaching related to visual impairments such as: core areas; planning and delivery services; job development, placement, and follow-along; job engineering; Braille and other tactual systems; communication systems; computers for individuals with visual…
2012-01-01
Background No previous studies have addressed the integrated relationships among system quality, service quality, job satisfaction, and system performance; this study attempts to bridge such a gap with an evidence-based practice study. Methods The convenience sampling method was applied to the information system users of three hospitals in southern Taiwan. A total of 500 copies of questionnaires were distributed, and 283 returned copies were valid, suggesting a valid response rate of 56.6%. SPSS 17.0 and AMOS 17.0 (structural equation modeling) statistical software packages were used for data analysis and processing. Results The findings are as follows: System quality has a positive influence on service quality (γ11= 0.55), job satisfaction (γ21= 0.32), and system performance (γ31= 0.47). Service quality (β31= 0.38) and job satisfaction (β32= 0.46) positively influence system performance. Conclusions It is thus recommended that the information offices of hospitals and developers take enhancement of service quality and user satisfaction into consideration, in addition to placing emphasis on system quality and information quality, when designing, developing, or purchasing an information system, in order to improve the benefits and achievements generated by hospital information systems. PMID:23171394
Pilots 2.0: DIRAC pilots for all the skies
NASA Astrophysics Data System (ADS)
Stagni, F.; Tsaregorodtsev, A.; McNab, A.; Luzzi, C.
2015-12-01
In the last few years, new types of computing infrastructures, such as IAAS (Infrastructure as a Service) and IAAC (Infrastructure as a Client), gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most of these new infrastructures are based on virtualization techniques. Meanwhile, some concepts, such as distributed queues, lost appeal, while still supporting a vast amount of resources. Virtual Organizations are therefore facing heterogeneity of the available resources and the use of an Interware software like DIRAC to hide the diversity of underlying resources has become essential. The DIRAC WMS is based on the concept of pilot jobs that was introduced back in 2004. A pilot is what creates the possibility to run jobs on a worker node. Within DIRAC, we developed a new generation of pilot jobs, that we dubbed Pilots 2.0. Pilots 2.0 are not tied to a specific infrastructure; rather they are generic, fully configurable and extendible pilots. A Pilot 2.0 can be sent, as a script to be run, or it can be fetched from a remote location. A pilot 2.0 can run on every computing resource, e.g.: on CREAM Computing elements, on DIRAC Computing elements, on Virtual Machines as part of the contextualization script, or IAAC resources, provided that these machines are properly configured, hiding all the details of the Worker Nodes (WNs) infrastructure. Pilots 2.0 can be generated server and client side. Pilots 2.0 are the “pilots to fly in all the skies”, aiming at easy use of computing power, in whatever form it is presented. Another aim is the unification and simplification of the monitoring infrastructure for all kinds of computing resources, by using pilots as a network of distributed sensors coordinated by a central resource monitoring system. Pilots 2.0 have been developed using the command pattern. VOs using DIRAC can tune pilots 2.0 as they need, and extend or replace each and every pilot command in an easy way. 
In this paper we describe how Pilots 2.0 work with distributed and heterogeneous resources providing the necessary abstraction to deal with different kind of computing resources.
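The command-pattern design described for Pilots 2.0 — a pilot as a configurable sequence of command objects that a VO can extend or replace — can be sketched as follows. The command names here are illustrative, not the actual DIRAC classes.

```python
# Sketch of a command-pattern pilot: the pilot is just an ordered list of
# commands executed against a shared environment, so VOs can reorder,
# extend, or replace any step. All class names below are hypothetical.

class PilotCommand:
    def execute(self, env):
        raise NotImplementedError

class CheckWorkerNode(PilotCommand):
    def execute(self, env):
        env["wn_ok"] = True  # a real pilot would probe CPU, disk, network

class ConfigureSite(PilotCommand):
    def execute(self, env):
        env["site"] = env.get("site", "ANY")  # discover or default the site

class LaunchJobAgent(PilotCommand):
    def execute(self, env):
        # start the payload-matching agent only on a healthy worker node
        env["agent_started"] = env.get("wn_ok", False)

def run_pilot(commands):
    env = {}
    for cmd in commands:  # the command list itself is the configuration
        cmd.execute(env)
    return env

state = run_pilot([CheckWorkerNode(), ConfigureSite(), LaunchJobAgent()])
```

A VO-specific pilot would subclass or swap individual commands (e.g., a different site-configuration step for cloud resources) without touching the rest of the sequence, which is the extensibility the paper emphasizes.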
Chui, Michelle A; Look, Kevin A; Mott, David A
2014-01-01
Workload has been described both objectively (e.g., number of prescriptions dispensed per pharmacist) as well as subjectively (e.g., pharmacist's perception of busyness). These approaches might be missing important characteristics of pharmacist workload that have not been previously identified and measured. To measure the association of community pharmacists' workload perceptions at three levels (organization, job, and task) with job satisfaction, burnout, and perceived performance of two tasks in the medication dispensing process. A secondary data analysis was performed using cross-sectional survey data collected from Wisconsin (US) community pharmacists. Organization-related workload was measured as staffing adequacy; job-related workload was measured as general and specific job demands; task-related workload was measured as internal and external mental demands. Pharmacists' perceived task performance was assessed for patient profile review and patient consultation. The survey was administered to a random sample of 500 pharmacists who were asked to opt in if they were a community pharmacist. Descriptive statistics and correlations of study variables were determined. Two structural equation models were estimated to examine relationships between the study variables and perceived task performance. From the 224 eligible community pharmacists that agreed to participate, 165 (73.7%) usable surveys were completed and returned. Job satisfaction and job-related monitoring demands had direct positive associations with both dispensing tasks. External task demands were negatively related to perceived patient consultation performance. Indirect effects on both tasks were primarily mediated through job satisfaction, which was positively related to staffing adequacy and cognitive job demands and negatively related to volume job demands. 
External task demands had an additional indirect effect on perceived patient consultation performance, as it was associated with lower levels of job satisfaction and higher levels of burnout. Allowing community pharmacists to concentrate on tasks and limiting interruptions while performing these tasks are important factors in improving quality of patient care and pharmacist work life. The results have implications for strategies to improve patient safety and pharmacist performance. Copyright © 2014 Elsevier Inc. All rights reserved.
Chui, Michelle A.; Look, Kevin A.; Mott, David A.
2013-01-01
Background Workload has been described both objectively (e.g., number of prescriptions dispensed per pharmacist) as well as subjectively (e.g., pharmacist’s perception of busyness). These approaches might be missing important characteristics of pharmacist workload that have not been previously identified and measured. Objectives To measure the association of community pharmacists’ workload perceptions at three levels (organization, job, and task) with job satisfaction, burnout, and perceived performance of two tasks in the medication dispensing process. Methods A secondary data analysis was performed using cross-sectional survey data collected from Wisconsin (US) community pharmacists. Organization–related workload was measured as staffing adequacy; job-related workload was measured as general and specific job demands; task-related workload was measured as internal and external mental demands. Pharmacists’ perceived task performance was assessed for patient profile review and patient consultation. The survey was administered to a random sample of 500 pharmacists who were asked to opt in if they were a community pharmacist. Descriptive statistics and correlations of study variables were determined. Two structural equation models were estimated to examine relationships between the study variables and perceived task performance. Results From the 224 eligible community pharmacists that agreed to participate, 165 (73.7%) usable surveys were completed and returned. Job satisfaction and job-related monitoring demands had direct positive associations with both dispensing tasks. External task demands were negatively related to perceived patient consultation performance. Indirect effects on both tasks were primarily mediated through job satisfaction, which was positively related to staffing adequacy and cognitive job demands and negatively related to volume job demands. 
External task demands had an additional indirect effect on perceived patient consultation performance, as they were associated with lower levels of job satisfaction and higher levels of burnout. Implications/Conclusions Allowing community pharmacists to concentrate on tasks and limiting interruptions while performing these tasks are important factors in improving quality of patient care and pharmacist work life. The results have implications for strategies to improve patient safety and pharmacist performance. PMID:23791360
NASA Astrophysics Data System (ADS)
Zhang, Ding; Zhang, Yingjie
2017-09-01
A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Reliability evaluation and component importance measurement based on FEA are then performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and the dynamic maintenance policy; the results are compared with existing methods and the effectiveness of the framework is confirmed. Issues that are often only vaguely understood in manufacturing system reliability analysis, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy itself, are elaborated. The framework can support reliability optimisation and the rational allocation of maintenance resources in job shop manufacturing systems.
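The abstract's combination of a structure function, reliability evaluation, and component importance can be illustrated with a minimal sketch. The job shop topology here (two redundant machining centres feeding one assembly station) and all names are illustrative assumptions, not the paper's actual model:

```python
def system_reliability(r):
    """Structure function for a hypothetical job shop: two redundant
    machining centres (m1, m2) in parallel, in series with an
    assembly station."""
    cell = 1.0 - (1.0 - r["m1"]) * (1.0 - r["m2"])
    return cell * r["assembly"]

def birnbaum_importance(rel_fn, r, comp):
    """Birnbaum importance of component `comp`: system reliability
    with the component perfect minus reliability with it failed."""
    return rel_fn({**r, comp: 1.0}) - rel_fn({**r, comp: 0.0})
```

Under such a model the single assembly station dominates the importance ranking, which is exactly the kind of result that guides where PM effort should be concentrated.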
Job Scheduling in a Heterogeneous Grid Environment
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak
2004-01-01
Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
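A migration policy of the kind described (combining system availability and speed, network bandwidth, and job data volume) can be sketched as a turnaround estimate per site, with the job migrated to the minimizer. The class, field names, and cost model are assumptions for illustration, not the paper's actual algorithms:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    queue_wait_s: float    # estimated wait in the local queue
    relative_speed: float  # >1.0 means faster than the reference system
    bandwidth_mbps: float  # bandwidth from the job's data source

def estimated_turnaround(site, base_runtime_s, data_mb):
    """Queue wait + speed-scaled run time + input/output transfer time."""
    transfer_s = data_mb * 8 / site.bandwidth_mbps
    return site.queue_wait_s + base_runtime_s / site.relative_speed + transfer_s

def pick_site(sites, base_runtime_s, data_mb):
    """Migrate the job to the site with the lowest estimated turnaround."""
    return min(sites, key=lambda s: estimated_turnaround(s, base_runtime_s, data_mb))
```

A well-connected remote site with a longer queue can still win when its faster CPUs outweigh the added wait and transfer cost, which is the trade-off the migration algorithms evaluate.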
Womack, Sarah K; Armstrong, Thomas J
2005-09-01
The present study evaluates the effectiveness of a decision support system used to evaluate and control physical job stresses and prevent re-injury of workers who have experienced or are concerned about work-related musculoskeletal disorders. The software program is a database that stores detailed job information such as standardized work data, videos, and upper-extremity physical stress ratings for over 400 jobs in the plant. Additionally, the database users were able to record comments about the jobs and related control issues. The researchers investigated the utility and effectiveness of the software by analyzing its use over a 20-month period. Of the 197 comments entered by the users, 25% pertained to primary prevention, 75% pertained to secondary prevention, and 94 comments (47.7%) described ergonomic interventions. Use of the software tool improved primary and secondary prevention by improving the quality and efficiency of the ergonomic job analysis process.
Joseph, Nataria T.; Muldoon, Matthew F.; Manuck, Stephen B.; Matthews, Karen A.; MacDonald, Leslie A.; Grosch, James; Kamarck, Thomas W.
2016-01-01
Objective The objectives of this study were to determine whether job strain is more strongly associated with higher ambulatory blood pressure (ABP) among blue-collar workers compared to white-collar workers; to examine whether this pattern generalizes across working and nonworking days and across sex; and to examine whether this pattern is accounted for by psychosocial factors or health behaviors during daily life. Methods 480 healthy workers (mean age = 43; 53% female) in the Adult Health and Behavior Project – Phase 2 (AHAB-II) completed ABP monitoring during 3 working days and 1 nonworking day. Job strain was operationalized as high psychological demand (> sample median) combined with low decision latitude (< sample median) (Karasek model; Job Content Questionnaire). Results Covariate-adjusted multilevel random coefficients regressions demonstrated that associations between job strain and systolic and diastolic ABP were stronger among blue-collar workers compared to white-collar workers (b = 6.53, F(1, 464) = 3.89, p = .049 and b = 5.25, F(1, 464) = 6.09, p = .014, respectively). This pattern did not vary by sex, but diastolic ABP findings were stronger when participants were at work. The stronger association between job strain and ABP among blue-collar workers was not accounted for by education, momentary physical activity, or substance use, but was partially accounted for by covariation between higher hostility and blue-collar status. Conclusions Job strain is associated with ABP among blue-collar workers. These results extend previous findings to a mixed-sex sample and nonworking days and provide, for the first time, comprehensive exploration of several behavioral and psychosocial explanations for this finding. PMID:27359177
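The Karasek median-split operationalization of job strain described above is a concrete rule and can be sketched as a small helper (a hypothetical illustration, not the study's analysis code):

```python
from statistics import median

def job_strain_flags(demand, latitude):
    """Karasek-style job strain: psychological demand above the sample
    median combined with decision latitude below the sample median.
    Returns one boolean per participant."""
    d_med = median(demand)
    l_med = median(latitude)
    return [d > d_med and l < l_med for d, l in zip(demand, latitude)]
```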
Autotasked Performance in the NAS Workload: A Statistical Analysis
NASA Technical Reports Server (NTRS)
Carter, R. L.; Stockdale, I. E.; Kutler, Paul (Technical Monitor)
1998-01-01
A statistical analysis of the workload performance of a production-quality FORTRAN code on five different Cray Y-MP hardware and system software configurations is performed. The analysis was based on an experimental procedure designed to minimize correlations between the number of requested CPUs and the time of day the runs were initiated. Observed autotasking overheads were significantly larger for the set of jobs that requested the maximum number of CPUs. UNICOS 6 releases show consistent wall clock speedups in the workload of around 2, which is quite good. The observed speedups were very similar for the set of jobs that requested 8 CPUs and the set that requested 4 CPUs. The original NAS algorithm for determining charges to the user discourages autotasking in the workload. A new charging algorithm to be applied to jobs run in the NQS multitasking queues also discourages NAS users from autotasking: it favors jobs requesting 8 CPUs over those requesting fewer, even though the jobs requesting 8 CPUs experienced significantly higher overhead and presumably degraded system throughput. A charging algorithm is presented that has a desirable property when applied to the data: high-overhead jobs requesting 8 CPUs are penalized relative to moderate-overhead jobs requesting 4 CPUs, providing NAS users with a charging incentive to use autotasking in a manner that gives them significantly improved turnaround while also maintaining system throughput.
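One way a charging rule can penalize autotasking overhead while still rewarding parallel turnaround is to bill the serial-equivalent CPU time plus a surcharge on the CPU time consumed beyond it. This is only a hedged sketch of the kind of rule described; the formula, parameter names, and penalty weight are assumptions, not the actual NAS algorithm:

```python
def charge(serial_cpu_s, actual_cpu_s, rate_per_cpu_hour=1.0, penalty=2.0):
    """Hypothetical charge: bill the serial-equivalent CPU hours, with a
    surcharge proportional to the autotasking overhead fraction
    (extra CPU time consumed relative to a single-CPU run)."""
    overhead_frac = max(0.0, (actual_cpu_s - serial_cpu_s) / serial_cpu_s)
    return rate_per_cpu_hour * serial_cpu_s / 3600 * (1 + penalty * overhead_frac)
```

Under such a rule, an 8-CPU job with 30% overhead pays more than a 4-CPU job with 10% overhead for the same work, matching the incentive structure the abstract calls for.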
Dedicated heterogeneous node scheduling including backfill scheduling
Wood, Robert R [Livermore, CA; Eckert, Philip D [Livermore, CA; Hommes, Gregg [Pleasanton, CA
2006-07-25
A method and system for backfill scheduling of jobs on dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the number of free nodes over time. For each prioritized job, the FNS of the sub-pools containing nodes usable by that job is used to determine the earliest time range (ETR) capable of running the job; once determined, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than that of a higher priority job (HPJ), the LPJ is scheduled in that ETR provided it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Efficient utilization and throughput of such computing environments may thus be increased by exploiting resources that would otherwise remain idle.
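The core of this scheme, placing each job in priority order at its earliest feasible slot so that small low-priority jobs backfill idle nodes without delaying already-placed higher-priority jobs, can be sketched over a single discretized free-node timeline (one pool rather than per-sub-pool FNSs; all names and the time-step representation are simplifying assumptions, not the patented method):

```python
def earliest_start(free, need_nodes, runtime):
    """free[t] = nodes free at time step t, already reduced by previously
    scheduled jobs. Return the earliest t with need_nodes free for the
    whole interval [t, t + runtime), or None."""
    for t in range(len(free) - runtime + 1):
        if all(free[u] >= need_nodes for u in range(t, t + runtime)):
            return t
    return None

def schedule(jobs, horizon, total_nodes):
    """jobs: (name, nodes, runtime) tuples in descending priority.
    Each job takes its earliest feasible slot, so a low-priority job may
    start before a high-priority one without disturbing its start time."""
    free = [total_nodes] * horizon
    placed = {}
    for name, nodes, runtime in jobs:
        t = earliest_start(free, nodes, runtime)
        if t is None:
            continue  # does not fit within the horizon
        for u in range(t, t + runtime):
            free[u] -= nodes
        placed[name] = t
    return placed
```

For example, with 4 nodes, a 3-node job A and a 4-node job B ahead of a 1-node job C, C backfills onto the node A leaves idle and starts immediately, while B's start time is unchanged.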